Sample records for minimum probability flow

  1. Probable flood predictions in ungauged coastal basins of El Salvador

    USGS Publications Warehouse

    Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.

    2008-01-01

    A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is solved sequentially for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to the information content in the recurrent peak-flow discharge values, which were derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated, revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.

  2. Advancements in dynamic kill calculations for blowout wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouba, G.E.; MacDougall, G.R.; Schumacher, B.W.

    1993-09-01

    This paper addresses the development, interpretation, and use of dynamic kill equations. To this end, three simple calculation techniques are developed for determining the minimum dynamic kill rate. Two techniques contain only single-phase calculations and are independent of reservoir inflow performance. Despite these limitations, these two methods are useful for bracketing the minimum flow rates necessary to kill a blowing well. For the third technique, a simplified mechanistic multiphase-flow model is used to determine a most-probable minimum kill rate.

  3. Stream gage descriptions and streamflow statistics for sites in the Tigris River and Euphrates River Basins, Iraq

    USGS Publications Warehouse

    Saleh, Dina K.

    2010-01-01

    Statistical summaries of streamflow data for all long-term streamflow-gaging stations in the Tigris River and Euphrates River Basins in Iraq are presented in this report. The summaries for each streamflow-gaging station include (1) a station description, (2) a graph showing annual mean discharge for the period of record, (3) a table of extremes and statistics for monthly and annual mean discharge, (4) a graph showing monthly maximum, minimum, and mean discharge, (5) a table of monthly and annual mean discharges for the period of record, (6) a graph showing annual flow duration, (7) a table of monthly and annual flow duration, (8) a table of high-flow frequency data (maximum mean discharge for 3-, 7-, 15-, and 30-day periods for selected exceedance probabilities), and (9) a table of low-flow frequency data (minimum mean discharge for 3-, 7-, 15-, 30-, 60-, 90-, and 183-day periods for selected non-exceedance probabilities).
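
    The n-day minimum mean discharge and flow-duration statistics tabulated in reports like this one can be sketched in a few lines. This is a minimal illustration on synthetic data; the function names are illustrative, not taken from the report:

    ```python
    import numpy as np

    def n_day_min_mean(daily_q, n):
        """Minimum of the n-day moving average of daily mean discharge."""
        kernel = np.ones(n) / n
        rolling = np.convolve(daily_q, kernel, mode="valid")  # all n-day means
        return rolling.min()

    def flow_duration(daily_q, exceedance_pcts):
        """Discharge equalled or exceeded for the given percentages of time."""
        # exceedance p% corresponds to the (100 - p)th percentile of the record
        return np.percentile(daily_q, 100 - np.asarray(exceedance_pcts))

    rng = np.random.default_rng(0)
    q = rng.lognormal(mean=3.0, sigma=0.8, size=365)  # synthetic daily flows
    low7 = n_day_min_mean(q, 7)                       # 7-day low flow
    d = flow_duration(q, [5, 50, 95])                 # Q5, Q50, Q95
    ```

    By construction the Q5 (exceeded only 5 percent of the time) is the largest of the three duration values, and the 7-day low flow lies between the record's daily minimum and its mean.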

  4. Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm

    Treesearch

    Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney

    2014-01-01

    Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as a module in version five of FlamMap, provides valuable fire behavior functions while enabling multi-core utilization for the...

  5. A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng

    2017-01-01

    Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage concerns sensor placement that guarantees the needs of both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation to the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which characterizes the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with the analysis of collaborative detection probability, we further formulate the minimum ϵ-connected target coverage problem, aiming to minimize the number of sensors satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm. PMID:28587084
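
    The maximum-flow machinery that a flow-based coverage algorithm such as MVMFA builds on can be illustrated with a standard Edmonds-Karp implementation. The toy graph below is hypothetical; the paper's coverage-to-flow-graph construction is not reproduced here:

    ```python
    from collections import deque

    def max_flow(capacity, s, t):
        """Edmonds-Karp maximum flow on an adjacency-matrix graph."""
        n = len(capacity)
        flow = [[0] * n for _ in range(n)]
        total = 0
        while True:
            # breadth-first search for an augmenting path in the residual graph
            parent = [-1] * n
            parent[s] = s
            queue = deque([s])
            while queue and parent[t] == -1:
                u = queue.popleft()
                for v in range(n):
                    if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                        parent[v] = u
                        queue.append(v)
            if parent[t] == -1:        # no augmenting path left: flow is maximal
                return total
            # find the bottleneck capacity along the path, then push it
            path, v = [], t
            while v != s:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
            for u, v in path:
                flow[u][v] += bottleneck
                flow[v][u] -= bottleneck
            total += bottleneck

    # toy graph: node 0 is the source, nodes 1-2 stand in for sensors, node 3 is the sink
    cap = [[0, 2, 2, 0],
           [0, 0, 0, 2],
           [0, 0, 0, 1],
           [0, 0, 0, 0]]
    best = max_flow(cap, 0, 3)  # 3 units: 2 via node 1, 1 via node 2
    ```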

  6. A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors.

    PubMed

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng

    2017-05-25

    Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage concerns sensor placement that guarantees the needs of both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation to the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which characterizes the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with the analysis of collaborative detection probability, we further formulate the minimum ϵ-connected target coverage problem, aiming to minimize the number of sensors satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm.

  7. Low-flow characteristics of streams in South Carolina

    USGS Publications Warehouse

    Feaster, Toby D.; Guimaraes, Wladmir B.

    2017-09-22

    An ongoing understanding of streamflow characteristics of the rivers and streams in South Carolina is important for the protection and preservation of the State’s water resources. Information concerning the low-flow characteristics of streams is especially important during critical flow periods, such as during the historic droughts that South Carolina has experienced in the past few decades. Between 2008 and 2016, the U.S. Geological Survey, in cooperation with the South Carolina Department of Health and Environmental Control, updated low-flow statistics at 106 continuous-record streamgages operated by the U.S. Geological Survey for the eight major river basins in South Carolina. The low-flow frequency statistics included the annual minimum 1-, 3-, 7-, 14-, 30-, 60-, and 90-day mean flows with recurrence intervals of 2, 5, 10, 20, 30, and 50 years, depending on the length of record available at the streamflow-gaging station. Computations of daily mean flow durations for the 5-, 10-, 25-, 50-, 75-, 90-, and 95-percent probability of exceedance also were included. This report summarizes the findings from publications generated during the 2008 to 2016 investigations. Trend analyses for the annual minimum 7-day average flows are provided as well as trend assessments of long-term annual precipitation data. Statewide variability in the annual minimum 7-day average flow is assessed at eight long-term (record lengths from 55 to 78 years) streamgages. If previous low-flow statistics were available, comparisons with the updated annual minimum 7-day average flow, having a 10-year recurrence interval, were made. In addition, methods for estimating low-flow statistics at ungaged locations near a gaged location are described.
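
    An annual minimum 7-day flow series of the kind analyzed above, together with an empirical recurrence interval, can be sketched as follows. This uses synthetic data and the common Weibull plotting position as an assumption; it is not the USGS computation procedure:

    ```python
    import numpy as np

    def annual_min_7day(daily_q_by_year):
        """Annual minimum 7-day mean flow for each year of record."""
        mins = []
        for q in daily_q_by_year:
            seven_day = np.convolve(q, np.ones(7) / 7, mode="valid")
            mins.append(seven_day.min())
        return np.array(mins)

    def recurrence_interval(series):
        """Weibull plotting-position recurrence intervals for low flows.

        Low flows are ranked ascending, so the driest year (rank 1) gets
        the longest recurrence interval T = (n + 1) / rank.
        """
        n = len(series)
        order = np.argsort(series)          # ascending: driest year first
        ranks = np.empty(n)
        ranks[order] = np.arange(1, n + 1)
        return (n + 1) / ranks

    rng = np.random.default_rng(1)
    years = [rng.lognormal(2.5, 0.6, 365) for _ in range(30)]  # 30-year synthetic record
    m7 = annual_min_7day(years)
    T = recurrence_interval(m7)             # driest year has the longest interval
    ```

    Fitted frequency curves (rather than plotting positions) are what reports like this one actually publish, but the ranked empirical intervals convey the same idea.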

  8. Modeling of Critically-Stratified Gravity Flows: Application to the Eel River Continental Shelf, Northern California

    NASA Astrophysics Data System (ADS)

    Scully, Malcolm E.


  9. Riverscape and Groundwater Preservation: A Choice Experiment

    NASA Astrophysics Data System (ADS)

    Tempesta, T.; Vecchiato, D.

    2013-12-01

    This study presents a quantitative approach to support policy decision making for the preservation of riverscapes, taking into account the EC Water Framework Directive (2000/60/EC) and the EC Nitrates Directive (91/676/EEC) concerning the protection of waters against nitrate pollution from agricultural sources. A choice experiment was applied to evaluate the benefits, as perceived by inhabitants, of implementing policies that aim to reduce the concentration of nitrates in groundwater, preserve the riverscape by maintaining a minimum water flow, and increase hedges and woods along the Serio River in central northern Italy. Findings suggested that people were particularly concerned about groundwater quality, probably because it is strongly linked to human health. Nevertheless, it was interesting to observe that people expressed a high willingness to pay for actions that affect the riverscape as a whole (such as minimum water flow maintenance plus reforestation). This is probably due to the close connection between the riverscape and the functions of the river area for recreation, health purposes, and biodiversity preservation.

  10. Net Surface Flux Budget Over Tropical Oceans Estimated from the Tropical Rainfall Measuring Mission (TRMM)

    NASA Astrophysics Data System (ADS)

    Fan, Tai-Fang


  11. Magneto - Optical Imaging of Superconducting MgB2 Thin Films

    NASA Astrophysics Data System (ADS)

    Hummert, Stephanie Maria


  12. Open Markov Processes and Reaction Networks

    NASA Astrophysics Data System (ADS)

    Swistock Pollard, Blake Stephen

    We begin by defining the concept of 'open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain 'boundary' states. We study open Markov processes which, in the absence of such boundary flows, admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation'. This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of 'inputs' and 'outputs'. Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a 'black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady-state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way, we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
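
    The steady-state condition in the abstract above (probability balance on interior states while boundary probabilities are clamped externally, leaving non-zero currents through the boundary) can be sketched numerically for a small chain. The 4-state generator H below is an invented example, not one from the thesis:

    ```python
    import numpy as np

    # Generator of a 4-state continuous-time Markov chain: H[i, j] is the
    # rate of jumps j -> i, so every column sums to zero. States 0 and 3
    # play the role of 'boundary' states with externally fixed probabilities.
    H = np.array([
        [-1.0,  0.5,  0.0,  0.0],
        [ 1.0, -1.5,  1.0,  0.0],
        [ 0.0,  1.0, -2.0,  2.0],
        [ 0.0,  0.0,  1.0, -2.0],
    ])

    boundary = [0, 3]
    interior = [1, 2]
    p_boundary = np.array([0.3, 0.2])    # clamped by the external coupling

    # A steady state only requires (H p)_i = 0 on interior states:
    # solve H_II p_I = -H_IB p_B as a small linear system.
    A = H[np.ix_(interior, interior)]
    b = -H[np.ix_(interior, boundary)] @ p_boundary
    p_interior = np.linalg.solve(A, b)

    p = np.zeros(4)
    p[boundary] = p_boundary
    p[interior] = p_interior

    residual = H @ p
    # residual vanishes on interior states; the nonzero boundary entries are
    # the probability currents flowing in and out through states 0 and 3.
    ```

    Because the columns of H sum to zero, the boundary currents balance exactly: whatever probability flows in through one boundary state flows out through the other.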

  13. Boron Carbide Filled Neutron Shielding Textile Polymers

    NASA Astrophysics Data System (ADS)

    Manzlak, Derrick Anthony


  14. Parallel Unstructured Grid Generation for Complex Real-World Aerodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Zagaris, George


  15. Polymeric Radiation Shielding for Applications in Space: Polyimide Synthesis and Modeling of Multi-Layered Polymeric Shields

    NASA Astrophysics Data System (ADS)

    Schiavone, Clinton Cleveland


  16. Processing and Conversion of Algae to Bioethanol

    NASA Astrophysics Data System (ADS)

    Kampfe, Sara Katherine


  17. The Development of the CALIPSO LiDAR Simulator

    NASA Astrophysics Data System (ADS)

    Powell, Kathleen A.


  18. Exploring a Novel Approach to Technical Nuclear Forensics Utilizing Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Peeke, Richard Scot

    We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. 
Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
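    The clamped-boundary steady state described above can be illustrated with a toy three-state open Markov process (the states, rates, and boundary probabilities below are hypothetical, chosen only for illustration): fixing the two boundary states and setting the interior time derivative of the master equation to zero yields the non-equilibrium steady state, and the resulting boundary probability currents balance.

```python
def steady_state(a, b, p0, p2):
    """Non-equilibrium steady state of the chain 0 -- 1 -- 2.

    States 0 and 2 are 'boundary' states clamped at probabilities
    p0 and p2 by external couplings; a and b are the (symmetric)
    transition rates 0<->1 and 1<->2.  Setting the interior master
    equation dp1/dt = a*p0 - (a + b)*p1 + b*p2 to zero gives p1.
    """
    p1 = (a * p0 + b * p2) / (a + b)
    j0 = a * p1 - a * p0   # net probability current into boundary state 0
    j2 = b * p1 - b * p2   # net probability current into boundary state 2
    return p1, j0, j2

# Unequal clamped boundary probabilities drive a steady current
# through the interior; in steady state the boundary currents balance.
p1, j0, j2 = steady_state(a=1.0, b=2.0, p0=0.9, p2=0.1)
```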

  19. Production of Cyclohexylene-Containing Diamines in Pursuit of Novel Radiation Shielding Materials

    NASA Astrophysics Data System (ADS)

    Bate, Norah G.

  20. Development of Boron-Containing Polyimide Materials and Poly(arylene Ether)s for Radiation Shielding

    NASA Astrophysics Data System (ADS)

    Collins, Brittani May

  1. Magnetization Dynamics and Anisotropy in Ferromagnetic/Antiferromagnetic Ni/NiO Bilayers

    NASA Astrophysics Data System (ADS)

    Petersen, Andreas

  2. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900, through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer-month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations demonstrate how to use the equations to estimate probable streamflows as much as 8 months in advance.
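    As a sketch of how such a fitted equation is applied (the coefficients below are hypothetical, not taken from the report's tables), the logistic model maps a winter streamflow to a summer drought-flow threshold probability:

```python
import math

def drought_probability(winter_flow, b0=2.5, b1=-0.08):
    """Logistic model: P(summer flow below threshold) given winter flow.

    b0 and b1 are hypothetical fitted coefficients; the report
    tabulates basin- and threshold-specific values.
    """
    z = b0 + b1 * winter_flow
    return 1.0 / (1.0 + math.exp(-z))

# A drier winter (lower flow) implies a higher summer drought probability.
p_dry = drought_probability(10.0)
p_wet = drought_probability(60.0)
```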

  3. Study on Effects of the Stochastic Delay Probability for 1d CA Model of Traffic Flow

    NASA Astrophysics Data System (ADS)

    Xue, Yu; Chen, Yan-Hong; Kong, Ling-Jiang

    Considering the effects of different factors on the stochastic delay probability, the delay probability is classified into three cases. The first case, corresponding to the braking state, has a large delay probability when the anticipated velocity is larger than the gap between successive cars. The second, corresponding to the follow-the-leader rule, has an intermediate delay probability when the anticipated velocity equals the gap. The third case is acceleration, which has the minimum delay probability. The fundamental diagram obtained by numerical simulation shows different properties from that of the NaSch model: two distinct regions appear, corresponding to the coexistence state and the jamming state, respectively.
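    A minimal sketch of such an update rule (all parameter values hypothetical) modifies the Nagel-Schreckenberg step so the delay probability depends on whether the anticipated velocity exceeds, equals, or falls below the gap:

```python
import random

def step(pos, vel, L, vmax=5, p_brake=0.75, p_follow=0.25, p_accel=0.05):
    """One parallel update of a NaSch-type ring with three delay cases.

    The delay probability is chosen by comparing the anticipated
    velocity v = min(vel + 1, vmax) with the gap to the car ahead.
    """
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % L   # empty cells ahead
        v = min(vel[i] + 1, vmax)                   # anticipated velocity
        if v > gap:
            p = p_brake      # braking: largest delay probability
        elif v == gap:
            p = p_follow     # follow the leader: intermediate
        else:
            p = p_accel      # free acceleration: minimum
        v = min(v, gap)                             # collision avoidance
        if v > 0 and random.random() < p:
            v -= 1                                  # stochastic delay
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % L for i in range(n)]
    return new_pos, new_vel

random.seed(0)
pos, vel = [0, 3, 7, 12], [0, 0, 0, 0]
for _ in range(100):
    pos, vel = step(pos, vel, L=20)
```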

  4. Contrasts between estimates of baseflow help discern multiple sources of water contributing to rivers

    NASA Astrophysics Data System (ADS)

    Cartwright, I.; Gilfedder, B.; Hofmann, H.

    2014-01-01

    This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. During the early stages of high-discharge events, the chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those based on chemical mass balance using Cl calculated from continuous electrical conductivity measurements. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of the annual discharge with a net baseflow contribution of 16% of total discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of discharge annually with a net baseflow contribution between 2001 and 2011 of 35% of total discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge and 26% of total discharge). These differences most probably reflect how the different techniques characterise baseflow. The local minimum and recursive digital filters probably aggregate much of the water from delayed sources as baseflow. However, as many delayed transient water stores (such as bank return flow, floodplain storage, or interflow) are likely to be geochemically similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. 
discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occur on the rising limbs of hydrographs, followed by inflows of low-salinity water from the transient stores as discharge falls. The joint use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
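    The chemical mass balance underlying these estimates is a two-component mixing calculation; with illustrative (not measured) Cl concentrations it can be sketched as:

```python
def baseflow_fraction(c_river, c_runoff, c_baseflow):
    """Two-component Cl mass balance: fraction of flow that is baseflow.

    River water is treated as a mix of surface runoff (concentration
    c_runoff) and baseflow (c_baseflow); the measured river
    concentration c_river fixes the mixing fraction.
    """
    return (c_river - c_runoff) / (c_baseflow - c_runoff)

q_total = 12.0   # total discharge, m^3/s (illustrative)
f = baseflow_fraction(c_river=150.0, c_runoff=20.0, c_baseflow=800.0)
q_base = q_total * f
```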

  5. Statistical summaries of streamflow data for selected gaging stations on and near the Idaho National Engineering Laboratory, Idaho, through September 1990

    USGS Publications Warehouse

    Stone, M.A.J.; Mann, Larry J.; Kjelstrom, L.C.

    1993-01-01

    Statistical summaries and graphs of streamflow data were prepared for 13 gaging stations with 5 or more years of continuous record on and near the Idaho National Engineering Laboratory. Statistical summaries of streamflow data for the Big and Little Lost Rivers and Birch Creek were analyzed as a requisite for a comprehensive evaluation of the potential for flooding of facilities at the Idaho National Engineering Laboratory. The type of statistical analyses performed depended on the length of streamflow record for a gaging station. Streamflow statistics generated for stations with 5 to 9 years of record were: (1) magnitudes of monthly and annual flows; (2) duration of daily mean flows; and (3) maximum, median, and minimum daily mean flows. Streamflow statistics generated for stations with 10 or more years of record were: (1) magnitudes of monthly and annual flows; (2) magnitudes and frequencies of daily low, high, instantaneous peak (flood frequency), and annual mean flows; (3) duration of daily mean flows; (4) exceedance probabilities of annual low, high, instantaneous peak, and mean annual flows; (5) maximum, median, and minimum daily mean flows; and (6) annual mean and mean annual flows.

  6. High-Performance Nanocomposites Designed for Radiation Shielding in Space and an Application of GIS for Analyzing Nanopowder Dispersion in Polymer Matrixes

    NASA Astrophysics Data System (ADS)

    Auslander, Joseph Simcha

  7. Time-Resolved Magneto-Optical Imaging of Superconducting YBCO Thin Films in the High-Frequency AC Current Regime

    NASA Astrophysics Data System (ADS)

    Frey, Alexander

  8. Use of Remote Sensing to Identify Essential Habitat for Aeschynomene virginica (L.) BSP, a Threatened Tidal Freshwater Wetland Plant

    NASA Astrophysics Data System (ADS)

    Mountz, Elizabeth M.

  9. Silver-Polyimide Nanocomposite Films: Single-Stage Synthesis and Analysis of Metalized Partially-Fluorinated Polyimide BTDA/4-BDAF Prepared from Silver(I) Complexes

    NASA Astrophysics Data System (ADS)

    Abelard, Joshua Erold Robert

  10. Multifunctional Polymer Synthesis and Incorporation of Gadolinium Compounds and Modified Tungsten Nanoparticles for Improvement of Radiation Shielding for use in Outer Space

    NASA Astrophysics Data System (ADS)

    Harbert, Emily Grace

  11. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for a distribution grid with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park, and local load are built for reliability assessment. Then, based on power system production-cost simulation, probability discretization, and linearized power flow, an optimal power flow problem with the objective of minimum conventional generation cost is solved, so that a reliability assessment of the distribution grid is carried out quickly and accurately. The Loss of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the fast method calculates the reliability indices much faster than the Monte Carlo method while preserving accuracy.
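    For context, the Monte Carlo baseline that the proposed method is benchmarked against can be sketched as follows; the Weibull, Beta, turbine, and load parameters below are illustrative assumptions, not the paper's values:

```python
import random

def wind_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
    """Piecewise turbine power curve (MW) for wind speed v (m/s)."""
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v - v_in) / (v_rated - v_in)

def lolp_eens(load=2.5, pv_rated=1.5, n=50_000, seed=1):
    """Estimate LOLP and EENS (here, average MW shortfall per sample)."""
    rng = random.Random(seed)
    shortfalls = []
    for _ in range(n):
        v = rng.weibullvariate(8.0, 2.2)    # wind speed ~ Weibull
        g = rng.betavariate(2.0, 3.0)       # normalized irradiance ~ Beta
        supply = wind_power(v) + pv_rated * g
        shortfalls.append(max(load - supply, 0.0))
    lolp = sum(s > 0.0 for s in shortfalls) / n
    eens = sum(shortfalls) / n
    return lolp, eens

lolp, eens = lolp_eens()
```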

  12. Derivation of low flow frequency distributions under human activities and its implications

    NASA Astrophysics Data System (ADS)

    Gao, Shida; Liu, Pan; Pan, Zhengke; Ming, Bo; Guo, Shenglian; Xiong, Lihua

    2017-06-01

    Low flow, the minimum streamflow during dry seasons, is crucial to water supply, agricultural irrigation, and navigation. Human activities, such as groundwater pumping, influence low flow severely. To derive low flow frequency distribution functions under human activities, this study incorporates groundwater pumping and return flow as variables in the recession process. The steps are as follows: (1) the original low flow without human activities is assumed to follow a Pearson type III distribution; (2) the probability distribution of climatic dry spell periods is derived from a base flow recession model; (3) the base flow recession model is updated to account for human activities; and (4) the low flow distribution under human activities is obtained from the derived probability distribution of dry spell periods and the updated base flow recession model. Linear and nonlinear reservoir models are used to describe the base flow recession. The Wudinghe basin is chosen for the case study, with daily streamflow observations during 1958-2000. Results show that human activities change the location parameter of the low flow frequency curve under the linear reservoir model, but alter the form of the frequency distribution function under the nonlinear one. This indicates that simply adjusting the parameters of the low flow frequency distribution is not always sufficient to handle the changing environment.
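    Under a linear reservoir (baseflow Q = S/k) with a constant net withdrawal w (pumping minus return flow), the updated recession has a simple closed form; the sketch below uses hypothetical parameter values:

```python
import math

def recession(q0, k, w, t):
    """Linear-reservoir recession with constant net withdrawal w.

    With storage S = k*Q and dS/dt = -Q - w (w = pumping minus
    return flow), the flow decays as Q(t) = (q0 + w)*exp(-t/k) - w.
    """
    return (q0 + w) * math.exp(-t / k) - w

q_natural = recession(q0=10.0, k=30.0, w=0.0, t=60.0)   # no pumping
q_pumped = recession(q0=10.0, k=30.0, w=0.5, t=60.0)    # net withdrawal
```

Pumping lowers the flow reached at the end of a dry spell, which is what shifts the low flow frequency curve in the derivation above.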

  13. Low-flow analysis and selected flow statistics representative of 1930-2002 for streamflow-gaging stations in or near West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2006-01-01

    Five time periods between 1930 and 2002 are identified as having distinct patterns of annual minimum daily mean flows (minimum flows). Average minimum flows increased around 1970 at many streamflow-gaging stations in West Virginia. Before 1930, however, there might have been a period of minimum flows greater than any period identified between 1930 and 2002. The effects of climate variability are probably the principal causes of the differences among the five time periods. Comparisons of selected streamflow statistics are made between values computed for the five identified time periods and values computed for the 1930-2002 interval for 15 streamflow-gaging stations. The average difference between statistics computed for the five time periods and the 1930-2002 interval decreases with increasing magnitude of the low-flow statistic. The greatest individual-station absolute difference was 582.5 percent greater, for the 7-day 10-year low flow computed for 1970-1979 compared with the value computed for 1930-2002. The hydrologically based low flows show approximately equal or smaller absolute differences than the biologically based low flows. The average 1-day 3-year biologically based low flow (1B3) and 4-day 3-year biologically based low flow (4B3) are less than the average 1-day 10-year hydrologically based low flow (1Q10) and 7-day 10-year hydrologically based low flow (7Q10), respectively, and range between 28.5 percent less and 13.6 percent greater. Seasonally, the average difference between low-flow statistics computed for the five time periods and 1930-2002 is not consistent across magnitudes of low-flow statistics, and the greatest difference is for the summer (July 1-September 30) and fall (October 1-December 31) of the same time period as the greatest difference determined in the annual analysis.
The greatest average difference between 1B3 and 4B3 compared to 1Q10 and 7Q10, respectively, is in the spring (April 1-June 30), ranging between 11.6 and 102.3 percent greater. Statistics computed for the individual station's record period may not represent the statistics computed for the period 1930 to 2002 because (1) station records are available predominantly after about 1970 when minimum flows were greater than the average between 1930 and 2002 and (2) some short-term station records are mostly during dry periods, whereas others are mostly during wet periods. A criterion-based sampling of the individual station's record periods at stations was taken to reduce the effects of statistics computed for the entire record periods not representing the statistics computed for 1930-2002. The criterion used to sample the entire record periods is based on a comparison between the regional minimum flows and the minimum flows at the stations. Criterion-based sampling of the available record periods was superior to record-extension techniques for this study because more stations were selected and areal distribution of stations was more widespread. Principal component and correlation analyses of the minimum flows at 20 stations in or near West Virginia identify three regions of the State encompassing stations with similar patterns of minimum flows: the Lower Appalachian Plateaus, the Upper Appalachian Plateaus, and the Eastern Panhandle. All record periods of 10 years or greater between 1930 and 2002 where the average of the regional minimum flows are nearly equal to the average for 1930-2002 are determined as representative of 1930-2002. Selected statistics are presented for the longest representative record period that matches the record period for 77 stations in West Virginia and 40 stations near West Virginia. These statistics can be used to develop equations for estimating flow in ungaged stream locations.
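    The building block of these low-flow statistics is the annual minimum n-day mean flow (for example, the 7-day series behind the 7Q10); a minimal sketch with a synthetic year of daily flows:

```python
def annual_min_nday(daily_flows, n=7):
    """Annual minimum n-day mean flow from one year of daily values."""
    return min(
        sum(daily_flows[i:i + n]) / float(n)
        for i in range(len(daily_flows) - n + 1)
    )

# Synthetic year: a 7-day dry spell embedded in otherwise higher flows.
year = [50.0] * 180 + [8.0, 7.0, 6.0, 5.0, 6.0, 7.0, 8.0] + [40.0] * 178
low7 = annual_min_nday(year)
```

The 7Q10 is then the value of this annual series with a 10-year recurrence interval, estimated from a fitted or empirical frequency distribution.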

  14. Ictalurid populations in relation to the presence of a main-stem reservoir in a midwestern warmwater stream with emphasis on the threatened Neosho madtom

    USGS Publications Warehouse

    Wildhaber, M.L.; Tabor, V.M.; Whitaker, J.E.; Allert, A.L.; Mulhern, D.W.; Lamberson, Peter J.; Powell, K.L.

    2000-01-01

    Ictalurid populations, including those of the Neosho madtom Noturus placidus, have been monitored in the Neosho River basin since the U.S. Fish and Wildlife Service listed the Neosho madtom as threatened in 1991. The Neosho madtom presently occurs only in the Neosho River basin, whose hydrologic regime, physical habitat, and water quality have been altered by the construction and operation of reservoirs. Our objective was to assess changes in ictalurid densities, habitat, water quality, and hydrology in relation to the presence of a main-stem reservoir in the Neosho River basin. Study sites were characterized using habitat quality as measured by substrate size, water quality as measured by standard physicochemical measures, and indicators of hydrologic alteration (IHA) as calculated from stream gauge information from the U.S. Geological Survey. Site estimates of ictalurid densities were collected by the U.S. Fish and Wildlife Service annually from 1991 to 1998, with the exception of 1993. Water quality and habitat measurements documented reduced turbidity and altered substrate composition in the Neosho River basin below John Redmond Dam. The effects of the dam on flow were indicated by changes in the short- and long-term minimum and maximum flows. Positive correlations between observed Neosho madtom densities and increases in minimum flow suggest that increased minimum flows could be used to enhance Neosho madtom populations. Positive correlations between Neosho madtom densities and increased flows in the winter and spring months as well as the date of the 1-d annual minimum flow indicate the potential importance of the timing of increased flows to Neosho madtoms. 
Because of the positive relationships that we found between the densities of Neosho madtoms and those of channel catfish Ictalurus punctatus, stonecats Noturus flavus, and other catfishes, alterations in flow that benefit Neosho madtom populations will probably benefit other members of the benthic fish community of the Neosho River.

  15. Paleomagnetism of San Cristobal Island, Galapagos

    USGS Publications Warehouse

    Cox, A.

    1971-01-01

    Isla San Cristobal, the most easterly of the Galapagos Islands, consists of two parts: a large volcano constitutes the southwest half of the island and an irregular apron of small cones and flows makes up the northeast half. As some of the younger flows on the flanks of the large volcano are reversely magnetized, the minimum age of the volcano is 0.7 m.y., which is the age of the Brunhes-Matuyama reversal boundary. The true age is probably several times greater. The cones and flows to the northeast are all normally magnetized. The between-site angular dispersion of virtual poles is 11.3°, a value consistent with mathematical models for the latitude dependence of geomagnetic secular variation. © 1971.

  16. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
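    The reliability question posed in this abstract, the chance that a line's flow falls below its specified minimum under uncertain inputs, can be sketched with plain Monte Carlo sampling. This is not the report's pipe-network model: the square-root head-loss law, the distributions, and all numbers below are illustrative assumptions.

    ```python
    import math
    import random

    random.seed(7)

    def line_flow(head, resistance):
        """Hypothetical head-loss law for one line: Q = sqrt(head / resistance)."""
        return math.sqrt(head / resistance)

    Q_MIN = 2.0                # specified minimum flow for the line (assumed)
    trials = 100_000
    failures = 0
    for _ in range(trials):
        head = random.gauss(50.0, 5.0)        # uncertain pump head (assumed)
        resistance = random.gauss(10.0, 1.0)  # uncertain line resistance (assumed)
        if line_flow(max(head, 0.0), max(resistance, 0.1)) < Q_MIN:
            failures += 1

    p_fail = failures / trials  # estimated probability flow < specified minimum
    ```

    More sophisticated reliability methods (FORM/SORM, importance sampling) reach the same quantity with fewer samples, but brute-force sampling makes the definition of the failure probability explicit.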

  17. A compositional framework for Markov processes

    NASA Astrophysics Data System (ADS)

    Baez, John C.; Fong, Brendan; Pollard, Blake S.

    2016-03-01

    We define the concept of an "open" Markov process, or more precisely, continuous-time Markov chain, which is one where probability can flow in or out of certain states called "inputs" and "outputs." One can build up a Markov process from smaller open pieces. This process is formalized by making open Markov processes into the morphisms of a dagger compact category. We show that the behavior of a detailed balanced open Markov process is determined by a principle of minimum dissipation, closely related to Prigogine's principle of minimum entropy production. Using this fact, we set up a functor mapping open detailed balanced Markov processes to open circuits made of linear resistors. We also describe how to "black box" an open Markov process, obtaining the linear relation between input and output data that holds in any steady state, including nonequilibrium steady states with a nonzero flow of probability through the system. We prove that black boxing gives a symmetric monoidal dagger functor sending open detailed balanced Markov processes to Lagrangian relations between symplectic vector spaces. This allows us to compute the steady state behavior of an open detailed balanced Markov process from the behaviors of smaller pieces from which it is built. We relate this black box functor to a previously constructed black box functor for circuits.
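    The steady states referred to above satisfy a simple linear condition on the rate matrix. As a minimal numerical sketch (not the paper's categorical construction), the following finds the steady-state distribution of a small closed continuous-time Markov chain; the rates are made-up numbers.

    ```python
    import numpy as np

    # Rate matrix of a 3-state continuous-time Markov chain (illustrative rates).
    # H[i, j] is the rate of jumping from state j to state i; columns sum to zero.
    H = np.array([
        [-2.0,  1.0,  0.5],
        [ 1.5, -1.0,  0.5],
        [ 0.5,  0.0, -1.0],
    ])

    # A steady state is a probability vector p with H @ p = 0: no net
    # probability flow through any state.
    eigvals, eigvecs = np.linalg.eig(H)
    p = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
    p = p / p.sum()   # normalize (also fixes the eigenvector's sign)
    ```

    In the paper's "open" setting, probability may also enter and leave through input and output states, so nonequilibrium steady states carry a nonzero through-flow; the closed case above is only the baseline the black-boxing construction generalizes.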

  18. Methodology for Collision Risk Assessment of an Airspace Flow Corridor Concept

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    This dissertation presents a methodology to estimate the collision risk associated with a future air-transportation concept called the flow corridor. The flow corridor is a Next Generation Air Transportation System (NextGen) concept to reduce congestion and increase throughput in en-route airspace. The flow corridor has the potential to increase throughput by reducing the controller workload required to manage aircraft outside the corridor and by reducing the separation of aircraft within the corridor. The analysis in this dissertation is a starting point for the safety analysis required by the Federal Aviation Administration (FAA) to eventually approve and implement the corridor concept. This dissertation develops a hybrid risk analysis methodology that combines Monte Carlo simulation with dynamic event tree analysis. The analysis captures the unique characteristics of the flow corridor concept, including self-separation within the corridor, lane change maneuvers, speed adjustments, and the automated separation assurance system. Monte Carlo simulation is used to model the movement of aircraft in the flow corridor and to identify precursor events that might lead to a collision. Since these precursor events are not rare, standard Monte Carlo simulation can be used to estimate their occurrence rates. Dynamic event trees are then used to model the subsequent series of events that may lead to collision. When two aircraft are on course for a near-mid-air collision (NMAC), the on-board automated separation assurance system provides a series of safety layers to prevent the impending NMAC or collision. Dynamic event trees are used to evaluate the potential failures of these layers in order to estimate the rare-event collision probabilities. The results show that the throughput can be increased by reducing separation to 2 nautical miles while maintaining the current level of safety.
A sensitivity analysis shows that the parameters most critical to the overall collision probability are the minimum separation, the probability that both flights fail to respond to the traffic collision avoidance system, the probability that an NMAC results in a collision, the failure probability of the automatic dependent surveillance-broadcast (ADS-B) receiver, and the conflict detection probability.
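    The layered-safety structure described here composes a non-rare precursor rate, estimated by simulation, with the conditional failure probabilities of each protection layer along one event-tree branch. A toy sketch with assumed, purely illustrative probabilities:

    ```python
    # One branch of a dynamic event tree: collision requires the precursor
    # AND the failure of every safety layer in sequence.  All numbers are
    # invented for illustration; they are not the dissertation's estimates.
    p_precursor = 1e-3        # NMAC-precursor rate per flight hour (assumed)
    layer_failures = {
        "conflict detection fails":                1e-2,
        "both crews fail to respond to TCAS":      1e-3,
        "NMAC actually results in a collision":    1e-1,
    }

    p_collision = p_precursor
    for p in layer_failures.values():
        p_collision *= p      # multiply conditional failure probabilities
    ```

    A full dynamic event tree sums such products over every branch that ends in a collision; the simulation supplies `p_precursor`, and the tree supplies the conditional terms.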

  19. 14 CFR 25.1443 - Minimum mass flow of supplemental oxygen.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Minimum mass flow of supplemental oxygen... § 25.1443 Minimum mass flow of supplemental oxygen. (a) If continuous flow equipment is installed for use by flight crewmembers, the minimum mass flow of supplemental oxygen required for each crewmember...

  20. 14 CFR 25.1443 - Minimum mass flow of supplemental oxygen.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Minimum mass flow of supplemental oxygen... § 25.1443 Minimum mass flow of supplemental oxygen. (a) If continuous flow equipment is installed for use by flight crewmembers, the minimum mass flow of supplemental oxygen required for each crewmember...

  1. 14 CFR 25.1443 - Minimum mass flow of supplemental oxygen.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Minimum mass flow of supplemental oxygen... § 25.1443 Minimum mass flow of supplemental oxygen. (a) If continuous flow equipment is installed for use by flight crewmembers, the minimum mass flow of supplemental oxygen required for each crewmember...

  2. 14 CFR 25.1443 - Minimum mass flow of supplemental oxygen.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Minimum mass flow of supplemental oxygen... § 25.1443 Minimum mass flow of supplemental oxygen. (a) If continuous flow equipment is installed for use by flight crewmembers, the minimum mass flow of supplemental oxygen required for each crewmember...

  3. Identification of debris-flow hazards in warm deserts through analyzing past occurrences: Case study in South Mountain, Sonoran Desert, USA

    NASA Astrophysics Data System (ADS)

    Dorn, Ronald I.

    2016-11-01

    After recognition that debris flows co-occur with human activities, the next step in a hazards analysis involves estimating debris-flow probability. Prior research published in this journal in 2010 used varnish microlamination (VML) dating to determine a minimum occurrence of 5 flows per century over the last 8100 years in a small mountain range of South Mountain adjacent to neighborhoods of Phoenix, Arizona. This analysis led to the conclusion that debris flows originating in small mountain ranges in arid regions like the Sonoran Desert could pose a hazard. Two major precipitation events in the summer of 2014 generated 35 debris flows in the same study area of South Mountain, providing support for the importance of probability analysis as a key step in a hazards analysis in warm desert settings. Two distinct mechanisms generated the 2014 debris flows: intense precipitation on steep slopes in the first storm, and a firehose effect whereby runoff from the second storm was funneled rapidly by cleaned-out debris-flow chutes to remobilize Pleistocene debris-flow deposits. When compared to a global database on debris flows, the 2014 storms were among the most intense to generate desert debris flows, indicating that storms of lesser intensity are capable of generating debris flows in warm desert settings. The 87Sr/86Sr analyses of fines and clasts in South Mountain debris flows of different ages reveal that desert dust supplies the fines. Thus, wetter climatic periods of intense rock decay are not needed to resupply desert slopes with fines; instead, a combination of dust deposition supplying fines and dirt cracking generating coarse clasts can re-arm chutes in a warm desert setting with abundant dust.

  4. On the formation of granulites

    USGS Publications Warehouse

    Bohlen, S.R.

    1991-01-01

    The tectonic settings for the formation and evolution of regional granulite terranes and the lowermost continental crust can be deduced from pressure-temperature-time (P-T-time) paths and constrained by petrological and geophysical considerations. P-T conditions deduced for regional granulites require transient, average geothermal gradients of greater than 35 °C km⁻¹, implying minimum heat flow in excess of 100 mW m⁻². Such high heat flow is probably caused by magmatic heating. Tectonic settings wherein such conditions are found include convergent plate margins, continental rifts, hot spots and the margins of large, deep-seated batholiths. Cooling paths can be constrained by solid-solid and devolatilization equilibria and geophysical modelling. (from Author)

  5. Autonomous learning derived from experimental modeling of physical laws.

    PubMed

    Grabec, Igor

    2013-05-01

    This article deals with the experimental description of physical laws by the probability density function of measured data. The Gaussian mixture model, specified by representative data and related probabilities, is utilized for this purpose. The information cost function of the model is expressed, in terms of information entropy, as the sum of the estimation error and the redundancy. A new method is proposed for searching for the minimum of the cost function. The number of resulting prototype data depends on the accuracy of measurement. Their adaptation resembles a self-organized, highly nonlinear cooperation between neurons in an artificial neural network. A prototype datum corresponds to the memorized content, while the related probability corresponds to the excitability of the neuron. The method does not include any free parameters except the objectively determined accuracy of the measurement system and is therefore convenient for autonomous execution. Since representative data are generally less numerous than the measured ones, the method is applicable to a rather general and objective compression of overwhelming experimental data in automatic data-acquisition systems. Such compression is demonstrated on analytically determined random noise and on measured traffic flow data. The flow over a day is described by a vector of 24 components. The set of 365 vectors measured over one year is compressed by autonomous learning to just 4 representative vectors and related probabilities. These vectors represent the flow in normal working days and in weekends or holidays, while the related probabilities correspond to the relative frequencies of these days. This example reveals that autonomous learning yields a new basis for the interpretation of representative data and the optimal model structure. Copyright © 2012 Elsevier Ltd. All rights reserved.
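    The end result described above, a year of daily flow vectors compressed to a few representatives plus relative frequencies, can be imitated with a much simpler scheme than the paper's entropy-based method. The sketch below uses plain k-means on synthetic workday/weekend profiles; the shapes, noise level, and number of prototypes are assumptions, not values from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    hours = np.arange(24)

    # Synthetic year of daily traffic-flow profiles: weekday shapes with a
    # morning rush-hour peak, flatter weekend shapes (all values invented).
    workday = 8 + 6 * np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)
    weekend = 6 + 2 * np.exp(-0.5 * ((hours - 13) / 4.0) ** 2)
    days = np.array([workday if d % 7 < 5 else weekend for d in range(365)])
    days = days + rng.normal(0.0, 0.3, days.shape)

    # Compress to k representative vectors (plain k-means) and attach
    # probabilities = relative frequencies of the days each one represents.
    k = 2
    centers = days[[0, 5]].copy()          # seed with one weekday, one weekend
    for _ in range(20):
        dists = ((days[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([days[labels == j].mean(axis=0) for j in range(k)])

    probs = np.bincount(labels, minlength=k) / len(days)
    ```

    With clean workday/weekend separation the recovered probabilities approach 5/7 and 2/7, mirroring the "relative frequencies of these days" interpretation in the abstract; the paper's method additionally chooses the number of prototypes from the measurement accuracy rather than fixing k in advance.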

  6. Speciation has a spatial scale that depends on levels of gene flow.

    PubMed

    Kisel, Yael; Barraclough, Timothy G

    2010-03-01

    Area is generally assumed to affect speciation rates, but work on the spatial context of speciation has focused mostly on patterns of range overlap between emerging species rather than on questions of geographical scale. A variety of geographical theories of speciation predict that the probability of speciation occurring within a given region should (1) increase with the size of the region and (2) increase as the spatial extent of intraspecific gene flow becomes smaller. Using a survey of speciation events on isolated oceanic islands for a broad range of taxa, we find evidence for both predictions. The probability of in situ speciation scales with island area in bats, carnivorous mammals, birds, flowering plants, lizards, butterflies and moths, and snails. Ferns are an exception to these findings, but they exhibit high frequencies of polyploid and hybrid speciation, which are expected to be scale independent. Furthermore, the minimum island size for speciation correlates across groups with the strength of intraspecific gene flow, as is estimated from a meta-analysis of published population genetic studies. These results indicate a general geographical model of speciation rates that are dependent on both area and gene flow. The spatial scale of population divergence is an important but neglected determinant of broad-scale diversity patterns.

  7. Effects of combined-sewer overflows and urban runoff on the water quality of Fall Creek, Indianapolis, Indiana

    USGS Publications Warehouse

    Martin, Jeffrey D.

    1995-01-01

    Concentrations of dissolved oxygen measured at the station in the middle of the combined-sewer overflows were less than the Indiana minimum ambient water-quality standard of 4.0 milligrams per liter during all storms. Concentrations of ammonia, oxygen demand, copper, lead, zinc, and fecal coliform bacteria at the stations downstream from the combined-sewer overflows were much higher in storm runoff than in base flow. Increased concentrations of oxygen demand in runoff probably were caused by combined-sewer overflows, urban runoff, and the resuspension of organic material deposited on the streambed. Some of the increased concentrations of lead, zinc, and probably copper can be attributed to the discharge and resuspension of filter backwash.

  8. Simulation of hydrodynamics, temperature, and dissolved oxygen in Beaver Lake, Arkansas, 1994-1995

    USGS Publications Warehouse

    Haggard, Brian; Green, W. Reed

    2002-01-01

    The tailwaters of Beaver Lake and other White River reservoirs support a cold-water trout fishery of significant economic yield in northwestern Arkansas. The Arkansas Game and Fish Commission has requested an increase in existing minimum flows through the Beaver Lake dam to increase the amount of fishable waters downstream. Information is needed to assess the impact of additional minimum flows on temperature and dissolved-oxygen qualities of reservoir water above the dam and the release water. A two-dimensional, laterally averaged hydrodynamic, thermal and dissolved-oxygen model was developed and calibrated for Beaver Lake, Arkansas. The model simulates surface-water elevation, currents, heat transport and dissolved-oxygen dynamics. The model was developed to assess the impacts of proposed increases in minimum flows from 1.76 cubic meters per second (the existing minimum flow) to 3.85 cubic meters per second (the additional minimum flow). Simulations included assessing (1) the impact of additional minimum flows on tailwater temperature and dissolved-oxygen quality and (2) increasing initial water-surface elevation 0.5 meter and assessing the impact of additional minimum flow on tailwater temperatures and dissolved-oxygen concentrations. The additional minimum flow simulation (without increasing initial pool elevation) appeared to increase the water temperature (<0.9 degrees Celsius) and decrease dissolved oxygen concentration (<2.2 milligrams per liter) in the outflow discharge. Conversely, the additional minimum flow plus initial increase in pool elevation (0.5 meter) simulation appeared to decrease outflow water temperature (0.5 degrees Celsius) and increase dissolved oxygen concentration (<1.2 milligrams per liter) through time. However, results from both minimum flow scenarios for both water temperature and dissolved oxygen concentration were within the boundaries or similar to the error between measured and simulated water column profile values.

  9. A potential approach for low flow selection in water resource supply and management

    NASA Astrophysics Data System (ADS)

    Ouyang, Ying

    2012-08-01

    Low-flow selections are essential to water resource management, water supply planning, and watershed ecosystem restoration. In this study, a new approach, namely the frequent-low (FL) approach (or frequent-low index), was developed based on the minimum frequent-low flow or level used in the minimum flows and/or levels program in northeast Florida, USA. This FL approach was then compared to the conventional 7Q10 approach for low-flow selections prior to its applications, using USGS flow data from a freshwater environment (Big Sunflower River, Mississippi) as well as from an estuarine environment (St. Johns River, Florida). Unlike the FL approach, which is tied to biological and ecological impacts, the 7Q10 approach can lead to the selection of extremely low flows (e.g., near-zero flows), which may hinder its use for establishing criteria to protect streams from significant harm to biological and ecological communities. Additionally, the 7Q10 approach cannot, by definition, be used when the period of data records is less than 10 years, whereas this is not necessarily the case for the FL approach. Results from both approaches showed that the low flows of the Big Sunflower River and the St. Johns River decreased as time elapsed, demonstrating that these two rivers have become drier during the last several decades, with a potential for saltwater intrusion into the St. Johns River. Results from the FL approach further revealed that the recurrence probability of low flow increased while the recurrence interval of low flow decreased over time in both rivers, indicating that low flows occurred more frequently as time elapsed. This report suggests that the FL approach developed in this study is a useful alternative to the 7Q10 approach for low-flow selections.
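    As a point of reference for the 7Q10 statistic discussed above, the sketch below computes annual minimum 7-day average flows from synthetic daily data and reads off the 10-year-recurrence (0.10 non-exceedance probability) value with a simple Weibull plotting position. Agencies typically fit a log-Pearson Type III distribution instead, and all flow numbers here are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic daily flows for 30 years (lognormal, purely illustrative).
    years = 30
    flows = rng.lognormal(mean=3.0, sigma=0.6, size=(years, 365))

    def min_7day(daily):
        """Annual minimum of the 7-day moving-average flow."""
        week = np.convolve(daily, np.ones(7) / 7, mode="valid")
        return week.min()

    am7 = np.array([min_7day(y) for y in flows])   # one value per year

    # 7Q10: the annual-minimum 7-day flow with a 10-year recurrence interval,
    # i.e. non-exceedance probability 0.10 for the annual series.
    ranked = np.sort(am7)                            # ascending
    p_nonexceed = np.arange(1, years + 1) / (years + 1)   # Weibull positions
    q7_10 = np.interp(0.10, p_nonexceed, ranked)
    ```

    The definitional point made in the abstract is visible here: with fewer than 10 years of `am7` values, the 0.10 quantile falls outside the plotting positions and the statistic is not meaningfully defined.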

  10. Routing Algorithm based on Minimum Spanning Tree and Minimum Cost Flow for Hybrid Wireless-optical Broadband Access Network

    NASA Astrophysics Data System (ADS)

    Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen

    2012-03-01

    In order to minimize the average end-to-end delay of data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is formulated as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS were conducted under different types of traffic sources.
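    The spanning-tree stage of an algorithm like MSTMCF can be sketched in a few lines; a minimum-cost-flow stage would then route traffic demands over the resulting links. The topology and delays below are hypothetical, and this is generic Kruskal's algorithm, not the authors' exact procedure.

    ```python
    def kruskal_mst(n, edges):
        """Minimum spanning tree via Kruskal's algorithm with union-find.
        edges: list of (weight, u, v); returns the list of tree edges."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        tree = []
        for w, u, v in sorted(edges):           # cheapest links first
            ru, rv = find(u), find(v)
            if ru != rv:                        # keep edge only if it joins components
                parent[ru] = rv
                tree.append((w, u, v))
        return tree

    # Hypothetical link delays (ms) between 5 nodes of a wireless-optical mesh.
    edges = [(4, 0, 1), (2, 0, 2), (5, 1, 2), (1, 1, 3), (3, 2, 4), (7, 3, 4)]
    tree = kruskal_mst(5, edges)
    total_delay = sum(w for w, _, _ in tree)    # 1 + 2 + 3 + 4 = 10
    ```

    On this toy graph the tree keeps the four cheapest non-cycle-forming links, for a total delay weight of 10.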

  11. Low-flow frequency and flow duration of selected South Carolina streams in the Catawba-Wateree and Santee River Basins through March 2012

    USGS Publications Warehouse

    Feaster, Toby D.; Guimaraes, Wladmir B.

    2014-01-01

    Part of the mission of both the South Carolina Department of Health and Environmental Control and the South Carolina Department of Natural Resources is to protect and preserve South Carolina’s water resources. Doing so requires an ongoing understanding of streamflow characteristics of the rivers and streams in South Carolina. A particular need is information concerning the low-flow characteristics of streams, which is especially important for effectively managing the State’s water resources during critical flow periods, such as during the historic droughts that South Carolina has experienced in the past few decades. In 2008, the U.S. Geological Survey, in cooperation with the South Carolina Department of Health and Environmental Control, initiated a study to update low-flow statistics at continuous-record streamgaging stations operated by the U.S. Geological Survey in South Carolina. This report presents the low-flow statistics for 11 selected streamgaging stations in the Catawba-Wateree and Santee River Basins in South Carolina and 2 in North Carolina. For five of the streamgaging stations, low-flow statistics include daily mean flow durations for the 5-, 10-, 25-, 50-, 75-, 90-, and 95-percent probabilities of exceedance and the annual minimum 1-, 3-, 7-, 14-, 30-, 60-, and 90-day mean flows with recurrence intervals of 2, 5, 10, 20, 30, and 50 years, depending on the length of record available at the streamgaging station. For the other eight streamgaging stations, only daily mean flow durations and (or) exceedance percentiles of annual minimum 7-day average flows are provided because of regulation. In either case, the low-flow statistics were computed from records available through March 31, 2012. Of the five streamgaging stations for which recurrence-interval computations were made, three streamgaging stations in South Carolina were compared to low-flow statistics that were published in previous U.S. Geological Survey reports.
A comparison of the low-flow statistics for the annual minimum 7-day average streamflow with a 10-year recurrence interval (7Q10) from this study with the most recently published values indicated that two of the streamgaging stations had values lower than the previous values and the 7Q10 for the third station remained unchanged at zero. Low-flow statistics are influenced by length of record, hydrologic regime under which the data were collected, analytical techniques used, and other factors, such as urbanization, diversions, and droughts that may have occurred in the basin.

  12. Low-flow characteristics of the Mississippi River upstream from the Twin Cities Metropolitan Area, Minnesota, 1932-2007

    USGS Publications Warehouse

    Kessler, Erich; Lorenz, David L.

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Metropolitan Council, conducted a study to characterize regional low flows during 1932–2007 in the Mississippi River upstream from the Twin Cities metropolitan area in Minnesota and to describe the low-flow profile of the Mississippi River between the confluence of the Crow River and St. Anthony Falls. Probabilities of extremely low flow were estimated for the streamflow-gaging station (Mississippi River near Anoka) and the coincidence of low-flow periods, defined as the extended periods (at least 7 days) when all the daily flows were less than the 10th percentile of daily mean flows for the entire period of record, at four selected streamflow-gaging stations located upstream. The likelihood of extremely low flows was estimated by a superposition method for the Mississippi River near Anoka that created 5,776 synthetic hydrographs, resulting in a minimum synthetic low flow of 398 cubic feet per second at a probability of occurrence of 0.0002 per year. Low-flow conditions at the Mississippi River near Anoka were associated with low-flow conditions at two or fewer of the four upstream streamflow-gaging stations 42 percent of the time, indicating that sufficient water is available within the basin for many low flows and that the occurrence of extremely low flows is small. However, summer low-flow conditions at the Mississippi River near Anoka were almost always associated with low stage elevations in three or more of the six upper basin reservoirs. A low-flow profile of the Mississippi River between the confluence of the Crow River and St. Anthony Falls was completed using a real-time kinematic global positioning system, and the water-surface profile was mapped during October 8–9, 2008, and annotated with local landmarks. This was done so that water-use planners could relate free-board elevations of selected water utility structures to the lowest flow conditions during 2008.

  13. The minimum or natural rate of flow and droplet size ejected by Taylor cone-jets: physical symmetries and scaling laws

    NASA Astrophysics Data System (ADS)

    Gañán-Calvo, A. M.; Rebollo-Muñoz, N.; Montanero, J. M.

    2013-03-01

    We aim to establish the scaling laws for both the minimum rate of flow attainable in the steady cone-jet mode of electrospray, and the size of the resulting droplets in that limit. Use is made of a small body of literature on Taylor cone-jets reporting precise measurements of the transported electric current and droplet size as a function of the liquid properties and flow rate. The projection of the data onto an appropriate non-dimensional parameter space maps a region bounded by the minimum rate of flow attainable in the steady state. To explain these experimental results, we propose a theoretical model based on the generalized concept of physical symmetry, stemming from the system time invariance (steadiness). A group of symmetries rising at the cone-to-jet geometrical transition determines the scaling for the minimum flow rate and related variables. If the flow rate is decreased below that minimum value, those symmetries break down, which leads to dripping. We find that the system exhibits two instability mechanisms depending on the nature of the forces arising against the flow: one dominated by viscosity and the other by the liquid polarity. In the former case, full charge relaxation is guaranteed down to the minimum flow rate, while in the latter the instability condition becomes equivalent to the symmetry breakdown by charge relaxation or separation. When cone-jets are formed without artificially imposing a flow rate, a microjet is issued quasi-steadily. The flow rate naturally ejected this way coincides with the minimum flow rate studied here. This natural flow rate determines the minimum droplet size that can be steadily produced by any electrohydrodynamic means for a given set of liquid properties.

  14. Probability-of-success studies for geothermal projects: from subsurface data to geological risk analysis

    NASA Astrophysics Data System (ADS)

    Schumacher, Sandra; Pierau, Roberto; Wirth, Wolfgang

    2017-04-01

    In recent years, the development of geothermal plants in Germany has increased significantly due to a favorable political setting and the resulting financial incentives. However, most projects are developed by local communities or private investors, who cannot afford a project failure. To cover the risk of total loss if the geothermal well does not provide the energy output necessary for an economically viable project, investors try to procure insurance for this worst-case scenario. In order to issue such insurance, the insurance companies insist on so-called probability-of-success studies (POS studies), in which the geological risk of not achieving the necessary temperatures and/or flow rates for an economically successful project is quantified. Quantifying the probability of reaching a minimum temperature, which has to be defined by the project investors, is relatively straightforward, as subsurface temperatures in Germany are comparatively well known thanks to tens of thousands of hydrocarbon wells. Moreover, for the German Molasse Basin a method to characterize the hydraulic potential of a site based on pump-test analysis has been developed and refined in recent years. However, quantifying the probability of reaching a given flow rate with a given drawdown is much more challenging in areas where pump-test data are generally not available (e.g., the North German Basin). Therefore, a new method based on log- and core-derived porosity and permeability data was developed to quantify the geological risk of reaching a specified flow rate in such areas. We present both methods for POS studies and show how subsurface data such as pump tests or log and core measurements can be used to assess the prospects of a potential geothermal project from a geological point of view.

  15. A robust approach to chance constrained optimal power flow with renewable generation

    DOE PAGES

    Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.

    2016-09-01

    Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation is challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume that the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to lie within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics, including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.
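    The basic Gaussian chance-constraint tightening that formulations like this build on is one line of algebra: requiring P(flow <= limit) >= 1 - eps is equivalent, for a Gaussian flow, to mean + z * sigma <= limit with z the (1 - eps) normal quantile. The robust (RCC) version additionally lets the distribution parameters range over an uncertainty set; the numbers below are illustrative only.

    ```python
    from statistics import NormalDist

    # Deterministic surrogate for a Gaussian chance constraint on a line flow:
    #   P(flow <= limit) >= 1 - eps   <=>   mean_flow + z * sigma <= limit,
    # where z is the (1 - eps) quantile of the standard normal distribution.
    eps = 0.05
    z = NormalDist().inv_cdf(1 - eps)        # about 1.645 for eps = 0.05

    limit = 100.0      # line rating (illustrative)
    sigma = 8.0        # std of renewable-driven flow deviation (illustrative)
    max_mean_flow = limit - z * sigma        # tightened deterministic limit
    ```

    A robust variant would take the worst case of `z * sigma` over the assumed uncertainty set for `sigma` (and for the mean), which is what the cutting-plane algorithm in the paper handles at scale.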

  16. Simulation of hydrodynamics, temperature, and dissolved oxygen in Table Rock Lake, Missouri, 1996-1997

    USGS Publications Warehouse

    Green, W. Reed; Galloway, Joel M.; Richards, Joseph M.; Wesolowski, Edwin A.

    2003-01-01

    Outflow from Table Rock Lake and other White River reservoirs supports a cold-water trout fishery of substantial economic yield in south-central Missouri and north-central Arkansas. The Missouri Department of Conservation has requested from the U.S. Army Corps of Engineers an increase in the existing minimum flows through the Table Rock Lake Dam to increase the quality of fishable waters downstream in Lake Taneycomo. Information is needed to assess the effect of increased minimum flows on temperature and dissolved-oxygen concentrations of reservoir water and the outflow. A two-dimensional, laterally averaged hydrodynamic, temperature, and dissolved-oxygen model, CE-QUAL-W2, was developed and calibrated for Table Rock Lake, located in Missouri, north of the Arkansas-Missouri State line. The model simulates water-surface elevation, heat transport, and dissolved-oxygen dynamics. The model was developed to assess the effects of proposed increases in minimum flow from about 4.4 cubic meters per second (the existing minimum flow) to 11.3 cubic meters per second (the increased minimum flow). Simulations included assessing the effect of (1) increased minimum flows and (2) increased minimum flows with increased water-surface elevations in Table Rock Lake, on outflow temperatures and dissolved-oxygen concentrations. In both minimum-flow scenarios, water temperature appeared to stay the same or increase slightly (less than 0.37 °C) and dissolved oxygen appeared to decrease slightly (less than 0.78 mg/L) in the outflow during the thermal stratification season. However, differences between the minimum-flow scenarios and the calibrated model for water temperature and dissolved-oxygen concentration were similar to the differences between measured and simulated water-column profile values.

  17. Fuel control for gas turbine with continuous pilot flame

    DOEpatents

    Swick, Robert M.

    1983-01-01

    An improved fuel control for a gas turbine engine having a continuous pilot flame and a fuel distribution system including a pump drawing fuel from a source and supplying a line to the main fuel nozzle of the engine, the improvement being a control loop between the pump outlet and the pump inlet to bypass fuel, an electronically controlled throttle valve to restrict flow in the control loop when main nozzle demand exists and to permit substantially unrestricted flow without main nozzle demand, a minimum flow valve in the control loop downstream of the throttle valve to maintain a minimum pressure in the loop ahead of the flow valve, a branch tube from the pilot flame nozzle to the control loop between the throttle valve and the minimum flow valve, an orifice in the branch tube, and a feedback tube from the branch tube downstream of the orifice to the minimum flow valve, the minimum flow valve being operative to maintain a substantially constant pressure differential across the orifice to maintain constant fuel flow to the pilot flame nozzle.

  18. Target intersection probabilities for parallel-line and continuous-grid types of search

    USGS Publications Warehouse

    McCammon, R.B.

    1977-01-01

The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown.
The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and a search along a continuous square grid. On this basis, upper and lower bounds for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
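The "classic methods of probability" referenced above include Buffon's-needle-type results: for a thin line-segment target of length L shorter than the line spacing s, with uniformly random position and orientation, the parallel-line intersection probability is 2L/(πs). A minimal Monte Carlo sketch (parameters are hypothetical, not taken from the paper) confirms that closed form:

```python
import math
import random

def intersection_probability_mc(length, spacing, trials=200_000, seed=1):
    """Monte Carlo estimate of the probability that a randomly placed,
    randomly oriented line-segment target of the given length is
    intersected by at least one line of a parallel-line search pattern."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Distance from the segment's midpoint to the nearest search line.
        d = rng.uniform(0.0, spacing / 2.0)
        # Uniform orientation relative to the search lines (symmetry).
        theta = rng.uniform(0.0, math.pi / 2.0)
        # The segment crosses a line when its half-projection exceeds d.
        if (length / 2.0) * math.sin(theta) >= d:
            hits += 1
    return hits / trials

# Classic closed form for length < spacing: P = 2L / (pi * s)
L, s = 0.5, 1.0
print(intersection_probability_mc(L, s), 2 * L / (math.pi * s))
```

For L = s/2, both values are close to 1/π ≈ 0.318, consistent with generalization (1): the probability scales with the ratio of target dimension to line spacing.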

  19. 75 FR 40797 - Upper Peninsula Power Company; Notice of Application for Temporary Amendment of License and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-14

    ... for drought-based temporary variance of the reservoir elevations and minimum flow releases at the Dead... temporary variance to the reservoir elevation and minimum flow requirements at the Hoist Development. The...: (1) Releasing a minimum flow of 75 cubic feet per second (cfs) from the Hoist Reservoir, instead of...

  20. Universal inverse power-law distribution for temperature and rainfall in the UK region

    NASA Astrophysics Data System (ADS)

    Selvam, A. M.

    2014-06-01

Meteorological parameters, such as temperature, rainfall, pressure, etc., exhibit self-similar space-time fractal fluctuations generic to dynamical systems in nature such as fluid flows, spread of forest fires, earthquakes, etc. The power spectra of fractal fluctuations display an inverse power-law form signifying long-range correlations. A general systems theory model predicts a universal inverse power-law form incorporating the golden mean for the fractal fluctuations. The model-predicted distribution was compared with the observed distribution of fractal fluctuations of all size scales (small, large and extreme values) in the historic month-wise temperature (maximum and minimum) and total rainfall for the four stations Oxford, Armagh, Durham and Stornoway in the UK region, for data periods ranging from 92 years to 160 years. For each parameter, the two cumulative probability distributions, namely cmax and cmin, starting from respectively the maximum and minimum data value, were used. The results of the study show that (i) temperature distributions (maximum and minimum) follow the model-predicted distribution except for the Stornoway minimum temperature cmin; (ii) rainfall distributions for cmin follow the model-predicted distribution for all four stations; (iii) rainfall distributions for cmax follow the model-predicted distribution for the two stations Armagh and Stornoway. The present study suggests that fractal fluctuations result from the superimposition of eddy continuum fluctuations.

  1. Modeling flow around bluff bodies and predicting urban dispersion using large eddy simulation.

    PubMed

    Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B

    2006-04-15

    Modeling air pollutant transport and dispersion in urban environments is especially challenging due to complex ground topography. In this study, we describe a large eddy simulation (LES) tool including a new dynamic subgrid closure and boundary treatment to model urban dispersion problems. The numerical model is developed, validated, and extended to a realistic urban layout. In such applications fairly coarse grids must be used in which each building can be represented using relatively few grid-points only. By carrying out LES of flow around a square cylinder and of flow over surface-mounted cubes, the coarsest resolution required to resolve the bluff body's cross section while still producing meaningful results is established. Specifically, we perform grid refinement studies showing that at least 6-8 grid points across the bluff body are required for reasonable results. The performance of several subgrid models is also compared. Although effects of the subgrid models on the mean flow are found to be small, dynamic Lagrangian models give a physically more realistic subgrid-scale (SGS) viscosity field. When scale-dependence is taken into consideration, these models lead to more realistic resolved fluctuating velocities and spectra. These results set the minimum grid resolution and subgrid model requirements needed to apply LES in simulations of neutral atmospheric boundary layer flow and scalar transport over a realistic urban geometry. The results also illustrate the advantages of LES over traditional modeling approaches, particularly its ability to take into account the complex boundary details and the unsteady nature of atmospheric boundary layer flow. Thus LES can be used to evaluate probabilities of extreme events (such as probabilities of exceeding threshold pollutant concentrations). Some comments about computer resources required for LES are also included.

  2. Is the difference between chemical and numerical estimates of baseflow meaningful?

    NASA Astrophysics Data System (ADS)

    Cartwright, Ian; Gilfedder, Ben; Hofmann, Harald

    2014-05-01

    Both chemical and numerical techniques are commonly used to calculate baseflow inputs to gaining rivers. In general the chemical methods yield lower estimates of baseflow than the numerical techniques. In part, this may be due to the techniques assuming two components (event water and baseflow) whereas there may also be multiple transient stores of water. Bank return waters, interflow, or waters stored on floodplains are delayed components that may be geochemically similar to the surface water from which they are derived; numerical techniques may record these components as baseflow whereas chemical mass balance studies are likely to aggregate them with the surface water component. This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. While more sophisticated techniques exist, these methods of estimating baseflow are readily applied with the available data and have been used widely elsewhere. During the early stages of high-discharge events, chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those from chemical mass balance using Cl calculated from continuous electrical conductivity. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of annual discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of annual discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge). These differences most probably reflect how the different techniques characterise the transient water sources in this catchment. 
The local minimum and recursive digital filters aggregate much of the water from delayed sources as baseflow. However, as many of these delayed transient water stores (such as bank return flow, floodplain storage, or interflow) have Cl concentrations that are similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occur on the rising limbs of hydrographs followed by inflows of low-salinity water from the transient stores as discharge falls. The use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
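The two families of methods compared above can be sketched in a few lines: a one-parameter recursive digital filter (Lyne-Hollick form, a common choice though not necessarily the exact filter used in the study) and a two-component chemical mass balance on a conservative tracer such as Cl. The discharge and tracer series below are invented for illustration only; they are not the Barwon River data:

```python
def lyne_hollick_baseflow(q, alpha=0.925):
    """One-pass Lyne-Hollick filter: separates quickflow from a daily
    discharge series q and returns the baseflow component."""
    quick = [q[0] * 0.5]
    for i in range(1, len(q)):
        qf = alpha * quick[-1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
        quick.append(max(qf, 0.0))
    # Baseflow is total flow minus quickflow, bounded by total flow.
    return [min(max(q[i] - quick[i], 0.0), q[i]) for i in range(len(q))]

def cmb_baseflow(q, c_stream, c_base, c_runoff):
    """Two-component chemical mass balance: baseflow from the stream
    tracer concentration (e.g. Cl derived from continuous EC)."""
    out = []
    for qi, ci in zip(q, c_stream):
        frac = (ci - c_runoff) / (c_base - c_runoff)
        out.append(qi * min(max(frac, 0.0), 1.0))
    return out

# Hypothetical hydrograph (discharge) and Cl series for illustration.
q = [2, 2, 10, 25, 15, 8, 5, 3, 2.5, 2.2]
cl = [55, 55, 30, 18, 22, 30, 38, 45, 50, 52]  # mg/L
bf_filter = lyne_hollick_baseflow(q)
bf_cmb = cmb_baseflow(q, cl, c_base=60.0, c_runoff=10.0)
print(sum(bf_filter) / sum(q), sum(bf_cmb) / sum(q))
```

The contrast the abstract draws falls out of the structure: the filter attributes all slowly varying flow to baseflow, while the mass balance attributes flow to baseflow only in proportion to its tracer signature, so delayed stores with runoff-like Cl are counted differently by the two methods.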

  3. Statistics of the relative velocity of particles in turbulent flows: Monodisperse particles.

    PubMed

    Bhatnagar, Akshay; Gustavsson, K; Mitra, Dhrubaditya

    2018-02-01

We use direct numerical simulations to calculate the joint probability density function of the relative distance R and relative radial velocity component V_{R} for a pair of heavy inertial particles suspended in homogeneous and isotropic turbulent flows. At small scales the distribution is scale invariant, with a scaling exponent that is related to the particle-particle correlation dimension in phase space, D_{2}. It was argued [K. Gustavsson and B. Mehlig, Phys. Rev. E 84, 045304 (2011), 10.1103/PhysRevE.84.045304; J. Turbul. 15, 34 (2014), 10.1080/14685248.2013.875188] that the scale invariant part of the distribution has two asymptotic regimes: (1) |V_{R}|≪R, where the distribution depends solely on R, and (2) |V_{R}|≫R, where the distribution is a function of |V_{R}| alone. The probability distributions in these two regimes are matched along a straight line: |V_{R}|=z^{*}R. Our simulations confirm that this is indeed correct. We further obtain D_{2} and z^{*} as a function of the Stokes number, St. The former depends nonmonotonically on St with a minimum at about St≈0.7 and the latter has only a weak dependence on St.

  4. Statistics of the relative velocity of particles in turbulent flows: Monodisperse particles

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Akshay; Gustavsson, K.; Mitra, Dhrubaditya

    2018-02-01

    We use direct numerical simulations to calculate the joint probability density function of the relative distance R and relative radial velocity component VR for a pair of heavy inertial particles suspended in homogeneous and isotropic turbulent flows. At small scales the distribution is scale invariant, with a scaling exponent that is related to the particle-particle correlation dimension in phase space, D2. It was argued [K. Gustavsson and B. Mehlig, Phys. Rev. E 84, 045304 (2011), 10.1103/PhysRevE.84.045304; J. Turbul. 15, 34 (2014), 10.1080/14685248.2013.875188] that the scale invariant part of the distribution has two asymptotic regimes: (1) | VR|≪R , where the distribution depends solely on R , and (2) | VR|≫R , where the distribution is a function of | VR| alone. The probability distributions in these two regimes are matched along a straight line: | VR|= z*R . Our simulations confirm that this is indeed correct. We further obtain D2 and z* as a function of the Stokes number, St. The former depends nonmonotonically on St with a minimum at about St≈0.7 and the latter has only a weak dependence on St.

  5. Minimum flow unit installation at the South Edwards Hydro Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhardt, P.; Bates, D.

    1995-12-31

Niagara Mohawk Power Corp. owns and operates the 3.3 MW South Edwards Hydro Plant in Northern New York. The FERC license for this plant requires a minimum flow release in the bypass region of the river. NMPC submitted a license amendment to the FERC to permit the addition of a minimum flow unit to take advantage of this flow. The amendment was accepted, permitting the installation of the 236 kW, 60 cfs unit to proceed. The unit was installed and commissioned in 1994.

  6. Analytic expressions for Atomic Layer Deposition: coverage, throughput, and materials utilization in cross-flow, particle coating, and spatial ALD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanguas-Gil, Angel; Elam, Jeffrey W.

    2014-05-01

In this work, the authors present analytic models for atomic layer deposition (ALD) in three common experimental configurations: cross-flow, particle coating, and spatial ALD. These models, based on the plug-flow and well-mixed approximations, allow us to determine the minimum dose times and materials utilization for all three configurations. A comparison between the three models shows that throughput and precursor utilization can each be expressed by universal equations, in which the particularity of the experimental system is contained in a single parameter related to the residence time of the precursor in the reactor. For the case of cross-flow reactors, the authors show how simple analytic expressions for the reactor saturation profiles agree well with experimental results. Consequently, the analytic model can be used to extract information about the ALD surface chemistry (e.g., the reaction probability) by comparing the analytic and experimental saturation profiles, providing a useful tool for characterizing new and existing ALD processes. © 2014 American Vacuum Society

  7. Low-flow characteristics of streams in Ohio through water year 1997

    USGS Publications Warehouse

    Straub, David E.

    2001-01-01

    This report presents selected low-flow and flow-duration characteristics for 386 sites throughout Ohio. These sites include 195 long-term continuous-record stations with streamflow data through water year 1997 (October 1 to September 30) and for 191 low-flow partial-record stations with measurements into water year 1999. The characteristics presented for the long-term continuous-record stations are minimum daily streamflow; average daily streamflow; harmonic mean flow; 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 5-, 10-, 20-, and 50-year recurrence intervals; and 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent daily duration flows. The characteristics presented for the low-flow partial-record stations are minimum observed streamflow; estimated 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 10-, and 20-year recurrence intervals; and estimated 98-, 95-, 90-, 85- and 80-percent daily duration flows. The low-flow frequency and duration analyses were done for three seasonal periods (warm weather, May 1 to November 30; winter, December 1 to February 28/29; and autumn, September 1 to November 30), plus the annual period based on the climatic year (April 1 to March 31).
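The n-day minimum average low flows and percent-exceedance duration flows listed above have straightforward definitions that can be sketched directly; the frequency analysis the USGS applies to assign recurrence intervals (typically log-Pearson Type III fitting) is omitted, and the daily-flow series below is invented for illustration:

```python
import math

def n_day_min(flows, n):
    """Minimum of the n-day moving average of a daily-flow series
    (e.g. n=7 gives the annual 7-day minimum average low flow)."""
    means = [sum(flows[i:i + n]) / n for i in range(len(flows) - n + 1)]
    return min(means)

def duration_flow(flows, exceed_pct):
    """Flow equaled or exceeded exceed_pct percent of the time
    (e.g. 95 gives the 95-percent daily duration flow)."""
    ranked = sorted(flows, reverse=True)
    idx = min(len(ranked) - 1, int(len(ranked) * exceed_pct / 100.0))
    return ranked[idx]

# Hypothetical daily flows for one year of record, for illustration.
flows = [max(1.0, 50 + 40 * math.sin(2 * math.pi * d / 365)
             + 5 * ((d * 37) % 11 - 5) / 5) for d in range(365)]
print(n_day_min(flows, 7))       # annual 7-day minimum average flow
print(duration_flow(flows, 95))  # 95-percent duration flow
```

Computing the 7-day minimum for each year of record and fitting a probability distribution to those annual minima is what yields statistics such as the 7Q10 discussed in other entries of this listing.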

  8. 40 CFR Table 3 to Subpart Ec of... - Operating Parameters To Be Monitored and Minimum Measurement and Recording Frequencies

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... scrubber followed by fabric filter Wet scrubber Dry scrubber followed by fabric filter and wet scrubber... flow rate Hourly 1×hour ✔ ✔ Minimum pressure drop across the wet scrubber or minimum horsepower or amperage to wet scrubber Continuous 1×minute ✔ ✔ Minimum scrubber liquor flow rate Continuous 1×minute...

  9. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  10. 14 CFR 23.1443 - Minimum mass flow of supplemental oxygen.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... discretion. (c) If first-aid oxygen equipment is installed, the minimum mass flow of oxygen to each user may... upon an average flow rate of 3 liters per minute per person for whom first-aid oxygen is required. (d...

  11. Relation between minimum-error discrimination and optimum unambiguous discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu Daowen; SQIG-Instituto de Telecomunicacoes, Departamento de Matematica, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Avenida Rovisco Pais PT-1049-001, Lisbon; Li Lvjun

    2010-09-15

In this paper, we investigate the relationship between the minimum-error probability Q_E of ambiguous discrimination and the optimal inconclusive probability Q_U of unambiguous discrimination. It is known that for discriminating two states, the inequality Q_U ≥ 2Q_E has been proved in the literature. The main technical results are as follows: (1) We show that, for discriminating more than two states, Q_U ≥ 2Q_E may no longer hold; however, the infimum of Q_U/Q_E is 1, and there is no supremum of Q_U/Q_E, which implies that the failure probabilities of the two schemes for discriminating some states may be narrowly or widely gapped. (2) We derive two concrete formulas for the minimum-error probability Q_E and the optimal inconclusive probability Q_U, respectively, for ambiguous discrimination and unambiguous discrimination among arbitrary m simultaneously diagonalizable mixed quantum states with given prior probabilities. In addition, we show that Q_E and Q_U satisfy the relationship Q_U ≥ (m/(m-1))Q_E.
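For the two-state case cited above, both quantities have well-known closed forms for pure states: the Helstrom bound for minimum-error discrimination and the Ivanovic-Dieks-Peres limit for unambiguous discrimination. The sketch below numerically checks Q_U ≥ 2Q_E for equal priors; it illustrates only the two-state inequality, not the paper's m-state construction:

```python
import math

def helstrom_qe(overlap, p1=0.5, p2=0.5):
    """Minimum-error (Helstrom) failure probability for two pure states
    with priors p1, p2 and inner-product magnitude `overlap`."""
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p1 * p2 * overlap ** 2))

def unambiguous_qu(overlap, p1=0.5, p2=0.5):
    """Optimal inconclusive (IDP) probability for unambiguous
    discrimination of the same pair (valid for near-equal priors)."""
    return 2.0 * math.sqrt(p1 * p2) * overlap

for g in (0.1, 0.5, 0.9):
    qe, qu = helstrom_qe(g), unambiguous_qu(g)
    print(f"overlap={g}: Q_E={qe:.4f}  Q_U={qu:.4f}  Q_U >= 2*Q_E: {qu >= 2 * qe}")
```

For equal priors the inequality reduces to |γ| ≥ 1 − sqrt(1 − γ²), which holds for every overlap γ in [0, 1], so the check passes across the whole range.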

  12. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that considers the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability, and pedestrian generation probability, the system flow follows an "increasing-saturating-decreasing" trend as vehicle density increases; when the vehicle braking probability is low, emergency braking becomes likely and the saturated flow fluctuates greatly; the saturated flow decreases slightly as the pedestrian acceleration crossing probability increases; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitation of pedestrians deciding whether to back up; and the maximum flow is sensitive to the pedestrian generation probability, decreasing rapidly as that probability increases and falling to approximately zero when the probability exceeds 0.5. The simulations show that frequent crossing behavior strongly affects vehicle flow: as the pedestrian generation probability increases, vehicle flow decreases and rapidly enters a seriously congested state.
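The underlying NaSch update rule (accelerate, slow to avoid collision, brake at random, move) is compact enough to sketch; the pedestrian-conflict extension proposed in the paper is omitted, and all parameters below are illustrative:

```python
import random

def nasch_step(pos, vel, vmax, p_brake, length, rng):
    """One parallel update of the Nagel-Schreckenberg model on a ring road.
    pos: sorted cell indices of vehicles; vel: their speeds (cells/step)."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % length  # cells to next car
        v = min(vel[i] + 1, vmax)          # 1. accelerate
        v = min(v, gap)                    # 2. slow to avoid collision
        if v > 0 and rng.random() < p_brake:
            v -= 1                         # 3. random braking
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % length for i in range(n)]
    # Cars cannot overtake, so re-sorting by position keeps pairs intact.
    order = sorted(range(n), key=lambda i: new_pos[i])
    return [new_pos[i] for i in order], [new_vel[i] for i in order]

rng = random.Random(0)
length, n_cars = 100, 30
pos = sorted(rng.sample(range(length), n_cars))
vel = [0] * n_cars
for _ in range(200):
    pos, vel = nasch_step(pos, vel, vmax=5, p_brake=0.3, length=length, rng=rng)
print("mean speed:", sum(vel) / n_cars)   # flow = density * mean speed
```

Sweeping `n_cars` (density) with this loop reproduces the "increasing-saturating-decreasing" fundamental diagram described in the abstract; the crosswalk model adds pedestrian agents that modulate the effective gap.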

  13. Intelligent driving in traffic systems with partial lane discipline

    NASA Astrophysics Data System (ADS)

    Assadi, Hamid; Emmerich, Heike

    2013-04-01

    It is a most common notion in traffic theory that driving in lanes and keeping lane changes to a minimum leads to smooth and laminar traffic flow, and hence to increased traffic capacity. On the other hand, there exist persistent vehicular traffic systems that are characterised by habitual disregarding of lane markings, and partial or complete loss of laminar traffic flow. Here, we explore the stability of such systems through a microscopic traffic flow model, where the degree of lane-discipline is taken as a variable, represented by the fraction of drivers that disregard lane markings completely. The results show that lane-free traffic may win over completely ordered traffic at high densities, and that partially ordered traffic leads to the poorest overall flow, while not considering the crash probability. Partial order in a lane-free system is similar to partial disorder in a lane-disciplined system in that both lead to decreased traffic capacity. This could explain the reason why standard enforcement methods, which rely on continuous increase of order, often fail to incur order to lane-free traffic systems. The results also provide an insight into the cooperative phenomena in open systems with self-driven particles.

  14. A Potential Approach for Low Flow Selection in Water Resource Supply and Management

    Treesearch

    Ying Ouyang

    2012-01-01

    Low flow selections are essential to water resource management, water supply planning, and watershed ecosystem restoration. In this study, a new approach, namely the frequent-low (FL) approach (or frequent-low index), was developed based on the minimum frequent-low flow or level used in minimum flows and/or levels program in northeast Florida, USA. This FL approach was...

  15. Low volume flow meter

    DOEpatents

    Meixler, Lewis D.

    1993-01-01

    The low flow monitor provides a means for determining if a fluid flow meets a minimum threshold level of flow. The low flow monitor operates with a minimum of intrusion by the flow detection device into the flow. The electrical portion of the monitor is externally located with respect to the fluid stream which allows for repairs to the monitor without disrupting the flow. The electronics provide for the adjustment of the threshold level to meet the required conditions. The apparatus can be modified to provide an upper limit to the flow monitor by providing for a parallel electronic circuit which provides for a bracketing of the desired flow rate.

  16. Estimation of mussel population response to hydrologic alteration in a southeastern U.S. stream

    USGS Publications Warehouse

    Peterson, J.T.; Wisniewski, J.M.; Shea, C.P.; Rhett, Jackson C.

    2011-01-01

The southeastern United States has experienced severe, recurrent drought, rapid human population growth, and increasing agricultural irrigation during recent decades, resulting in greater demand for the water resources. During the same time period, freshwater mussels (Unioniformes) in the region have experienced substantial population declines. Consequently, there is growing interest in determining how mussel population declines are related to activities associated with water resource development. Determining the causes of mussel population declines requires, in part, an understanding of the factors influencing mussel population dynamics. We developed Pradel reverse-time, tag-recapture models to estimate survival, recruitment, and population growth rates for three federally endangered mussel species in the Apalachicola-Chattahoochee-Flint River Basin, Georgia. The models were parameterized using mussel tag-recapture data collected over five consecutive years from Sawhatchee Creek, located in southwestern Georgia. Model estimates indicated that mussel survival was strongly and negatively related to high flows during the summer, whereas recruitment was strongly and positively related to flows during the spring and summer. Using these models, we simulated mussel population dynamics under historic (1940-1969) and current (1980-2008) flow regimes and under increasing levels of water use to evaluate the relative effectiveness of alternative minimum flow regulations. The simulations indicated that the probability of simulated mussel population extinction was at least 8 times greater under current hydrologic regimes. In addition, simulations of mussel extinction under varying levels of water use indicated that the relative risk of extinction increased with increased water use across a range of minimum flow regulations.
The simulation results also indicated that our estimates of the effects of water use on mussel extinction were influenced by the assumptions about the dynamics of the system, highlighting the need for further study of mussel population dynamics. © 2011 Springer Science+Business Media, LLC (outside the USA).

  17. Optimization of memory use of fragment extension-based protein-ligand docking with an original fast minimum cost flow algorithm.

    PubMed

    Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka

    2018-06-01

The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that memorizes the evaluation result of a partial structure of a compound and reuses it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. More efficient memory usage can therefore be expected to yield further acceleration, and optimal memory usage can be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem that exploits, as constraints, the characteristics of the graph generated for this problem. The proposed algorithm, which optimized memory usage, was approximately seven times faster than existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
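The paper's specialized algorithm is not reproduced here, but the minimum cost flow problem it accelerates can be illustrated with a generic successive-shortest-paths solver; the toy network below is invented for illustration:

```python
def min_cost_flow(n, edges, source, sink, flow_target):
    """Successive-shortest-paths min cost flow on a directed graph.
    edges: list of (u, v, capacity, cost). Returns (flow_sent, total_cost)."""
    # Residual adjacency: each edge stored as [to, cap, cost, rev_index].
    graph = [[] for _ in range(n)]
    def add_edge(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    for u, v, cap, cost in edges:
        add_edge(u, v, cap, cost)

    flow_sent, total_cost = 0, 0
    while flow_sent < flow_target:
        # Bellman-Ford shortest path by cost in the residual graph.
        INF = float("inf")
        dist = [INF] * n
        parent = [None] * n        # (node, edge index) used to reach here
        dist[source] = 0
        for _ in range(n - 1):
            updated = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for ei, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, ei)
                        updated = True
            if not updated:
                break
        if dist[sink] == INF:
            break                  # no more augmenting paths
        # Find the bottleneck capacity along the path, then push flow.
        push = flow_target - flow_sent
        v = sink
        while v != source:
            u, ei = parent[v]
            push = min(push, graph[u][ei][1])
            v = u
        v = sink
        while v != source:
            u, ei = parent[v]
            graph[u][ei][1] -= push
            graph[graph[u][ei][0]][graph[u][ei][3]][1] += push
            v = u
        flow_sent += push
        total_cost += push * dist[sink]
    return flow_sent, total_cost

# Toy network: two paths from 0 to 3; the cheap path has limited capacity.
edges = [(0, 1, 2, 1), (0, 2, 3, 4), (1, 3, 2, 1), (2, 3, 3, 1)]
print(min_cost_flow(4, edges, source=0, sink=3, flow_target=4))
```

In the docking setting described above, nodes would represent partial-structure evaluation results and edge costs the price of keeping or recomputing them; the speedup the authors report comes from exploiting the special structure of that graph rather than running a general-purpose solver like this one.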

  18. Low-flow frequency and flow duration of selected South Carolina streams in the Savannah and Salkehatchie River Basins through March 2014

    USGS Publications Warehouse

    Feaster, Toby D.; Guimaraes, Wladmir B.

    2016-07-14

An ongoing understanding of streamflow characteristics of the rivers and streams in South Carolina is important for the protection and preservation of the State’s water resources. Information concerning the low-flow characteristics of streams is especially important during critical flow periods, such as during the historic droughts that South Carolina has experienced in the past few decades. In 2008, the U.S. Geological Survey, in cooperation with the South Carolina Department of Health and Environmental Control, initiated a study to update low-flow statistics at continuous-record streamgaging stations operated by the U.S. Geological Survey in South Carolina. This report presents the low-flow statistics for 28 selected streamgaging stations in the Savannah and Salkehatchie River Basins in South Carolina. The low-flow statistics include daily mean flow durations for the 5-, 10-, 25-, 50-, 75-, 90-, and 95-percent probability of exceedance and the annual minimum 1-, 3-, 7-, 14-, 30-, 60-, and 90-day mean flows with recurrence intervals of 2, 5, 10, 20, 30, and 50 years, depending on the length of record available at the streamgaging station. The low-flow statistics were computed from records available through March 31, 2014. Low-flow statistics are influenced by length of record, hydrologic regime under which the data were collected, analytical techniques used, and other factors, such as urbanization, diversions, and droughts that may have occurred in the basin. To assess changes in the low-flow statistics from the previously published values, a comparison of the low-flow statistics for the annual minimum 7-day average streamflow with a 10-year recurrence interval (7Q10) from this study was made with the most recently published values. Of the 28 streamgaging stations for which recurrence interval computations were made, 14 streamgaging stations were suitable for comparing to low-flow statistics that were previously published in U.S. Geological Survey reports.
These comparisons indicated that seven of the streamgaging stations had values lower than the previous values, two streamgaging stations had values higher than the previous values, and two streamgaging stations had values that were unchanged from previous values. The remaining three stations for which previous 7Q10 values were computed, which are located on the main stem of the Savannah River, were not compared with current estimates because of differences in the way the pre-regulation and regulated flow data were analyzed.
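
The 7Q10 statistic compared above can be sketched in a few lines. This is a simplified illustration, not the USGS procedure: annual minimum 7-day means are computed from daily records, and the 10-year recurrence value is estimated here with a log-normal fit, whereas published USGS statistics are typically fitted with a log-Pearson Type III distribution and additional record screening. Function names are illustrative.

```python
import numpy as np

Z10 = -1.2816  # standard-normal quantile for 10% non-exceedance probability


def annual_min_7day(daily_flows_by_year):
    """Annual minimum 7-day mean flow for each year of daily records."""
    mins = []
    for flows in daily_flows_by_year:
        flows = np.asarray(flows, dtype=float)
        # 7-day moving average over the year, then the yearly minimum
        window = np.convolve(flows, np.ones(7) / 7.0, mode="valid")
        mins.append(window.min())
    return np.array(mins)


def seven_q_ten(annual_mins):
    """7Q10-style estimate: the annual-minimum 7-day flow with a 10-year
    recurrence interval (10% non-exceedance), here from a log-normal fit."""
    logs = np.log(np.asarray(annual_mins, dtype=float))
    return float(np.exp(logs.mean() + Z10 * logs.std(ddof=1)))
```

Because the 10-year low flow sits in the lower tail, the estimate falls below the typical annual minimum, matching the intuition that 7Q10 is a drought-severity threshold.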

  19. Modeled intermittency risk for small streams in the Upper Colorado River Basin under climate change

    USGS Publications Warehouse

    Reynolds, Lindsay V.; Shafroth, Patrick B.; Poff, N. LeRoy

    2015-01-01

    Longer, drier summers projected for arid and semi-arid regions of western North America under climate change are likely to have enormous consequences for water resources and river-dependent ecosystems. Many climate change scenarios for this region involve decreases in mean annual streamflow, late summer precipitation and late-summer streamflow in the coming decades. Intermittent streams are already common in this region, and it is likely that minimum flows will decrease and some perennial streams will shift to intermittent flow under climate-driven changes in timing and magnitude of precipitation and runoff, combined with increases in temperature. To understand current intermittency among streams and analyze the potential for streams to shift from perennial to intermittent under a warmer climate, we analyzed historic flow records from streams in the Upper Colorado River Basin (UCRB). Approximately two-thirds of 115 gaged stream reaches included in our analysis are currently perennial and the rest have some degree of intermittency. Dry years with combinations of high temperatures and low precipitation were associated with more zero-flow days. Mean annual flow was positively related to minimum flows, suggesting that potential future declines in mean annual flows will correspond with declines in minimum flows. The most important landscape variables for predicting low flow metrics were precipitation, percent snow, potential evapotranspiration, soils, and drainage area. Perennial streams in the UCRB that have high minimum-flow variability and low mean flows are likely to be most susceptible to increasing streamflow intermittency in the future.

  20. An analysis of potential water availability from the Atwood, Leesville, and Tappan Lakes in the Muskingum River Watershed, Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2013-01-01

    This report presents the results of a study to assess potential water availability from the Atwood, Leesville, and Tappan Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for the Atwood Lake to 73 calendar years for the Leesville and Tappan Lakes. The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. 
In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October and February. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
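
The daily withdrawal rule described above, pump up to capacity while leaving at least the target minimum flow-by downstream, can be sketched directly. The function name and units (million gallons per day) are illustrative, not taken from the report:

```python
def daily_withdrawals(outflows_mgd, target_flowby_mgd, pump_capacity_mgd):
    """For each daily lake outflow, withdraw up to the pumping capacity
    while leaving at least the target flow-by amount downstream."""
    withdrawals, flowbys = [], []
    for q in outflows_mgd:
        # Only the surplus above the flow-by target is available to pump
        w = min(pump_capacity_mgd, max(0.0, q - target_flowby_mgd))
        withdrawals.append(w)
        flowbys.append(q - w)
    return withdrawals, flowbys
```

Running this over a historical outflow series and tabulating percentiles of the two returned series reproduces, in miniature, the report's order-statistics analysis: raising the flow-by target cuts into the low-percentile withdrawals, while raising pump capacity mainly trims the high-percentile flow-by amounts.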

  1. Establishing Minimum Flow Requirements Based on Benthic Vegetation: What are Some Issues Related to Identifying Quantity of Inflow and Tools Used to Quantify Ecosystem Response?

    NASA Astrophysics Data System (ADS)

    Hunt, M. J.; Nuttle, W. K.; Cosby, B. J.; Marshall, F. E.

    2005-05-01

    Establishing minimum flow requirements in aquatic ecosystems is one way to stipulate controls on water withdrawals in a watershed. The basis of the determination is to identify the amount of flow needed to sustain a threshold ecological function. To develop minimum flow criteria, an understanding of ecological response in relation to flow is essential. Several steps are needed, including: (1) identification of important resources and ecological functions, (2) compilation of available information, (3) determination of historical conditions, (4) establishment of technical relationships between inflow and resources, and (5) identification of numeric criteria that reflect the threshold at which resources are harmed. The process is interdisciplinary, requiring the integration of hydrologic and ecologic principles with quantitative assessments. The tools used to quantify ecological response, and key questions about how the quantity of flow influences the ecosystem, are examined by comparing minimum flow determinations in two different aquatic systems in South Florida. Each system is characterized by substantial hydrologic alteration. The first, the Caloosahatchee River, is a riverine system located on the southwest coast of Florida. The second, the Everglades-Florida Bay ecotone, is a wetland mangrove ecosystem located on the southern tip of the Florida peninsula. In both cases, freshwater submerged aquatic vegetation (Vallisneria americana or Ruppia maritima), located in areas of the saltwater-freshwater interface, has been identified as a basis for minimum flow criteria. The integration of field studies, laboratory studies, and literature review was required. From this information we developed ecological modeling tools to quantify and predict plant growth in response to varying environmental variables. 
Coupled with hydrologic modeling tools, these ecological models address questions relating to the quantity and timing of flow and the ecological consequences relative to normal variability.

  2. Integrating expert opinion with modelling for quantitative multi-hazard risk assessment in the Eastern Italian Alps

    NASA Astrophysics Data System (ADS)

    Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.

    2016-11-01

    Extreme rainfall events are the main triggers of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data related to past events and causal factors, and because of the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, prone to debris flows and floods, in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability. These result in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show good performance when compared with the historical damage reports.

  3. Edge Probability and Pixel Relativity-Based Speckle Reducing Anisotropic Diffusion.

    PubMed

    Mishra, Deepak; Chaudhury, Santanu; Sarkar, Mukul; Soin, Arvinder Singh; Sharma, Vivek

    2018-02-01

    Anisotropic diffusion filters are among the best choices for speckle reduction in ultrasound images. These filters control the diffusion flux flow using local image statistics and provide the desired speckle suppression. However, inefficient use of edge characteristics results in either an oversmoothed image or an image containing misinterpreted spurious edges. As a result, the diagnostic quality of the images becomes a concern. To alleviate such problems, a novel anisotropic diffusion-based speckle reducing filter is proposed in this paper. A probability density function of the edges, along with pixel relativity information, is used to control the diffusion flux flow. The probability density function helps in removing the spurious edges, and the pixel relativity reduces the oversmoothing effects. Furthermore, the filtering is performed in the superpixel domain to reduce the execution time, wherein a minimum of 15% of the total number of image pixels can be used. For performance evaluation, 31 frames of three synthetic images and 40 real ultrasound images are used. In most of the experiments, the proposed filter shows better performance than the state-of-the-art filters in terms of the speckle region's signal-to-noise ratio and mean square error. It also shows comparable performance for figure of merit and the structural similarity index measure. Furthermore, in the subjective evaluation performed by expert radiologists, the proposed filter's outputs are preferred for the improved contrast and sharpness of the object boundaries. Hence, the proposed filtering framework is suitable for reducing unwanted speckle and improving the quality of ultrasound images.
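
The paper's edge-probability filter is not reproduced here, but the underlying mechanism, attenuating the diffusion flux wherever large gradients suggest an edge, is the classical Perona-Malik anisotropic diffusion scheme, sketched below. Parameter values are illustrative, and image borders wrap for brevity:

```python
import numpy as np


def perona_malik(img, n_iter=20, kappa=0.1, step=0.2):
    """Classical Perona-Malik anisotropic diffusion: the flux to each
    neighbour is scaled by an edge-stopping conductance that shrinks
    where gradients are large, smoothing noise while preserving edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (wrap-around borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u

        def g(d):
            # Conductance g = exp(-(|grad u| / kappa)^2); small at edges
            return np.exp(-((d / kappa) ** 2))

        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The paper replaces this fixed conductance with one driven by an edge probability density and pixel relativity; the diffusion update itself keeps the same structure.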

  4. On-Site Incineration of Contaminated Soil: A Study into U.S. Navy Applications

    DTIC Science & Technology

    1991-08-01

    venturi scrubber Minimum water flow rate and pH to absorber Minimum water/alkaline reagent flow to dry scrubber Minimum particulate scrubber blowdown...remove hydrochloric acid and sulfur dioxide from flue gases using, for example, wet scrubbers and limestone adsorption towers, respectively. Modified...Reagent preparation 8) Blending 26) Fugitive emission control 9) Pretreatment 27) Scrubber liquid cooling 10) Blended and pretreated solid waste

  5. Aeroacoustic and aerodynamic applications of the theory of nonequilibrium thermodynamics

    NASA Technical Reports Server (NTRS)

    Horne, W. Clifton; Smith, Charles A.; Karamcheti, Krishnamurty

    1991-01-01

    Recent developments in the field of nonequilibrium thermodynamics associated with viscous flows are examined and related to advances in the understanding of specific phenomena in aerodynamics and aeroacoustics. A key element of the nonequilibrium theory is the principle of minimum entropy production rate for steady dissipative processes near equilibrium, and variational calculus is used to apply this principle to several examples of viscous flow. A review of nonequilibrium thermodynamics and its role in fluid motion is presented. Several formulations are presented of the local entropy production rate and the local energy dissipation rate, two quantities that are of central importance to the theory. These expressions and the principle of minimum entropy production rate for steady viscous flows are used to identify parallel-wall channel flow and irrotational flow as having minimally dissipative velocity distributions. Features of irrotational, steady, viscous flow near an airfoil, such as the effect of trailing-edge radius on circulation, are also found to be compatible with the minimum principle. Finally, the minimum principle is used to interpret the stability of infinitesimal and finite-amplitude disturbances in an initially laminar, parallel shear flow, with results that are consistent with experiment and linearized hydrodynamic stability theory. These results suggest that a thermodynamic approach may be useful in unifying the understanding of many diverse phenomena in aerodynamics and aeroacoustics.

  6. Hydrologic reconnaissance of the geothermal area near Klamath Falls, Oregon

    USGS Publications Warehouse

    Sammel, E.A.; Peterson, D.L.

    1976-01-01

    Geothermal phenomena observed in the vicinity of Klamath Falls include hot springs with temperatures that approach 204°F (96°C) (the approximate boiling temperature for the altitude), steam and water wells with temperatures that exceed 212°F (100°C), and hundreds of warm-water wells with temperatures mostly ranging from 68° to 95°F (20° to 35°C). Although warm waters are encountered by wells throughout much of the 350 square miles (900 square kilometers) of the area studied, waters with temperatures exceeding 140°F (60°C) are confined to three relatively restricted areas: the northeast part of the City of Klamath Falls, Olene Gap, and the southwest flank of the Klamath Hills. The hot waters are located near, and are presumably related to, major fault and fracture zones of the Basin and Range type. The displaced crustal blocks are composed of basaltic flow rocks and pyroclastics of Miocene to Pleistocene age, and of sediments and basalt flows of the Yonna Formation of Pliocene age. Dip-slip movement along the high-angle faults may be as much as 6,000 feet (1,800 meters) in places. Shallow ground water of local meteoric origin moves through the upper 1,000 to 1,500 feet (300 to 450 meters) of sediments and volcanic rocks at relatively slow rates. A small amount of ground water, perhaps 100,000 acre-feet (1.2 × 10^8 cubic meters) per year, leaves the area in flow toward the southwest, but much of the ground water is discharged as evapotranspiration within the basin. Average annual precipitation on 7,317 square miles (18,951 square kilometers) of land surface near Klamath Falls is estimated to be 18.16 inches (461 millimeters), of which between 12 and 14 inches (305 and 356 millimeters) is estimated to be lost through evapotranspiration. Within the older basaltic rocks of the area, hydraulic conductivities are greater than in the shallow sediments, and ground water may move relatively freely parallel to the northwest-southeast structural trend. 
Recharge to the geothermal systems probably occurs as water in the deeper basalt rocks penetrates downward along the extensive fracture zones that transect the area. Shallow meteoric water that is assumed to be the source of the thermal waters has low dissolved-solids concentrations generally dominated by calcium and bicarbonate. During its passage through the geothermal reservoir, the water gains dissolved solids in amounts up to about 900 milligrams per liter. Sodium and sulfate become the dominant ions. Chloride concentrations remain relatively low, and silica concentrations increase from an average of about 35 milligrams per liter to about 100 milligrams per liter. Both cation ratios and silica concentrations in the hot waters indicate that reservoir temperatures are relatively low. The estimate arrived at in this study for the minimum reservoir temperature is 130°C. Silica concentrations are probably more reliable than cation ratios for estimates of reservoir temperatures for these waters. Other chemical indicators, including oxygen and deuterium isotopes, consistently indicate that reservoir temperatures are probably not much greater than the minimum estimate. Temperature distributions and heat flows in the shallow rocks of the area are strongly influenced by convective flow of water. Most observed temperature gradients and estimated heat flows are believed to be unreliable as indicators of conditions in or directly above the thermal reservoir. Some evidence from temperature profiles suggests, however, that heat flow in the Lower Klamath Lake basin is about 1.4 microcalories per square centimeter per second (1.4 HFU), a value that is near the minimum expected for the Basin and Range province. The net thermal flux discharged from springs and wells in the area is estimated to be on the order of 2 × 10^6 calories per second. Discharge by thermal waters into the shallow ground-water system beneath land surface may be many times this amount. 
Reportedly, at present only about 1,300 calories per second of geothermal heat is being put to beneficial use in the area. A conceptual model of the geothermal system at Klamath Falls suggests that most of the observed phenomena result from transport of heat in a convective hot-water system closely related to the regional fault system. Temperatures at shallow depths are elevated above normal both by convective transport and by blockage of heat flow in sediments of low thermal conductivities. Circulation of meteoric water to depths of 10,000 to 14,000 feet (3,000 to 4,300 meters) could account for the temperatures that probably exist in the thermal reservoir, assuming temperature gradients of 30° to 40°C per kilometer in a crustal zone of normal conductive heat flow. Circulation to shallower depths may be sufficient to warm the water to the required temperatures assuming the more likely conditions of convective transport of heat and the insulating effect of overlying sediments. Heat contents in the shallow hot-water system (<3 kilometers depth) are probably in the range of 12 × 10^18 calories to 36 × 10^18 calories. The geothermal resource at Klamath Falls may, therefore, be one of the largest in the United States.

  7. Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model

    NASA Astrophysics Data System (ADS)

    Yang, Yuefang; Gan, Chunhui; Shen, Tingting

    2017-05-01

    In studying the configuration of railway self-equipped tankers for a chemical logistics park, the minimum cost maximum flow model is adopted. First, the transport capacity of the park's loading and unloading area and the transportation demand for dangerous goods are taken as the model's constraint conditions; then the transport arc capacities, transport arc flows, and transport arc edge weights are determined in the transportation network diagram; finally, the model is solved in software. The calculation results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical value for the tanker management of railway transportation of dangerous goods in chemical logistics parks.
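
A generic minimum cost maximum flow solver of the kind the tanker model relies on can be sketched with the successive-shortest-path method (Bellman-Ford search on the residual graph). This is a textbook implementation, not the paper's code; node and edge data for a real park would come from the transportation network diagram:

```python
class MinCostMaxFlow:
    """Successive-shortest-path min-cost max-flow with Bellman-Ford."""

    def __init__(self, n):
        self.n = n
        self.graph = [[] for _ in range(n)]  # edge: [to, cap, cost, rev_index]

    def add_edge(self, u, v, cap, cost):
        # Forward edge plus zero-capacity reverse edge for the residual graph
        self.graph[u].append([v, cap, cost, len(self.graph[v])])
        self.graph[v].append([u, 0, -cost, len(self.graph[u]) - 1])

    def solve(self, s, t):
        total_flow, total_cost = 0, 0
        while True:
            # Bellman-Ford: cheapest augmenting path in the residual graph
            dist = [float("inf")] * self.n
            in_edge = [None] * self.n
            dist[s] = 0
            for _ in range(self.n - 1):
                updated = False
                for u in range(self.n):
                    if dist[u] == float("inf"):
                        continue
                    for i, (v, cap, cost, _) in enumerate(self.graph[u]):
                        if cap > 0 and dist[u] + cost < dist[v]:
                            dist[v] = dist[u] + cost
                            in_edge[v] = (u, i)
                            updated = True
                if not updated:
                    break
            if dist[t] == float("inf"):
                return total_flow, total_cost  # no augmenting path remains
            # Bottleneck capacity along the path, then push that much flow
            f, v = float("inf"), t
            while v != s:
                u, i = in_edge[v]
                f = min(f, self.graph[u][i][1])
                v = u
            v = t
            while v != s:
                u, i = in_edge[v]
                self.graph[u][i][1] -= f
                self.graph[v][self.graph[u][i][3]][1] += f
                v = u
            total_flow += f
            total_cost += f * dist[t]
```

In the tanker setting, arc capacities encode loading/unloading throughput, arc costs encode transport cost, and the solver returns the cheapest feasible assignment at maximum flow.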

  8. Three methods for estimating a range of vehicular interactions

    NASA Astrophysics Data System (ADS)

    Krbálek, Milan; Apeltauer, Jiří; Apeltauer, Tomáš; Szabová, Zuzana

    2018-02-01

    We present three different approaches to estimating the number of preceding cars that influence the decision-making of a given driver moving in saturated traffic flows. The first method is based on correlation analysis, the second evaluates (quantitatively) deviations from the main assumption in the convolution theorem for probability, and the third operates with advanced instruments of the theory of counting processes (statistical rigidity). We demonstrate that the universally accepted premise of short-ranged traffic interactions may not be correct. All methods introduced reveal that the minimum number of actively-followed vehicles is two. This supports the idea that vehicular interactions are, in fact, middle-ranged. Furthermore, the consistency between the estimations is surprisingly credible. In all cases we have found that the interaction range (the number of actively-followed vehicles) drops with traffic density. Whereas drivers moving in congested regimes with lower density (around 30 vehicles per kilometer) react to four or five neighbors, drivers moving in high-density flows respond to only two predecessors.

  9. Detection of a dynamic topography signal in last interglacial sea-level records

    PubMed Central

    Austermann, Jacqueline; Mitrovica, Jerry X.; Huybers, Peter; Rovere, Alessio

    2017-01-01

    Estimating minimum ice volume during the last interglacial based on local sea-level indicators requires that these indicators are corrected for processes that alter local sea level relative to the global average. Although glacial isostatic adjustment is generally accounted for, global scale dynamic changes in topography driven by convective mantle flow are generally not considered. We use numerical models of mantle flow to quantify vertical deflections caused by dynamic topography and compare predictions at passive margins to a globally distributed set of last interglacial sea-level markers. The deflections predicted as a result of dynamic topography are significantly correlated with marker elevations (>95% probability) and are consistent with construction and preservation attributes across marker types. We conclude that a dynamic topography signal is present in the elevation of last interglacial sea-level records and that the signal must be accounted for in any effort to determine peak global mean sea level during the last interglacial to within an accuracy of several meters. PMID:28695210

  10. The constructal law of design and evolution in nature

    PubMed Central

    Bejan, Adrian; Lorente, Sylvie

    2010-01-01

    Constructal theory is the view that (i) the generation of images of design (pattern, rhythm) in nature is a phenomenon of physics and (ii) this phenomenon is covered by a principle (the constructal law): ‘for a finite-size flow system to persist in time (to live) it must evolve such that it provides greater and greater access to the currents that flow through it’. This law is about the necessity of design to occur, and about the time direction of the phenomenon: the tape of the design evolution ‘movie’ runs such that existing configurations are replaced by globally easier flowing configurations. The constructal law has two useful sides: the prediction of natural phenomena and the strategic engineering of novel architectures, based on the constructal law, i.e. not by mimicking nature. We show that the emergence of scaling laws in inanimate (geophysical) flow systems is the same phenomenon as the emergence of allometric laws in animate (biological) flow systems. Examples are lung design, animal locomotion, vegetation, river basins, turbulent flow structure, self-lubrication and natural multi-scale porous media. This article outlines the place of the constructal law as a self-standing law in physics, which covers all the ad hoc (and contradictory) statements of optimality such as minimum entropy generation, maximum entropy generation, minimum flow resistance, maximum flow resistance, minimum time, minimum weight, uniform maximum stresses and characteristic organ sizes. Nature is configured to flow and move as a conglomerate of ‘engine and brake’ designs. PMID:20368252

  11. The constructal law of design and evolution in nature.

    PubMed

    Bejan, Adrian; Lorente, Sylvie

    2010-05-12

    Constructal theory is the view that (i) the generation of images of design (pattern, rhythm) in nature is a phenomenon of physics and (ii) this phenomenon is covered by a principle (the constructal law): 'for a finite-size flow system to persist in time (to live) it must evolve such that it provides greater and greater access to the currents that flow through it'. This law is about the necessity of design to occur, and about the time direction of the phenomenon: the tape of the design evolution 'movie' runs such that existing configurations are replaced by globally easier flowing configurations. The constructal law has two useful sides: the prediction of natural phenomena and the strategic engineering of novel architectures, based on the constructal law, i.e. not by mimicking nature. We show that the emergence of scaling laws in inanimate (geophysical) flow systems is the same phenomenon as the emergence of allometric laws in animate (biological) flow systems. Examples are lung design, animal locomotion, vegetation, river basins, turbulent flow structure, self-lubrication and natural multi-scale porous media. This article outlines the place of the constructal law as a self-standing law in physics, which covers all the ad hoc (and contradictory) statements of optimality such as minimum entropy generation, maximum entropy generation, minimum flow resistance, maximum flow resistance, minimum time, minimum weight, uniform maximum stresses and characteristic organ sizes. Nature is configured to flow and move as a conglomerate of 'engine and brake' designs.

  12. Surface reaction rate and probability of ozone and alpha-terpineol on glass, polyvinyl chloride, and latex paint surfaces.

    PubMed

    Shu, Shi; Morrison, Glenn C

    2011-05-15

    Ozone can react homogeneously with unsaturated organic compounds in buildings to generate undesirable products. However, these reactions can also occur on indoor surfaces, especially for low-volatility organics. Conversion rates of ozone with α-terpineol, a representative low-volatility compound, were quantified on surfaces that mimic indoor substrates. Rates were measured for α-terpineol adsorbed to beads of glass, polyvinyl chloride (PVC), and dry latex paint in a plug flow reactor. A newly defined second-order surface reaction rate coefficient, k2, was derived from the flow reactor model. The value of k2 ranged from 0.68 × 10^-14 cm^4 s^-1 molecule^-1 for α-terpineol adsorbed to PVC to 3.17 × 10^-14 cm^4 s^-1 molecule^-1 for glass, but was insensitive to relative humidity. Further, k2 is only weakly influenced by the adsorbed mass but instead appears to be more strongly related to the interfacial activity of α-terpineol. The minimum reaction probability ranged from 3.79 × 10^-6 for glass at 20% RH to 6.75 × 10^-5 for PVC at 50% RH. The combination of high equilibrium surface coverage and high reactivity for α-terpineol suggests that surface conversion rates are fast enough to compete with or even overwhelm other removal mechanisms in buildings, such as gas-phase conversion and air exchange.

  13. Definition of hydraulic stability of KVGM-100 hot-water boiler and minimum water flow rate

    NASA Astrophysics Data System (ADS)

    Belov, A. A.; Ozerov, A. N.; Usikov, N. V.; Shkondin, I. A.

    2016-08-01

    In domestic power engineering, quantitative and qualitative-quantitative methods of adjusting the load of heat supply systems are widely used; furthermore, during the greater part of the heating period the actual discharge of network water is less than the estimated values when changing to quantitative adjustment. Hence, the hydraulic circuits of hot-water boilers should ensure water velocities that minimize scale formation and exclude the formation of stagnant zones. The article presents the results of calculations of the KVGM-100 hot-water boiler and its minimum water flow rate for the basic and peak modes under the condition that no surface boiling occurs. The minimum water flow rates at underheating to the saturation state and the thermal flows in the furnace chamber were determined. The boiler hydraulic calculation was performed using the "Hydraulic" program, and the permissible and actual velocities of water movement in the pipes of the heating surfaces were analyzed. Based on the thermal calculations of the furnace chamber and the thermal-hydraulic calculations of the heating surfaces, the following conclusions were drawn: the minimum velocity of water movement (by the surface-boiling condition) increases from 0.64 to 0.79 m/s for upward flow and from 1.14 to 1.38 m/s for downward flow; the minimum water flow rate through the boiler in the basic mode (by the surface-boiling condition) increases from 887 t/h at 20% load to 1074 t/h at 100% load. The minimum flow rate of 1074 t/h at nominal load is achieved at a boiler-outlet pressure of 1.1 MPa; the minimum water flow rate through the boiler in the peak mode, by the surface-boiling condition, increases from 1669 t/h at 20% load to 2021 t/h at 100% load.

  14. 14 CFR 121.335 - Equipment standards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...

  15. 14 CFR 121.335 - Equipment standards.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...

  16. 14 CFR 121.335 - Equipment standards.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...

  17. 14 CFR 121.335 - Equipment standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...

  18. 14 CFR 121.335 - Equipment standards.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...

  19. Peak-flow characteristics of Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute

    2011-01-01

    Peak-flow annual exceedance probabilities, also called probability-percent chance flow estimates, and regional regression equations are provided describing the peak-flow characteristics of Virginia streams. Statistical methods are used to evaluate peak-flow data. Analysis of Virginia peak-flow data collected from 1895 through 2007 is summarized. Methods are provided for estimating unregulated peak flow of gaged and ungaged streams. Station peak-flow characteristics identified by fitting the logarithms of annual peak flows to a Log Pearson Type III frequency distribution yield annual exceedance probabilities of 0.5, 0.4292, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, and 0.002 for 476 streamgaging stations. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression model equations for six physiographic regions to estimate regional annual exceedance probabilities at gaged and ungaged sites. Weighted peak-flow values that combine annual exceedance probabilities computed from gaging station data and from regional regression equations provide improved peak-flow estimates. Text, figures, and lists are provided summarizing selected peak-flow sites, delineated physiographic regions, peak-flow estimates, basin characteristics, regional regression model equations, error estimates, definitions, data sources, and candidate regression model equations. This study supersedes previous studies of peak flows in Virginia.
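
The station frequency analysis described above, fitting the logarithms of annual peaks to a Log Pearson Type III distribution, can be sketched as follows. This is a minimal illustration with hypothetical peak-flow values; the USGS procedure additionally weights the station skew with a regional skew, which is omitted here.

```python
import numpy as np
from scipy import stats

def lp3_quantiles(annual_peaks, aeps):
    """Fit a log-Pearson Type III distribution to annual peak flows and
    return the peak-flow quantile for each annual exceedance probability."""
    logs = np.log10(np.asarray(annual_peaks, dtype=float))
    skew = stats.skew(logs, bias=False)  # station skew only (no regional weighting)
    dist = stats.pearson3(skew, loc=logs.mean(), scale=logs.std(ddof=1))
    # An AEP is an exceedance probability, so take the (1 - AEP) quantile
    return {aep: 10 ** dist.ppf(1.0 - aep) for aep in aeps}

peaks = [1200, 950, 2100, 1800, 760, 1430, 3200, 890, 1650, 2400]  # hypothetical, ft3/s
q = lp3_quantiles(peaks, aeps=[0.5, 0.1, 0.01])  # 2-, 10-, and 100-year floods
```

As expected, the estimated flood magnitude grows as the annual exceedance probability shrinks.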

  20. Probability and volume of potential postwildfire debris flows in the 2010 Fourmile burn area, Boulder County, Colorado

    USGS Publications Warehouse

    Ruddy, Barbara C.; Stevens, Michael R.; Verdin, Kristine

    2010-01-01

    This report presents a preliminary emergency assessment of the debris-flow hazards from drainage basins burned by the Fourmile Creek fire in Boulder County, Colorado, in 2010. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of debris-flow occurrence and the volumes of debris flows for selected drainage basins. Data for the models include burn severity; rainfall total and intensity for a 25-year-recurrence, 1-hour-duration rainstorm; and topographic and soil-property characteristics. Several of the selected drainage basins in Fourmile Creek and Gold Run were identified as having probabilities of debris-flow occurrence greater than 60 percent, and many more as having probabilities greater than 45 percent, in response to the 25-year-recurrence, 1-hour rainfall. None of the selected Fourmile Canyon Creek drainage basins had probabilities greater than 45 percent. Throughout the Gold Run area and the Fourmile Creek area upstream from Gold Run, the higher probabilities tend to be in basins with southerly aspects (southeast, south, and southwest slopes). Many basins along the perimeter of the fire area were identified as having a low probability of debris-flow occurrence. Volumes of debris flows predicted from drainage basins with probabilities of occurrence greater than 60 percent ranged from 1,200 to 9,400 m3. The moderately high probabilities and some of the larger volumes predicted for the modeled storm indicate a potential for substantial debris-flow effects on buildings, roads, bridges, culverts, and reservoirs located both within these drainages and immediately downstream from the burned area. However, even small debris flows that affect structures at the basin outlets could cause considerable damage.

  1. Inflight fuel tank temperature survey data

    NASA Technical Reports Server (NTRS)

    Pasion, A. J.

    1979-01-01

    Statistical summaries of the fuel and air temperature data for twelve different routes and for different aircraft models (B747, B707, DC-10, and DC-8) are given. The minimum fuel, total air, and static air temperatures expected at a 0.3-percent probability are summarized in table form. Minimum fuel temperature extremes agreed with calculated predictions, and the minimum fuel temperature did not necessarily equal the minimum total air temperature, even for extreme-weather, long-range flights.

  2. Peak flow regression equations For small, ungaged streams in Maine: Comparing map-based to field-based variables

    USGS Publications Warehouse

    Lombard, Pamela J.; Hodgkins, Glenn A.

    2015-01-01

    Regression equations to estimate peak streamflows with 1- to 500-year recurrence intervals (annual exceedance probabilities from 99 to 0.2 percent, respectively) were developed for small, ungaged streams in Maine. Equations presented here are the best available equations for estimating peak flows at ungaged basins in Maine with drainage areas from 0.3 to 12 square miles (mi2). Previously developed equations continue to be the best available equations for estimating peak flows for basin areas greater than 12 mi2. New equations presented here are based on streamflow records at 40 U.S. Geological Survey streamgages with a minimum of 10 years of recorded peak flows between 1963 and 2012. Ordinary least-squares regression techniques were used to determine the best explanatory variables for the regression equations. Traditional map-based explanatory variables were compared to variables requiring field measurements. Two field-based variables—culvert rust lines and bankfull channel widths—either were not commonly found or did not explain enough of the variability in the peak flows to warrant inclusion in the equations. The best explanatory variables were drainage area and percent basin wetlands; values for these variables were determined with a geographic information system. Generalized least-squares regression was used with these two variables to determine the equation coefficients and estimates of accuracy for the final equations.
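
The regression step, log-transformed basin characteristics explaining log peak flow, can be sketched with ordinary least squares. The numbers below are synthetic and generated from a known log-linear relation purely for illustration; the study itself used generalized least squares for the final coefficients, and the two explanatory variables simply mirror those selected (drainage area and percent basin wetlands).

```python
import numpy as np

# Hypothetical basins: drainage area (mi^2) and percent basin wetlands
area = np.array([0.5, 1.2, 2.8, 4.0, 6.5, 9.1, 11.7])
wetl = np.array([2.0, 5.5, 1.0, 8.0, 3.2, 12.0, 6.1])
# Synthetic peaks generated from a known log-linear relation, for illustration
qpk = 10 ** (1.8 + 0.9 * np.log10(area) - 0.2 * np.log10(wetl))

# Ordinary least squares on log-transformed variables, the usual form of a
# hydrologic regional regression: log10(Q) = b0 + b1*log10(A) + b2*log10(W)
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(wetl)])
b, *_ = np.linalg.lstsq(X, np.log10(qpk), rcond=None)

def predict_peak(area_mi2, pct_wetlands):
    """Estimate a peak flow at an ungaged site from the fitted coefficients."""
    return 10 ** (b[0] + b[1] * np.log10(area_mi2) + b[2] * np.log10(pct_wetlands))
```

Because the synthetic data follow the relation exactly, the fit recovers the known coefficients, which makes the mechanics easy to check.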

  3. USING RESPONSES OF OYSTERS IN ESTABLISHING MINIMUM FLOWS AND LEVELS IN THE CALOOSAHATCHEE ESTUARY, FLORIDA

    EPA Science Inventory

    Volety, Aswani K., S. Gregory Tolley and James T. Winstead. 2002. Using Responses of Oysters in Establishing Minimum Flows and Levels in the Caloosahatchee Estuary, Florida (Abstract). Presented at the 6th International Conference on Shellfish Restoration, 20-24 November 2002, Ch...

  4. 40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... flows and/or tracer gas concentrations for transient and ramped modal cycles to validate the minimum... mode-average values instead of continuous measurements for discrete mode steady-state duty cycles... molar flow data. This involves determination of at least two of the following three quantities: Raw...

  5. Sufficient condition for finite-time singularity and tendency towards self-similarity in a high-symmetry flow

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Bhattacharjee, A.

    A highly symmetric Euler flow, first proposed by Kida (1985) and recently simulated by Boratav and Pelz (1994), is considered. It is found that the fourth-order spatial derivative of the pressure (p_xxxx) at the origin is most probably positive. It is demonstrated that if p_xxxx grows fast enough, there must be a finite-time singularity (FTS). For a random energy spectrum E(k) ∝ k^(-v), an FTS can occur if the spectral index v < 3. Furthermore, a positive p_xxxx has the dynamical consequence of reducing the third derivative of the velocity (u_xxx) at the origin. Since the expectation value of u_xxx is zero for a random distribution of energy, an ever-decreasing u_xxx means that the Kida flow has an intrinsic tendency to deviate from a random state. By assuming that u_xxx reaches the minimum value for a given spectral profile, the velocity and pressure are found to have locally self-similar forms similar in shape to those found in numerical simulations. Such a quasi-self-similar solution relaxes the requirement for an FTS to v < 6. A special self-similar solution that satisfies Kelvin's circulation theorem and exhibits an FTS is found for v = 2.
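
In the notation of the abstract (subscripts denoting partial derivatives evaluated at the origin), the chain of conditions can be restated compactly; this is a transcription of the abstract's statements, not a derivation:

```latex
\[
p_{xxxx} \equiv \left.\frac{\partial^{4} p}{\partial x^{4}}\right|_{x=0},
\qquad
u_{xxx} \equiv \left.\frac{\partial^{3} u}{\partial x^{3}}\right|_{x=0},
\qquad
E(k) \propto k^{-v}.
\]
% Finite-time singularity (FTS) if p_xxxx grows fast enough:
%   random spectrum:            FTS possible for v < 3
%   quasi-self-similar state:   FTS possible for v < 6
%   special self-similar case:  v = 2 (satisfies Kelvin's circulation theorem)
```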

  6. An analysis of potential water availability from the Charles Mill, Clendening, Piedmont, Pleasant Hill, Senecaville, and Wills Creek Lakes in the Muskingum River Watershed, Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2014-01-01

    This report presents the results of a study to assess potential water availability from the Charles Mill, Clendening, Piedmont, Pleasant Hill, Senecaville, and Wills Creek Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data (where available) and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for Charles Mill, Clendening, and Piedmont Lakes to 74 calendar years for Pleasant Hill, Senecaville, and Wills Creek Lakes. 
The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate typically increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
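
The daily accounting implied by the assessment, in which water above the minimum flow-by target is available for withdrawal up to the pump capacity and the remainder continues downstream, can be sketched as follows. The outflow values are hypothetical; the units (million gallons per day) match the report's pumping scenarios.

```python
import numpy as np

def daily_withdrawal(outflow_mgd, min_flowby_mgd, pump_capacity_mgd):
    """For each day, water above the minimum flow-by target may be withdrawn,
    limited by pump capacity; the remainder passes downstream as flow-by."""
    outflow = np.asarray(outflow_mgd, dtype=float)
    surplus = np.maximum(outflow - min_flowby_mgd, 0.0)   # never dip below target
    withdrawal = np.minimum(surplus, pump_capacity_mgd)   # capped by pump capacity
    flowby = outflow - withdrawal
    return withdrawal, flowby

outflow = [0.5, 2.0, 6.0, 10.0, 1.2]   # hypothetical daily lake outflow, Mgal/d
w, fb = daily_withdrawal(outflow, min_flowby_mgd=1.0, pump_capacity_mgd=3.0)
```

Note the two effects described in the report: on low-outflow days the withdrawal is limited by the flow-by target, while on high-outflow days it is limited by pump capacity, so raising the target mainly cuts withdrawals at the low end.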

  7. Continuity vs. the Crowd-Tradeoffs Between Continuous and Intermittent Citizen Hydrology Streamflow Observations.

    PubMed

    Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine

    2017-07-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at limited numbers of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can possibly overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observational frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations with four different subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
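
The subsampling experiment, drawing random observation times with replacement from a dense record and recomputing the flow statistics, can be mimicked with a small bootstrap. The flow series below is synthetic; the study itself used 15-minute USGS records and 50 iterations per sampling frequency.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "15-minute" flow record for one year (arbitrary units): a seasonal
# baseline plus flashy, right-skewed noise
t = np.arange(365 * 96)
flow = 5 + 4 * np.sin(2 * np.pi * t / t.size) + rng.gamma(1.0, 2.0, size=t.size)

def subsampled_stats(flow, n_obs, n_iter=50):
    """Median (over bootstrap iterations) of the minimum and maximum flow seen
    when only n_obs randomly timed observations are available."""
    mins, maxs = [], []
    for _ in range(n_iter):
        idx = rng.integers(0, flow.size, size=n_obs)   # sample with replacement
        mins.append(flow[idx].min())
        maxs.append(flow[idx].max())
    return np.median(mins), np.median(maxs)

lo_monthly, hi_monthly = subsampled_stats(flow, n_obs=12)    # ~monthly visits
lo_daily, hi_daily = subsampled_stats(flow, n_obs=365)       # ~daily visits
```

Denser sampling brackets the true extremes more tightly, which is exactly the pattern the paper reports: maxima of a flashy series are much harder to capture with sparse visits than minima or runoff volumes.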

  8. The effects of flow on schooling Devario aequipinnatus: school structure, startle response and information transmission

    PubMed Central

    Chicoli, A.; Butail, S.; Lun, Y.; Bak-Coleman, J.; Coombs, S.; Paley, D.A.

    2014-01-01

    To assess how flow affects school structure and threat detection, the startle responses of solitary giant danio Devario aequipinnatus and of small groups to visual looming stimuli were compared in flow and no-flow conditions. The instantaneous position and heading of each D. aequipinnatus were extracted from high-speed videos. Behavioural results indicate that (1) school structure is altered in flow such that D. aequipinnatus orient upstream while spanning out in a crosswise direction, and (2) the probability of at least one D. aequipinnatus detecting the visual looming stimulus is higher in flow than in no flow for both solitary D. aequipinnatus and groups of eight; however, (3) the probability of three or more individuals responding is higher in no flow than in flow. Taken together, these results indicate a higher probability of stimulus detection in flow but a higher probability of internal transmission of information in no flow. Finally, the results were well predicted by a computational model of collective fright response that included the probability of direct detection (based on signal detection theory) and indirect detection (i.e. via interactions between group members) of threatening stimuli. This model provides a new theoretical framework for analysing the collective transfer of information among groups of fishes and other organisms. PMID:24773538
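
The collective-detection logic in the abstract, where more eyes raise the chance that at least one individual detects the threat, is captured by a simple binomial model. The per-fish probabilities below are illustrative only; the paper's model additionally couples signal detection theory with social transmission between group members.

```python
from math import comb

def p_at_least_k(n, p, k):
    """Probability that at least k of n independent observers detect a stimulus,
    each with per-individual detection probability p (binomial model)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Per-fish detection assumed higher in flow (p=0.40) than in still water (p=0.25)
p_any_flow = p_at_least_k(8, 0.40, 1)    # group of eight, at least one detects
p_any_still = p_at_least_k(8, 0.25, 1)
```

For k = 1 the formula collapses to the familiar 1 - (1 - p)^n, so even a modest per-individual advantage in flow produces near-certain group detection.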

  9. Effects of regulated river flows on habitat suitability for the robust redhorse

    USGS Publications Warehouse

    Fisk, J. M.; Kwak, Thomas J.; Heise, R. J.

    2015-01-01

    The Robust Redhorse Moxostoma robustum is a rare and imperiled fish, with wild populations occurring in three drainages from North Carolina to Georgia. Hydroelectric dams have altered the species’ habitat and restricted its range. An augmented minimum-flow regime that will affect Robust Redhorse habitat was recently prescribed for Blewett Falls Dam, a hydroelectric facility on the Pee Dee River, North Carolina. Our objective was to quantify suitable spawning and nonspawning habitat under current and proposed minimum-flow regimes. We implanted radio transmitters into 27 adult Robust Redhorses and relocated the fish from spring 2008 to summer 2009, and we described habitat at 15 spawning capture locations. Nonspawning habitat consisted of deep, slow-moving pools (mean depth = 2.3 m; mean velocity = 0.23 m/s), bedrock and sand substrates, and boulders or coarse woody debris as cover. Spawning habitat was characterized as shallower, faster-moving water (mean depth = 0.84 m; mean velocity = 0.61 m/s) with gravel and cobble as substrates and boulders as cover associated with shoals. Telemetry relocations revealed two behavioral subgroups: a resident subgroup (linear range [mean ± SE] = 7.9 ± 3.7 river kilometers [rkm]) that remained near spawning areas in the Piedmont region throughout the year; and a migratory subgroup (linear range = 64.3 ± 8.4 rkm) that migrated extensively downstream into the Coastal Plain region. Spawning and nonspawning habitat suitability indices were developed based on field microhabitat measurements and were applied to model suitable available habitat (weighted usable area) for current and proposed augmented minimum flows. Suitable habitat (both spawning and nonspawning) increased for each proposed seasonal minimum flow relative to former minimum flows, with substantial increases for spawning sites. Our results contribute to an understanding of how regulated flows affect available habitats for imperiled species. 
Flow managers can use these findings to regulate discharge more effectively and to create and maintain important habitats during critical periods for priority species.

  10. Minimum average 7-day, 10-year flows in the Hudson River basin, New York, with release-flow data on Rondout and Ashokan reservoirs

    USGS Publications Warehouse

    Archer, Roger J.

    1978-01-01

    Minimum average 7-day, 10-year flows at 67 gaging stations and 173 partial-record stations in the Hudson River basin are given in tabular form. Variation of the 7-day, 10-year low flow from point to point in selected reaches, and the corresponding times of travel, are shown graphically for the Wawayanda Creek, Wallkill River, Woodbury-Moodna Creek, and Fishkill Creek basins. The 7-day, 10-year low flow for the Saw Kill basin, and estimates of the 7-day, 10-year low flow of the Roeliff Jansen Kill at Ancram and of Birch Creek at Pine Hill, are given. Summaries of discharge from the Rondout and Ashokan Reservoirs, in Ulster County, are also included. Minimum average 7-day, 10-year flows for gaging stations with 10 years or more of record were determined by log-Pearson Type III computation; those for partial-record stations were developed by correlating discharge measurements made at the partial-record stations with discharge data from appropriate long-term gaging stations. The variation in low flows from point to point within the selected subbasins was estimated from available data and regional regression formulas. Time of travel at these flows in the four subbasins was estimated from available data and Boning's equations.
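
The station computation described, annual minimum 7-day mean flows fitted with a log-Pearson Type III distribution and the 10-year recurrence value taken at the 0.1 annual non-exceedance probability, can be sketched as follows (synthetic daily flows for illustration):

```python
import numpy as np
from scipy import stats

def seven_q_ten(daily_flows_by_year):
    """7Q10: fit annual minimum 7-day mean flows with a log-Pearson Type III
    distribution and return the 0.1 annual non-exceedance quantile."""
    minima = []
    for year in daily_flows_by_year:
        y = np.asarray(year, dtype=float)
        rolling7 = np.convolve(y, np.ones(7) / 7, mode="valid")  # 7-day means
        minima.append(rolling7.min())                            # annual minimum
    logs = np.log10(minima)
    skew = stats.skew(logs, bias=False)
    dist = stats.pearson3(skew, loc=logs.mean(), scale=logs.std(ddof=1))
    return 10 ** dist.ppf(0.1)   # 10-year recurrence = 0.1 non-exceedance

rng = np.random.default_rng(7)
years = [10 + 5 * rng.random(365) for _ in range(20)]   # hypothetical 20-year record
q7_10 = seven_q_ten(years)
```

The same skeleton extends to the other N-day, T-year statistics mentioned in these reports by changing the averaging window and the non-exceedance probability.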

  11. Effects of meridional flow variations on solar cycles 23 and 24

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upton, Lisa; Hathaway, David H., E-mail: lisa.a.upton@vanderbilt.edu, E-mail: lar0009@uah.edu, E-mail: david.hathaway@nasa.gov

    2014-09-10

    The faster meridional flow that preceded the solar cycle 23/24 minimum is thought to have led to weaker polar field strengths, producing the extended solar minimum and the unusually weak cycle 24. To determine the impact of meridional flow variations on the sunspot cycle, we have simulated the Sun's surface magnetic field evolution with our newly developed surface flux transport model. We investigate three different cases: a constant average meridional flow, the observed time-varying meridional flow, and a time-varying meridional flow in which the observed variations from the average have been doubled. Comparison of these simulations shows that the variations in the meridional flow over cycle 23 have a significant impact (∼20%) on the polar fields. However, the variations produced polar fields that were stronger than they would have been otherwise. We propose that the primary cause of the extended cycle 23/24 minimum and weak cycle 24 was the weakness of cycle 23 itself: with fewer sunspots, there was insufficient flux to build a big cycle. We also find that any polar counter-cells in the meridional flow (equatorward flow at high latitudes) produce flux concentrations at mid-to-high latitudes that are not consistent with observations.

  12. Minimum viewing angle for visually guided ground speed control in bumblebees.

    PubMed

    Baird, Emily; Kornfeldt, Torill; Dacke, Marie

    2010-05-01

    To control flight, flying insects extract information from the pattern of visual motion generated during flight, known as optic flow. To regulate their ground speed, insects such as honeybees and Drosophila hold the rate of optic flow in the axial direction (front-to-back) constant. A consequence of this strategy is that its performance varies with the minimum viewing angle (the deviation from the frontal direction of the longitudinal axis of the insect) at which changes in axial optic flow are detected. The greater this angle, the later changes in the rate of optic flow, caused by changes in the density of the environment, will be detected. The aim of the present study is to examine the mechanisms of ground speed control in bumblebees and to identify the extent of the visual range over which optic flow for ground speed control is measured. Bumblebees were trained to fly through an experimental tunnel consisting of parallel vertical walls. Flights were recorded when (1) the distance between the tunnel walls was either 15 or 30 cm, (2) the visual texture on the tunnel walls provided either strong or weak optic flow cues and (3) the distance between the walls changed abruptly halfway along the tunnel's length. The results reveal that bumblebees regulate ground speed using optic flow cues and that changes in the rate of optic flow are detected at a minimum viewing angle of 23-30 deg., with a visual field that extends to approximately 155 deg. By measuring optic flow over a visual field that has a low minimum viewing angle, bumblebees are able to detect and respond to changes in the proximity of the environment well before they are encountered.

  13. Low-flow characteristics of Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute

    2011-01-01

    Low-flow annual non-exceedance probabilities (ANEP), called probability-percent chance (P-percent chance) flow estimates, regional regression equations, and transfer methods are provided describing the low-flow characteristics of Virginia streams. Statistical methods are used to evaluate streamflow data. Analysis of Virginia streamflow data collected from 1895 through 2007 is summarized. Methods are provided for estimating low-flow characteristics of gaged and ungaged streams. The 1-, 4-, 7-, and 30-day average streamgaging station low-flow characteristics for 290 long-term, continuous-record, streamgaging stations are determined, adjusted for instances of zero flow using a conditional probability adjustment method, and presented for non-exceedance probabilities of 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01, and 0.005. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression equations to estimate annual non-exceedance probabilities at gaged and ungaged sites and are summarized for 290 long-term, continuous-record streamgaging stations, 136 short-term, continuous-record streamgaging stations, and 613 partial-record streamgaging stations. Regional regression equations for six physiographic regions use basin characteristics to estimate 1-, 4-, 7-, and 30-day average low-flow annual non-exceedance probabilities at gaged and ungaged sites. Weighted low-flow values that combine computed streamgaging station low-flow characteristics and annual non-exceedance probabilities from regional regression equations provide improved low-flow estimates. Regression equations developed using the Maintenance of Variance with Extension (MOVE.1) method describe the line of organic correlation (LOC) with an appropriate index site for low-flow characteristics at 136 short-term, continuous-record streamgaging stations and 613 partial-record streamgaging stations. 
Monthly streamflow statistics computed on the individual daily mean streamflows of selected continuous-record streamgaging stations and curves describing flow-duration are presented. Text, figures, and lists are provided summarizing low-flow estimates, selected low-flow sites, delineated physiographic regions, basin characteristics, regression equations, error estimates, definitions, and data sources. This study supersedes previous studies of low flows in Virginia.
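
The MOVE.1 record-extension technique cited above differs from ordinary regression in one key way: its slope is the ratio of the standard deviations (signed by the correlation) rather than the OLS slope, which preserves the variance of the estimated series. A minimal sketch with hypothetical concurrent flows at an index gage and a partial-record site:

```python
import numpy as np

def move1(x_index, y_partial):
    """MOVE.1 (line of organic correlation): slope = sign(r) * s_y / s_x,
    intercept chosen so the line passes through the means."""
    x = np.asarray(x_index, dtype=float)
    y = np.asarray(y_partial, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Hypothetical concurrent flows: long-term index gage (x) vs partial-record site (y),
# fitted in log space as is conventional for low-flow record extension
x = np.log10([12, 18, 25, 40, 60, 90])
y = np.log10([5, 8, 11, 20, 26, 45])
m, b = move1(x, y)
estimate = 10 ** (m * np.log10(30) + b)   # site estimate when the index gage reads 30
```

Because the fitted slope reproduces the spread of the short record rather than shrinking it toward the mean, statistics such as low-flow quantiles computed from the extended record are less biased than they would be under OLS.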

  14. The effects of drainage basin geomorphometry on minimum low flow discharge: the study of small watershed in Kelang River Valley in Peninsular Malaysia.

    PubMed

    Yunus, Ahmad Jailani Muhamed; Nakagoshi, Nobukazu; Salleh, Khairulmaini Osman

    2003-03-01

    This study investigates the relationships between geomorphometric properties and the minimum low-flow discharge of undisturbed drainage basins in the Taman Bukit Cahaya Seri Alam Forest Reserve, Peninsular Malaysia. The drainage basins selected were third-order basins, to provide a common base for sampling and an unbiased statistical analysis. Three levels of relationships were observed. First, significant relationships existed among the geomorphometric properties, as shown by the correlation network analysis; second, individual geomorphometric properties were observed to influence minimum flow discharge; and finally, a multiple regression model showed that minimum flow discharge (Q min) was dependent on basin area (AU), stream length (LS), maximum relief (Hmax), average relief (HAV), and stream frequency (SF). These findings reinforce other studies of this nature showing that drainage basins are dynamic and functional entities whose operation is governed by complex interrelationships occurring within the basins. A change to any of the geomorphometric properties would alter their role as basin regulators and thus change the basin response. In the case of a basin's minimum low flow, a change in any of the properties considered in the regression model influences the "time to peak" of flow; a shorter time to peak means higher discharge, which is generally considered a prerequisite to flooding. The research also concludes that the geomorphometric properties help sustain the water supply in the stream throughout the year, even during drought months with little precipitation. Drainage basins are sensitive entities, and any deterioration will affect the water supply as well as the habitat within these areas.

  15. Flow Boiling Critical Heat Flux in Reduced Gravity

    NASA Technical Reports Server (NTRS)

    Mudawar, Issam; Zhang, Hui; Hasan, Mohammad M.

    2004-01-01

    This study provides a systematic method for reducing power consumption in reduced-gravity systems by adopting the minimum velocity required to provide adequate critical heat flux (CHF) and preclude the detrimental effects of reduced gravity. The study also shows that it is possible to use existing 1-ge flow boiling and CHF correlations and models to design reduced-gravity systems, provided the minimum-velocity criteria are met.

  16. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit/miss and signal-amplitude testing, where signal amplitudes are reduced to hit/miss by applying a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
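
The 90/95 criterion has a well-known binomial expression: with x detections in n trials, the one-sided 95-percent lower confidence bound on POD follows from the Clopper-Pearson interval, and 29 hits in 29 trials is the smallest all-hit sample that demonstrates 0.90 POD at 95-percent confidence. The sketch below is the standard binomial calculation, not the DOEPOD procedure itself.

```python
from scipy import stats

def pod_lower_bound(hits, trials, confidence=0.95):
    """One-sided Clopper-Pearson lower confidence bound on the probability
    of detection, given `hits` detections in `trials` demonstrations."""
    if hits == 0:
        return 0.0
    return stats.beta.ppf(1 - confidence, hits, trials - hits + 1)

# The classic result: 29 detections in 29 trials demonstrates 90/95 POD
lb = pod_lower_bound(29, 29)
```

For the all-hit case the bound reduces to alpha**(1/n) with alpha = 0.05, which is why 29 is the magic sample size: 0.05**(1/29) just clears 0.90, while a single miss at n = 29 drops the bound below it.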

  17. Low-flow frequency and flow duration of selected South Carolina streams in the Pee Dee River basin through March 2007

    USGS Publications Warehouse

    Feaster, Toby D.; Guimaraes, Wladmir B.

    2009-01-01

    Part of the mission of the South Carolina Department of Health and Environmental Control and the South Carolina Department of Natural Resources is to protect and preserve South Carolina's water resources. Doing so requires an ongoing understanding of streamflow characteristics of the rivers and streams in South Carolina. A particular need is information concerning the low-flow characteristics of streams; this information is especially important for effectively managing the State's water resources during critical flow periods such as the severe drought that occurred between 1998 and 2002 and the most recent drought that occurred between 2006 and 2009. In 2008, the U.S. Geological Survey, in cooperation with the South Carolina Department of Health and Environmental Control, initiated a study to update low-flow statistics at continuous-record streamgaging stations operated by the U.S. Geological Survey in South Carolina. Under this agreement, the low-flow characteristics at continuous-record streamgaging stations will be updated in a systematic manner during the monitoring and assessment of the eight major basins in South Carolina as defined and grouped according to the South Carolina Department of Health and Environmental Control's Watershed Water Quality Management Strategy. Depending on the length of record available at the continuous-record streamgaging stations, low-flow frequency characteristics are estimated for annual minimum 1-, 3-, 7-, 14-, 30-, 60-, and 90-day average flows with recurrence intervals of 2, 5, 10, 20, 30, and 50 years. Low-flow statistics are presented for 18 streamgaging stations in the Pee Dee River basin. In addition, daily flow durations for the 5-, 10-, 25-, 50-, 75-, 90-, and 95-percent probability of exceedance also are presented for the stations. The low-flow characteristics were computed from records available through March 31, 2007. 
The last systematic update of low-flow characteristics in South Carolina occurred more than 20 years ago and included data through March 1987. Of the 17 streamgaging stations included in this study, 15 had low-flow characteristics that were published in previous U.S. Geological Survey reports. A comparison of the low-flow characteristic for the minimum average flow for a 7-consecutive-day period with a 10-year recurrence interval from this study with the most recently published values indicated that 10 of the 15 streamgaging stations had values that were within ±25 percent of each other. Nine of the 15 streamgaging stations had negative percentage differences indicating the low-flow statistic had decreased since the previous study, 4 streamgaging stations had positive percent differences indicating that the low-flow statistic had increased since the previous study, and 2 streamgaging stations had a zero percent difference indicating no change since the previous study. The low-flow characteristics are influenced by length of record, hydrologic regime under which the record was collected, techniques used to do the analysis, and other changes that may have occurred in the watershed.
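
The 7-consecutive-day, 10-year low flow compared above can be sketched numerically. The following Python outline is illustrative only, not the report's method: USGS low-flow studies typically fit a log-Pearson Type III distribution to the annual minima, whereas this sketch uses a simple Weibull plotting-position interpolation; the function names and synthetic data are hypothetical.

```python
import numpy as np

def annual_min_nday(daily_flows_by_year, n=7):
    """Annual minimum n-day moving-average flow for each year of record."""
    mins = {}
    for year, q in daily_flows_by_year.items():
        q = np.asarray(q, dtype=float)
        kernel = np.ones(n) / n
        nday_avg = np.convolve(q, kernel, mode="valid")  # n-day running means
        mins[year] = float(nday_avg.min())
    return mins

def recurrence_flow(annual_mins, T=10):
    """Empirical T-year low flow: the flow with non-exceedance probability 1/T
    (Weibull plotting positions). Real studies fit a distribution instead."""
    x = np.sort(list(annual_mins.values()))   # ascending: driest years first
    m = len(x)
    p = np.arange(1, m + 1) / (m + 1)         # non-exceedance probabilities
    return float(np.interp(1.0 / T, p, x))
```

With 30 or more years of record, `recurrence_flow(mins, T=10)` interpolates the 7Q10 at the 0.1 non-exceedance probability.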

  18. Inflation and Collapse of the Wai'anae Volcano (Oahu,Hawaii, USA):Insights from Magnetic Fabric Studies of Dikes

    NASA Astrophysics Data System (ADS)

    Lau, J. K. S.; Herrero-Bervera, E.; Moreira, M. A. D. A.

    2016-12-01

    The Waianae Volcano is the older of the two shield volcanoes that make up the island of Oahu. Previous age determinations suggest that the subaerial portion of the edifice erupted between approximately 3.7 and 2.7 Ma. The eroded Waianae Volcano had a well-developed caldera centered near the back of its two most prominent valleys and two major rift zones: a prominent northwest rift zone, well defined by a complex of sub-parallel dikes trending approximately N52W, and a more diffuse south rift zone trending between S20W and due south. In order to investigate the volcanic evolution, the plumbing, and the triggering mechanisms of the catastrophic mass wasting that occurred in the volcano, we have undertaken an AMS study of 7 dikes from the volcano. The width of the dikes ranged from 0.5 to 4 m. Low-field susceptibility versus temperature (k-T) and SIRM experiments identified magnetite at 575 °C and, at about 250-300 °C, a phase corresponding to titanomagnetite. Magnetic fabric studies of the dikes along a NW-SE section across the present southwestern part of the Waianae volcano have been conducted. The flow direction was studied using the imbrication angle between the dike walls and the magnetic foliation. The flow direction has been obtained in all 7 studied dikes. In the majority of cases, the maximum axis, K1, appears to be perpendicular to the flow direction, in some cases with a permutation with respect to the intermediate axis, K2, or even with respect to the minimum axis, K3. In addition, at one of the sites studied, the minimum axis, K3, is very close to the flow direction. In all cases, the magma flowed along a direction with a moderate plunge. For six of the dikes, the interpreted flow was from the internal part of the volcano towards the volcano border, and probably corresponds to the inflation phase of the volcano. 
In two cases (dikes located on the northwestern side of the volcano), the flow is slightly downwards, possibly related to distal extension due to inflation of the central part of the volcano. The study also revealed a downward flow that could correspond to another magma pulse, resulting from flow-back during distension due to the collapse of the Waianae volcano.

  19. Minimum tailwater flows in relation to habitat suitability and sport-fish harvest

    USGS Publications Warehouse

    Jacobs, K.E.; Swink, W.D.; Novotny, J.F.

    1987-01-01

    The instream flow needs of four sport fishes (rainbow trout Salmo gairdneri, channel catfish Ictalurus punctatus, smallmouth bass Micropterus dolomieui, and white crappie Pomoxis annularis) were evaluated in the tailwater below Green River Lake, Kentucky. The Newcombe method, a simple procedure developed in British Columbia that is based on the distribution of water depths and velocities at various flows, was used to predict usable habitat at seven flows. Predicted usable habitat was two to six times greater for rainbow trout than for any of the other species at all flows. Angler harvest corresponded to the predicted abundance for rainbow trout and smallmouth bass, but the catch of channel catfish and white crappies was seasonally greater than expected. The presence of the dam and reservoir apparently disrupted the normal movement and feeding patterns of these species and periodically overrode the relation between usable habitat and abundance assumed in the Newcombe method. The year-round minimum flow of 4.6 m3/s recommended for the tailwater would generally increase the amount of habitat available in the tailwater from April through October, and the minimum flow of 2.4 m3/s recommended for periods of drought would allow the maintenance of a trout fishery.
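
Habitat-prediction methods of this kind weight each transect cell by the suitability of its depth and velocity for the target species. A minimal sketch, assuming caller-supplied suitability curves; the actual Newcombe procedure uses species-specific criteria not reproduced here, so both the function and the curves in the example are hypothetical:

```python
def usable_habitat(cells, depth_suit, vel_suit):
    """Weighted usable width across a transect: each cell's width is scaled
    by the joint suitability of its depth and velocity for the species.
    cells: iterable of (width_m, depth_m, velocity_m_per_s) tuples."""
    return sum(w * depth_suit(d) * vel_suit(v) for w, d, v in cells)
```

Evaluating this at several flows (each flow giving a different set of cell depths and velocities) yields the usable-habitat-versus-flow curve that the recommended minimum flows are read from.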

  20. Glottal volume velocity waveform characteristics in subjects with and without vocal training, related to gender, sound intensity, fundamental frequency, and age.

    PubMed

    Sulter, A M; Wit, H P

    1996-11-01

    Glottal volume velocity waveform characteristics of 224 subjects, categorized in four groups according to gender and vocal training, were determined, and their relations to sound-pressure level, fundamental frequency, intra-oral pressure, and age were analyzed. Subjects phonated at three intensity conditions. The glottal volume velocity waveforms were obtained by inverse filtering the oral flow. Glottal volume velocity waveforms were parameterized with flow-based (minimum flow, ac flow, average flow, maximum flow declination rate) and time-based parameters (closed quotient, closing quotient, speed quotient), as well as with derived parameters (vocal efficiency and glottal resistance). Higher sound-pressure levels, intra-oral pressures, and flow-parameter values (ac flow, maximum flow declination rate) were observed, when compared with previous investigations. These higher values might be the result of the specific phonation tasks (stressed /ae/ vowel in a word and a sentence) or filtering processes. Few statistically significant (p < 0.01) differences in parameters were found between untrained and trained subjects [the maximum flow declination rate and the closing quotient were higher in trained women (p < 0.001), and the speed quotient was higher in trained men (p < 0.005)]. Several statistically significant parameter differences were found between men and women [minimum flow, ac flow, average flow, maximum flow declination rate, closing quotient, glottal resistance (p < 0.001), and closed quotient (p < 0.005)]. Significant effects of intensity condition were observed on ac flow, maximum flow declination rate, closing quotient, and vocal efficiency in women (p < 0.005), and on minimum flow, ac flow, average flow, maximum flow declination rate, closed quotient, and vocal efficiency in men (p < 0.01).
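
A rough sketch of how the flow-based parameters named above could be computed from a sampled glottal waveform. The function name and units are illustrative assumptions; real analyses operate on carefully inverse-filtered, cycle-segmented signals, and the time-based quotients require glottal event detection not shown here.

```python
import numpy as np

def glottal_flow_params(flow, fs):
    """Flow-based parameters of a glottal volume-velocity waveform.
    flow: samples of one or more cycles (e.g. L/s); fs: sampling rate in Hz."""
    flow = np.asarray(flow, dtype=float)
    minimum_flow = float(flow.min())           # leakage (DC) flow
    ac_flow = float(flow.max() - flow.min())   # peak-to-peak flow modulation
    average_flow = float(flow.mean())
    dflow = np.diff(flow) * fs                 # time derivative of the flow
    mfdr = float(-dflow.min())                 # maximum flow declination rate
    return minimum_flow, ac_flow, average_flow, mfdr
```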

  1. Contrasts between chemical and physical estimates of baseflow help discern multiple sources of water contributing to rivers

    NASA Astrophysics Data System (ADS)

    Cartwright, I.; Gilfedder, B.; Hofmann, H.

    2013-05-01

    This study compares geochemical and physical methods of estimating baseflow in the upper reaches of the Barwon River, southeast Australia. Estimates of baseflow from physical techniques such as local minima and recursive digital filters are higher than those based on chemical mass balance using continuous electrical conductivity (EC). Between 2001 and 2011 the baseflow flux calculated using chemical mass balance is between 1.8 × 103 and 1.5 × 104 ML yr-1 (15 to 25% of the total discharge in any one year) whereas recursive digital filters yield baseflow fluxes of 3.6 × 103 to 3.8 × 104 ML yr-1 (19 to 52% of discharge) and the local minimum method yields baseflow fluxes of 3.2 × 103 to 2.5 × 104 ML yr-1 (13 to 44% of discharge). These differences most probably reflect how the different techniques characterise baseflow. Physical methods probably aggregate much of the water from delayed sources as baseflow. However, as many delayed transient water stores (such as bank return flow or floodplain storage) are likely to be geochemically similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The mismatch between geochemical and physical estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months. Consistent with these interpretations, modelling of bank storage indicates that bank return flows provide water to the river for several weeks after flood events. EC vs. discharge variations during individual flow events also imply that an inflow of low EC water stored within the banks or on the floodplain occurs as discharge falls. The joint use of physical and geochemical techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
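
The two families of baseflow estimates being contrasted can be illustrated in miniature. Below is a hedged Python sketch of a single-pass Lyne-Hollick recursive digital filter and a two-component chemical mass balance on EC; published applications typically run the filter in three passes and calibrate the end-member EC values, so this is an outline of the techniques rather than the study's implementation.

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """One forward pass of the Lyne-Hollick recursive digital filter
    (applications usually use three passes). q: discharge series."""
    q = np.asarray(q, dtype=float)
    f = 0.0                                    # filtered quickflow
    base = np.empty_like(q)
    for k in range(len(q)):
        dq = q[k] - q[k - 1] if k > 0 else 0.0
        f = alpha * f + 0.5 * (1 + alpha) * dq
        b = q[k] - max(f, 0.0)                 # baseflow = total - quickflow
        base[k] = min(max(b, 0.0), q[k])       # constrain to [0, q]
    return base

def cmb_baseflow(q, ec, ec_runoff, ec_baseflow):
    """Two-component chemical mass balance: the baseflow fraction is
    (EC - EC_runoff) / (EC_baseflow - EC_runoff), clipped to [0, 1]."""
    q = np.asarray(q, dtype=float)
    ec = np.asarray(ec, dtype=float)
    frac = (ec - ec_runoff) / (ec_baseflow - ec_runoff)
    return q * np.clip(frac, 0.0, 1.0)
```

Because bank return flow carries runoff-like EC, `cmb_baseflow` assigns it to the runoff component while the digital filter counts it as baseflow, reproducing the sign of the mismatch described above.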

  2. Calculating e-flow using UAV and ground monitoring

    NASA Astrophysics Data System (ADS)

    Zhao, C. S.; Zhang, C. B.; Yang, S. T.; Liu, C. M.; Xiang, H.; Sun, Y.; Yang, Z. Y.; Zhang, Y.; Yu, X. Y.; Shao, N. F.; Yu, Q.

    2017-09-01

    Intense human activity has led to serious degradation of basin water ecosystems and severe reduction in the river flow available for aquatic biota. As an important water ecosystem index, environmental flows (e-flows) are crucial for maintaining sustainability. However, most e-flow measurement methods involve long cycles, low efficiency, and transdisciplinary expertise. This makes it impossible to rapidly assess river e-flows at basin or larger scales. This study presents a new method for rapidly assessing e-flows by coupling UAV and ground monitoring. A UAV was first used to derive river-course cross-sections from high-resolution stereoscopic images. A dominance index was then used to identify key fish species. Afterwards, a habitat suitability index, along with biodiversity and integrity indices, was used to determine an appropriate flow velocity with full consideration of the fish spawning period. The cross-sections and flow-velocity values were then combined in AEHRA, an e-flow assessment method, to study e-flows and their supply rate. To verify the results of this new method, the widely used Tennant method was employed. The root-mean-square errors of the river cross-sections determined by UAV are less than 0.25 m, which corresponds to 3-5% of the water depth at those cross-sections. In the study area of Jinan city, the ecological flow velocity (VE) is equal to or greater than 0.11 m/s, and the ecological water depth (HE) is greater than 0.8 m. The river ecosystem is healthy, with the minimum e-flow requirements always met, where it is close to large rivers, which is beneficial for the sustainable development of the water ecosystem. In the south river channel of Jinan, the upstream flow mostly meets the minimum e-flow requirements, and the downstream flow always meets the minimum e-flow requirements. The north of Jinan consists predominantly of artificial river channels used for irrigation. 
Rainfall rarely meets the minimum e-flow and irrigation water requirements. We suggest that the water shortage problem can be partly solved by diversion of the Yellow River. These results can provide useful information for ecological operations and restoration. The method used in this study for calculating e-flow based on a combination of UAV and ground monitoring can effectively promote research progress into basin e-flow, and provide an important reference for e-flow monitoring around the world.
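
The Tennant (Montana) method used here for verification reduces to fixed fractions of mean annual flow. A minimal sketch using the standard Tennant fractions; the study's exact implementation is not specified, so this is illustrative:

```python
def tennant_eflow(daily_flows, level="minimum"):
    """Tennant (Montana) method: e-flow as a fraction of mean annual flow.
    Standard fractions: 10% survival minimum, 30% good, 60% excellent habitat."""
    fractions = {"minimum": 0.10, "good": 0.30, "excellent": 0.60}
    maf = sum(daily_flows) / len(daily_flows)   # mean annual flow
    return fractions[level] * maf
```

Comparing the AEHRA-derived e-flow against `tennant_eflow(q, "minimum")` and `tennant_eflow(q, "good")` gives the kind of cross-check described in the abstract.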

  3. Quantum-state comparison and discrimination

    NASA Astrophysics Data System (ADS)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2018-05-01

    We investigate the performance of the discrimination strategy in the task of comparing known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is to infer that the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to occur. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates between minimum-error and unambiguous comparison. We find that the discrimination strategy is not optimal except in the minimum-error case.
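
The single-system measurements underlying the discrimination strategy are bounded by the standard minimum-error (Helstrom) success probability, which for two pure states depends only on their overlap and priors: P = ½(1 + √(1 − 4p(1−p)|⟨ψ|φ⟩|²)). A small sketch of that bound (not of the paper's comparison-task calculation):

```python
import numpy as np

def helstrom_success(psi, phi, p=0.5):
    """Optimal success probability for minimum-error discrimination of two
    known pure states given with prior probabilities p and 1 - p."""
    psi = np.asarray(psi, dtype=complex)
    phi = np.asarray(phi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    phi = phi / np.linalg.norm(phi)
    overlap_sq = abs(np.vdot(psi, phi)) ** 2   # |<psi|phi>|^2
    return 0.5 * (1.0 + np.sqrt(1.0 - 4.0 * p * (1.0 - p) * overlap_sq))
```

Orthogonal states are discriminated perfectly (probability 1), while identical states reduce to a fair guess (probability 1/2).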

  4. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    NASA Astrophysics Data System (ADS)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams with notable implication for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially-explicit, quantitative assessment of hydrologic connectivity at the network-scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson Process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). 
The framework offers a robust basis for the prediction of the impact of land-use/land-cover changes and river regulation on network-scale connectivity.
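
The reach-scale fragmentation metric described above is built from the non-exceedance probability of an ecologically meaningful flow threshold. In the paper this probability comes from a stochastic streamflow model; as an empirical stand-in, it can be computed directly from a daily series, as in this illustrative sketch:

```python
import numpy as np

def non_exceedance_prob(daily_q, threshold):
    """Empirical probability that daily streamflow falls below a threshold
    (a proxy for how often a reach is ecologically fragmented)."""
    daily_q = np.asarray(daily_q, dtype=float)
    return float((daily_q < threshold).mean())

def reach_connected_prob(daily_q, threshold):
    """Complementary probability that the reach stays above the threshold."""
    return 1.0 - non_exceedance_prob(daily_q, threshold)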

  5. Modeling magma flow and cooling in dikes: Implications for emplacement of Columbia River flood basalts

    NASA Astrophysics Data System (ADS)

    Petcovic, Heather L.; Dufek, Josef D.

    2005-10-01

    The Columbia River flood basalts include some of the world's largest individual lava flows, most of which were fed by the Chief Joseph dike swarm. The majority of dikes are chilled against their wall rock; however, rare dikes caused their wall rock to undergo partial melting. These partial melt zones record the thermal history of magma flow and cooling in the dike and, consequently, the emplacement history of the flow it fed. Here, we examine two-dimensional thermal models of basalt injection, flow, and cooling in a 10-m-thick dike constrained by the field example of the Maxwell Lake dike, a likely feeder to the large-volume Wapshilla Ridge unit of the Grande Ronde Basalt. Two types of models were developed: static conduction simulations and advective transport simulations. Static conduction simulation results confirm that instantaneous injection and stagnation of a single dike did not produce wall rock melt. Repeated injection generated wall rock melt zones comparable to those observed, yet the regular texture across the dike and its wall rock is inconsistent with repeated brittle injection. Instead, advective flow in the dike for 3-4 years best reproduced the field example. Using this result, we estimate that maximum eruption rates for Wapshilla Ridge flows ranged from 3 to 5 km3 d-1. Local eruption rates were likely lower (minimum 0.1-0.8 km3 d-1), as advective modeling results suggest that other fissure segments as yet unidentified fed the same flow. Consequently, the Maxwell Lake dike probably represents an upper crustal (˜2 km) exposure of a long-lived point source within the Columbia River flood basalts.
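
The static-conduction end-member of the dike models has a classical closed form: for instantaneous injection of a planar dike of half-width a into uniform country rock, the 1-D conduction solution is a pair of error functions. The sketch below assumes constant properties and neglects latent heat, so it is far simpler than the paper's two-dimensional simulations; it illustrates why a single stagnant 10-m dike cools too fast to melt its wall rock.

```python
import math

def dike_temperature(x, t, half_width, T_magma, T_rock, kappa=1e-6):
    """Temperature at distance x (m) from the dike centre, time t (s) after
    instantaneous injection of a planar dike (1-D conduction, constant
    diffusivity kappa in m^2/s, no latent heat)."""
    s = 2.0 * math.sqrt(kappa * t)
    return T_rock + 0.5 * (T_magma - T_rock) * (
        math.erf((half_width - x) / s) + math.erf((half_width + x) / s))
```

At the dike centre just after injection the temperature is still magmatic, while far into the wall rock it remains at the ambient value; the contact temperature starts near the mean of the two, well below the wall-rock solidus.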

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hang, E-mail: hangchen@mit.edu; Thill, Peter; Cao, Jianshu

    In biochemical systems, intrinsic noise may drive the system to switch from one stable state to another. We investigate how kinetic switching between stable states in a bistable network is influenced by dynamic disorder, i.e., fluctuations in the rate coefficients. Using the geometric minimum action method, we first investigate the optimal transition paths and the corresponding minimum actions based on a genetic toggle switch model in which reaction coefficients draw from a discrete probability distribution. For a continuous probability distribution of the rate coefficient, we then consider two models of dynamic disorder in which reaction coefficients undergo different stochastic processes with the same stationary distribution. In one, the kinetic parameters follow a discrete Markov process and in the other they follow continuous Langevin dynamics. We find that regulation of the parameters modulating the dynamic disorder, as has been demonstrated to occur through allosteric control in bistable networks in the immune system, can be crucial in shaping the statistics of optimal transition paths, transition probabilities, and the stationary probability distribution of the network.

  7. Evaluation of Lightning Jumps as a Predictor of Severe Weather in the Northeastern United States

    NASA Astrophysics Data System (ADS)

    Eck, Pamela

    Severe weather events in the northeastern United States can be challenging to forecast because the evolution of deep convection is influenced by complex terrain and because quality observations are sparse in such terrain. To supplement existing observations, this study explores using lightning to forecast severe convection in areas of complex terrain in the northeastern United States. A sudden increase in lightning flash rate of two standard deviations (2σ), also known as a lightning jump, may be indicative of a strengthening updraft and an increased probability of severe weather. This study assesses the value of using lightning jumps to forecast severe weather during July 2015 in the northeastern United States. Total lightning data from the National Lightning Detection Network (NLDN) are used to calculate lightning jumps using a 2σ lightning jump algorithm with a minimum threshold of 5 flashes min-1. Lightning jumps are used to predict the occurrence of severe weather, as indicated by whether a Storm Prediction Center (SPC) severe weather report occurred within 45 min after a lightning jump in the same cell. Results indicate a high probability of detection (POD; 85%) but also a high false alarm rate (FAR; 89%), suggesting that lightning jumps also occur in sub-severe storms. The interaction between convection and complex terrain can result in a locally enhanced updraft and an increased probability of severe weather. Thus, it is hypothesized that conditioning on an upslope variable may reduce the FAR. A random forest is introduced to objectively combine upslope flow, calculated using data from the High Resolution Rapid Refresh (HRRR), flash rate (FR), and flash rate changes with time (DFRDT). The random forest, a machine-learning algorithm, uses pattern recognition to predict a severe or non-severe classification based on these predictors. 
In addition to upslope flow, FR, and DFRDT, Next-Generation Radar (NEXRAD) Level III radar data was also included as a predictor to compare its value to that of lightning data. Results indicate a high POD (82%), a low FAR (28%), and that lightning data and upslope flow data account for 39% and 32% of variable importance, respectively.
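
The 2σ jump criterion can be sketched as follows. The window length, the use of a simple rate-difference for DFRDT, and all names here are illustrative choices, not the specific configuration used in the study:

```python
import numpy as np

def lightning_jumps(flash_rates, sigma_level=2.0, min_rate=5.0, window=6):
    """Flag indices where the flash-rate change (DFRDT) exceeds sigma_level
    standard deviations of its recent history and the flash rate itself is
    at least min_rate (flashes per minute).
    flash_rates: per-minute flash counts for one storm cell."""
    rates = np.asarray(flash_rates, dtype=float)
    dfrdt = np.diff(rates)                      # rate change per time step
    jumps = []
    for k in range(window, len(dfrdt)):
        sigma = dfrdt[k - window:k].std()       # variability of recent DFRDT
        if sigma > 0 and dfrdt[k] > sigma_level * sigma and rates[k + 1] >= min_rate:
            jumps.append(k + 1)                 # index into the rate series
    return jumps
```

Each flagged index would then be verified against SPC reports within the following 45 minutes to score POD and FAR.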

  8. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
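
For intuition about the quantities involved, here is a brute-force sketch of the minimum value and the relational persistencies on a tiny instance. It enumerates all 2^n assignments, which is exactly the exponential work the paper's polynomial-time flow-based bounds avoid; the matrix convention (x^T Q x with x in {0,1}^n) is an assumption for illustration.

```python
from itertools import product

def qubo_min_and_persistencies(Q):
    """Brute-force minimum of x^T Q x over {0,1}^n, plus relational
    persistencies: pairs (i, j) whose values are equal (or different)
    in every minimizing assignment."""
    n = len(Q)
    best, argmins = None, []
    for x in product((0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if best is None or val < best:
            best, argmins = val, [x]
        elif val == best:
            argmins.append(x)
    same = {(i, j) for i in range(n) for j in range(i + 1, n)
            if all(x[i] == x[j] for x in argmins)}
    diff = {(i, j) for i in range(n) for j in range(i + 1, n)
            if all(x[i] != x[j] for x in argmins)}
    return best, same, diff
```

In the paper, the same "must be different / must be equal" information is read off from never-saturated edges in a maximum multicommodity flow instead of from enumeration.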

  9. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.

  10. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    NASA Astrophysics Data System (ADS)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial, and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as season of the year, day of the week, or time of day. In deterministic radial distribution load flow studies the load is taken as constant; in reality, load varies continually and with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and solving a deterministic radial load flow for each set of values. The probabilistic solution is then reconstructed from the deterministic results of the simulations. The main contributions of the work are: (1) assessing the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; (2) assessing the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and (3) comparing the voltage profiles and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
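
The Monte Carlo procedure described can be outlined as follows. Here `solve_deterministic` stands in for any deterministic radial load-flow solver (e.g. a backward/forward sweep), which is assumed rather than implemented, and normal load distributions are an illustrative choice:

```python
import numpy as np

def monte_carlo_load_flow(solve_deterministic, p_mean, p_std, q_mean, q_std,
                          n_sims=1000, seed=0):
    """Probabilistic load flow: sample active/reactive bus loads from normal
    distributions, run a deterministic radial load-flow solve for each draw,
    and summarize the resulting bus voltages."""
    rng = np.random.default_rng(seed)
    volts = []
    for _ in range(n_sims):
        p = rng.normal(p_mean, p_std)   # sampled active power per bus
        q = rng.normal(q_mean, q_std)   # sampled reactive power per bus
        volts.append(solve_deterministic(p, q))
    volts = np.array(volts)
    return volts.mean(axis=0), volts.std(axis=0)
```

The per-bus voltage mean and standard deviation returned here are the probabilistic counterparts of the single deterministic voltage profile.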

  11. Caspase activation, hydrogen peroxide production and Akt dephosphorylation occur during stallion sperm senescence.

    PubMed

    Gallardo Bolaños, J M; Balao da Silva, C; Martín Muñoz, P; Plaza Dávila, M; Ezquerra, J; Aparicio, I M; Tapia, J A; Ortega Ferrusola, C; Peña, F J

    2014-08-01

    To investigate the mechanisms inducing sperm death after ejaculation, stallion ejaculates were incubated in BWW medium for 6 h at 37°C. At the beginning of the incubation period and after 1, 2, 4, and 6 h, sperm motility and kinematics (CASA) and mitochondrial membrane potential and membrane permeability and integrity (flow cytometry) were evaluated. At the same time intervals, active caspase 3, hydrogen peroxide, superoxide anion, and Akt phosphorylation were also evaluated by flow cytometry. Major decreases in sperm function occurred after 6 h of incubation, although a decrease in the percentages of motile and progressively motile sperm occurred after 1 h. The decrease observed in sperm functionality after 6 h of incubation was accompanied by a significant increase in the production of hydrogen peroxide and the greatest increase in caspase 3 activity. Additionally, the percentage of phosphorylated Akt reached a minimum after 6 h of incubation. These results provide evidence that sperm death during in vitro incubation is largely an apoptotic phenomenon, probably stimulated by endogenous production of hydrogen peroxide and by a lack of prosurvival factors maintaining Akt in a phosphorylated state. Disclosing the molecular mechanisms leading to sperm death may help to develop new strategies for stallion sperm conservation. © 2014 Blackwell Verlag GmbH.

  12. Effects of the proposed California WaterFix North Delta Diversion on flow reversals and entrainment of juvenile Chinook salmon (Oncorhynchus tshawytscha) into Georgiana Slough and the Delta Cross Channel, northern California

    USGS Publications Warehouse

    Perry, Russell W.; Romine, Jason G.; Pope, Adam C.; Evans, Scott D.

    2018-02-27

    The California Department of Water Resources and Bureau of Reclamation propose new water intake facilities on the Sacramento River in northern California that would convey some of the water for export to areas south of the Sacramento-San Joaquin River Delta (hereinafter referred to as the Delta) through tunnels rather than through the Delta. The collection of water intakes, tunnels, pumping facilities, associated structures, and proposed operations are collectively referred to as California WaterFix. The water intake facilities, hereinafter referred to as the North Delta Diversion (NDD), are proposed to be located on the Sacramento River downstream of the city of Sacramento and upstream of the first major river junction where Sutter Slough branches from the Sacramento River. The NDD can divert a maximum discharge of 9,000 cubic feet per second (ft3/s) from the Sacramento River, which reduces the amount of Sacramento River inflow into the Delta. In this report, we conducted three analyses to investigate the effect of the NDD and its proposed operation on entrainment of juvenile Chinook salmon (Oncorhynchus tshawytscha) into Georgiana Slough and the Delta Cross Channel (DCC). Fish that enter the interior Delta (the network of channels to the south of the Sacramento River) through Georgiana Slough and the DCC survive at lower rates than fish that use other migration routes (Sacramento River, Sutter Slough, and Steamboat Slough). Therefore, fisheries managers were concerned about the extent to which operation of the NDD would increase the proportion of the population entering the interior Delta, which, all else being equal, would lower overall survival through the Delta by increasing the fraction of the population subject to lower survival rates. 
Operation of the NDD would reduce flow in the Sacramento River, which has the potential to increase the magnitude and duration of reverse flows of the Sacramento River downstream of Georgiana Slough. In the first analysis, we evaluate the effect of the NDD bypass rules on flow reversals of the Sacramento River downstream of Georgiana Slough. The NDD bypass rules are a set of operational criteria designed to minimize upstream transport of fish into Georgiana Slough and the DCC, and were developed based on previous studies showing that the magnitude and duration of flow reversals increase the proportion of fish entering Georgiana Slough and the DCC. We estimated the frequency and duration of reverse-flow conditions of the Sacramento River downstream of Georgiana Slough under each of the prescribed minimum bypass flows described in the NDD bypass rules. To accommodate adaptive levels of protection during different times of year when juvenile salmon are migrating through the Delta, the NDD bypass rules prescribe a series of minimum allowable bypass flows that vary depending on (1) month of the year, and (2) progressively decreasing levels of protection following a pulse flow event. We determined that the NDD bypass rules increased the frequency and duration of reverse flows of the Sacramento River downstream of Georgiana Slough, with the magnitude of increase varying among scenarios. Constant low-level pumping, the most protective bypass rule that limits diversion to 10 percent of the maximum diversion and is implemented following a pulse-flow event, led to the smallest increase in frequency and duration of flow reversals. In contrast, we found that some scenarios led to sizeable increases in the fraction of the day with reverse flow. 
The conditions under which the proportion of the day with reverse flow can increase by greater than or equal to 10 percentage points between October and June, when juvenile salmon are present in the Delta, include October–November bypass rules and level-3 post-pulse operations during December–June. These conditions would be expected to increase the proportion of juvenile salmon entering the interior Delta through Georgiana Slough. In the second analysis, we assessed bias in Delta Simulation Model 2 (DSM2) flow predictions at the junction of the Sacramento River, DCC, and Georgiana Slough. Because DSM2 was being used to simulate California WaterFix operations, understanding the extent of bias relative to USGS streamgages was important since fish routing models were based on flow data at streamgages. We determined that river flow predicted by DSM2 was biased for Georgiana Slough and the Sacramento River. Therefore, for subsequent analysis, we bias-corrected the DSM2 flow predictions using measured stream flows as predictor variables. In the third analysis, we evaluated the effect of the NDD on the daily probability of fish entering Georgiana Slough and the DCC. We applied an existing model to predict entrainment from 15-minute flow simulations for an 82-year time series of flows simulated by DSM2 under the Proposed Action (PA), where the North Delta Diversion is implemented under California WaterFix, and the No Action Alternative (NAA), where the diversion is not implemented. To estimate the daily fraction of fish entering each river channel, entrainment probabilities were averaged over each day. To evaluate the two scenarios, we then compared mean annual entrainment probabilities by month, water year classification, and three different assumed run timings. Overall, the probability of remaining in the Sacramento River was lower under the PA scenario, but the magnitude of the difference was small, with differences between scenarios occurring mainly at flows less than 41,000 ft3/s. 
At flows greater than 41,000 ft3/s, we hypothesize that entrainment into the interior Delta is relatively constant, which would have caused little difference between scenarios at higher flows.
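
The daily averaging step described above can be sketched as follows; the 96-interval day layout, the function name, and the random example probabilities are illustrative assumptions, not the study's data or code:

```python
import numpy as np

def daily_entrainment_fraction(p_15min):
    """Average 15-minute entrainment probabilities to a daily fraction.

    p_15min: array of entrainment probabilities, one per 15-minute
    interval (96 values per day), as a routing model might produce.
    """
    p = np.asarray(p_15min, dtype=float)
    # Reshape into whole days (96 fifteen-minute steps each) and average.
    n_days = p.size // 96
    return p[: n_days * 96].reshape(n_days, 96).mean(axis=1)

# Hypothetical example: two days of simulated probabilities.
rng = np.random.default_rng(0)
daily = daily_entrainment_fraction(rng.uniform(0.1, 0.3, size=192))
```

The same daily fractions could then be averaged by month or water-year class to compare the PA and NAA scenarios.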

  13. Solar-cycle Variations of Meridional Flows in the Solar Convection Zone Using Helioseismic Methods

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Hsien; Chou, Dean-Yi

    2018-06-01

    The solar meridional flow is an axisymmetric flow in solar meridional planes, extending through the convection zone. Here we study its solar-cycle variations in the convection zone using SOHO/MDI helioseismic data from 1996 to 2010, including two solar minima and one maximum. The travel-time difference between northward and southward acoustic waves is related to the meridional flow along the wave path. Applying the ray approximation and the SOLA inversion method to the travel-time difference measured in a previous study, we obtain the meridional flow distributions in 0.67 ≤ r ≤ 0.96 R⊙ at the minimum and maximum. At the minimum, the flow has a three-layer structure: poleward in the upper convection zone, equatorward in the middle convection zone, and poleward again in the lower convection zone. The flow speed is close to zero within the error bar near the base of the convection zone. The flow distribution changes significantly from the minimum to the maximum. The change above 0.9 R⊙ shows two phenomena: first, the poleward flow speed is reduced at the maximum; second, an additional convergent flow centered at the active latitudes is generated at the maximum. These two phenomena are consistent with the surface meridional flow reported in previous studies. The change in flow extends all the way down to the base of the convection zone, and the pattern of the change below 0.9 R⊙ is more complicated. However, it is clear that the active latitudes play a role in the flow change: the changes in flow speed below and above the active latitudes have opposite signs. This suggests that magnetic fields could be responsible for the flow change.

  14. Emergency Assessment of Debris-Flow Hazards from Basins Burned by the Piru, Simi, and Verdale Fires of 2003, Southern California

    USGS Publications Warehouse

    Cannon, Susan H.; Gartner, Joseph E.; Rupert, Michael G.; Michael, John A.

    2003-01-01

    These maps present preliminary assessments of the probability of debris-flow activity and estimates of peak discharges that can potentially be generated by debris flows issuing from basins burned by the Piru, Simi and Verdale Fires of October 2003 in southern California in response to the 25-year, 10-year, and 2-year 1-hour rain storms. The probability maps are based on the application of a logistic multiple regression model that describes the percent chance of debris-flow production from an individual basin as a function of burned extent, soil properties, basin gradients and storm rainfall. The peak discharge maps are based on application of a multiple-regression model that can be used to estimate debris-flow peak discharge at a basin outlet as a function of basin gradient, burn extent, and storm rainfall. Probabilities of debris-flow occurrence for the Piru Fire range between 2 and 94% and estimates of debris-flow peak discharges range between 1,200 and 6,640 ft3/s (34 to 188 m3/s). Basins burned by the Simi Fire show probabilities for debris-flow occurrence between 1 and 98%, and peak discharge estimates between 1,130 and 6,180 ft3/s (32 and 175 m3/s). The probabilities for debris-flow activity calculated for the Verdale Fire range from negligible values to 13%. Peak discharges were not estimated for this fire because of these low probabilities. These maps are intended to identify those basins that are most prone to the largest debris-flow events and provide information for the preliminary design of mitigation measures and for the planning of evacuation timing and routes.
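
    The logistic model's percent-chance calculation can be illustrated with a minimal sketch; the predictor layout and coefficients are placeholders, not the published regression:

```python
import math

def debris_flow_probability(x, coeffs, intercept):
    """Percent chance of debris-flow production from an individual basin.

    x: basin predictor values (e.g., burned extent, soil properties,
    basin gradient, storm rainfall). The coefficients and intercept
    here are illustrative stand-ins for the fitted regression terms.
    """
    # Linear predictor, then the logistic transform, scaled to percent.
    z = intercept + sum(b * v for b, v in zip(coeffs, x))
    return 100.0 / (1.0 + math.exp(-z))
```

A basin whose linear predictor comes out to zero sits at a 50% chance; larger burned extent or rainfall pushes the probability toward 100%.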

  15. Inverse and forward modeling under uncertainty using MRE-based Bayesian approach

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Rubin, Y.

    2004-12-01

    A stochastic inverse approach for subsurface characterization is proposed and applied to the shallow vadose zone at a winery field site in northern California and to a gas reservoir at the Ormen Lange field site in the North Sea. The approach is formulated in a Bayesian-stochastic framework, whereby the unknown parameters are identified in terms of their statistical moments or their probabilities. Instead of the traditional single-valued estimation/prediction provided by deterministic methods, the approach gives a probability distribution for an unknown parameter. This allows calculating the mean, the mode, and the confidence interval, which is useful for a rational treatment of uncertainty and its consequences. The approach also allows incorporating data of various types and different error levels, including measurements of state variables as well as information such as bounds on or statistical moments of the unknown parameters, which may represent prior information. To obtain the minimally subjective prior probabilities required for the Bayesian approach, the principle of Minimum Relative Entropy (MRE) is employed. The approach is tested at field sites for flow-parameter identification and soil-moisture estimation in the vadose zone and for gas-saturation estimation at great depth below the ocean floor. Results indicate the potential of coupling various types of field data within an MRE-based Bayesian formalism for improving the estimation of the parameters of interest.

  16. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
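
    Upper and lower expectations over a set of probability measures, and the minimum-upper-expected-cost decision rule described above, can be sketched as follows; the finite set of measures and the cost numbers are illustrative assumptions, not the paper's climate-economics model:

```python
import numpy as np

def bounds_on_expected_cost(costs, measures):
    """Lower and upper expectations of a cost vector over a set of
    probability measures (each row of `measures` is one distribution
    over the same outcomes)."""
    expectations = np.asarray(measures) @ np.asarray(costs)
    return expectations.min(), expectations.max()

def min_upper_expected_cost(cost_table, measures):
    """Pick the action whose *upper* expected cost is smallest
    (the precautionary criterion). cost_table[a] gives the outcome
    costs of action a."""
    uppers = [bounds_on_expected_cost(c, measures)[1] for c in cost_table]
    return int(np.argmin(uppers)), min(uppers)
```

With imprecise probabilities, an action that gambles on a favorable outcome can lose to a hedged action whose worst-case expectation is lower, which is exactly the precautionary behavior discussed above.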

  17. Multiple cenozoic invasions of Africa by penguins (Aves, Sphenisciformes)

    PubMed Central

    Ksepka, Daniel T.; Thomas, Daniel B.

    2012-01-01

    Africa hosts a single breeding species of penguin today, yet the fossil record indicates that a diverse array of now-extinct taxa once inhabited southern African coastlines. Here, we show that the African penguin fauna had a complex history involving multiple dispersals and extinctions. Phylogenetic analyses and biogeographic reconstructions incorporating new fossil material indicate that, contrary to previous hypotheses, the four Early Pliocene African penguin species do not represent an endemic radiation or direct ancestors of the living Spheniscus demersus (black-footed penguin). A minimum of three dispersals to Africa, probably assisted by the eastward-flowing Antarctic Circumpolar and South Atlantic currents, occurred during the Late Cenozoic. As regional sea-level fall eliminated islands and reduced offshore breeding areas during the Pliocene, all but one penguin lineage ended in extinction, resulting in today's depleted fauna. PMID:21900330

  18. Study of magnetic motions in the solar photosphere and their implications for heating the solar atmosphere

    NASA Technical Reports Server (NTRS)

    Noyes, Robert W.

    1995-01-01

    This progress report covers the first year of NASA Grant NAGw-2545, a study of magnetic structure in the solar photosphere and chromosphere. We have made significant progress in three areas: (1) analysis of vorticity in photospheric convection, which probably affects solar atmospheric heating through the stresses it imposes on photospheric magnetic fields; (2) modelling of the horizontal motions of magnetic footpoints in the solar photosphere using an assumed relation between brightness and vertical motion as well as continuity of flow; and (3) observations and analysis of infrared CO lines formed near the solar temperature minimum, whose structure and dynamics also yield important clues to the nature of heating of the upper atmosphere. Each of these areas is summarized in this report, with copies of those papers prepared or published this year included.

  19. Method and system for gas flow mitigation of molecular contamination of optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgado, Gildardo; Johnson, Terry; Arienti, Marco

    A computer-implemented method for determining an optimized purge gas flow in a semi-conductor inspection, metrology, or lithography apparatus, comprising: receiving a permissible contaminant mole fraction, a contaminant outgassing flow rate associated with a contaminant, a contaminant mass diffusivity, an outgassing surface length, a pressure, a temperature, a channel height, and a molecular weight of a purge gas; calculating a flow factor based on the permissible contaminant mole fraction, the contaminant outgassing flow rate, the channel height, and the outgassing surface length; comparing the flow factor to a predefined maximum flow factor value; calculating a minimum purge gas velocity and a purge gas mass flow rate from the flow factor, the contaminant mass diffusivity, the pressure, the temperature, and the molecular weight of the purge gas; and introducing the purge gas into the semi-conductor inspection, metrology, or lithography apparatus with the minimum purge gas velocity and the purge gas mass flow rate.

  20. Magnitude and frequency of low flows in the Suwannee River Water Management District, Florida

    USGS Publications Warehouse

    Giese, G.L.; Franklin, M.A.

    1996-01-01

    Low-flow frequency statistics for 20 gaging stations having at least 10 years of continuous record and 31 other stations having less than 10 years of continuous record or a series of at least two low-flow measurements are presented for unregulated streams in the Suwannee River Water Management District in north-central Florida. Statistics for the 20 continuous-record stations included are the annual and monthly minimum consecutive-day average low-flow magnitudes for 1, 3, 7, 14, and 30 consecutive days for recurrence intervals of 2, 5, 10, 20, and, for some long-term stations, 50 years, based on records available through the 1994 climatic year. Only the annual statistics are given for the 31 other stations; these are for the 7- and 30-consecutive-day periods only and for recurrence intervals of 2 and 10 years only. Annual low-flow frequency statistics range from zero for many small streams to 5,500 cubic feet per second for the annual 30-consecutive-day average flow with a recurrence interval of 2 years for the Suwannee River near Wilcox (station 02323500). Monthly low-flow frequency statistics range from zero for many small streams to 13,800 cubic feet per second for the minimum 30-consecutive-day average flow with a 2-year recurrence interval for the month of March for the same station. Generally, low-flow characteristics of streams in the Suwannee River Water Management District are controlled by climatic, topographic, and geologic factors. The carbonate Floridan aquifer system underlies, or is at the surface of, the entire District. The terrane's karstic nature results in many sinkholes and springs. In some places, springs may contribute greatly to low streamflow and the contributing areas of such springs may include areas outside the presumed surface drainage area of the springs. In other places, water may enter sinkholes within a drainage basin, then reappear in springs downstream from a gage.
Many of the smaller streams in the District go dry or have no flow for several months in many years. In addition to the low-flow statistics, four synoptic low-flow measurement surveys were conducted at 161 sites during 1990, 1995, and 1996. The measurements were made to provide "snapshots" of flow conditions of streams throughout the Suwannee River Water Management District. Magnitudes of low flows during the 1990 series of measurements were in the range associated with the minimum 7-consecutive-day 50-year recurrence interval to the minimum 7-consecutive-day 20-year recurrence interval, except in Taylor and Dixie Counties, where the magnitudes ranged from the minimum 7-consecutive-day 5-year flow level to the 7-consecutive-day 2-year flow level. The magnitudes were all greater than the minimum 7-consecutive-day 2-year flow level during 1995 and 1996. Observations of no flow were recorded at many of the sites for all four series of measurements.
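
A minimal sketch of how a minimum 7-consecutive-day average flow (e.g., the 7Q2) might be computed from daily records; the per-year input layout and the nonparametric quantile shortcut are simplifying assumptions, not the USGS frequency-analysis procedure used in the report:

```python
import numpy as np

def annual_min_7day(daily_flows_by_year):
    """Minimum 7-consecutive-day average flow for each year of record.

    daily_flows_by_year: list of 1-D arrays of daily mean flows,
    one array per climatic year (a hypothetical input layout).
    """
    mins = []
    for q in daily_flows_by_year:
        q = np.asarray(q, dtype=float)
        # 7-day moving average, then take the annual minimum of it.
        seven_day = np.convolve(q, np.ones(7) / 7.0, mode="valid")
        mins.append(seven_day.min())
    return np.array(mins)

def low_flow_quantile(annual_mins, recurrence_years):
    """Nonparametric 7QT estimate: the flow not exceeded with annual
    probability 1/T (e.g., T=2 gives the 7Q2)."""
    return float(np.quantile(annual_mins, 1.0 / recurrence_years))
```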

  1. Methods and results of peak-flow frequency analyses for streamgages in and bordering Minnesota, through water year 2011

    USGS Publications Warehouse

    Kessler, Erich W.; Lorenz, David L.; Sanocki, Christopher A.

    2013-01-01

    Peak-flow frequency analyses were completed for 409 streamgages in and bordering Minnesota having at least 10 systematic peak flows through water year 2011. Selected annual exceedance probabilities were determined by fitting a log-Pearson type III probability distribution to the recorded annual peak flows. A detailed explanation of the methods that were used to determine the annual exceedance probabilities, the historical period, acceptable low outliers, and analysis method for each streamgage are presented. The final results of the analyses are presented.
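
    The log-Pearson type III fit can be sketched in a few lines; this uses the textbook method of moments with the Wilson-Hilferty frequency-factor approximation, not the full Bulletin 17-style procedure (historical periods, low-outlier screening) applied in the report:

```python
import math
import statistics

def lp3_quantile(peaks, aep):
    """Flood quantile for a given annual exceedance probability (aep)
    from a log-Pearson type III fit to annual peak flows."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    mean = statistics.fmean(logs)
    sd = statistics.stdev(logs)
    # Sample skew of the log flows (bias-corrected).
    g = (n * sum((x - mean) ** 3 for x in logs)) / ((n - 1) * (n - 2) * sd ** 3)
    # Standard normal deviate for non-exceedance probability 1 - aep.
    z = statistics.NormalDist().inv_cdf(1.0 - aep)
    if abs(g) < 1e-9:
        k = z  # zero skew reduces to the lognormal case
    else:
        # Wilson-Hilferty approximation to the Pearson III frequency factor.
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g ** 2 / 36.0) ** 3 - 1.0)
    return 10.0 ** (mean + k * sd)
```

For example, `lp3_quantile(peaks, 0.01)` would estimate the 1-percent (100-year) annual-exceedance-probability flow.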

  2. Improvement in precipitation-runoff model simulations by recalibration with basin-specific data, and subsequent model applications, Onondaga Lake Basin, Onondaga County, New York

    USGS Publications Warehouse

    Coon, William F.

    2011-01-01

    Simulation of streamflows in small subbasins was improved by adjusting model parameter values to match base flows, storm peaks, and storm recessions more precisely than had been done with the original model. Simulated recessional and low flows were either increased or decreased as appropriate for a given stream, and simulated peak flows generally were lowered in the revised model. The use of suspended-sediment concentrations rather than concentrations of the surrogate constituent, total suspended solids, resulted in increases in the simulated low-flow sediment concentrations and, in most cases, decreases in the simulated peak-flow sediment concentrations. Simulated orthophosphate concentrations in base flows generally increased but decreased for peak flows in selected headwater subbasins in the revised model. Compared with the original model, phosphorus concentrations simulated by the revised model were comparable in forested subbasins, generally decreased in developed and wetland-dominated subbasins, and increased in agricultural subbasins. A final revision to the model was made by the addition of the simulation of chloride (salt) concentrations in the Onondaga Creek Basin to help water-resource managers better understand the relative contributions of salt from multiple sources in this particular tributary. The calibrated revised model was used to (1) compute loading rates for the various land types that were simulated in the model, (2) conduct a watershed-management analysis that estimated the portion of the total load that was likely to be transported to Onondaga Lake from each of the modeled subbasins, (3) compute and assess chloride loads to Onondaga Lake from the Onondaga Creek Basin, and (4) simulate precolonization (forested) conditions in the basin to estimate the probable minimum phosphorus loads to the lake.

  3. Model of Transition from Laminar to Turbulent Flow

    NASA Astrophysics Data System (ADS)

    Kanda, Hidesada

    2001-11-01

    For circular pipe flows, a model of transition from laminar to turbulent flow has already been proposed and a minimum critical Reynolds number of approximately 2040 was obtained (Kanda, 1999). In order to prove the validity of the model, another verification is required. Thus, for plane Poiseuille flow, results of previous investigations were studied, focusing on experimental data on the critical Reynolds number Rc, the entrance length, and the transition length. Consequently, concerning the natural transition, it was confirmed from the experimental data that (i) the transition occurs in the entrance region, (ii) Rc increases as the contraction ratio in the inlet section increases, and (iii) the minimum Rc is obtained when the contraction ratio is at its smallest value of one, that is, when there is no bell-shaped entrance, only straight parallel plates. Its value lies in the neighborhood of 1300, based on the channel height and the average velocity. Although, for Hagen-Poiseuille flow, the minimum Rc is approximately 2000, based on the pipe diameter and the average velocity, there seems to be no significant difference in the transition from laminar to turbulent flow between Hagen-Poiseuille flow and plane Poiseuille flow (Kanda, 2001). Rc is determined by the shape of the inlet. Kanda, H., 1999, Proc. of ASME Fluids Engineering Division - 1999, FED-Vol. 250, pp. 197-204. Kanda, H., 2001, Proc. of ASME Fluids Engineering Division - 2001.
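
    The Reynolds numbers quoted above follow directly from the definition; a trivial helper makes the two different length scales explicit (the example fluid properties are assumptions for illustration):

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = U * L / nu. Use the pipe diameter as L for Hagen-Poiseuille
    flow, or the channel height for plane Poiseuille flow, matching the
    length scales quoted in the text."""
    return velocity * length / kinematic_viscosity

# Water (nu ~ 1.0e-6 m^2/s) in a 0.02 m pipe at 0.1 m/s gives
# Re = 2000, near the minimum critical value for pipe flow.
re = reynolds_number(0.1, 0.02, 1.0e-6)
```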

  4. Airfoil profiles for minimum pressure drag at supersonic velocities -- general analysis with application to linearized supersonic flow

    NASA Technical Reports Server (NTRS)

    Chapman, Dean R

    1952-01-01

    A theoretical investigation is made of the airfoil profile for minimum pressure drag at zero lift in supersonic flow. In the first part of the report a general method is developed for calculating the profile having the least pressure drag for a given auxiliary condition, such as a given structural requirement or a given thickness ratio. The various structural requirements considered include bending strength, bending stiffness, torsional strength, and torsional stiffness. No assumption is made regarding the trailing-edge thickness; the optimum value is determined in the calculations as a function of the base pressure. To illustrate the general method, the optimum airfoil, defined as the airfoil having minimum pressure drag for a given auxiliary condition, is calculated in a second part of the report using the equations of linearized supersonic flow.

  5. Disentangling the role of athermal walls on the Knudsen paradox in molecular and granular gases

    NASA Astrophysics Data System (ADS)

    Gupta, Ronak; Alam, Meheboob

    2018-01-01

    The nature of particle-wall interactions is shown to have a profound impact on the well-known "Knudsen paradox" [or the "Knudsen minimum" effect, which refers to the decrease of the mass-flow rate of a gas with increasing Knudsen number Kn, reaching a minimum at Kn ∼ O(1) and increasing logarithmically with Kn as Kn → ∞] in the acceleration-driven Poiseuille flow of rarefied gases. The nonmonotonic variation of the flow rate with Kn occurs even in a granular or dissipative gas in contact with thermal walls. The latter result is in contradiction with recent work [Alam et al., J. Fluid Mech. 782, 99 (2015), 10.1017/jfm.2015.523] that revealed the absence of the Knudsen minimum in granular Poiseuille flow for which the flow rate was found to decrease at large values of Kn. The above conundrum is resolved by distinguishing between "thermal" and "athermal" walls, and it is shown that, for both molecular and granular gases, the momentum transfer to athermal walls is much different than that to thermal walls, which is directly responsible for the anomalous flow-rate variation with Kn in the rarefied regime. In the continuum limit of Kn → 0, the athermal walls are shown to be closely related to "no-flux" ("adiabatic") walls for which the Knudsen minimum does not exist either. A possible characterization of athermal walls in terms of (1) an effective specularity coefficient for the slip velocity and (2) a flux-type boundary condition for granular temperature is suggested based on simulation results.

  6. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
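
    A toy discrete analogue of the MRE idea can be written in a few lines: among all pmfs matching a linear moment constraint, pick the one closest to the prior in relative entropy. The prior, constraint, and bisection bounds below are illustrative assumptions, not the MATLAB implementation described:

```python
import math

def mre_posterior(prior, g, target, lam_lo=-50.0, lam_hi=50.0):
    """Discrete minimum-relative-entropy update: among all pmfs p with
    sum_i p_i * g_i = target, find the one minimizing
    sum_i p_i * log(p_i / prior_i). The minimizer has exponential form
    p_i proportional to prior_i * exp(lam * g_i); the Lagrange
    multiplier lam is found by bisection on the moment equation."""
    def moment(lam):
        w = [q * math.exp(lam * gi) for q, gi in zip(prior, g)]
        z = sum(w)
        return sum(wi * gi for wi, gi in zip(w, g)) / z

    for _ in range(200):  # moment(lam) is monotone increasing in lam
        mid = 0.5 * (lam_lo + lam_hi)
        if moment(mid) < target:
            lam_lo = mid
        else:
            lam_hi = mid
    w = [q * math.exp(mid * gi) for q, gi in zip(prior, g)]
    z = sum(w)
    return [wi / z for wi in w]
```

With a uniform prior over two outcomes and the constraint that the mean of g = [0, 1] equal 0.7, the update returns the pmf [0.3, 0.7], the least-biased distribution consistent with that single constraint.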

  7. Estimation of additive forces and moments for supersonic inlets

    NASA Technical Reports Server (NTRS)

    Perkins, Stanley C., Jr.; Dillenius, Marnix F. E.

    1991-01-01

    A technique for estimating the additive forces and moments associated with supersonic, external compression inlets as a function of mass flow ratio has been developed. The technique makes use of a low order supersonic paneling method for calculating minimum additive forces at maximum mass flow conditions. A linear relationship between the minimum additive forces and the maximum values for fully blocked flow is employed to obtain the additive forces at a specified mass flow ratio. The method is applicable to two-dimensional inlets at zero or nonzero angle of attack, and to axisymmetric inlets at zero angle of attack. Comparisons with limited available additive drag data indicate fair to good agreement.
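
    The linear relationship between the two limiting additive forces can be sketched as below; the `d_min`/`d_max` naming and endpoint convention (minimum at maximum mass flow, maximum at fully blocked flow) are an illustrative reading of the technique, not the paper's equations:

```python
def additive_drag(mass_flow_ratio, d_min, d_max):
    """Additive drag at a specified mass flow ratio, interpolated
    linearly between d_min (maximum mass flow, ratio = 1) and
    d_max (fully blocked flow, ratio = 0)."""
    if not 0.0 <= mass_flow_ratio <= 1.0:
        raise ValueError("mass flow ratio must be in [0, 1]")
    return d_max + (d_min - d_max) * mass_flow_ratio
```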

  8. Combined AIE/EBE/GMRES approach to incompressible flows. [Adaptive Implicit-Explicit/Grouped Element-by-Element/Generalized Minimum Residuals

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1990-01-01

    Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residuals (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
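
    A bare-bones GMRES iteration, driven only by matrix-vector products so the implicit system never has to be factored, can be sketched as follows; this is a textbook illustration of the GMRES component only, not the paper's AIE/GEBE scheme:

```python
import numpy as np

def gmres_solve(matvec, b, tol=1e-10, max_iter=50):
    """Minimal (restart-free) GMRES with zero initial guess: build an
    Arnoldi basis of the Krylov space from matrix-vector products and
    minimize the residual over it via a small least-squares problem."""
    n = b.size
    beta = np.linalg.norm(b)
    Q = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    Q[:, 0] = b / beta
    for k in range(max_iter):
        v = matvec(Q[:, k])                # only A @ x is ever needed
        for j in range(k + 1):             # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        # Solve min || beta * e1 - H y || for the current subspace.
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[: k + 2, : k + 1], e1, rcond=None)
        converged = np.linalg.norm(e1 - H[: k + 2, : k + 1] @ y) < tol * beta
        if H[k + 1, k] < tol or converged:  # breakdown means exact solve
            return Q[:, : k + 1] @ y
        Q[:, k + 1] = v / H[k + 1, k]
    return Q[:, :max_iter] @ y
```

Because `matvec` is a callable, the products can be accumulated element by element without ever assembling the global matrix, which is the memory saving the GEBE strategy exploits.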

  9. Does the Current Minimum Validate (or Invalidate) Cycle Prediction Methods?

    NASA Technical Reports Server (NTRS)

    Hathaway, David H.

    2010-01-01

    This deep, extended solar minimum and the slow start to Cycle 24 strongly suggest that Cycle 24 will be a small cycle. A wide array of solar cycle prediction techniques have been applied to predicting the amplitude of Cycle 24 with widely different results. Current conditions and new observations indicate that some highly regarded techniques now appear to have doubtful utility. Geomagnetic precursors have been reliable in the past and can be tested with 12 cycles of data. Of the three primary geomagnetic precursors, only one (the minimum level of geomagnetic activity) suggests a small cycle. The Sun's polar field strength has also been used to successfully predict the last three cycles. The current weak polar fields are indicative of a small cycle. For the first time, dynamo models have been used to predict the size of a solar cycle but with opposite predictions depending on the model and the data assimilation. However, new measurements of the surface meridional flow indicate that the flow was substantially faster on the approach to Cycle 24 minimum than at Cycle 23 minimum. In both dynamo predictions a faster meridional flow should have given a shorter Cycle 23 with stronger polar fields. This suggests that these dynamo models are not yet ready for solar cycle prediction.

  10. 40 CFR 89.415 - Fuel flow measurement specifications.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    40 CFR, Protection of Environment, Emission Test Procedures, § 89.415 Fuel flow measurement specifications. The fuel flow rate measurement instrument must have a minimum accuracy of 2 percent of the engine maximum fuel flow rate. The controlling...

  11. The minimum control authority of a system of actuators with applications to Gravity Probe-B

    NASA Technical Reports Server (NTRS)

    Wiktor, Peter; Debra, Dan

    1991-01-01

    The forcing capabilities of systems composed of many actuators are analyzed in this paper. Multiactuator systems can generate higher forces in some directions than in others. Techniques are developed to find the force in the weakest direction. This corresponds to the worst-case output and is defined as the 'minimum control authority'. The minimum control authority is a function of three things: the actuator configuration, the actuator controller and the way in which the output of the system is limited. Three output limits are studied: (1) fuel-flow rate, (2) power, and (3) actuator output. The three corresponding actuator controllers are derived. These controllers generate the desired force while minimizing either fuel flow rate, power or actuator output. It is shown that using the optimal controller can substantially increase the minimum control authority. The techniques for calculating the minimum control authority are applied to the Gravity Probe-B spacecraft thruster system. This example shows that the minimum control authority can be used to design the individual actuators, choose actuator configuration, actuator controller, and study redundancy.
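
    One common proxy for the weakest-direction capability is the smallest singular value of the actuator influence matrix; the sketch below illustrates this under a norm-bounded actuator-output limit, which is only one reading of the three limit types (fuel flow, power, actuator output) treated in the paper:

```python
import numpy as np

def minimum_control_authority(B, u_max=1.0):
    """Worst-direction force capability of a multi-actuator system.

    B maps actuator commands to net force/moment (one column per
    actuator). With commands bounded in the 2-norm by u_max, the
    reachable force set is an ellipsoid whose shortest semi-axis is
    u_max times the smallest singular value of B, so that product is
    the force achievable in the weakest direction.
    """
    s = np.linalg.svd(B, compute_uv=False)
    return u_max * s[-1]
```

Comparing this number across candidate actuator configurations is one way to use the minimum control authority in design trade studies, as the record suggests.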

  12. Statistical summaries of streamflow in Oklahoma through 1999

    USGS Publications Warehouse

    Tortorelli, R.L.

    2002-01-01

    Statistical summaries of streamflow records through 1999 for gaging stations in Oklahoma and parts of adjacent states are presented for 188 stations with at least 10 years of streamflow record. Streamflow at 113 of the stations is regulated for specific periods. Data for these periods were analyzed separately to account for changes in streamflow due to regulation by dams or other human modification of streamflow. A brief description of the location, drainage area, and period of record is given for each gaging station. A brief regulation history also is given for stations with a regulated streamflow record. This descriptive information is followed by tables of mean annual discharges, magnitude and probability of exceedance of annual high flows, magnitude and probability of exceedance of annual instantaneous peak flows, durations of daily mean flow, magnitude and probability of non-exceedance of annual low flows, and magnitude and probability of non-exceedance of seasonal low flows.

  13. Present and Future Water Supply for Mammoth Cave National Park, Kentucky

    USGS Publications Warehouse

    Cushman, R.V.; Krieger, R.A.; McCabe, John A.

    1965-01-01

    The increase in the number of visitors during the past several years at Mammoth Cave National Park has rendered the present water supply inadequate. Emergency measures were necessary during August 1962 to supplement the available supply. The Green River is the largest potential source of water supply for Mammoth Cave. The 30-year minimum daily discharge is 40 mgd (million gallons per day). The chemical quality is now good, but in the past the river has been contaminated by oil-field-brine wastes. By mixing it with water from the existing supply, Green River water could be diluted to provide water of satisfactory quality in the event of future brine pollution. The Nolin River is the next largest potential source of water (minimum releases from Nolin Reservoir, 97-129 mgd). The quality is satisfactory, but use of this source would require an 8-mile pipeline. The present water supply comes from springs draining a perched aquifer in the Haney Limestone Member of the Golconda Formation on Flint Ridge. Chemical quality is excellent, but the minimum observed flow of all the springs on Flint Ridge plus Bransford well was only 121,700 gpd (gallons per day). This supply is adequate for present needs but not for future requirements; it could be augmented with water from the Green River. Wet Prong Buffalo Creek is the best of several small-stream supplies in the vicinity of Mammoth Cave. Minimum flow of the creek is probably about 300,000 gpd and the quality is good. The supply is about 5 miles from Mammoth Cave. This supply also may be utilized for a future separate development in the northern part of the park. The maximum recorded yield of wells drilled into the basal ground water in the Ste. Genevieve and St. Louis Limestone is 36 gpm (gallons per minute). Larger supplies may be developed if a large underground stream is struck.
Quality can be expected to be good unless the well is drilled too far below the basal water table and intercepts poorer quality water at a lower level. This source of supply might be used to augment the present supply, but locating the trunk conduits might be difficult. Water in alluvium adjacent to the Green River and perched water in the Big Clifty Sandstone Member of the Golconda Formation and Girkin Formation have little potential as a water supply.

  14. Optimizing occupancy surveys by maximizing detection probability: application to amphibian monitoring in the Mediterranean region.

    PubMed

    Petitot, Maud; Manceau, Nicolas; Geniez, Philippe; Besnard, Aurélien

    2014-09-01

    Setting up effective conservation strategies requires the precise determination of the targeted species' distribution area and, if possible, its local abundance. However, detection issues make these objectives complex for most vertebrates. The detection probability is usually <1 and is highly dependent on species phenology and other environmental variables. The aim of this study was to define an optimized survey protocol for the Mediterranean amphibian community, that is, to determine the most favorable periods and the most effective sampling techniques for detecting all species present on a site in a minimum number of field sessions and a minimum amount of prospecting effort. We visited 49 ponds located in the Languedoc region of southern France on four occasions between February and June 2011. Amphibians were detected using three methods: nighttime call count, nighttime visual encounter, and daytime netting. The detection/nondetection data obtained were then modeled using site-occupancy models. The detection probability of amphibians differed sharply among species, survey methods, and survey dates, and these three covariates also interacted. Thus, a minimum of three visits spread over the breeding season, using a combination of all three survey methods, is needed to reach a 95% detection level for all species in the Mediterranean region. Synthesis and applications: detection/nondetection surveys combined with a site-occupancy modeling approach are powerful methods that can be used to estimate the detection probability and to determine the prospecting effort necessary to assert that a species is absent from a site.
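
    The prospecting-effort calculation implied above (how many independent visits until the cumulative detection probability reaches 95%) can be sketched as follows; the 0.64 per-visit probability is a made-up example, not a value from the study:

```python
import math

def visits_for_detection(p_per_visit, target=0.95):
    """Number of independent visits needed so that the cumulative
    detection probability 1 - (1 - p)^n reaches the target level."""
    if not 0.0 < p_per_visit < 1.0:
        raise ValueError("per-visit detection probability must be in (0, 1)")
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_per_visit))

# A species detected 64% of the time per combined-method visit
# needs three visits to exceed a 95% cumulative detection level.
n = visits_for_detection(0.64)
```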

  15. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    NASA Astrophysics Data System (ADS)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires modeling the landslide intensity (e.g., flow depth, velocity, or impact pressure) affecting the elements at risk. Such a quantitative approach is often lacking, or left out altogether, in medium- to regional-scale studies in the scientific literature. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. To determine debris flow intensities, we used a linear relationship found between back-calibrated, physically based Flo-2D simulations (local-scale models of five debris flows from 2003) and the probability values of the Flow-R software, which allowed flow depths to be assigned to 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature, and one curve derived specifically for our case study area, were used to determine the damage for the different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). 
The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) by the building area and the number of floors. The risk was calculated by multiplying the vulnerability by the spatial probability and the building value. Changes in landslide risk were assessed using loss estimates for four different periods: (1) pre-August 2003 disaster, (2) the August 2003 event, (3) post-August 2003 to 2011, and (4) smaller frequent events occurring over the entire 1996-2011 period. One of the major findings of our work was a significant decrease in landslide risk after the 2003 disaster compared to the pre-disaster period. This indicates the importance of re-estimating risk a few years after a major event in order to avoid overestimating future losses.
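The per-building loss calculation described above (value from unit price, area, and floors; risk as vulnerability × spatial probability × value) can be sketched in a few lines. The figures below are invented for illustration and are not the study's calibrated values:

```python
# Illustrative sketch of the building-value and expected-loss steps
# described in the abstract; all numbers are assumptions.
def building_value(unit_value_eur_per_m2, footprint_m2, floors):
    """Market value: land-use unit value (EUR/m^2) x area x number of floors."""
    return unit_value_eur_per_m2 * footprint_m2 * floors

def expected_loss(vulnerability, spatial_probability, value_eur):
    """Risk for one building.

    vulnerability: damage fraction (0-1) read off an intensity-based curve.
    spatial_probability: probability the debris flow run-out reaches the building.
    """
    return vulnerability * spatial_probability * value_eur
```

For example, a two-floor building with a 150 m² footprint at 1200 EUR/m² is valued at 360,000 EUR; with a vulnerability of 0.4 and a spatial probability of 0.1, the expected loss is 14,400 EUR.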

  16. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents §1. Introduction Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter §2. The large deviation principle and logarithmic asymptotics of continual integrals §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method 3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I) 3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II) 3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176]) 3.4. Exact asymptotics of large deviations of Gaussian norms §4. The Laplace method for distributions of sums of independent random elements with values in Banach space 4.1. The case of a non-degenerate minimum point ([137], I) 4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II) §5. Further examples 5.1. The Laplace method for the local time functional of a Markov symmetric process ([217]) 5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116]) 5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm 5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41]) Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions §6. Pickands' method of double sums 6.1. General situations 6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process 6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process §7. Probabilities of large deviations of trajectories of Gaussian fields 7.1. Homogeneous fields and fields with constant dispersion 7.2. Finitely many maximum points of dispersion 7.3. Manifold of maximum points of dispersion 7.4. Asymptotics of distributions of maxima of Wiener fields §8. 
Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces l_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space 8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1 8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ^2 8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74] 8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes 8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process Bibliography

  17. Flow convergence caused by a salinity minimum in a tidal channel

    USGS Publications Warehouse

    Warner, John C.; Schoellhamer, David H.; Burau, Jon R.; Schladow, S. Geoffrey

    2006-01-01

    Residence times of dissolved substances and sedimentation rates in tidal channels are affected by residual (tidally averaged) circulation patterns. One influence on these circulation patterns is the longitudinal density gradient. In most estuaries the longitudinal density gradient typically maintains a constant direction. However, a junction of tidal channels can create a local reversal (change in sign) of the density gradient. This can occur due to a difference in the phase of tidal currents in each channel. In San Francisco Bay, the phasing of the currents at the junction of Mare Island Strait and Carquinez Strait produces a local salinity minimum in Mare Island Strait. At the location of a local salinity minimum the longitudinal density gradient reverses direction. This paper presents four numerical models that were used to investigate the circulation caused by the salinity minimum: (1) A simple one-dimensional (1D) finite difference model demonstrates that a local salinity minimum is advected into Mare Island Strait from the junction with Carquinez Strait during flood tide. (2) A three-dimensional (3D) hydrodynamic finite element model is used to compute the tidally averaged circulation in a channel that contains a salinity minimum (a change in the sign of the longitudinal density gradient) and compares that to a channel that contains a longitudinal density gradient in a constant direction. The tidally averaged circulation produced by the salinity minimum is characterized by converging flow at the bed and diverging flow at the surface, whereas the circulation produced by the constant direction gradient is characterized by converging flow at the bed and downstream surface currents. These velocity fields are used to drive both a particle tracking and a sediment transport model. 
(3) A particle tracking model demonstrates a 30 percent increase in the residence time of neutrally buoyant particles transported through the salinity minimum, as compared to transport through a constant direction density gradient. (4) A sediment transport model demonstrates increased deposition at the near-bed null point of the salinity minimum, as compared to the constant direction gradient null point. These results are corroborated by historically noted large sedimentation rates and a local maximum of selenium accumulation in clams at the null point in Mare Island Strait.

  18. Reduction of tablet weight variability by optimizing paddle speed in the forced feeder of a high-speed rotary tablet press.

    PubMed

    Peeters, Elisabeth; De Beer, Thomas; Vervaet, Chris; Remon, Jean-Paul

    2015-04-01

Tableting is a complex process due to the large number of process parameters that can be varied. Knowledge and understanding of the influence of these parameters on final product quality is of great importance for the industry, allowing economic efficiency and parametric release. The aim of this study was to investigate the influence of paddle speed and fill depth at different tableting speeds on the weight and weight variability of tablets. Two excipients possessing different flow behavior, microcrystalline cellulose (MCC) and dibasic calcium phosphate dihydrate (DCP), were selected as model powders. Tablets were manufactured on a high-speed rotary tablet press using design of experiments (DoE), and the volume of powder in the forced feeder was also measured during each experiment. Analysis of the DoE revealed that paddle speed is of minor importance for tablet weight but significantly affects the volume of powder inside the feeder for powders with excellent flowability (DCP); the opposite effect of paddle speed was observed for powders with only fair flowability (MCC). Tableting speed played a role in weight and weight variability, whereas changing fill depth exclusively influenced tablet weight. The DoE approach allowed prediction of the optimum combination of process parameters leading to minimum tablet weight variability. Monte Carlo simulations allowed assessing the probability of exceeding the acceptable response limits if factor settings were varied around their optimum. This multi-dimensional combination and interaction of input variables leading to response criteria with acceptable probability reflected the design space.
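The Monte Carlo step described above (jitter the factor settings around their optimum and count how often a response limit is exceeded) can be sketched as follows. The quadratic response surface, the optimum at (30 rpm, 10 mm), and the scatter magnitudes are all invented for illustration, not the study's fitted model:

```python
import random

# Hypothetical sketch of a design-space Monte Carlo check: given a
# fitted response model for tablet weight variability (invented here),
# estimate the probability of exceeding an acceptance limit when factor
# settings scatter around their optimum.
def weight_variability(paddle_rpm, fill_depth_mm):
    # Invented quadratic response surface with its optimum at (30 rpm, 10 mm)
    return 0.5 + 0.001 * (paddle_rpm - 30.0) ** 2 + 0.02 * (fill_depth_mm - 10.0) ** 2

def prob_exceeding(limit, n=100_000, seed=1):
    """Fraction of simulated runs whose predicted variability exceeds the limit."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        paddle = rng.gauss(30.0, 5.0)   # rpm scatter around the optimum
        fill = rng.gauss(10.0, 0.5)     # mm scatter around the optimum
        if weight_variability(paddle, fill) > limit:
            hits += 1
    return hits / n
```

Factor combinations whose exceedance probability stays acceptably low delimit the design space in the sense used by the abstract.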

  19. The minimum area requirements (MAR) for giant panda: an empirical study

    PubMed Central

    Qing, Jing; Yang, Zhisong; He, Ke; Zhang, Zejun; Gu, Xiaodong; Yang, Xuyu; Zhang, Wen; Yang, Biao; Qi, Dunwu; Dai, Qiang

    2016-01-01

Habitat fragmentation can reduce population viability, especially for area-sensitive species. The Minimum Area Requirements (MAR) of a population is the area required for the population’s long-term persistence. In this study, the response of giant panda occupancy probability to habitat patch size was studied in five of the six mountain ranges inhabited by the giant panda, which together cover over 78% of the global distribution of giant panda habitat. The probability of giant panda occurrence was positively associated with habitat patch area, and the observed increase in occupancy probability with patch size was higher than that due to passive sampling alone. These results suggest that the giant panda is an area-sensitive species. The MAR for the giant panda was estimated to be 114.7 km² based on analysis of its occupancy probability. Giant panda habitats appear more fragmented in the three southern mountain ranges, while they are larger and more continuous in the other two. Establishing corridors among habitat patches can mitigate habitat fragmentation, but expanding habitat patch sizes is necessary in the mountain ranges where fragmentation is most intensive. PMID:27929520
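An MAR estimate of this kind typically comes from inverting a fitted occupancy-area relationship. A minimal sketch, assuming a logistic occupancy model in log patch area; the coefficients below are invented for illustration and are not the study's fitted values:

```python
import math

# Hypothetical occupancy-area model: psi(A) = logistic(b0 + b1 * ln(A)).
# Coefficients are invented; the study's MAR (114.7 km^2) comes from its
# own fitted model, not from these numbers.
def occupancy(area_km2, b0=-4.0, b1=1.0):
    """Occupancy probability for a habitat patch of the given area."""
    x = b0 + b1 * math.log(area_km2)
    return 1.0 / (1.0 + math.exp(-x))

def mar(threshold=0.5, b0=-4.0, b1=1.0):
    """Patch area at which occupancy crosses the chosen threshold.

    Inverts the logistic: ln(A) = (logit(threshold) - b0) / b1.
    """
    logit = math.log(threshold / (1.0 - threshold))
    return math.exp((logit - b0) / b1)
```

With these invented coefficients the 50%-occupancy area is e⁴ ≈ 54.6 km²; the same inversion applied to the study's fitted model yields its reported MAR.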

  20. The minimum area requirements (MAR) for giant panda: an empirical study.

    PubMed

    Qing, Jing; Yang, Zhisong; He, Ke; Zhang, Zejun; Gu, Xiaodong; Yang, Xuyu; Zhang, Wen; Yang, Biao; Qi, Dunwu; Dai, Qiang

    2016-12-08

Habitat fragmentation can reduce population viability, especially for area-sensitive species. The Minimum Area Requirements (MAR) of a population is the area required for the population's long-term persistence. In this study, the response of giant panda occupancy probability to habitat patch size was studied in five of the six mountain ranges inhabited by the giant panda, which together cover over 78% of the global distribution of giant panda habitat. The probability of giant panda occurrence was positively associated with habitat patch area, and the observed increase in occupancy probability with patch size was higher than that due to passive sampling alone. These results suggest that the giant panda is an area-sensitive species. The MAR for the giant panda was estimated to be 114.7 km² based on analysis of its occupancy probability. Giant panda habitats appear more fragmented in the three southern mountain ranges, while they are larger and more continuous in the other two. Establishing corridors among habitat patches can mitigate habitat fragmentation, but expanding habitat patch sizes is necessary in the mountain ranges where fragmentation is most intensive.

  1. Siphon flows in isolated magnetic flux tubes. V - Radiative flows with variable ionization

    NASA Technical Reports Server (NTRS)

    Montesinos, Benjamin; Thomas, John H.

    1993-01-01

Steady siphon flows in arched isolated magnetic flux tubes in the solar atmosphere are calculated here, including radiative transfer between the flux tube and its surroundings and variable ionization of the flowing gas. It is shown that the behavior of a siphon flow is strongly determined by the degree of radiative coupling between the flux tube and its surroundings in the superadiabatic layer just below the solar surface. Critical siphon flows with adiabatic tube shocks in the downstream leg are calculated, illustrating the radiative relaxation of the temperature jump downstream of the shock. For flows in arched flux tubes reaching up to the temperature minimum, where the opacity is low, the gas inside the flux tube is much cooler than the surrounding atmosphere at the top of the arch. It is suggested that gas cooled by siphon flows contributes to the cool component of the solar atmosphere at the height of the temperature minimum implied by observations of the infrared CO bands at 4.6 and 2.3 microns.

  2. The Optical Flow Technique on the Research of Solar Non-potentiality

    NASA Astrophysics Data System (ADS)

    Liu, Ji-hong; Zhang, Hong-qi

    2010-06-01

Several optical flow techniques that have recently been applied to research on solar magnetic non-potentiality are summarized here, along with a few new non-potential parameters that can be derived from them. The main components of the work are as follows: (1) The optical flow techniques refer to a series of new image-analysis techniques that have recently arisen in research on solar magnetic non-potentiality. They mainly include LCT (local correlation tracking), ILCT (the induction equation combined with LCT), MEF (minimum energy fit), DAVE (differential affine velocity estimator), and NAVE (nonlinear affine velocity estimator). Their calculation and application conditions, merits, and deficiencies are all discussed in detail in this work. (2) Using the optical flow techniques, the transverse velocity fields of magnetic features on the solar surface may be determined from a time sequence of high-quality images currently produced by high-resolution observations, either from the ground or in space. Consequently, several new non-potential parameters may be acquired, such as the magnetic helicity flux, the induced electric field in the photosphere, and the non-potential magnetic stress (whose area integration is the Lorentz force). From these we can determine the energy flux across the photosphere and subsequently evaluate the energy budget. Previous studies based on small or special samples have shown that these parameters are probably closely related to eruptive events such as flares, filament eruptions, and coronal mass ejections.

  3. Nonstationary decision model for flood risk decision scaling

    NASA Astrophysics Data System (ADS)

    Spence, Caitlin M.; Brown, Casey M.

    2016-11-01

    Hydroclimatic stationarity is increasingly questioned as a default assumption in flood risk management (FRM), but successor methods are not yet established. Some potential successors depend on estimates of future flood quantiles, but methods for estimating future design storms are subject to high levels of uncertainty. Here we apply a Nonstationary Decision Model (NDM) to flood risk planning within the decision scaling framework. The NDM combines a nonstationary probability distribution of annual peak flow with optimal selection of flood management alternatives using robustness measures. The NDM incorporates structural and nonstructural FRM interventions and valuation of flows supporting ecosystem services to calculate expected cost of a given FRM strategy. A search for the minimum-cost strategy under incrementally varied representative scenarios extending across the plausible range of flood trend and value of the natural flow regime discovers candidate FRM strategies that are evaluated and compared through a decision scaling analysis (DSA). The DSA selects a management strategy that is optimal or close to optimal across the broadest range of scenarios or across the set of scenarios deemed most likely to occur according to estimates of future flood hazard. We illustrate the decision framework using a stylized example flood management decision based on the Iowa City flood management system, which has experienced recent unprecedented high flow episodes. The DSA indicates a preference for combining infrastructural and nonstructural adaptation measures to manage flood risk and makes clear that options-based approaches cannot be assumed to be "no" or "low regret."
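The strategy-selection idea in the decision scaling analysis (compare candidate FRM strategies across a range of scenarios and prefer the one that performs acceptably across the broadest set) can be illustrated with one common robustness measure, minimax cost. The NDM uses its own cost model and scenario weighting, so the table below is only a structural sketch with invented numbers:

```python
# Hypothetical sketch of scenario-based strategy selection: given the
# expected cost of each flood-risk-management strategy under each
# representative scenario, prefer the strategy with the best worst case.
# Strategy names and costs are invented for illustration.
def minimax_strategy(costs):
    """costs: dict mapping strategy name -> list of costs across scenarios."""
    return min(costs, key=lambda s: max(costs[s]))

costs = {
    "levee_only":        [10.0, 14.0, 25.0],
    "levee_plus_buyout": [12.0, 13.0, 16.0],
    "no_action":         [5.0, 20.0, 40.0],
}
```

Here the combined infrastructural-plus-nonstructural option wins on worst-case cost even though it is not cheapest in any single scenario, mirroring the paper's preference for combined measures.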

  4. Minimum requirements for predictive pore-network modeling of solute transport in micromodels

    NASA Astrophysics Data System (ADS)

    Mehmani, Yashar; Tchelepi, Hamdi A.

    2017-10-01

    Pore-scale models are now an integral part of analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Pore network models (PNM) are particularly attractive due to their computational efficiency. However, quantitative predictions with PNM have not always been successful. We focus on single-phase transport of a passive tracer under advection-dominated regimes and compare PNM with high-fidelity direct numerical simulations (DNS) for a range of micromodel heterogeneities. We identify the minimum requirements for predictive PNM of transport. They are: (a) flow-based network extraction, i.e., discretizing the pore space based on the underlying velocity field, (b) a Lagrangian (particle tracking) simulation framework, and (c) accurate transfer of particles from one pore throat to the next. We develop novel network extraction and particle tracking PNM methods that meet these requirements. Moreover, we show that certain established PNM practices in the literature can result in first-order errors in modeling advection-dominated transport. They include: all Eulerian PNMs, networks extracted based on geometric metrics only, and flux-based nodal transfer probabilities. Preliminary results for a 3D sphere pack are also presented. The simulation inputs for this work are made public to serve as a benchmark for the research community.
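To make the particle-transfer question concrete, the simple baseline the study critiques, flux-weighted nodal transfer, can be sketched in a few lines: a particle arriving at a pore node picks its next throat with probability proportional to outgoing flux. The study shows this rule causes first-order errors in advection-dominated transport (streamline-consistent transfer is required instead); the throat fluxes below are invented:

```python
import random

# Toy Lagrangian step in a pore network: flux-weighted choice of the
# next outgoing throat. This is the baseline practice the study finds
# inadequate for advection-dominated transport, shown only to make the
# transfer-probability concept concrete. Fluxes are invented.
def choose_throat(outgoing_fluxes, rng=random):
    """Pick an outgoing throat index with probability proportional to its flux."""
    total = sum(outgoing_fluxes)
    r = rng.random() * total
    acc = 0.0
    for i, q in enumerate(outgoing_fluxes):
        acc += q
        if r <= acc:
            return i
    return len(outgoing_fluxes) - 1
```

With outgoing fluxes of 3:1, roughly three quarters of particles take the first throat, regardless of which streamline carried them into the pore; that loss of streamline information is the source of the first-order error.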

  5. 40 CFR 63.1257 - Test methods and compliance procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...

  6. 40 CFR 63.1257 - Test methods and compliance procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...

  7. 40 CFR 63.1257 - Test methods and compliance procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...

  8. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures; it uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability of detection at 95% confidence, denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, the 90% probability flaw size, required to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median (or average) of the 29 flaw sizes and α90. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on minimum PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
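The binomial arithmetic behind the classic 29-flaw demonstration can be sketched directly: if the true POD at a flaw size is p, the probability of passing a 29-of-29 demonstration is p^29, and observing 29 detections in 29 trials demonstrates POD ≥ 0.90 at about 95% confidence. A minimal sketch of that relationship:

```python
# Binomial logic of the 29-flaw point-estimate demonstration.
def prob_passing(pod, n=29):
    """Probability that all n flaws are detected (the PPD for an n-of-n test)."""
    return pod ** n

def confidence(n=29, pod_floor=0.90):
    """Confidence that POD >= pod_floor after n detections in n trials."""
    return 1.0 - pod_floor ** n
```

`confidence()` evaluates to about 0.953, which is why 29/29 is the standard 90/95 demonstration; note also that a technique with a true POD of 0.98 still has only roughly a 56% chance of passing, which is the PPD concern the paper optimizes against.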

  9. Application of the Maximum Amplitude-Early Rise Correlation to Cycle 23

    NASA Technical Reports Server (NTRS)

    Willson, Robert M.; Hathaway, David H.

    2004-01-01

On the basis of the maximum amplitude-early rise correlation, cycle 23 could have been predicted to be about the size of the mean cycle as early as 12 mo following cycle minimum. Indeed, estimates for the size of cycle 23 throughout its rise consistently suggested a maximum amplitude that would not differ appreciably from the mean cycle, contrary to predictions based on precursor information. Because cycle 23's average slope during the rising portion of the solar cycle measured 2.4, computed as the difference between the conventional maximum (120.8) and minimum (8) amplitudes divided by the ascent duration in months (47), statistically speaking, it should be a cycle of shorter period. Hence, conventional sunspot minimum for cycle 24 should occur before December 2006, probably near July 2006 (+/-4 mo). However, if cycle 23 proves to be a statistical outlier, then conventional sunspot minimum for cycle 24 would be delayed until after July 2007, probably near December 2007 (+/-4 mo). In anticipation of cycle 24, a chart and table are provided for easy monitoring of the nearness and size of its maximum amplitude once onset has occurred (with respect to the mean cycle and using the updated maximum amplitude-early rise relationship).
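The quoted rise slope can be checked directly from the figures in the abstract:

```python
# Average rise slope of cycle 23, as defined in the abstract:
# (conventional maximum - conventional minimum amplitude) / ascent months.
max_amplitude = 120.8   # conventional smoothed sunspot maximum, cycle 23
min_amplitude = 8.0     # conventional smoothed sunspot minimum
ascent_months = 47
slope = (max_amplitude - min_amplitude) / ascent_months  # = 112.8 / 47 = 2.4
```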

  10. Ferromagnetic core valve gives rapid action on minimum energy

    NASA Technical Reports Server (NTRS)

    Larson, A. V.; Tinkham, J. P.

    1967-01-01

    Miniature solenoid valve controls propellant flow during tests on a coaxial plasma accelerator. It uses an advanced ferromagnetic core design which meets all the rapid-acting requirements with a minimum of input energy.

  11. Minimum-dissipation scalar transport model for large-eddy simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Abkar, Mahdi; Bae, Hyun J.; Moin, Parviz

    2016-08-01

    Minimum-dissipation models are a simple alternative to the Smagorinsky-type approaches to parametrize the subfilter turbulent fluxes in large-eddy simulation. A recently derived model of this type for subfilter stress tensor is the anisotropic minimum-dissipation (AMD) model [Rozema et al., Phys. Fluids 27, 085107 (2015), 10.1063/1.4928700], which has many desirable properties. It is more cost effective than the dynamic Smagorinsky model, it appropriately switches off in laminar and transitional flows, and it is consistent with the exact subfilter stress tensor on both isotropic and anisotropic grids. In this study, an extension of this approach to modeling the subfilter scalar flux is proposed. The performance of the AMD model is tested in the simulation of a high-Reynolds-number rough-wall boundary-layer flow with a constant and uniform surface scalar flux. The simulation results obtained from the AMD model show good agreement with well-established empirical correlations and theoretical predictions of the resolved flow statistics. In particular, the AMD model is capable of accurately predicting the expected surface-layer similarity profiles and power spectra for both velocity and scalar concentration.

  12. Performance of transonic fan stage with weight flow per unit annulus area of 178 kilograms per second per square meter (6.5(lb/sec)/(sq ft))

    NASA Technical Reports Server (NTRS)

    Moore, R. D.; Urasek, D. C.; Kovich, G.

    1973-01-01

    The overall and blade-element performances are presented over the stable flow operating range from 50 to 100 percent of design speed. Stage peak efficiency of 0.834 was obtained at a weight flow of 26.4 kg/sec (58.3 lb/sec) and a pressure ratio of 1.581. The stall margin for the stage was 7.5 percent based on weight flow and pressure ratio at stall and peak efficiency conditions. The rotor minimum losses were approximately equal to design except in the blade vibration damper region. Stator minimum losses were less than design except in the tip and damper regions.

  13. Seepage investigation using geophysical techniques at Coursier Lake Dam, B.C., Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirles, P.

    1997-10-01

Subsurface seepage flow at Coursier Lake Dam was identified by onshore and offshore self-potential (SP) surveys and by electrical resistivity profiles and soundings during a Deficiency Investigation by BC Hydro. For typical seepage investigations, baseline geophysical data are collected at "low-pool" level and the measurements are repeated when high hydraulic gradient conditions exist. At Coursier Lake Dam, a rather unanticipated outcome of the low-pool surveys was that significant seepage beneath the structure was detected. The low-pool results were conclusive enough that, when combined with visual inspection and observation of sinkholes on the embankment, an immediate restriction was placed on the pool elevation. Thus, because of the identified potential hazard, the remaining geophysical investigations were conducted under a "minimum-pool" reservoir level in order to complete the comparative study. The dam was therefore studied under low- and minimum-pool reservoir conditions in the spring and fall of 1993, respectively. Low-pool data indicated very high resistivities (3000 to 5000 ohm-m) throughout the embankment, indicating a coarse average grain size, probably unsaturated sands and gravels. Higher resistivities (>5000 ohm-m) were obtained within the foundation deposits along the downstream toe, indicating a combination of lower moisture content, coarser average grain size, and higher porosity than in the embankment. These electrical data indicate that subsurface conditions in the embankment and the foundation are conducive to seepage. Results from low-pool SP surveys, performed both onshore and offshore, indicate a dispersed or sheet-flow seepage beginning nearly 1100 feet upstream of the intake. The seepage source therefore apparently begins far upstream of the embankment, within the foundation deposits.

  14. Stationary zonal flows during the formation of the edge transport barrier in the JET tokamak

    DOE PAGES

    Hillesheim, J. C.; Meyer, H.; Maggi, C. F.; ...

    2016-02-10

In this study, high-spatial-resolution Doppler backscattering measurements in JET have enabled new insights into the development of the edge E_r. We observe fine-scale spatial structures in the edge E_r well with a wave number k_r ρ_i ≈ 0.4-0.8, consistent with stationary zonal flows, the characteristics of which vary with density. The zonal flow amplitude and wavelength both decrease with local collisionality, such that the zonal flow E×B shear increases. Above the minimum of the L-H transition power threshold dependence on density, the zonal flows are present during L mode and disappear following the H-mode transition, while below the minimum they are reduced below measurable amplitude during L mode, before the L-H transition.

  15. Bounds of cavitation inception in a creeping flow between eccentric cylinders rotating with a small minimum gap

    NASA Astrophysics Data System (ADS)

    Monakhov, A. A.; Chernyavski, V. M.; Shtemler, Yu.

    2013-09-01

Bounds of cavitation inception are experimentally determined in a creeping flow between eccentric cylinders, the inner one being static and the outer rotating at a constant angular velocity, Ω. The geometric configuration is additionally specified by a small minimum gap between cylinders, H, as compared with the radii of the inner and outer cylinders. For some values of H and Ω, cavitation bubbles are observed, which collect on the surface of the inner cylinder, equally distributed along the line parallel to its axis near the downstream minimum gap position. Cavitation occurs for the parameters {H, Ω} within a region bounded on the right by the cavitation inception curve, which passes through the plane origin and cannot exceed the asymptotic threshold value of the minimum gap, Ha, in whose vicinity cavitation may occur at H < Ha only for high angular rotation velocities.

  16. Supersonic minimum length nozzle design for dense gases

    NASA Technical Reports Server (NTRS)

    Aldo, Andrew C.; Argrow, Brian M.

    1993-01-01

Recently, dense gases have been investigated for many engineering applications, such as turbomachinery and wind tunnels. Supersonic nozzle design for these gases is complicated by their nonclassical behavior in the transonic flow regime. In this paper a method of characteristics (MOC) is developed for two-dimensional (planar) and, primarily, axisymmetric flow of a van der Waals gas. Using a straight sonic line assumption, a centered expansion is used to generate an inviscid wall contour of minimum length. The van der Waals results are compared to previous perfect gas results to show the real-gas effects on the flow properties and inviscid wall contours.

  17. Multimodal pressure-flow method to assess dynamics of cerebral autoregulation in stroke and hypertension.

    PubMed

    Novak, Vera; Yang, Albert C C; Lepicovsky, Lukas; Goldberger, Ary L; Lipsitz, Lewis A; Peng, Chung-Kang

    2004-10-25

    This study evaluated the effects of stroke on regulation of cerebral blood flow in response to fluctuations in systemic blood pressure (BP). The autoregulatory dynamics are difficult to assess because of the nonstationarity and nonlinearity of the component signals. We studied 15 normotensive, 20 hypertensive and 15 minor stroke subjects (48.0 +/- 1.3 years). BP and blood flow velocities (BFV) from middle cerebral arteries (MCA) were measured during the Valsalva maneuver (VM) using transcranial Doppler ultrasound. A new technique, multimodal pressure-flow analysis (MMPF), was implemented to analyze these short, nonstationary signals. MMPF analysis decomposes complex BP and BFV signals into multiple empirical modes, representing their instantaneous frequency-amplitude modulation. The empirical mode corresponding to the VM BP profile was used to construct the continuous phase diagram and to identify the minimum and maximum values from the residual BP (BPR) and BFV (BFVR) signals. The BP-BFV phase shift was calculated as the difference between the phase corresponding to the BPR and BFVR minimum (maximum) values. BP-BFV phase shifts were significantly different between groups. In the normotensive group, the BFVR minimum and maximum preceded the BPR minimum and maximum, respectively, leading to large positive values of BP-BFV shifts. In the stroke and hypertensive groups, the resulting BP-BFV phase shift was significantly smaller compared to the normotensive group. A standard autoregulation index did not differentiate the groups. The MMPF method enables evaluation of autoregulatory dynamics based on instantaneous BP-BFV phase analysis. Regulation of BP-BFV dynamics is altered with hypertension and after stroke, rendering blood flow dependent on blood pressure.

  18. An extended car-following model considering the appearing probability of truck and driver's characteristics

    NASA Astrophysics Data System (ADS)

    Rong, Ying; Wen, Huiying

    2018-05-01

In this paper, the appearing probability of a truck is introduced and an extended car-following model is presented to analyze traffic flow, based on the consideration of driver characteristics under a honking environment. The stability condition of the proposed model is obtained through linear stability analysis. In order to study the evolution of the traffic wave near the critical point, the mKdV equation is derived by the reductive perturbation method. The results show that traffic flow becomes more disordered as the appearing probability of a truck increases. Besides, the appearance of a leading truck affects not only the stability of traffic flow but also the influence of other factors on traffic flow, such as the driver's reaction and the honk effect. Their effects on traffic flow are closely correlated with the appearing probability of a truck. Finally, numerical simulations under the periodic boundary condition are carried out to verify the proposed model, and they are consistent with the theoretical findings.

  19. 42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

Table fragment: test gas (e.g., ammonia, NH3, as received, 1000 p.p.m.), flow rate (l.p.m.), number of tests, penetration (p.p.m.), and minimum life (min.). The fragmentary text further provides that, for certain respirator designs, the minimum life shall be one-half that shown for each type of gas or vapor, and that the test apparatus supplies gas or vapor at predetermined concentrations and rates of flow and has means for determining the test life.

  20. 42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

Table fragment: test gas (e.g., ammonia, NH3, as received, 1000 p.p.m.), flow rate (l.p.m.), number of tests, penetration (p.p.m.), and minimum life (min.). The fragmentary text further provides that, for certain respirator designs, the minimum life shall be one-half that shown for each type of gas or vapor, and that the test apparatus supplies gas or vapor at predetermined concentrations and rates of flow and has means for determining the test life.

  1. 42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

Table fragment: test gas (e.g., ammonia, NH3, as received, 1000 p.p.m.), flow rate (l.p.m.), number of tests, penetration (p.p.m.), and minimum life (min.). The fragmentary text further provides that, for certain respirator designs, the minimum life shall be one-half that shown for each type of gas or vapor, and that the test apparatus supplies gas or vapor at predetermined concentrations and rates of flow and has means for determining the test life.

  2. Post-fire debris-flow hazard assessment of the area burned by the 2013 Beaver Creek Fire near Hailey, central Idaho

    USGS Publications Warehouse

    Skinner, Kenneth D.

    2013-01-01

A preliminary hazard assessment was developed for debris-flow hazards in the 465-square-kilometer (115,000-acre) area burned by the 2013 Beaver Creek fire near Hailey in central Idaho. The burn area covers all or part of six watersheds and selected basins draining to the Big Wood River and is at risk of substantial post-fire erosion, such as that caused by debris flows. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the Intermountain Region in the Western United States were used to estimate the probability of debris-flow occurrence, the potential volume of debris flows, and the combined debris-flow hazard ranking along the drainage network within the burn area, and to estimate the same for analyzed drainage basins within the burn area. Input data for the empirical models included topographic parameters, soil characteristics, burn severity, and rainfall totals and intensities for (1) a 2-year-recurrence, 1-hour-duration rainfall, referred to as a 2-year storm (13 mm); (2) a 10-year-recurrence, 1-hour-duration rainfall, referred to as a 10-year storm (19 mm); and (3) a 25-year-recurrence, 1-hour-duration rainfall, referred to as a 25-year storm (22 mm). Estimated debris-flow probabilities for drainage basins upstream of 130 selected basin outlets ranged from less than 1 to 78 percent, with the probabilities increasing with each increase in storm magnitude. Probabilities were high in three of the six watersheds. For the 25-year storm, probabilities were greater than 60 percent for 11 basin outlets and ranged from 50 to 60 percent for an additional 12 basin outlets. Probability estimates for stream segments within the drainage network can vary within a basin. For the 25-year storm, probabilities for stream segments within 33 basins were higher than at the basin outlet, emphasizing the importance of evaluating the drainage network as well as basin outlets. Estimated debris-flow volumes for the three modeled storms ranged from a minimal debris-flow volume of 10 cubic meters (m3) to greater than 100,000 m3. Estimated debris-flow volumes increased with basin size and distance downstream. For the 25-year storm, estimated debris-flow volumes were greater than 100,000 m3 for 4 basins and between 50,000 and 100,000 m3 for 10 basins. The debris-flow hazard rankings did not result in the highest hazard ranking of 5, indicating that none of the basins had both a high probability of debris-flow occurrence and a high debris-flow volume estimate. The hazard ranking was 4 for one basin using the 10-year-recurrence storm model and for three basins using the 25-year-recurrence storm model. The maps presented herein may be used to prioritize areas for post-wildfire remediation efforts within the 2- to 3-year period of increased erosional vulnerability.

  3. Streamflow distribution maps for the Cannon River drainage basin, southeast Minnesota, and the St. Louis River drainage basin, northeast Minnesota

    USGS Publications Warehouse

    Smith, Erik A.; Sanocki, Chris A.; Lorenz, David L.; Jacobsen, Katrin E.

    2017-12-27

Streamflow distribution maps for the Cannon River and St. Louis River drainage basins were developed by the U.S. Geological Survey, in cooperation with the Legislative-Citizen Commission on Minnesota Resources, to illustrate relative and cumulative streamflow distributions. The Cannon River was selected to provide baseline data to assess the effects of potential surficial sand mining, and the St. Louis River was selected to determine the effects of ongoing Mesabi Iron Range mining. Each drainage basin (Cannon, St. Louis) was subdivided into nested drainage basins: the Cannon River was subdivided into 152 nested drainage basins, and the St. Louis River was subdivided into 353 nested drainage basins. For each smaller drainage basin, the estimated volumes of groundwater discharge (as base flow) and surface runoff flowing into all surface-water features were displayed under the following conditions: (1) extreme low-flow conditions, comparable to an exceedance-probability quantile of 0.95; (2) low-flow conditions, comparable to an exceedance-probability quantile of 0.90; (3) a median condition, comparable to an exceedance-probability quantile of 0.50; and (4) a high-flow condition, comparable to an exceedance-probability quantile of 0.02. Streamflow distribution maps were developed using flow-duration curve exceedance-probability quantiles in conjunction with Soil-Water-Balance model outputs; both the flow-duration curve and Soil-Water-Balance models were built upon previously published U.S. Geological Survey reports. The selected streamflow distribution maps provide a proactive water management tool for State cooperators by illustrating flow rates during a range of hydrologic conditions. 
Furthermore, after the nested drainage basins are highlighted in terms of surface-water flows, the streamflows can be evaluated in the context of meeting specific ecological flows under different flow regimes and potentially assist with decisions regarding groundwater and surface-water appropriations. Presented streamflow distribution maps are foundational work intended to support the development of additional streamflow distribution maps that include statistical constraints on the selected flow conditions.
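
    The exceedance-probability quantiles above can be read directly off a flow-duration curve. As a minimal sketch (not the report's actual methodology, which couples these quantiles with Soil-Water-Balance model outputs), the flow exceeded with probability p is the (1 - p) quantile of the daily record; the flow values below are hypothetical:

```python
# Sketch: flow-duration-curve quantiles from a daily streamflow record.
# The flow exceeded with probability p_exceed is the (1 - p_exceed) quantile.

def exceedance_flow(daily_flows, p_exceed):
    """Flow (same units as input) exceeded with probability p_exceed."""
    flows = sorted(daily_flows)                    # ascending
    rank = (1.0 - p_exceed) * (len(flows) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(flows) - 1)
    frac = rank - lo
    return flows[lo] * (1 - frac) + flows[hi] * frac   # linear interpolation

flows = [5, 8, 12, 20, 3, 7, 15, 40, 2, 9]   # hypothetical daily flows, m3/s
q95 = exceedance_flow(flows, 0.95)   # extreme low flow (exceeded 95% of the time)
q50 = exceedance_flow(flows, 0.50)   # median flow
q02 = exceedance_flow(flows, 0.02)   # high flow (exceeded 2% of the time)
```

    A real application would use a multi-decade daily record per nested basin rather than ten values.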

  4. Early irrigation systems in southeastern Arizona: the ostracode perspective

    NASA Astrophysics Data System (ADS)

    Palacios-Fest, Manuel R.; Mabry, Jonathan B.; Nials, Fred; Holmlund, James P.; Miksa, Elizabeth; Davis, Owen K.

    2001-10-01

For the first time, canal irrigation during the Early Agricultural Period (1200 BC-AD 150) in the Santa Cruz River Valley, southeastern Arizona, is documented through ostracode paleoecology. Interpretations based on ostracode paleoecology and taphonomy are supported by anthropological, sedimentological, geomorphological, and palynological information, and were used to determine the environmental history of the northern Tucson Basin during the time span represented by the sequence of canals at Las Capas (site AZ AA:12:753 ASM). We also attempt to elucidate, based on archaeological artifacts, whether the Hohokam or an earlier people built the canals. Between 3000 and 2400 radiocarbon years BP, at least three episodes of canal operation are defined by ostracode assemblages and pollen records. Modern (mid- to late 20th century) canals supported no ostracodes, probably because of temporally brief canal operation from local wells. Three stages of water management are well defined during prehistoric canal operation. Ostracode faunal associations indicate that prehistoric peoples first operated their irrigation systems in a simple, 'opportunistic' mode (diversion of ephemeral flows following storms), and later in a complex, 'functional' mode (carefully timed diversions of perennial flows). The geomorphological reconstruction indicates that these canals had a minimum length of 1.1 km and were possibly twice as long. The hydraulic reconstruction suggests that they had gradients (0.05-0.1%) similar to those of later prehistoric canals in the same valley. Discharges were also substantial: when flowing at bank-full, the largest canal provided an acre-foot of water in about 2.3 h; when flowing half-full (probably a more realistic assumption), it produced an acre-foot of water in about 8.6 h. Palynological records of the oldest canals (here identified as Features 3 and 4; 3000-2500 years BP) indicate that they were used temporarily, since riparian vegetation did not grow consistently in the area. The presence of maize (Zea sp.) pollen in the canals confirms agricultural use of the canal water. However, the low percentage of maize and weed pollen suggests limited agricultural activity at this location, consistent with the lithostratigraphy, granulometry, and ostracode paleoecology. Agricultural fields were probably located downstream of the site. Ostracode assemblages show patterns consistent with the opportunistic or functional water control method, demonstrating their value as indicators of human activity and environmental change. The transition from opportunistic to functional modes of canal operation indicates the increasing complexity of social structure in the Santa Cruz Valley during the San Pedro Phase (1200-800 BC) of the Early Agricultural Period.
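
    The reported delivery times translate directly into mean discharges using the standard acre-foot conversion (1 acre-foot ≈ 1233.48 m3); this small sketch is our own arithmetic, not part of the record:

```python
# Converting the reported water-delivery times into mean discharges.
# Assumes the standard conversion 1 acre-foot ~ 1233.48 m^3.
ACRE_FOOT_M3 = 1233.48

def discharge_m3s(acre_feet, hours):
    """Mean discharge (m^3/s) delivering `acre_feet` of water in `hours`."""
    return acre_feet * ACRE_FOOT_M3 / (hours * 3600.0)

q_bankfull = discharge_m3s(1.0, 2.3)   # largest canal flowing bank-full
q_halffull = discharge_m3s(1.0, 8.6)   # same canal flowing half-full
```

    This gives roughly 0.15 m3/s bank-full and 0.04 m3/s half-full, modest but workable irrigation discharges.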

  5. Topology for efficient information dissemination in ad-hoc networking

    NASA Technical Reports Server (NTRS)

    Jennings, E.; Okino, C. M.

    2002-01-01

In this paper, we explore the information dissemination problem in ad-hoc wireless networks. First, we analyze the probability of successful broadcast, assuming that the nodes are uniformly distributed, that the available area has a lower bound relative to the total number of nodes, and that there is zero knowledge of the overall topology of the network. Because the probability of such events is small, we are motivated to extract good graph topologies that minimize the overall number of transmissions. Three algorithms are used to generate topologies of the network with guaranteed connectivity: the minimum radius graph, the relative neighborhood graph, and the minimum spanning tree. Our simulation shows that the relative neighborhood graph has certain good graph properties that make it suitable for efficient information dissemination.
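
    Of the three topologies, the relative neighborhood graph keeps an edge (u, v) only when no third node is closer to both endpoints than they are to each other. A brute-force sketch (not the paper's code; O(n^3), adequate for small networks, with hypothetical node positions):

```python
# Relative neighborhood graph (RNG): nodes u, v are linked unless some
# witness w satisfies max(d(u,w), d(v,w)) < d(u,v).
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def relative_neighborhood_graph(points):
    """Return RNG edges as index pairs (i, j), i < j."""
    edges = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d_ij = dist(points[i], points[j])
            blocked = any(
                max(dist(points[i], points[k]), dist(points[j], points[k])) < d_ij
                for k in range(n) if k not in (i, j)
            )
            if not blocked:
                edges.append((i, j))
    return edges

nodes = [(0, 0), (1, 0), (2, 0), (1, 1)]   # hypothetical node positions
print(relative_neighborhood_graph(nodes))
```

    The RNG is always connected for distinct points and contains the minimum spanning tree, which is why it is a natural sparse broadcast backbone.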

  6. Low-flow profiles of the Tennessee River tributaries in Georgia

    USGS Publications Warehouse

    Carter, R.F.; Hopkins, E.H.; Perlman, H.A.

    1988-01-01

Low-flow information is provided for use in evaluating the capacity of streams to permit withdrawals or to accept waste loads without exceeding the limits of State water quality standards. The purpose of this report is to present the results of a compilation of available low-flow data in the form of tables and '7Q10 flow profiles' (the minimum average flow for 7 consecutive days with a 10-year recurrence interval, plotted against distance along a stream channel) for all stream reaches of the Tennessee River tributaries where sufficient data of acceptable accuracy are available. Drainage area profiles are included for all stream basins larger than 5 sq mi, except for those in a few remote areas. This report is the fifth in a series of reports that will cover all stream basins north of the Fall Line in Georgia. It includes the parts of the Tennessee River basin in Georgia. Flow records were not adjusted for diversions or other factors that cause measured flows to represent other than natural flow conditions. The 7-day minimum flow profile was omitted for stream reaches where natural flow was known to be altered significantly. (Lantz-PTT)
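
    A minimal sketch of how a 7Q10 statistic can be computed from daily records: take each year's minimum 7-consecutive-day mean flow, then estimate the value with non-exceedance probability 0.1 (10-year recurrence) from the annual series. The Weibull plotting position used here is one common convention; the report's own fitting procedure may differ:

```python
# Sketch: 7Q10 = annual minimum 7-day mean flow with a 10-year recurrence
# interval, i.e. non-exceedance probability 0.1 on the annual-minimum series.

def min_7day_mean(daily_flows):
    """Lowest 7-consecutive-day average in one year of daily flows."""
    return min(sum(daily_flows[i:i + 7]) / 7.0
               for i in range(len(daily_flows) - 6))

def seven_q_ten(annual_minima):
    """Weibull plotting position, interpolated at non-exceedance prob 0.1."""
    mins = sorted(annual_minima)
    n = len(mins)
    probs = [(i + 1) / (n + 1.0) for i in range(n)]   # non-exceedance probs
    for k in range(n - 1):
        if probs[k] <= 0.1 <= probs[k + 1]:
            frac = (0.1 - probs[k]) / (probs[k + 1] - probs[k])
            return mins[k] + frac * (mins[k + 1] - mins[k])
    return mins[0]   # record too short to interpolate: smallest minimum
```

    In practice a distribution (e.g., log-Pearson III) is usually fitted to the annual minima instead of plain interpolation.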

  7. Numerical Modeling of Surface and Volumetric Cooling using Optimal T- and Y-shaped Flow Channels

    NASA Astrophysics Data System (ADS)

    Kosaraju, Srinivas

    2017-11-01

The layout of T- and Y-shaped flow channel networks on a surface can be optimized for minimum pressure drop and pumping power. The results of the optimization are in the form of geometric parameters such as length and diameter ratios of the stem and branch sections. While these flow channels are optimized for minimum pressure drop, they can also be used for surface and volumetric cooling applications such as heat exchangers, air conditioning, and electronics cooling. In this paper, an effort has been made to study the heat transfer characteristics of multiple T- and Y-shaped flow channel configurations using numerical simulations. All configurations are subjected to the same input parameters and heat generation constraints. Comparisons are made with similar results published in the literature.

  8. Low-Current, Xenon Orificed Hollow Cathode Performance for In-Space Applications

    NASA Technical Reports Server (NTRS)

    Domonkos, Matthew T.; Patterson, Michael J.; Gallimore, Alec D.

    2002-01-01

An experimental investigation of the operating characteristics of 3.2-mm-diameter orificed hollow cathodes was conducted to examine low-current and low-flow-rate operation. Cathode power was minimized with an orifice aspect ratio of approximately one and the use of an enclosed keeper. Cathode flow rate requirements were proportional to the orifice diameter and the inverse of the orifice length. The minimum power consumption in diode mode was 10 W, and the minimum mass flow rate required for spot-mode emission was approximately 0.08 mg/s. Cathode temperature profiles were obtained using an imaging radiometer, and conduction was found to be the dominant heat transfer mechanism from the cathode tube. Orifice plate temperatures were found to be weakly dependent upon the flow rate and strongly dependent upon the current.

  9. Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps

    DOE Data Explorer

    Adam Brandt

    2015-11-15

    This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.

  10. A Model for Risk Analysis of Oil Tankers

    NASA Astrophysics Data System (ADS)

    Montewka, Jakub; Krata, Przemysław; Goerland, Floris; Kujala, Pentti

    2010-01-01

The paper presents a model for risk analysis of marine traffic, with emphasis on the two most common types of marine accident: collision and grounding. The focus is on oil tankers, as these pose the highest environmental risk. A case study in selected areas of the Gulf of Finland in ice-free conditions is presented. The model utilizes a well-founded formula for risk calculation, which combines the probability of an unwanted event with its consequences. The model is thus a block-type model, consisting of blocks for estimating the probability of collision and grounding, as well as blocks for modelling the consequences of an accident. The probability of a vessel colliding is assessed by means of a Minimum Distance To Collision (MDTC) based model. The model defines the collision zone in a novel way, using a mathematical ship motion model, and recognizes traffic flow as a non-homogeneous process. The presented calculations address the waterway crossing between Helsinki and Tallinn, where dense cross traffic is observed during certain hours. For the assessment of grounding probability, a new approach is proposed, utilizing a newly developed model in which spatial interactions between objects in different locations are recognized. A ship on a seaway and navigational obstructions may be perceived as interacting objects, and their repulsion may be modelled by a deterministic formulation. Risk due to tankers running aground is addressed for an approach fairway to an oil terminal in Sköldvik, near Helsinki. The consequences of an accident are expressed in monetary terms and concern the costs of an oil spill, based on statistics of compensations claimed from the International Oil Pollution Compensation Funds (IOPC Funds) by the parties involved.
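
    The risk formula such block models build on is simply probability times consequence, summed over accident types. A toy sketch of that combination step; every number below is a placeholder, not a value from the study:

```python
# Block-type risk combination: total risk = sum over accident types of
# (annual probability) x (consequence cost). All figures are hypothetical.

def total_risk(blocks):
    """blocks: iterable of (annual_probability, consequence_cost) pairs."""
    return sum(p * c for p, c in blocks)

scenarios = [
    (1e-3, 50e6),   # collision: P = 0.001/yr, spill cost 50 M EUR (assumed)
    (5e-4, 20e6),   # grounding: P = 0.0005/yr, spill cost 20 M EUR (assumed)
]
risk = total_risk(scenarios)   # expected annual loss, EUR
```

    The substance of the paper lies in how the probability blocks (MDTC collision model, ship-obstruction interaction model) and the consequence blocks (IOPC spill-cost statistics) are built, not in this final summation.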

  11. Reactive Resonances in N+N2 Exchange Reaction

    NASA Technical Reports Server (NTRS)

    Wang, Dunyou; Huo, Winifred M.; Dateo, Christopher E.; Schwenke, David W.; Stallcop, James R.

    2003-01-01

Rich reactive resonances are found in a 3D quantum dynamics study of the N + N2 exchange reaction using a recently developed ab initio potential energy surface. This surface is characterized by a feature in the interaction region called "Lake Eyring", that is, two symmetric transition states with a shallow minimum between them. An L2 analysis of the quasibound states associated with the shallow minimum confirms that the quasibound states associated with oscillations in all three degrees of freedom in Lake Eyring are responsible for the reactive resonances in the state-to-state reaction probabilities. The quasibound states, mostly the bending motions, give rise to strong resonance peaks, whereas other motions contribute to the bumps and shoulders in the resonance structure. The initial state reaction probability further proves that the bending motions are the dominant factors of the reaction probability and have longer lifetimes than the stretching motions. This is the first observation of reactive resonances from a "Lake Eyring" feature in a potential energy surface.

12. Minimum Winfree loop determines self-sustained oscillations in excitable Erdős-Rényi random networks.

    PubMed

    Qian, Yu; Cui, Xiaohua; Zheng, Zhigang

    2017-07-18

The investigation of self-sustained oscillations in excitable complex networks is very important for understanding various activities in brain systems, among which the exploration of the key determinants of oscillations is a challenging task. In this paper, by investigating the influence of system parameters on self-sustained oscillations in excitable Erdős-Rényi random networks (EERRNs), the minimum Winfree loop (MWL) is revealed to be the key factor determining the emergence of collective oscillations. Specifically, the one-to-one correspondence between the optimal connection probability (OCP) and the MWL length is exposed. Moreover, many important quantities, such as the lower critical connection probability (LCCP), the OCP, and the upper critical connection probability (UCCP), are determined by the MWL. Most importantly, they can be approximately predicted by network structure analysis, as verified in numerical simulations. Our results will be of great importance in understanding the key factors determining persistent activities in biological systems.

  13. Intercooler cooling-air weight flow and pressure drop for minimum drag loss

    NASA Technical Reports Server (NTRS)

    Reuter, J George; Valerino, Michael F

    1944-01-01

    An analysis has been made of the drag losses in airplane flight of cross-flow plate and tubular intercoolers to determine the cooling-air weight flow and pressure drop that give a minimum drag loss for any given cooling effectiveness and, thus, a maximum power-plant net gain due to charge-air cooling. The drag losses considered in this analysis are those due to (1) the extra drag imposed on the airplane by the weight of the intercooler, its duct, and its supports and (2) the drag sustained by the cooling air in flowing through the intercooler and its duct. The investigation covers a range of conditions of altitude, airspeed, lift-drag ratio, supercharger-pressure ratio, and supercharger adiabatic efficiency. The optimum values of cooling air pressure drop and weight flow ratio are tabulated. Curves are presented to illustrate the results of the analysis.

  14. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
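
    The MPP search can be sketched with the classic Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration in standard-normal space; this is a generic FORM illustration, not the paper's algorithm, and the limit-state function below is hypothetical:

```python
# Sketch of MPP search via the Hasofer-Lind/Rackwitz-Fiessler iteration.
# beta = distance from origin to the MPP; FORM gives Pf ~ Phi(-beta).
import math

def hlrf(g, grad_g, n, iters=50):
    """Return (beta, mpp) for limit state g(u) = 0 in standard-normal space."""
    u = [0.0] * n
    for _ in range(iters):
        gv = g(u)
        gr = grad_g(u)
        norm2 = sum(x * x for x in gr)
        # HL-RF update: u_new = [(grad.u - g(u)) / |grad|^2] * grad
        scale = (sum(a * b for a, b in zip(gr, u)) - gv) / norm2
        u = [scale * x for x in gr]
    beta = math.sqrt(sum(x * x for x in u))
    return beta, u

# Hypothetical linear limit state g(u) = 3 - u1 - u2 (failure when g < 0).
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: [-1.0, -1.0]
beta, mpp = hlrf(g, grad_g, 2)
pf = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))   # FORM estimate Phi(-beta)
```

    For a linear limit state the iteration converges in one step; the reliability sensitivities discussed in the record are derivatives of this beta with respect to design parameters.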

  15. Impact of blood flow on diffusion coefficients of the human kidney: a time-resolved ECG-triggered diffusion-tensor imaging (DTI) study at 3T.

    PubMed

    Heusch, Philipp; Wittsack, Hans-Jörg; Kröpil, Patric; Blondin, Dirk; Quentin, Michael; Klasen, Janina; Pentang, Gael; Antoch, Gerald; Lanzman, Rotem S

    2013-01-01

To evaluate the impact of renal blood flow on apparent diffusion coefficients (ADC) and fractional anisotropy (FA) using time-resolved electrocardiogram (ECG)-triggered diffusion-tensor imaging (DTI) of the human kidneys. DTI was performed in eight healthy volunteers (mean age 29.1 ± 3.2 years) using a single-slice coronal echoplanar imaging (EPI) sequence (3 b-values: 0, 50, and 300 s/mm²) at the timepoints of minimum (20 msec after the R wave) and maximum renal blood flow (200 msec after the R wave) at 3T. Following 2D motion correction, region of interest (ROI)-based analysis of cortical and medullary ADC and FA values was performed. ADC values of the renal cortex at maximum blood flow (2.6 ± 0.19 × 10⁻³ mm²/s) were significantly higher than at minimum blood flow (2.2 ± 0.11 × 10⁻³ mm²/s) (P < 0.001), while medullary ADC values did not differ significantly (maximum blood flow: 2.2 ± 0.18 × 10⁻³ mm²/s; minimum blood flow: 2.15 ± 0.14 × 10⁻³ mm²/s). FA values of the renal medulla were significantly greater at maximal blood flow (0.53 ± 0.05) than at minimal blood flow (0.47 ± 0.05) (P < 0.01). In contrast, cortical FA values were comparable at different timepoints of the cardiac cycle. ADC values in the renal cortex as well as FA values in the renal medulla are influenced by renal blood flow. This impact has to be considered when interpreting renal ADC and FA values. Copyright © 2012 Wiley Periodicals, Inc.

  16. Preferential flow, connectivity and the principle of "minimum time to equilibrium": a new perspective on environmental water flow

    NASA Astrophysics Data System (ADS)

    Zehe, E.; Blume, T.; Bloeschl, G.

    2008-12-01

Preferential/rapid flow and transport has been known as a key process in soil hydrology for more than 20 years. It seems to be the rule rather than the exception. It occurs in soils, in surface rills, and in river networks. If connective preferential pathways are present at any scale, they crucially control water flow and solute transport. Why? Is there an underlying principle? If energy is conserved, a system follows the principle of least action, i.e., it follows the trajectory that minimizes the time integral of the Lagrangian. Hydrological systems are, however, non-conservative, as surface and subsurface water flows dissipate energy. From thermodynamics it is well known that natural processes minimize the free energy of the system. For hydrological systems we suggest, therefore, that flow in a catchment arranges itself in such a way that the time to a minimum of free energy becomes minimal for a given rainfall input (disturbance) and under given constraints. Free energy in a soil is determined by potential energy and capillary energy. The pore size distribution of the soil, soil structures, depth to groundwater and, most importantly, vegetation make up the constraints. The pore size distribution determines whether potential energy or capillarity dominates the free energy of the soil system. The first term is minimal when the pore space is completely desaturated; the latter becomes minimal at soil saturation. Hence, the soil determines (a) the amount of excess (gravity) water that has to be exported from the soil to reach a minimum state of free energy and (b) whether redistribution or groundwater recharge is more efficient for reaching that equilibrium. On the other hand, the pore size distribution of the soil and the connectivity of preferential pathways (root channels, worm holes and cracks) determine flow velocities and the redistribution of water within the pore space. As water flow and groundwater recharge are fast in sandy soils and capillary energy is of minor importance, connective preferential pathways offer no advantage for an efficient transition to equilibrium in these systems. In fine-grained soils, Darcy velocities, and therefore the redistribution of water, are 2-4 orders of magnitude slower. As capillary energy dominates in these soils, an effective redistribution of water within the pore space is crucial for a fast transition of the system to an equilibrium state. Connective preferential pathways or even cracks allow a faster redistribution of water and therefore seem necessary for a fast transition into a state of minimum free energy. The suggested principle of "minimum time to equilibrium" may explain the "advantage" of preferential flow as a much more efficient dissipation of energy in fine-grained soils, and therefore why connective preferential pathways control environmental flow. From a fundamental, long-term perspective the principle may help us (a) to understand whether and why soil structures and even cracks evolve in different landscapes and climates and (b) to link soil hydrology and (landscape) ecology. Along these lines, the study will present model results to test the stated hypothesis.

  17. 40 CFR 1066.125 - Data updating, recording, and control.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

Table fragment: minimum recording frequencies for measured values, e.g., 1 Hz for the diluted exhaust flow rate from a CVS with a heat exchanger upstream of the flow measurement (40 CFR 1065.545, § 1066.425), with a higher minimum recording frequency for sample flow rates and for the diluted exhaust flow rate from a CVS without a heat exchanger upstream of the flow measurement.

  18. Probability modeling of high flow extremes in Yingluoxia watershed, the upper reaches of Heihe River basin

    NASA Astrophysics Data System (ADS)

    Li, Zhanling; Li, Zhanjie; Li, Chengcheng

    2014-05-01

    Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Such analyses of high-flow extremes have concentrated on basins in the humid and semi-humid south and east of China, whereas for the inland river basins that occupy about 35% of the country's area such studies remain scarce, partly because of limited data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high-flow extremes in the upper reach of the Heihe River basin, the second largest inland river basin in China, using the peaks-over-threshold (POT) method and the generalized Pareto distribution (GPD); the selection of the threshold and the inherent assumptions of the POT series are elaborated in detail. For comparison, other widely used probability distributions, including the generalized extreme value (GEV), lognormal, log-logistic, and gamma distributions, are employed as well. Maximum likelihood estimation is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the mean excess plot, the stability of the model parameters, the return level plot, and the independence assumption of the POT series, an optimum threshold of 340 m3/s is determined for high-flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent on the basis of the Mann-Kendall test, the Pettitt test, and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test, and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high-flow extremes in the study area. The estimated high flows for long return periods demonstrate that return level estimates become increasingly uncertain as the return period lengthens. 
The frequency of high-flow extremes exhibits a slight but not statistically significant decreasing trend from 1978 to 2008, while the intensity of such extremes increases, especially at the higher return levels.
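The POT/GPD workflow the abstract describes (exceedances over a fixed threshold, maximum-likelihood fitting, return-level estimation) can be sketched with SciPy. A minimal sketch: the 340 m3/s threshold follows the abstract, but the gamma-distributed synthetic flows and the 31-year record length are assumptions for illustration, not the Yingluoxia data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic daily flows (m^3/s) standing in for a 31-year record.
flows = rng.gamma(shape=2.0, scale=120.0, size=365 * 31)

threshold = 340.0  # threshold value taken from the abstract
exceedances = flows[flows > threshold] - threshold

# Fit the generalized Pareto distribution to the exceedances,
# with the location fixed at zero as is standard in POT analysis.
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)

# T-year return level: combine the annual exceedance rate with the
# fitted GPD quantile.
rate = len(exceedances) / 31.0       # exceedances per year
T = 100.0
p = 1.0 - 1.0 / (T * rate)
return_level = threshold + stats.genpareto.ppf(p, shape, loc=0, scale=scale)
```

On real data the threshold would be chosen via mean-excess and parameter-stability diagnostics, and the POT series would be declustered and tested for independence, as the abstract describes.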

  19. Combined-probability space and certainty or uncertainty relations for a finite-level quantum system

    NASA Astrophysics Data System (ADS)

    Sehrawat, Arun

    2017-08-01

    The Born rule assigns a probability vector (distribution) to a quantum state for a given measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which in our case are triangle inequalities. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d-level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will lie on the parametric curves. These curves are therefore quite useful for establishing an uncertainty (or certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained from the triangle inequalities.

  20. Discrete-vortex simulation of pulsating flow on a turbulent leading-edge separation bubble

    NASA Technical Reports Server (NTRS)

    Sung, Hyung Jin; Rhim, Jae Wook; Kiya, Masaru

    1992-01-01

    Studies are made of the turbulent separation bubble on a two-dimensional semi-infinite blunt plate aligned with a uniform free stream carrying a pulsating component. The discrete-vortex method is applied to simulate this flow because it represents well the unsteady motions of the turbulent shear layer and the effect of viscosity near the solid surface. The numerical simulation provides reasonable predictions when compared with experimental results. A particular forcing frequency that minimizes the reattachment length is associated with drag reduction; the most effective frequency depends on the amplified shedding frequency. The turbulent flow structure is scrutinized, including the time-mean values and fluctuations of the velocity and the surface pressure, together with correlations between the fluctuating components. A comparison between the pulsating and non-pulsating flows at the frequency giving the minimum reattachment length of the separation bubble suggests that the large-scale vortical structure is associated with the shedding frequency and the flow instabilities.

  1. On streak spacing in wall-bounded turbulent flows

    NASA Technical Reports Server (NTRS)

    Hamilton, James M.; Kim, John J.

    1993-01-01

    The present study is a continuation of the examination by Hamilton et al. of the regeneration mechanisms of near-wall turbulence and an attempt to investigate the conjecture of Waleffe et al. The basis of this study is an extension of the 'minimal channel' approach of Jimenez and Moin that emphasizes the near-wall region and reduces the complexity of the turbulent flow by considering a plane Couette flow of near minimum Reynolds number and stream-wise and span-wise extent. Reduction of the flow Reynolds number to the minimum value which will allow turbulence to be sustained has the effect of reducing the ratio of the largest scales to the smallest scales or, equivalently, of causing the near-wall region to fill more of the area between the channel walls. A plane Couette flow was chosen for study since this type of flow has a mean shear of a single sign, and at low Reynolds numbers, the two wall regions are found to share a single set of structures.

  2. A logistic regression equation for estimating the probability of a stream in Vermont having intermittent flow

    USGS Publications Warehouse

    Olson, Scott A.; Brouillette, Michael C.

    2006-01-01

    A logistic regression equation was developed for estimating the probability of a stream flowing intermittently at unregulated, rural stream sites in Vermont. These determinations can be used for a wide variety of regulatory and planning efforts at the Federal, State, regional, county and town levels, including such applications as assessing fish and wildlife habitats, wetlands classifications, recreational opportunities, water-supply potential, waste-assimilation capacities, and sediment transport. The equation will be used to create a derived product for the Vermont Hydrography Dataset having the streamflow characteristic of 'intermittent' or 'perennial.' The Vermont Hydrography Dataset is Vermont's implementation of the National Hydrography Dataset and was created at a scale of 1:5,000 based on statewide digital orthophotos. The equation was developed by relating field-verified perennial or intermittent status of a stream site during normal summer low-streamflow conditions in the summer of 2005 to selected basin characteristics of naturally flowing streams in Vermont. The database used to develop the equation included 682 stream sites with drainage areas ranging from 0.05 to 5.0 square miles. When the 682 sites were observed, 126 were intermittent (had no flow at the time of the observation) and 556 were perennial (had flowing water at the time of the observation). The results of the logistic regression analysis indicate that the probability of a stream having intermittent flow in Vermont is a function of drainage area, elevation of the site, the ratio of basin relief to basin perimeter, and the areal percentage of well- and moderately well-drained soils in the basin. Using a probability cutpoint (a lower probability indicates the site has perennial flow and a higher probability indicates the site has intermittent flow) of 0.5, the logistic regression equation correctly predicted the perennial or intermittent status of 116 test sites 85 percent of the time.
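The published equation takes the standard logistic form in the four basin characteristics named above, with a 0.5 probability cutpoint. A minimal sketch of that form follows; the coefficients are hypothetical placeholders, not the fitted Vermont values.

```python
import math

def intermittent_probability(drainage_area_mi2, elevation_ft,
                             relief_perimeter_ratio, pct_well_drained):
    """Logistic-regression form for the probability of intermittent flow.
    The coefficients below are hypothetical, for illustration only."""
    b0, b1, b2, b3, b4 = 2.0, -1.5, 0.001, -10.0, -0.02
    z = (b0 + b1 * drainage_area_mi2 + b2 * elevation_ft
         + b3 * relief_perimeter_ratio + b4 * pct_well_drained)
    return 1.0 / (1.0 + math.exp(-z))

p = intermittent_probability(0.5, 1200, 0.05, 40)
# Classify using the 0.5 cutpoint described in the abstract.
classified_intermittent = p > 0.5
```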

  3. Emergency Assessment of Postfire Debris-Flow Hazards for the 2009 Station Fire, San Gabriel Mountains, Southern California

    USGS Publications Warehouse

    Cannon, Susan H.; Gartner, Joseph E.; Rupert, Michael G.; Michael, John A.; Staley, Dennis M.; Worstell, Bruce B.

    2009-01-01

    This report presents an emergency assessment of potential debris-flow hazards from basins burned by the 2009 Station fire in Los Angeles County, southern California. Statistical-empirical models developed for postfire debris flows are used to estimate the probability and volume of debris-flow production from 678 drainage basins within the burned area and to generate maps of areas that may be inundated along the San Gabriel mountain front by the estimated volume of material. Debris-flow probabilities and volumes are estimated as combined functions of different measures of basin burned extent, gradient, and material properties in response to both a 3-hour-duration, 1-year-recurrence thunderstorm and to a 12-hour-duration, 2-year recurrence storm. Debris-flow inundation areas are mapped for scenarios where all sediment-retention basins are empty and where the basins are all completely full. This assessment provides critical information for issuing warnings, locating and designing mitigation measures, and planning evacuation timing and routes within the first two winters following the fire. Tributary basins that drain into Pacoima Canyon, Big Tujunga Canyon, Arroyo Seco, West Fork of the San Gabriel River, and Devils Canyon were identified as having probabilities of debris-flow occurrence greater than 80 percent, the potential to produce debris flows with volumes greater than 100,000 m3, and the highest Combined Relative Debris-Flow Hazard Ranking in response to both storms. The predicted high probability and large magnitude of the response to such short-recurrence storms indicates the potential for significant debris-flow impacts to any buildings, roads, bridges, culverts, and reservoirs located both within these drainages and downstream from the burned area. These areas will require appropriate debris-flow mitigation and warning efforts. 
Probabilities of debris-flow occurrence greater than 80 percent, debris-flow volumes between 10,000 and 100,000 m3, and high Combined Relative Debris-Flow Hazard Rankings were estimated in response to both short recurrence-interval (1- and 2-year) storms for all but the smallest basins along the San Gabriel mountain front between Big Tujunga Canyon and Arroyo Seco. The combination of high probabilities and large magnitudes determined for these basins indicates significant debris-flow hazards for neighborhoods along the mountain front. When the capacity of sediment-retention basins is exceeded, debris flows may be deposited in neighborhoods and streets and impact infrastructure between the mountain front and Foothill Boulevard. In addition, debris flows may be deposited in neighborhoods immediately below unprotected basins. Hazards to neighborhoods and structures at risk from these events will require appropriate debris-flow mitigation and warning efforts.

  4. An economic analysis of selected strategies for dissolved-oxygen management; Chattahoochee River, Georgia

    USGS Publications Warehouse

    Schefter, John E.; Hirsch, Robert M.

    1980-01-01

    A method for evaluating the cost-effectiveness of alternative strategies for dissolved-oxygen (DO) management is demonstrated, using the Chattahoochee River, Ga., as an example. The conceptual framework for the analysis is suggested by the economic theory of production. The minimum flow of the river and the percentage of the total waste inflow receiving nitrification are treated as two variable inputs used in the production of a given minimum DO concentration in the river. Each input has a cost: the loss of dependable peak hydroelectric generating capacity at Buford Dam associated with flow augmentation, and the cost associated with nitrification of wastes. The least-cost combination of minimum flow and waste treatment necessary to achieve a prescribed minimum DO concentration is identified. Results of the study indicate that, in some instances, the waste-assimilation capacity of the Chattahoochee River can be substituted for increased waste treatment; the associated savings in waste-treatment costs more than offset the benefits foregone through the loss of peak generating capacity at Buford Dam. The sensitivity of the results to the estimated cost of replacing peak generating capacity is examined. It is also demonstrated that a flexible approach to DO management in the Chattahoochee River may be much more cost-effective than a more rigid, institutional approach in which constraints are placed on the flow of the river and/or on waste-treatment practices. (USGS)

  5. Potential postwildfire debris-flow hazards - A prewildfire evaluation for the Jemez Mountains, north-central New Mexico

    Treesearch

    Anne C. Tillery; Jessica Haas

    2016-01-01

    Wildfire can substantially increase the probability of debris flows, a potentially hazardous and destructive form of mass wasting, in landscapes that have otherwise been stable throughout recent history. Although the exact location, extent, and severity of wildfire or subsequent rainfall intensity and duration cannot be known, probabilities of fire and debris‑flow...

  6. Impacts of coronary artery eccentricity on macro-recirculation and pressure drops using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Poon, Eric; Thondapu, Vikas; Barlis, Peter; Ooi, Andrew

    2017-11-01

    Coronary artery disease remains a major cause of mortality in developed countries, and is most often due to a localized flow-limiting stenosis, or narrowing, of coronary arteries. Patients often undergo invasive procedures such as X-ray angiography and fractional flow reserve to diagnose flow-limiting lesions. Even though such diagnostic techniques are well-developed, the effects of diseased coronary segments on local flow are still poorly understood. Therefore, this study investigated the effect of irregular geometries of diseased coronary segments on macro-recirculation and on regions of locally minimum pressure. We employed an idealized coronary artery model with a 75% diameter stenosis. By systematically adjusting the eccentricity and asymmetry of the coronary stenosis, we uncovered an increase in the size of the macro-recirculation. Most importantly, the presence of this macro-recirculation signifies a local pressure minimum (identified by the λ2 vortex-identification method). This local pressure minimum has a profound effect on the pressure drops in both the longitudinal and planar directions, with implications for the diagnosis and treatment of coronary artery disease. Supported by Australian Research Council LP150100233 and National Computational Infrastructure m45.
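The λ2 vortex-identification method mentioned above marks a point as inside a vortex core when the middle eigenvalue of S² + Ω² is negative, where S and Ω are the symmetric and antisymmetric parts of the velocity-gradient tensor. A small sketch on an illustrative velocity-gradient tensor, not data from the coronary simulations:

```python
import numpy as np

# Illustrative velocity-gradient tensor: near-solid-body rotation in the
# x-y plane with weak axial stretching.
J = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.1]])

S = 0.5 * (J + J.T)        # strain-rate tensor (symmetric part)
W = 0.5 * (J - J.T)        # rotation tensor (antisymmetric part)

# lambda_2 criterion: vortex core where the middle (second) eigenvalue
# of S^2 + W^2 is negative.
eigenvalues = np.sort(np.linalg.eigvalsh(S @ S + W @ W))
lambda2 = eigenvalues[1]
is_vortex_core = lambda2 < 0.0
```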

  7. Experimental characterization of collision avoidance in pedestrian dynamics

    NASA Astrophysics Data System (ADS)

    Parisi, Daniel R.; Negri, Pablo A.; Bruno, Luciana

    2016-08-01

    In the present paper, the avoidance behavior of pedestrians was characterized by controlled experiments. Several conflict situations were studied considering different flow rates and group sizes in crossing and head-on configurations. Pedestrians were recorded from above, and individual two-dimensional trajectories of their displacement were recovered after image processing. Lateral swaying amplitude and step lengths were measured for free pedestrians, obtaining values similar to those reported in the literature. Minimum avoidance distances were computed in two-pedestrian experiments. In the case of one pedestrian dodging an arrested one, the avoidance distance did not depend on the relative orientation of the still pedestrian with respect to the direction of motion of the first. When both pedestrians were moving, the avoidance distance in a perpendicular encounter was longer than the one obtained during a head-on approach. It was found that the mean curvature of the trajectories was linearly anticorrelated with the mean speed. Furthermore, two common avoidance maneuvers, stopping and steering, were defined from the analysis of the acceleration and curvature in single trajectories. Interestingly, steering events were more probable than stopping ones, and the probability of simultaneous steering and stopping was negligible. The results obtained in this paper can be used to validate and calibrate pedestrian dynamics models.

  8. Automated MRI segmentation for individualized modeling of current flow in the human head.

    PubMed

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria included segmentation accuracy, the difference in current flow distributions in the resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. The segmentation tool segments not just the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29%, respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. 
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  9. Multimodal pressure-flow method to assess dynamics of cerebral autoregulation in stroke and hypertension

    PubMed Central

    Novak, Vera; Yang, Albert CC; Lepicovsky, Lukas; Goldberger, Ary L; Lipsitz, Lewis A; Peng, Chung-Kang

    2004-01-01

    Background This study evaluated the effects of stroke on regulation of cerebral blood flow in response to fluctuations in systemic blood pressure (BP). The autoregulatory dynamics are difficult to assess because of the nonstationarity and nonlinearity of the component signals. Methods We studied 15 normotensive, 20 hypertensive and 15 minor stroke subjects (48.0 ± 1.3 years). BP and blood flow velocities (BFV) from middle cerebral arteries (MCA) were measured during the Valsalva maneuver (VM) using transcranial Doppler ultrasound. Results A new technique, multimodal pressure-flow analysis (MMPF), was implemented to analyze these short, nonstationary signals. MMPF analysis decomposes complex BP and BFV signals into multiple empirical modes, representing their instantaneous frequency-amplitude modulation. The empirical mode corresponding to the VM BP profile was used to construct the continuous phase diagram and to identify the minimum and maximum values from the residual BP (BPR) and BFV (BFVR) signals. The BP-BFV phase shift was calculated as the difference between the phase corresponding to the BPR and BFVR minimum (maximum) values. BP-BFV phase shifts were significantly different between groups. In the normotensive group, the BFVR minimum and maximum preceded the BPR minimum and maximum, respectively, leading to large positive values of BP-BFV shifts. Conclusion In the stroke and hypertensive groups, the resulting BP-BFV phase shift was significantly smaller compared to the normotensive group. A standard autoregulation index did not differentiate the groups. The MMPF method enables evaluation of autoregulatory dynamics based on instantaneous BP-BFV phase analysis. Regulation of BP-BFV dynamics is altered with hypertension and after stroke, rendering blood flow dependent on blood pressure. PMID:15504235
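The BP-BFV phase shift at the center of MMPF can be illustrated on clean synthetic "modes" by comparing instantaneous phases from the analytic signal; this Hilbert-phase comparison of two sinusoids is a simplified stand-in for the empirical-mode-based analysis the authors implement, and the 0.6 rad lag is an arbitrary example value.

```python
import numpy as np
from scipy.signal import hilbert

fs = 200.0                      # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
true_shift = 0.6                # example BP-BFV phase lag, radians

bp = np.sin(2 * np.pi * 1.0 * t)                # "pressure" mode
bfv = np.sin(2 * np.pi * 1.0 * t + true_shift)  # "flow velocity" mode

# Instantaneous phase via the analytic (Hilbert) signal.
phase_bp = np.unwrap(np.angle(hilbert(bp)))
phase_bfv = np.unwrap(np.angle(hilbert(bfv)))

# Average the phase difference away from the record edges, where the
# Hilbert transform is unreliable.
core = slice(len(t) // 4, 3 * len(t) // 4)
phase_shift = float(np.mean(phase_bfv[core] - phase_bp[core]))
```

A positive phase shift here corresponds to the flow-velocity mode leading the pressure mode, the pattern the abstract reports for normotensive subjects.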

  10. The volume-outcome relationship and minimum volume standards--empirical evidence for Germany.

    PubMed

    Hentschker, Corinna; Mennicken, Roman

    2015-06-01

    For decades there has been an ongoing discussion about the quality of hospital care, leading, among other things, to the introduction of minimum volume standards in various countries. In this paper, we analyze the volume-outcome relationship for patients with intact abdominal aortic aneurysm and hip fracture. We define hypothetical minimum volume standards for both conditions and assess the consequences for access to hospital services in Germany. The results show clearly that patients treated in hospitals with a higher case volume have, on average, a significantly lower probability of death for both conditions. Furthermore, we show that the hypothetical minimum volume standards do not compromise overall access, measured as changes in travel times. Copyright © 2014 John Wiley & Sons, Ltd.

  11. The impact of the minimum wage on health.

    PubMed

    Andreyeva, Elena; Ukert, Benjamin

    2018-03-07

    This study evaluates the effect of the minimum wage on risky health behaviors, healthcare access, and self-reported health. We use data from the 1993-2015 Behavioral Risk Factor Surveillance System and employ a difference-in-differences strategy that exploits time variation in new minimum wage laws across U.S. states. Results suggest that the minimum wage increases the probability of being obese and decreases daily fruit and vegetable intake, but also decreases the number of days with functional limitations, while having no impact on healthcare access. Subsample analyses reveal that the increase in weight and the decrease in fruit and vegetable intake are driven by the older population, the married, and whites. The improvement in self-reported health is especially strong among non-whites, females, and the married.

  12. Determining Coolant Flow Rate Distribution In The Fuel-Modified TRIGA Plate Reactor

    NASA Astrophysics Data System (ADS)

    Puji Hastuti, Endiah; Widodo, Surip; Darwis Isnaini, M.; Geni Rina, S.; Syaiful, B.

    2018-02-01

    The TRIGA 2000 reactor in Bandung is planned to have its fuel elements replaced, from a cylindrical uranium and zirconium-hydride (U-ZrH) alloy to a U3Si2-Al plate type of low-enriched (19.75%) uranium with a uranium density of 2.96 gU/cm3, while the reactor power is maintained at 2 MW. This change anticipates the discontinuation of TRIGA fuel element production. The selection of this plate-type fuel element is supported by the fact that such a fuel type has been produced in Indonesia and used safely in the MPR-30 since 2000. The core configuration of the plate-type-fuelled TRIGA reactor requires a coolant flow rate through each fuel element channel sufficient to meet its safety function. This paper describes the coolant flow rate distribution in the TRIGA core that meets the safety function during normal operation, physical tests, shutdown, and the initiating event of a loss of coolant flow due to power supply interruption. The design analysis employs the CAUDVAP and COOLODN computation codes. The designed coolant flow rate that meets the safety criteria of departure from nucleate boiling ratio (DNBR), onset of flow instability ratio (OFIR), and ΔΤ at onset of nucleate boiling (ONB) indicates that the minimum flow rate required to cool the plate-type-fuelled TRIGA core at 2 MW is 80 kg/s. Therefore, the operating limitation condition (OLC) for the minimum flow rate is 80 kg/s, of which 72 kg/s cools the active core, while the minimum flow rate under a coolant flow rate drop is limited to 68 kg/s with a coolant inlet temperature of 35°C. The thermohydraulic design also provides cooling for four irradiation positions (IP) and one central irradiation position (CIP), with end-fitting inner diameters (ID) of 10 mm and 20 mm, respectively.

  13. Comparison of Implicit Schemes for the Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1995-01-01

    For a computational flow simulation tool to be useful in a design environment, it must be very robust and efficient. To develop such a tool for incompressible flow applications, a number of different implicit schemes are compared for several two-dimensional flow problems in the current study. The schemes include Point-Jacobi relaxation, Gauss-Seidel line relaxation, incomplete lower-upper decomposition, and the generalized minimum residual method preconditioned with each of the three other schemes. The efficiency of the schemes is measured in terms of the computing time required to obtain a steady-state solution for the laminar flow over a backward-facing step, the flow over a NACA 4412 airfoil, and the flow over a three-element airfoil using overset grids. The flow solver used in the study is the INS2D code that solves the incompressible Navier-Stokes equations using the method of artificial compressibility and upwind differencing of the convective terms. The results show that the generalized minimum residual method preconditioned with the incomplete lower-upper factorization outperforms all other methods by at least a factor of 2.
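The best-performing combination reported, GMRES preconditioned with an incomplete LU factorization, can be sketched with SciPy's sparse solvers; the small nonsymmetric tridiagonal operator below is an illustrative stand-in for an INS2D-style discretization, not the actual flow matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric 1-D model operator (convection-diffusion flavor).
n = 200
main = 2.0 * np.ones(n)
lower = -1.0 * np.ones(n - 1)
upper = -0.8 * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner for GMRES.
ilu = spla.spilu(A)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)   # info == 0 signals convergence
residual = float(np.linalg.norm(A @ x - b))
```

The preconditioner accelerates convergence by making the iteration operate on a system close to the identity, which is the effect the abstract's factor-of-2 speedup reflects.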

  14. Almost Perfect Teleportation Using 4-PARTITE Entangled States

    NASA Astrophysics Data System (ADS)

    Prakash, H.; Chandra, N.; Prakash, R.; Shivani

    In a recent paper, N. Ba An (Phys. Rev. A 68, 022321 (2003)) proposed a scheme to teleport a single-particle state that is a superposition of coherent states |α> and |-α>, using a 4-partite state, a beam splitter, and phase shifters, and concluded that the probability of successful teleportation is only 1/4 in the limit |α| → 0 and 1/2 in the limit |α| → ∞. In this paper, we modify this scheme and find that almost perfect success can be achieved if |α|2 is appreciable. For example, for |α|2 = 5, the minimum average teleportation fidelity, defined as the minimum over arbitrary information states of the sum, over outcomes, of the probability of each outcome times the corresponding fidelity, is 0.9999.

  15. Effects of Ocean Acidification and Flow on Oxygen and pH Conditions of Developing Squid (Doryteuthis pealeii) Egg Cases

    NASA Astrophysics Data System (ADS)

    Panyi, A.; Long, M. H.; Mooney, T. A.

    2016-02-01

    Young animals found future cohorts and populations, yet these early life stages are often particularly susceptible to the conditions of the local environment in which they develop. The oxygen and pH of this critical developmental environment are likely shaped by nearby physical conditions and by the animals' own respiration. Yet in nearly all cases this microenvironment is unknown, limiting our understanding of animal tolerances to current and future ocean acidification (OA) and hypoxic conditions. This study investigated the oxygen and pH environment adjacent to and within the egg cases of a keystone species, the longfin squid Doryteuthis pealeii, under ambient and elevated CO2 (400 and 2200 ppm) and across differing water flow rates (0, 1, and 10 cm/s), using microprobes. Under both CO2 treatments, oxygen and pH in the egg-case centers dropped dramatically across development, to levels generally considered metabolically stressful even for adults. In the ambient CO2 trial, oxygen concentration reached a minimum of 4.351 µmol/L and pH reached a minimum of 7.36. In the elevated CO2 trial, oxygen concentration reached a minimum of 9.910 µmol/L and pH reached a minimum of 6.79. Flow appeared to alleviate these conditions, with the highest O2 concentrations in the egg cases exposed to 10 cm/s flow in both CO2 trials, across all age classes measured. Surprisingly, all tested egg cases hatched successfully, demonstrating that developing D. pealeii embryos have a strong tolerance for low oxygen and pH, although more unsuccessful embryos were counted in the 0 and 1 cm/s flow conditions. Further climate change could push young, keystone squid outside their physiological limits, but water flow may play a key role in mitigating developmental stress to egg-case-bound embryos by increasing available oxygen.

  16. Use of linkage mapping and centrality analysis across habitat gradients to conserve connectivity of gray wolf populations in western North America.

    PubMed

    Carroll, Carlos; McRae, Brad H; Brookes, Allen

    2012-02-01

    Centrality metrics evaluate paths between all possible pairwise combinations of sites on a landscape to rank the contribution of each site to facilitating ecological flows across the network of sites. Computational advances now allow application of centrality metrics to landscapes represented as continuous gradients of habitat quality. This avoids the binary classification of landscapes into patch and matrix required by patch-based graph analyses of connectivity. It also avoids the focus on delineating paths between individual pairs of core areas characteristic of most corridor- or linkage-mapping methods of connectivity analysis. Conservation of regional habitat connectivity has the potential to facilitate recovery of the gray wolf (Canis lupus), a species currently recolonizing portions of its historic range in the western United States. We applied 3 contrasting linkage-mapping methods (shortest path, current flow, and minimum-cost-maximum-flow) to spatial data representing wolf habitat to analyze connectivity between wolf populations in central Idaho and Yellowstone National Park (Wyoming). We then applied 3 analogous betweenness centrality metrics to analyze connectivity of wolf habitat throughout the northwestern United States and southwestern Canada to determine where it might be possible to facilitate range expansion and interpopulation dispersal. We developed software to facilitate application of centrality metrics. Shortest-path betweenness centrality identified a minimal network of linkages analogous to those identified by least-cost-path corridor mapping. Current flow and minimum-cost-maximum-flow betweenness centrality identified diffuse networks that included alternative linkages, which will allow greater flexibility in planning. Minimum-cost-maximum-flow betweenness centrality, by integrating both land cost and habitat capacity, allows connectivity to be considered within planning processes that seek to maximize species protection at minimum cost. 
Centrality analysis is relevant to conservation and landscape genetics at a range of spatial extents, but it may be most broadly applicable within single- and multispecies planning efforts to conserve regional habitat connectivity. ©2011 Society for Conservation Biology.
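The contrast drawn above between shortest-path and current-flow betweenness can be reproduced on a toy graph with NetworkX (the authors' own software and wolf-habitat data are not used here; node names and edge costs are hypothetical): shortest-path betweenness concentrates on the single least-cost route, while current-flow betweenness also credits the alternative linkage.

```python
import networkx as nx

# Toy habitat network: two routes between the Idaho and Yellowstone
# populations, one cheaper (via A-B) and one costlier (via C).
G = nx.Graph()
G.add_weighted_edges_from([
    ("Idaho", "A", 1.0), ("A", "B", 1.0), ("B", "Yellowstone", 1.0),
    ("Idaho", "C", 2.0), ("C", "Yellowstone", 2.0),
])

# Shortest-path betweenness uses edge weights as costs.
sp_bc = nx.betweenness_centrality(G, weight="weight")

# Current-flow betweenness treats weights as conductances, not costs,
# so the unweighted form is used here for the sketch.
cf_bc = nx.current_flow_betweenness_centrality(G)
```

Node A lies on the least-cost route and dominates the shortest-path ranking, whereas C receives zero shortest-path betweenness but a nonzero current-flow score, mirroring the "diffuse networks with alternative linkages" described above.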

  17. Flow variation and substrate type affect dislodgement of the freshwater polychaete, Manayunkia speciosa

    USGS Publications Warehouse

    Malakauskas, David M.; Wilson, Sarah J.; Wilzbach, Margaret A.; Som, Nicholas A.

    2013-01-01

    We quantified microscale flow forces and their ability to entrain the freshwater polychaete, Manayunkia speciosa, the intermediate host for 2 myxozoan parasites (Ceratomyxa shasta and Parvicapsula minibicornis) that cause substantial mortalities in salmonid fishes in the Pacific Northwest. In a laboratory flume, we measured the shear stress associated with 2 mean flow velocities and 3 substrates and quantified associated dislodgement of polychaetes, evaluated survivorship of dislodged polychaetes, and observed behavioral responses of the polychaetes in response to increased flow. We used a generalized linear mixed model to estimate the probability of polychaete dislodgement for treatment combinations of velocity (mean flow velocity  =  55 cm/s with a shear velocity  =  3 cm/s, mean flow velocity  =  140 cm/s with a shear velocity  =  5 cm/s) and substrate type (depositional sediments and analogs of rock faces and the filamentous alga, Cladophora). Few polychaetes were dislodged at shear velocities <3 cm/s on any substrate. Above this level of shear, probability of dislodgement was strongly affected by both substrate type and velocity. After accounting for substrate, odds of dislodgement were 8× greater at the higher flow. After accounting for velocity, probability of dislodgement was greatest from fine sediments, intermediate from rock faces, and negligible from Cladophora. Survivorship of dislodged polychaetes was high. Polychaetes exhibited a variety of behaviors for avoiding increases in flow, including extrusion of mucus, burrowing into sediments, and movement to lower-flow microhabitats. Our findings suggest that polychaete populations probably exhibit high resilience to flow-mediated disturbances.
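The reported effect of velocity ("odds of dislodgement were 8× greater at the higher flow") can be translated into probabilities with standard odds arithmetic; the baseline probability below is a hypothetical value, and only the odds ratio of 8 comes from the abstract.

```python
# Convert an odds ratio from a logistic model into probabilities.
def prob_from_odds(odds):
    return odds / (1.0 + odds)

p_low = 0.10                        # hypothetical baseline dislodgement probability
odds_low = p_low / (1.0 - p_low)
odds_high = 8.0 * odds_low          # odds ratio of 8 reported in the abstract
p_high = prob_from_odds(odds_high)  # = 8/17, about 0.47
```

Note that multiplying odds by 8 does not multiply the probability by 8; the logistic link compresses large odds ratios at high baseline probabilities.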

  18. Seven-Day Low Streamflows in the United States, 1940-2014

    EPA Pesticide Factsheets

    This map shows percentage changes in the minimum annual rate of water carried by rivers and streams across the country, based on the long-term rate of change from 1940 to 2014. Minimum streamflow is based on the consecutive seven-day period with the lowest average flow during a given year. Blue triangles represent an increase in low stream flow volumes, and brown triangles represent a decrease. Streamflow data were collected by the U.S. Geological Survey. For more information: www.epa.gov/climatechange/science/indicators

  19. Mining of high utility-probability sequential patterns from uncertain databases

    PubMed Central

    Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting

    2017-01-01

    High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as can the execution time for mining HUPSPs. Substantial experiments on both real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
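    The framework's core acceptance test, keeping a candidate pattern only when it clears both a minimum utility and a minimum probability threshold, can be sketched as follows (the candidate tuples and thresholds are invented for illustration; the paper's actual algorithm and pruning strategies are not reproduced here):

```python
def high_utility_probability_patterns(patterns, min_util, min_prob):
    """Filter candidate sequential patterns: keep those whose utility and
    existence probability both meet the minimum thresholds."""
    return [p for (p, util, prob) in patterns
            if util >= min_util and prob >= min_prob]

# Hypothetical candidates: (pattern, utility, existence probability).
candidates = [
    (("a", "b"), 120, 0.90),
    (("a", "c"), 300, 0.40),      # high utility, low probability -> pruned
    (("b", "c"), 60, 0.95),       # low utility -> pruned
    (("a", "b", "c"), 150, 0.85),
]
print(high_utility_probability_patterns(candidates, min_util=100, min_prob=0.8))
```

    The two thresholds act independently, which is what makes pruning on either dimension alone sound.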

  20. Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Le Doussal, Pierre

    2014-01-01

    Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in optimization theory, known as the "trust region subproblem" or "constrained least-squares problem". When both terms in the cost function are random, this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) has generically two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.

  1. Prediction of obliteration after gamma knife surgery for cerebral arteriovenous malformations.

    PubMed

    Karlsson, B; Lindquist, C; Steiner, L

    1997-03-01

    To define the factors of importance for the obliteration of cerebral arteriovenous malformations (AVMs), thus making prediction of the probability of obliteration possible. In 945 AVMs of a series of 1319 patients treated with the gamma knife during 1970 to 1990, the relationship between patient, AVM, and treatment parameters on the one hand and obliteration of the nidus on the other was analyzed. The obliteration rate increased with both increased minimum (lowest periphery) and average dose and decreased with increased AVM volume. The minimum dose to the AVM was the decisive dose factor for the treatment result: the higher the minimum dose, the higher the chance of total obliteration. The curve illustrating this relation increased logarithmically to a value of 87%. A higher average dose shortened the latency to AVM obliteration. For the obliterated cases, the larger the malformation, the lower the minimum dose used. This prompted us to relate the obliteration rate to the product of minimum dose and (AVM volume)^(1/3), the K index. The obliteration rate increased linearly with the K index up to a value of approximately 27; for higher K values, the obliteration rate had a constant value of approximately 80%. For the group of 273 cases treated with a minimum dose of at least 25 Gy, the obliteration rate at the study end point (defined as 2-yr latency) was 80% (95% confidence interval = 75-85%). If obliterations that occurred beyond the end point are included, the obliteration rate increased to 85% (81-89%). The probability of obliteration of AVMs after gamma knife surgery is related to both the lowest dose to the AVM and the AVM volume, and it can be predicted using the K index.
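    The K index is the product of the minimum dose and the cube root of the AVM volume. A minimal sketch of the reported dose-volume relation (the linear rise with a zero intercept up to the plateau is an assumption for illustration; the paper reports only the linear trend to K ≈ 27 and the ~80% plateau):

```python
def k_index(min_dose_gy: float, volume_cm3: float) -> float:
    """K = minimum dose x (AVM volume)^(1/3)."""
    return min_dose_gy * volume_cm3 ** (1.0 / 3.0)

def predicted_obliteration_pct(k: float, k_plateau: float = 27.0,
                               plateau_pct: float = 80.0) -> float:
    """Piecewise-linear sketch: rises linearly (zero intercept assumed)
    to the plateau value at k_plateau, then stays constant."""
    return plateau_pct if k >= k_plateau else plateau_pct * k / k_plateau

# Hypothetical treatment: 20 Gy minimum dose to an 8 cm^3 nidus.
k = k_index(min_dose_gy=20.0, volume_cm3=8.0)
print(k, predicted_obliteration_pct(k))
```

    The cube root makes K roughly a dose-times-diameter quantity, which is why larger malformations need disproportionately little extra dose to keep K constant.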

  2. 40 CFR 75.66 - Petitions to the Administrator.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... for each submission. (b) Alternative flow monitoring method petition. In cases where no location exists for installation of a flow monitor in either the stack or the ducts serving an affected unit that satisfies the minimum physical siting criteria in appendix A of this part or where installation of a flow...

  3. 40 CFR 75.66 - Petitions to the Administrator.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... for each submission. (b) Alternative flow monitoring method petition. In cases where no location exists for installation of a flow monitor in either the stack or the ducts serving an affected unit that satisfies the minimum physical siting criteria in appendix A of this part or where installation of a flow...

  4. 40 CFR 75.66 - Petitions to the Administrator.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... for each submission. (b) Alternative flow monitoring method petition. In cases where no location exists for installation of a flow monitor in either the stack or the ducts serving an affected unit that satisfies the minimum physical siting criteria in appendix A of this part or where installation of a flow...

  5. 40 CFR 75.66 - Petitions to the Administrator.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... for each submission. (b) Alternative flow monitoring method petition. In cases where no location exists for installation of a flow monitor in either the stack or the ducts serving an affected unit that satisfies the minimum physical siting criteria in appendix A of this part or where installation of a flow...

  6. A comparison between Bayes discriminant analysis and logistic regression for prediction of debris flow in southwest Sichuan, China

    NASA Astrophysics Data System (ADS)

    Xu, Wenbo; Jing, Shaocai; Yu, Wenjuan; Wang, Zhaoxian; Zhang, Guoping; Huang, Jianxi

    2013-11-01

    In this study, the high-risk debris-flow areas of Sichuan Province, Panzhihua and Liangshan Yi Autonomous Prefecture, were taken as the study areas. Using rainfall and environmental factors as the predictors and based on different prior probability combinations of debris flows, the prediction of debris flows in these areas was compared between two statistical methods: logistic regression (LR) and Bayes discriminant analysis (BDA). The comprehensive analysis shows that (a) with a mid-range prior probability, the overall predicting accuracy of BDA is higher than that of LR; (b) with equal and extreme prior probabilities, the overall predicting accuracy of LR is higher than that of BDA; and (c) regional predicting models of debris flows built on rainfall factors alone perform worse than those that also introduce environmental factors, and the predicting accuracies for occurrence and nonoccurrence of debris flows change in opposite directions as that information is supplemented.

  7. Individual Identification and Genetic Variation of Lions (Panthera leo) from Two Protected Areas in Nigeria

    PubMed Central

    Tende, Talatu; Hansson, Bengt; Ottosson, Ulf; Åkesson, Mikael; Bensch, Staffan

    2014-01-01

    This survey was conducted in two protected areas in Nigeria to genetically identify individual lions and to determine the genetic variation within and between the populations. We used faecal sample DNA, a non-invasive alternative to the risky and laborious task of taking samples directly from the animals, often preceded by catching and immobilization. Data collection in Yankari Game Reserve (YGR) spanned a period of five years (2008–2012), whereas data in Kainji Lake National Park (KLNP) were gathered over a period of three years (2009, 2010 and 2012). We identified a minimum of eight individuals (2 males, 3 females, 3 unknown) from YGR and a minimum of ten individuals (7 males, 3 females) from KLNP. The two populations were found to be genetically distinct as shown by the relatively high fixation index (FST = 0.17), with each population exhibiting signs of inbreeding (YGR FIS = 0.49, KLNP FIS = 0.38). The genetic differentiation between the Yankari and Kainji lions is assumed to result from large spatial geographic distance and physical barriers reducing gene flow between these two remaining wild lion populations in Nigeria. To mitigate the probable inbreeding depression in the lion populations within Nigeria it might be important to transfer lions between parks or reserves or to reintroduce lions from zoos back to the wild. PMID:24427283

  8. Individual identification and genetic variation of lions (Panthera leo) from two protected areas in Nigeria.

    PubMed

    Tende, Talatu; Hansson, Bengt; Ottosson, Ulf; Akesson, Mikael; Bensch, Staffan

    2014-01-01

    This survey was conducted in two protected areas in Nigeria to genetically identify individual lions and to determine the genetic variation within and between the populations. We used faecal sample DNA, a non-invasive alternative to the risky and laborious task of taking samples directly from the animals, often preceded by catching and immobilization. Data collection in Yankari Game Reserve (YGR) spanned a period of five years (2008–2012), whereas data in Kainji Lake National Park (KLNP) were gathered over a period of three years (2009, 2010 and 2012). We identified a minimum of eight individuals (2 males, 3 females, 3 unknown) from YGR and a minimum of ten individuals (7 males, 3 females) from KLNP. The two populations were found to be genetically distinct as shown by the relatively high fixation index (FST = 0.17), with each population exhibiting signs of inbreeding (YGR FIS = 0.49, KLNP FIS = 0.38). The genetic differentiation between the Yankari and Kainji lions is assumed to result from large spatial geographic distance and physical barriers reducing gene flow between these two remaining wild lion populations in Nigeria. To mitigate the probable inbreeding depression in the lion populations within Nigeria it might be important to transfer lions between parks or reserves or to reintroduce lions from zoos back to the wild.

  9. Transition of unsteady velocity profiles with reverse flow

    NASA Astrophysics Data System (ADS)

    Das, Debopam; Arakeri, Jaywant H.

    1998-11-01

    This paper deals with the stability and transition to turbulence of wall-bounded unsteady velocity profiles with reverse flow. Such flows occur, for example, during unsteady boundary layer separation and in oscillating pipe flow. The main focus is on results from experiments in time-developing flow in a long pipe, which is decelerated rapidly. The flow is generated by the controlled motion of a piston. We obtain analytical solutions for laminar flow in the pipe and in a two-dimensional channel for arbitrary piston motions. By changing the piston speed and the length of piston travel we cover a range of values of Reynolds number and boundary layer thickness. The velocity profiles during the decay of the flow are unsteady with reverse flow near the wall, and are highly unstable due to their inflectional nature. In the pipe, we observe from flow visualization that the flow becomes unstable with the formation of what appears to be a helical vortex. The wavelength of the instability is ≈3δ, where δ is the average boundary layer thickness, the average being taken over the time the flow is unstable. The time of formation of the vortices scales with the average convective time scale and is ≈39/(Δu/δ), where Δu = (u_max − u_min) and u_max, u_min and δ are the maximum velocity, minimum velocity and boundary layer thickness, respectively, at each instant of time. The time to transition to turbulence is ≈33/(Δu/δ). Quasi-steady linear stability analysis of the velocity profiles brings out two important results. First, the stability characteristics of velocity profiles with reverse flow near the wall collapse when scaled with the above variables. Second, the wavenumber corresponding to maximum growth does not change much during the instability even though the velocity profile does change substantially. Using the results from the experiments and the stability analysis, we are able to explain many aspects of transition in oscillating pipe flow. We postulate that unsteady boundary layer separation at high Reynolds numbers is probably related to instability of the reverse flow region.

  10. Optimal plane search method in blood flow measurements by magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Bargiel, Pawel; Orkisz, Maciej; Przelaskowski, Artur; Piatkowska-Janko, Ewa; Bogorodzki, Piotr; Wolak, Tomasz

    2004-07-01

    This paper offers an algorithm for determining blood flow parameters in the neck vessel segments using a single (optimal) measurement plane instead of the usual approach involving four planes orthogonal to the artery axis. This new approach aims at significantly shortening the time required to complete measurements using nuclear magnetic resonance techniques. Based on a defined error function, the algorithm scans the solution space to find the minimum of the error function, and thus determines a single plane characterized by a minimum measurement error, which allows for an accurate measurement of blood flow in the four carotid arteries. The paper also describes a practical implementation of this method (as a module of a larger imaging and measurement system), including preliminary research results.
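    A plane search of this kind reduces to minimizing an error function over discretized plane orientations. The error function below is a stand-in (the paper's actual error function is not reproduced here); only the exhaustive-scan structure of the search is being illustrated:

```python
import math

def best_plane(error_fn, n_steps=90):
    """Exhaustively scan plane orientations (tilt, azimuth) and return the
    (error, tilt, azimuth) triple minimizing the supplied error function."""
    best = None
    for i in range(n_steps):
        tilt = math.pi * i / n_steps            # 0..pi
        for j in range(n_steps):
            az = 2.0 * math.pi * j / n_steps    # 0..2*pi
            err = error_fn(tilt, az)
            if best is None or err < best[0]:
                best = (err, tilt, az)
    return best

# Stand-in error function with a known minimum, for demonstration only.
target_tilt, target_az = math.pi / 3, math.pi / 2
err_fn = lambda t, a: (t - target_tilt) ** 2 + (a - target_az) ** 2
err, tilt, az = best_plane(err_fn)
print(round(tilt, 3), round(az, 3))
```

    In practice a coarse grid like this would be followed by a local refinement around the best cell.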

  11. Design of a High Intensity Turbulent Combustion System

    DTIC Science & Technology

    2015-05-01

    Figure 2.3: Velocity measurement on the nth repetition of a turbulent-flow experiment [1], u(t) = U + u′(t). … an event such as P ≈ [U < N m s⁻¹]. The random variable U can be characterized by its probability density function (PDF). The probability of an event …

  12. Flood frequency analysis for nonstationary annual peak records in an urban drainage basin

    USGS Publications Warehouse

    Villarini, G.; Smith, J.A.; Serinaldi, F.; Bales, J.; Bates, P.D.; Krajewski, W.F.

    2009-01-01

    Flood frequency analysis in urban watersheds is complicated by nonstationarities of annual peak records associated with land use change and evolving urban stormwater infrastructure. In this study, a framework for flood frequency analysis is developed based on the Generalized Additive Models for Location, Scale and Shape parameters (GAMLSS), a tool for modeling time series under nonstationary conditions. GAMLSS is applied to annual maximum peak discharge records for Little Sugar Creek, a highly urbanized watershed which drains the urban core of Charlotte, North Carolina. It is shown that GAMLSS is able to describe the variability in the mean and variance of the annual maximum peak discharge by modeling the parameters of the selected parametric distribution as a smooth function of time via cubic splines. Flood frequency analyses for Little Sugar Creek (at a drainage area of 110 km²) show that the maximum flow with a 0.01 annual probability (corresponding to the 100-year flood peak under stationary conditions) over the 83-year record has ranged from a minimum unit discharge of 2.1 m³ s⁻¹ km⁻² to a maximum of 5.1 m³ s⁻¹ km⁻². An alternative characterization can be made by examining the estimated return interval of the peak discharge that would have an annual exceedance probability of 0.01 under the assumption of stationarity (3.2 m³ s⁻¹ km⁻²). Under nonstationary conditions, alternative definitions of return period should be adopted. Under the GAMLSS model, the return interval of an annual peak discharge of 3.2 m³ s⁻¹ km⁻² ranges from a maximum value of more than 5000 years in 1957 to a minimum value of almost 8 years for the present time (2007). The GAMLSS framework is also used to examine the links between population trends and flood frequency, as well as trends in annual maximum rainfall. These analyses are used to examine evolving flood frequency over future decades. ©2009 Elsevier Ltd.
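    The return-interval arithmetic above follows from T = 1/p, where p is the annual exceedance probability of a given discharge. A sketch using a fixed-parameter Gumbel distribution (the parameter values are hypothetical, and the study fits GAMLSS, not the simple shift shown here; only the "same discharge, new return period" mechanism is illustrated):

```python
import math

def gumbel_exceedance(x, mu, beta):
    """Annual probability that the peak discharge exceeds x under a
    Gumbel (EV1) distribution with location mu and scale beta."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))

def return_period(p_exceed):
    """Return interval in years from an annual exceedance probability."""
    return 1.0 / p_exceed

# Hypothetical stationary fit: the 1%-exceedance (100-year) flood quantile.
mu, beta = 1.0, 0.5                        # unit discharge parameters
q = mu - beta * math.log(-math.log(0.99))  # quantile with 1% exceedance
p = gumbel_exceedance(q, mu, beta)
print(round(return_period(p)))             # ~100 years

# Shift the location parameter (nonstationarity): same discharge,
# much shorter return period.
print(round(return_period(gumbel_exceedance(q, mu + 0.3, beta))))
```

    This is the sense in which a fixed discharge can correspond to a 5000-year event in one decade and an 8-year event in another.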

  13. The probability of lava inundation at the proposed and existing Kulani prison sites

    USGS Publications Warehouse

    Kauahikaua, J.P.; Trusdell, F.A.; Heliker, C.C.

    1998-01-01

    The State of Hawai`i has proposed building a 2,300-bed medium-security prison about 10 km downslope from the existing Kulani medium-security correctional facility. The proposed and existing facilities lie on the northeast rift zone of Mauna Loa, which last erupted in 1984 in this same general area. We use the best available geologic mapping and dating with GIS software to estimate the average recurrence interval between lava flows that inundate these sites. Three different methods are used to adjust the number of flows exposed at the surface for those flows that are buried, to allow a better representation of the recurrence interval. Probabilities are then computed, based on these recurrence intervals, assuming that the data match a Poisson distribution. The probability of lava inundation for the existing prison site is estimated to be 11–12% in the next 50 years. The probabilities of lava inundation for the proposed sites B and C are 2–3% and 1–2%, respectively, in the same period. The probabilities are based on estimated recurrence intervals for lava flows, which are approximately proportional to the area considered. The probability of having to evacuate the prison is certainly higher than the probability of lava entering the site. Maximum warning times between eruption and lava inundation of a site are estimated to be 24 hours for the existing prison site and 72 hours for proposed sites B and C. Evacuation plans should take these times into consideration.
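    Under the Poisson assumption stated above, the probability of at least one inundating flow in t years, given a mean recurrence interval τ, is 1 − exp(−t/τ). A sketch of that arithmetic (the recurrence interval below is hypothetical, chosen only to illustrate the formula, not the report's estimate):

```python
import math

def inundation_probability(years: float, recurrence_interval: float) -> float:
    """P(at least one inundating lava flow in `years`) under a Poisson
    process with the given mean recurrence interval (same time units)."""
    return 1.0 - math.exp(-years / recurrence_interval)

# Hypothetical mean recurrence interval of 400 years for flows reaching a site:
p = inundation_probability(50, 400)
print(round(p * 100, 1))  # percent chance over the next 50 years
```

    For recurrence intervals much longer than the window, 1 − exp(−t/τ) ≈ t/τ, so the probability is roughly proportional to the site area, as the abstract notes.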

  14. Potential postwildfire debris-flow hazards—A prewildfire evaluation for the Jemez Mountains, north-central New Mexico

    USGS Publications Warehouse

    Tillery, Anne C.; Haas, Jessica R.

    2016-08-11

    Wildfire can substantially increase the probability of debris flows, a potentially hazardous and destructive form of mass wasting, in landscapes that have otherwise been stable throughout recent history. Although the exact location, extent, and severity of wildfire or subsequent rainfall intensity and duration cannot be known, probabilities of fire and debris-flow occurrence for given locations can be estimated with geospatial analysis and modeling. The purpose of this report is to provide information on which watersheds might constitute the most serious potential debris-flow hazards in the event of a large-scale wildfire and subsequent rainfall in the Jemez Mountains. Potential probabilities and estimated volumes of postwildfire debris flows in both the unburned and previously burned areas of the Jemez Mountains and surrounding areas were estimated using empirical debris-flow models developed by the U.S. Geological Survey in combination with fire behavior and burn probability models developed by the U.S. Forest Service. Of the 4,998 subbasins modeled for this study, computed debris-flow probabilities in 671 subbasins were greater than 80 percent in response to the 100-year recurrence interval, 30-minute duration rainfall event. These subbasins ranged in size from 0.01 to 6.57 square kilometers (km²), with an average area of 0.29 km², and were mostly steep, upstream tributaries to larger channels in the area. Modeled debris-flow volumes in 465 subbasins were greater than 10,000 cubic meters (m³), and 14 of those subbasins had modeled debris-flow volumes greater than 100,000 m³. The rankings of integrated relative debris-flow hazard indexes for each subbasin were generated by multiplying the individual subbasin values for debris-flow volume, debris-flow probability, and average burn probability.
    The subbasins with integrated hazard index values in the top 2 percent typically are large, upland tributaries to canyons and channels primarily in the Upper Rio Grande and Rio Grande-Santa Fe watershed areas. No subbasins in this group have basin areas less than 1.0 km². Many of these areas already had significant mass-wasting episodes following the Las Conchas Fire in 2011. Other subbasins with integrated hazard index values in the top 2 percent are scattered throughout the Jemez River watershed area, including some subbasins in the interior of the Valles Caldera. Only a few subbasins in the top integrated hazard index group are in the Rio Chama watershed area. This prewildfire assessment approach is valuable to resource managers because the analysis of the debris-flow threat is made before a wildfire occurs, which facilitates prewildfire management, planning, and mitigation. In north-central New Mexico, widespread watershed restoration efforts are being done to safeguard vital watersheds against the threat of catastrophic wildfire. This study was designed to help select ideal locations for the restoration efforts that could have the best return on investment.
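    The integrated hazard index described above is the product of three per-subbasin values. A minimal sketch of that ranking (the subbasin names and values are invented for illustration):

```python
def integrated_hazard_ranking(subbasins):
    """Rank subbasins by debris-flow volume x debris-flow probability x
    average burn probability, highest index first."""
    indexed = [(name, vol * p_df * p_burn)
               for name, vol, p_df, p_burn in subbasins]
    return sorted(indexed, key=lambda t: t[1], reverse=True)

# Hypothetical subbasins: (name, volume m^3, debris-flow prob, burn prob).
basins = [
    ("SB-001", 120_000, 0.85, 0.30),
    ("SB-002", 15_000, 0.95, 0.60),
    ("SB-003", 60_000, 0.40, 0.10),
]
for name, idx in integrated_hazard_ranking(basins):
    print(name, round(idx))
```

    Multiplying the three terms means a subbasin must score on all of volume, debris-flow probability, and burn probability to rank highly; a zero in any one term removes it from consideration.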

  15. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. 
    The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivities: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.

  16. On the Importance of Cycle Minimum in Sunspot Cycle Prediction

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.

    1996-01-01

    The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, inferring that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.

  17. Minimum-domain impulse theory for unsteady aerodynamic force

    NASA Astrophysics Data System (ADS)

    Kang, L. L.; Liu, L. Q.; Su, W. D.; Wu, J. Z.

    2018-01-01

    We extend the impulse theory for unsteady aerodynamics from its classic global form to a finite-domain formulation and then to a minimum-domain form, and from incompressible to compressible flows. For incompressible flow, the minimum-domain impulse theory raises the finding of Li and Lu ["Force and power of flapping plates in a fluid," J. Fluid Mech. 712, 598-613 (2012)] to a theorem: the entire force with a discrete wake is completely determined by only the time rate of impulse of those vortical structures still connecting to the body, along with the Lamb-vector integral thereof that captures the contribution of all the remaining disconnected vortical structures. For compressible flows, we find that the global form in terms of the curl of momentum ∇ × (ρu), obtained by Huang [Unsteady Vortical Aerodynamics (Shanghai Jiaotong University Press, 1994)], can be generalized to an arbitrary finite domain, but the formula is cumbersome and in general ∇ × (ρu) no longer has discrete structures, so no minimum-domain theory exists. Nevertheless, as the measure of the transverse process only, the unsteady field of vorticity ω or ρω may still have a discrete wake. This leads to a minimum-domain compressible vorticity-moment theory in terms of ρω (though it is beyond the classic concept of impulse). These new findings and applications have been confirmed by our numerical experiments. The results not only open an avenue to combining the theory with computation and experiment in wide applications but also reveal a physical truth: it is no longer necessary to account for all wake vortical structures in computing the force and moment.

  18. Hydrogeologic unit flow characterization using transition probability geostatistics.

    PubMed

    Jones, Norman L; Walker, Justin R; Carle, Steven F

    2005-01-01

    This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has some advantages over traditional indicator kriging methods including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining upward sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids and/or grids with nonuniform cell thicknesses.
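    The juxtapositional tendencies mentioned above are encoded in facies-to-facies transition probabilities as a function of lag. A toy discrete analogue (the three-facies one-step matrix is invented; the production method uses Carle's continuous-lag formulation, not this sketch):

```python
def mat_mult(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transition_at_lag(t, lag):
    """Transition probabilities after `lag` unit steps of a Markov chain."""
    out = t
    for _ in range(lag - 1):
        out = mat_mult(out, t)
    return out

# Hypothetical one-step vertical transition matrix for sand/silt/clay,
# biased to mimic a fining-upward tendency (sand -> silt -> clay).
T = [[0.6, 0.3, 0.1],
     [0.1, 0.6, 0.3],
     [0.2, 0.1, 0.7]]
T5 = transition_at_lag(T, 5)
print([round(p, 3) for p in T5[0]])  # sand row after 5 steps
```

    At short lags the matrix preserves the imposed ordering preferences; at long lags each row relaxes toward the stationary facies proportions, which is the behavior the geostatistical method exploits.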

  19. An Analysis of Control Requirements and Control Parameters for Direct-Coupled Turbojet Engines

    NASA Technical Reports Server (NTRS)

    Novik, David; Otto, Edward W.

    1947-01-01

    Requirements of an automatic engine control, as affected by engine characteristics, have been analyzed for a direct-coupled turbojet engine. Control parameters for various conditions of engine operation are discussed. A hypothetical engine control is presented to illustrate the use of these parameters. An adjustable speed governor was found to offer a desirable method of over-all engine control. The selection of a minimum value of fuel flow was found to offer a means of preventing unstable burner operation during steady-state operation. Until satisfactory high-temperature-measuring devices are developed, air-fuel ratio is considered to be a satisfactory acceleration-control parameter for the attainment of the maximum acceleration rates consistent with safe turbine temperatures. No danger of unstable burner operation exists during acceleration if a temperature-limiting acceleration control is assumed to be effective. Deceleration was found to be accompanied by the possibility of burner blow-out even if a minimum fuel-flow control that prevents burner blow-out during steady-state operation is assumed to be effective. Burner blow-out during deceleration may be eliminated by varying the value of minimum fuel flow as a function of compressor-discharge pressure, but in no case should the fuel flow be allowed to fall below the value required for steady-state burner operation.
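    The deceleration fix described above, varying the minimum fuel flow with compressor-discharge pressure, amounts to a scheduled lower limit on the fuel command. A sketch with an invented two-point schedule and linear interpolation (the pressures and flows are hypothetical, not values from the 1947 report):

```python
def min_fuel_flow(p_cd, schedule):
    """Minimum allowable fuel flow as a function of compressor-discharge
    pressure, via linear interpolation on a sorted (pressure, flow) schedule."""
    pts = sorted(schedule)
    if p_cd <= pts[0][0]:
        return pts[0][1]
    if p_cd >= pts[-1][0]:
        return pts[-1][1]
    for (p0, f0), (p1, f1) in zip(pts, pts[1:]):
        if p0 <= p_cd <= p1:
            return f0 + (f1 - f0) * (p_cd - p0) / (p1 - p0)

# Hypothetical schedule: (compressor-discharge pressure, min fuel flow).
schedule = [(20.0, 300.0), (60.0, 900.0)]
print(min_fuel_flow(40.0, schedule))  # midway -> 600.0
```

    During deceleration the fuel command would be clamped to max(commanded flow, min_fuel_flow(p_cd)), keeping the burner above its blow-out limit as pressure falls.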

  20. Statistical evaluation of metal fill widths for emulated metal fill in parasitic extraction methodology

    NASA Astrophysics Data System (ADS)

    J-Me, Teh; Noh, Norlaili Mohd.; Aziz, Zalina Abdul

    2015-05-01

    In the chip industry today, the key goal of a chip development organization is to develop and market chips within a short time frame to gain a foothold on market share. This paper proposes a design flow around parasitic extraction to improve design cycle time. The proposed design flow uses metal fill emulation in place of the current flow, which performs metal fill insertion directly. Replacing metal fill structures with an emulation methodology in earlier iterations of the design flow is targeted at reducing runtime in the fill insertion stage. A statistical design-of-experiments methodology based on the randomized complete block design was used to select an emulated metal fill width that improves emulation accuracy. The experiment was conducted on test cases of different sizes, ranging from 1000 gates to 21000 gates. The metal width was varied from 1 x to 6 x the minimum metal width. Two-way analysis of variance and Fisher's least significant difference test were used to analyze the interconnect net capacitance values of the different test cases. This paper presents the results of the statistical analysis for the 45 nm process technology. The recommended emulated metal fill width was found to be 4 x the minimum metal width.
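    For a randomized complete block design like the one described (metal-width treatments blocked by test case), the two-way ANOVA sum-of-squares partition can be computed directly. This is a generic sketch with made-up capacitance numbers, not the paper's data or tooling.

```python
# Hypothetical capacitance readings (fF): rows = metal-width treatments
# (1x..4x minimum width), columns = test-case blocks.
data = [
    [10.2, 11.1, 9.8],   # 1x
    [10.0, 10.9, 9.7],   # 2x
    [9.6, 10.4, 9.3],    # 3x
    [9.5, 10.3, 9.2],    # 4x
]

def rcbd_anova(data):
    """Two-way ANOVA for a randomized complete block design (no replication).

    Partitions total variation into treatment, block, and error sums of
    squares, and returns the F statistic for the treatment effect."""
    t = len(data)          # number of treatments
    b = len(data[0])       # number of blocks
    grand = sum(sum(row) for row in data) / (t * b)
    treat_means = [sum(row) / b for row in data]
    block_means = [sum(data[i][j] for i in range(t)) / t for j in range(b)]
    ss_treat = b * sum((m - grand) ** 2 for m in treat_means)
    ss_block = t * sum((m - grand) ** 2 for m in block_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_treat - ss_block
    df_treat, df_error = t - 1, (t - 1) * (b - 1)
    f_treat = (ss_treat / df_treat) / (ss_error / df_error)
    return ss_treat, ss_block, ss_error, f_treat
```

    Blocking on test case removes case-to-case capacitance differences from the error term, which is what makes the treatment (fill-width) comparison sharper.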

  1. 40 CFR Table 5 to Subpart Hhhhhhh... - Operating Parameters, Operating Limits and Data Monitoring, Recording and Compliance Frequencies...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... conductivity Continuous Every 15 minutes 3-hour block average. Regenerative Adsorber Regeneration stream flow. Minimum total flow per regeneration cycle Continuous N/A Total flow for each regeneration cycle. Adsorber bed temperature. Maximum temperature Continuously after regeneration and within 15 minutes of...

  2. 40 CFR Table 5 to Subpart Hhhhhhh... - Operating Parameters, Operating Limits and Data Monitoring, Recording and Compliance Frequencies...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... conductivity Continuous Every 15 minutes 3-hour block average. Regenerative Adsorber Regeneration stream flow. Minimum total flow per regeneration cycle Continuous N/A Total flow for each regeneration cycle. Adsorber bed temperature. Maximum temperature Continuously after regeneration and within 15 minutes of...

  3. 43 CFR 418.18 - Diversions at Derby Dam.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Dam must be managed to maintain minimum terminal flow to Lahontan Reservoir or the Carson River except... achieve an average terminal flow of 20 cfs or less during times when diversions to Lahontan Reservoir are not allowed (the flows must be averaged over the total time diversions are not allowed in that...

  4. Experimental investigation of a local recirculation photobioreactor for mass cultures of photosynthetic microorganisms.

    PubMed

    Moroni, Monica; Cicci, Agnese; Bravi, Marco

    2014-04-01

    The present work deals with the experimental fluid mechanics analysis of a wavy-bottomed cascade photobioreactor, to characterize the extent and period of the recirculatory and straight-flowing streams that establish therein as a function of reactor inclination and liquid flow rate. The substream characterization via Feature Tracking (FT) showed that a local recirculation zone establishes in each vane only at inclinations ≤6° and that its location changes from the lower (≤3°) to the upper part of each vane (6°). A straight-flowing stream flows opposite (above or below) the local recirculation stream. The recirculation time ranges from 0.86 s to 0.23 s, corresponding, respectively, to the minimum flow rate at the minimum inclination and to the maximum flow rate at the maximum inclination at which recirculation was observed. The increase in photosynthetic activity resulting from the "flash effect" was estimated to range between 102 and 113% with respect to equivalent tubular and bubble column photobioreactors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Ice Flow in Debris Aprons and Central Peaks, and the Application of Crater Counts

    NASA Astrophysics Data System (ADS)

    Hartmann, W. K.; Quantin, C.; Werner, S. C.; Popova, O.

    2009-03-01

    We apply studies of decameter-scale craters to studies of probable ice-flow-related features on Mars, to interpret both chronometry and geological processes among the features. We find losses of decameter-scale craters relative to nearby plains, probably due to sublimation.

  6. Salmon-mediated nutrient flux in selected streams of the Columbia River basin, USA

    USGS Publications Warehouse

    Kohler, Andre E.; Kusnierz, Paul C.; Copeland, Timothy; Venditti, David A.; Denny, Lytle; Gable, Josh; Lewis, Bert; Kinzer, Ryan; Barnett, Bruce; Wipfli, Mark S.

    2013-01-01

    Salmon provide an important resource subsidy and linkage between marine and land-based ecosystems. This flow of energy and nutrients is not uni-directional (i.e., upstream only); in addition to passive nutrient export via stream flow, juvenile emigrants actively export nutrients from freshwater environments. In some cases, nutrient export can exceed import. We evaluated nutrient fluxes in streams across central Idaho, USA using Chinook salmon (Oncorhynchus tshawytscha) adult escapement and juvenile production data from 1998 to 2008. We found in the majority of stream-years evaluated, adults imported more nutrients than progeny exported; however, in 3% of the years, juveniles exported more nutrients than their parents imported. On average, juvenile emigrants exported 22 ± 3% of the nitrogen and 30 ± 4% of the phosphorus their parents imported. This relationship was density dependent and nonlinear; during periods of low adult abundance juveniles were larger and exported up to 194% and 268% of parental nitrogen and phosphorus inputs, respectively. We highlight minimum escapement thresholds that appear to 1) maintain consistently positive net nutrient flux and 2) reduce the average proportional rate of export across study streams. Our results suggest a state-shift occurs when adult spawner abundance falls below a threshold to a point where the probability of juvenile nutrient exports exceeding adult imports becomes increasingly likely.

  7. Updating estimates of low streamflow statistics to account for possible trends

    NASA Astrophysics Data System (ADS)

    Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.

    2017-12-01

    Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of low flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10), calculated as the annual minimum seven-day streamflow which is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no-trend scenarios. Relative to the standard practice of using the entire available streamflow record, use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation was found to improve accuracy and reduce bias. Benefits of data subset selection approaches were greater for higher-magnitude trends and for annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach for subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, a subset of the flow record (about 30 years) can be used to update 7Q10 estimators to better reflect current streamflow conditions.
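    A nonparametric 7Q10 estimate of the sort described, the 0.10 annual non-exceedance quantile of the most recent 30 annual 7-day minimum flows, can be sketched as follows. The Weibull plotting position and linear interpolation are assumptions for illustration; the abstract does not specify the estimator's details.

```python
def seven_q_ten(annual_min_7day, window=30):
    """Nonparametric 7Q10: the flow exceeded in 9 of 10 years on average,
    estimated as the 0.10 quantile of the most recent `window` annual
    7-day minimum flows (Weibull plotting positions, linear interpolation)."""
    recent = sorted(annual_min_7day[-window:])
    n = len(recent)
    target = 0.10
    # Fractional 1-based rank whose plotting position i/(n+1) equals 0.10.
    r = target * (n + 1)
    if r <= 1:
        return recent[0]
    if r >= n:
        return recent[-1]
    lo = int(r)            # 1-based rank just below r
    frac = r - lo
    return recent[lo - 1] + frac * (recent[lo] - recent[lo - 1])
```

    Passing only the most recent years of record into the window is what makes the estimator track current, possibly nonstationary, conditions.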

  8. Extended target recognition in cognitive radar networks.

    PubMed

    Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin

    2010-01-01

    We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar line of sights (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar in an amplitude fluctuation situation. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.

  9. A Rational Approach to Determine Minimum Strength Thresholds in Novel Structural Materials

    NASA Technical Reports Server (NTRS)

    Schur, Willi W.; Bilen, Canan; Sterling, Jerry

    2003-01-01

    Design of safe and survivable structures requires the availability of guaranteed minimum strength thresholds for structural materials to enable a meaningful comparison of strength requirement and available strength. This paper develops a procedure for determining such a threshold, with a desired degree of confidence, for structural materials with little or no industrial experience. The problem arose in attempting to use a new, highly weight-efficient structural load tendon material to achieve a lightweight super-pressure balloon. The developed procedure applies to lineal (one-dimensional) structural elements. One important aspect of the formulation is that it extrapolates to expected probability distributions for long-length specimen samples from a hypothesized probability distribution obtained from a shorter-length specimen sample. The use of the developed procedure is illustrated using both real and simulated data.
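    A common way to extrapolate a failure-probability distribution from short to long specimens is the weakest-link assumption: a long tendon survives a load only if every one of its short segments survives. This sketch shows that generic relation, not necessarily the paper's exact formulation, and the numbers are hypothetical.

```python
def long_specimen_failure_prob(p_short, length_ratio):
    """Weakest-link extrapolation of a failure probability.

    If a short specimen fails at a given load with probability p_short,
    a specimen `length_ratio` times longer (with independent, identically
    distributed segments) fails with probability
        P_long = 1 - (1 - p_short) ** length_ratio."""
    return 1.0 - (1.0 - p_short) ** length_ratio

# e.g. if 1 m of tendon fails at a given load with probability 0.01,
# a 100 m tendon under these assumptions fails with probability ~0.63.
p100 = long_specimen_failure_prob(0.01, 100)
```

    The steep growth of P_long with length is why a threshold qualified on short coupons must be derated before it is applied to full-length balloon tendons.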

  10. Estimated probabilities, volumes, and inundation area depths of potential postwildfire debris flows from Carbonate, Slate, Raspberry, and Milton Creeks, near Marble, Gunnison County, Colorado

    USGS Publications Warehouse

    Stevens, Michael R.; Flynn, Jennifer L.; Stephens, Verlin C.; Verdin, Kristine L.

    2011-01-01

    During 2009, the U.S. Geological Survey, in cooperation with Gunnison County, initiated a study to estimate the potential for postwildfire debris flows to occur in the drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble, Colorado. Currently (2010), these drainage basins are unburned but could be burned by a future wildfire. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of postwildfire debris-flow occurrence and debris-flow volumes for drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble. Data for the postwildfire debris-flow models included drainage basin area; area burned and burn severity; percentage of burned area; soil properties; rainfall total and intensity for the 5- and 25-year-recurrence, 1-hour-duration-rainfall; and topographic and soil property characteristics of the drainage basins occupied by the four creeks. A quasi-two-dimensional floodplain computer model (FLO-2D) was used to estimate the spatial distribution and the maximum instantaneous depth of the postwildfire debris-flow material during debris flow on the existing debris-flow fans that issue from the outlets of the four major drainage basins. The postwildfire debris-flow probabilities at the outlet of each drainage basin range from 1 to 19 percent for the 5-year-recurrence, 1-hour-duration rainfall, and from 3 to 35 percent for 25-year-recurrence, 1-hour-duration rainfall. The largest probabilities for postwildfire debris flow are estimated for Raspberry Creek (19 and 35 percent), whereas estimated debris-flow probabilities for the three other creeks range from 1 to 6 percent. 
The estimated postwildfire debris-flow volumes at the outlet of each creek range from 7,500 to 101,000 cubic meters for the 5-year-recurrence, 1-hour-duration rainfall, and from 9,400 to 126,000 cubic meters for the 25-year-recurrence, 1-hour-duration rainfall. The largest postwildfire debris-flow volumes were estimated for Carbonate Creek and Milton Creek drainage basins, for both the 5- and 25-year-recurrence, 1-hour-duration rainfalls. Results from FLO-2D modeling of the 5-year and 25-year recurrence, 1-hour rainfalls indicate that the debris flows from the four drainage basins would reach or nearly reach the Crystal River. The model estimates maximum instantaneous depths of debris-flow material during postwildfire debris flows that exceeded 5 meters in some areas, but the differences in model results between the 5-year and 25-year recurrence, 1-hour rainfalls are small. Existing stream channels or topographic flow paths likely control the distribution of debris-flow material, and the difference in estimated debris-flow volume (about 25 percent more volume for the 25-year-recurrence, 1-hour-duration rainfall compared to the 5-year-recurrence, 1-hour-duration rainfall) does not seem to substantially affect the estimated spatial distribution of debris-flow material. Historically, the Marble area has experienced periodic debris flows in the absence of wildfire. This report estimates the probability and volume of debris flow and maximum instantaneous inundation area depths after hypothetical wildfire and rainfall. This postwildfire debris-flow report does not address the current (2010) prewildfire debris-flow hazards that exist near Marble.

  11. Environmental Assessment, Repair of the Dam at Non-Potable Reservoir #1, United States Air Force Academy, Colorado

    DTIC Science & Technology

    2015-08-01

    crimping alone is insufficient. Hydro-mulch shall be applied using a color dye and the manufacturer’s recommended rate of an organic tackifier. D...drainage areas where erosion is probable. All erosion control blanket shall be 100% biodegradable, net-free, wood fiber (excelsior) or coconut...Manufactured biodegradable stakes (6-inch minimum) or wooden stakes (8-inch minimum) shall be used to anchor any erosion materials; metal staples

  12. Development of a homogeneous pulse shape discriminating flow-cell radiation detection system

    NASA Astrophysics Data System (ADS)

    Hastie, K. H.; DeVol, T. A.; Fjeld, R. A.

    1999-02-01

    A homogeneous flow-cell radiation detection system which utilizes coincidence counting and pulse shape discrimination circuitry was assembled and tested with five commercially available liquid scintillation cocktails. Two of the cocktails, Ultima Flo (Packard) and Mono Flow 5 (National Diagnostics), have low viscosities and are intended for flow applications; three of the cocktails, Optiphase HiSafe 3 (Wallac), Ultima Gold AB (Packard), and Ready Safe (Beckman), have higher viscosities and are intended for static applications. The low viscosity cocktails were modified with 1-methylnaphthalene to increase their capability for alpha/beta pulse shape discrimination. The sample loading and pulse shape discriminator setting were optimized to give the lowest minimum detectable concentration for alpha radiation in a 30 s count time. Of the higher viscosity cocktails, Optiphase HiSafe 3 had the lowest minimum detectable activities for alpha and beta radiation, 0.2 and 0.4 Bq/ml for 233U and 90Sr/90Y, respectively, for a 30 s count time. The sample loading was 70% and the corresponding alpha/beta spillover was 5.5%. Of the low viscosity cocktails, Mono Flow 5 modified with 2.5% (by volume) 1-methylnaphthalene resulted in the lowest minimum detectable activities for alpha and beta radiation; 0.3 and 0.5 Bq/ml for 233U and 90Sr/90Y, respectively, for a 30 s count time. The sample loading was 50%, and the corresponding alpha/beta spillover was 16.6%. HiSafe 3 at a 10% sample loading was used to evaluate the system under simulated flow conditions.
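    A standard way to express a minimum detectable activity for a fixed count time is the Currie formula. This is a generic sketch with hypothetical detector parameters, not the calibration actually used in the paper.

```python
import math

def currie_mda(background_counts, count_time_s, efficiency, volume_ml):
    """Currie minimum detectable activity (Bq/ml) for a paired blank.

    The minimum detectable net counts are L_D = 2.71 + 4.65 * sqrt(B),
    where B is the expected background counts in the counting interval;
    dividing by efficiency, count time, and sample volume converts counts
    to an activity concentration."""
    ld_counts = 2.71 + 4.65 * math.sqrt(background_counts)
    return ld_counts / (efficiency * count_time_s * volume_ml)

# e.g. a 30 s count, 90% detection efficiency, 0.5 ml flow cell,
# and 10 expected background counts (all values illustrative).
mda = currie_mda(10, 30, 0.90, 0.5)
```

    Optimizing sample loading and the discriminator setting, as the authors did, amounts to jointly maximizing efficiency and minimizing background in this expression.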

  13. 40 CFR 125.84 - As an owner or operator of a new facility, what must I do to comply with this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... following requirements: (1) You must reduce your intake flow, at a minimum, to a level commensurate with... that the total design intake flow from all cooling water intake structures at your facility meets the... total design intake flow must be no greater than five (5) percent of the source water annual mean flow...

  14. CPAP Devices for Emergency Prehospital Use: A Bench Study.

    PubMed

    Brusasco, Claudia; Corradi, Francesco; De Ferrari, Alessandra; Ball, Lorenzo; Kacmarek, Robert M; Pelosi, Paolo

    2015-12-01

    CPAP is frequently used in prehospital and emergency settings. An air-flow output minimum of 60 L/min and a constant positive pressure are 2 important features for a successful CPAP device. Unlike hospital CPAP devices, which require electricity, CPAP devices for ambulance use need only an oxygen source to function. The aim of the study was to evaluate and compare on a bench model the performance of 3 orofacial mask devices (Ventumask, EasyVent, and Boussignac CPAP system) and 2 helmets (Ventukit and EVE Coulisse) used to apply CPAP in the prehospital setting. A static test evaluated air-flow output, positive pressure applied, and FIO2 delivered by each device. A dynamic test assessed airway pressure stability during simulated ventilation. Efficiency of devices was compared based on oxygen flow needed to generate a minimum air flow of 60 L/min at each CPAP setting. The EasyVent and EVE Coulisse devices delivered significantly higher mean air-flow outputs compared with the Ventumask and Ventukit under all CPAP conditions tested. The Boussignac CPAP system never reached an air-flow output of 60 L/min. The EasyVent had significantly lower pressure excursion than the Ventumask at all CPAP levels, and the EVE Coulisse had lower pressure excursion than the Ventukit at 5, 15, and 20 cm H2O, whereas at 10 cm H2O, no significant difference was observed between the 2 devices. Estimated oxygen consumption was lower for the EasyVent and EVE Coulisse compared with the Ventumask and Ventukit. Air-flow output, pressure applied, FIO2 delivered, device oxygen consumption, and ability to maintain air flow at 60 L/min differed significantly among the CPAP devices tested. Only the EasyVent and EVE Coulisse achieved the required minimum level of air-flow output needed to ensure an effective therapy under all CPAP conditions. Copyright © 2015 by Daedalus Enterprises.

  15. Improving Conceptual Models Using AEM Data and Probability Distributions

    NASA Astrophysics Data System (ADS)

    Davis, A. C.; Munday, T. J.; Christensen, N. B.

    2012-12-01

    With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion model result is often considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with the measured data within a given error bound. We have no idea whether the final model realised by the inversion satisfies the global minimum error, or whether it is simply in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, thus the minimum model has all layer parameters distributed about their mean parameter value with well-defined variance. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model, and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?" We can calculate this through use of the erf or erfc functions. 
The covariance values of the inversion become marginalised in the integration: only the main diagonal is used. Complications arise when we ask more specific questions, such as "What is the probability that the resistivity of layer 2 <= x, given that layer 1 <= y?" The probability then becomes conditional, the calculation includes covariance terms, the integration is taken over many dimensions, and the cross-correlation of parameters becomes important. To illustrate, we examine the inversion results of a Tempest AEM survey over the Uley Basin aquifers in the Eyre Peninsula, South Australia. Key aquifers include the unconfined Bridgewater Formation that overlies the Uley and Wanilla Formations, which contain Tertiary clays and Tertiary sandstone. These Formations overlie the weathered basement that defines the lower bound of the Uley Basin aquifer systems. By correlating the conductivity of the sub-surface Formation types, we pose questions such as: "What is the probability-depth of the Bridgewater Formation in the Uley South Basin?", "What is the thickness of the Uley Formation?" and "What is the most probable depth to basement?" We use these questions to generate improved conceptual hydrogeological models of the Uley Basin in order to develop better estimates of aquifer extent and the available groundwater resource.
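    The marginalised (diagonal-only) calculation described above reduces to evaluating Gaussian cumulative probabilities with the error function. A minimal sketch follows; the layer means and standard deviations are hypothetical, and treating each layer parameter as an independent Gaussian is exactly the simplification the abstract attributes to the marginalised case.

```python
import math

def prob_below(cutoff, mean, std):
    """P(X <= cutoff) for a Gaussian marginal posterior N(mean, std^2),
    via the error function: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    z = (cutoff - mean) / std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_all_below(cutoffs, means, stds):
    """Probability that every layer parameter is below its cutoff,
    using only the main diagonal of the posterior covariance
    (independence assumption; covariance terms are ignored)."""
    p = 1.0
    for c, m, s in zip(cutoffs, means, stds):
        p *= prob_below(c, m, s)
    return p
```

    Answering the conditional questions in the abstract would instead require integrating a multivariate Gaussian with the off-diagonal covariance terms included.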

  16. Reynolds-Stress and Triple-Product Models Applied to Flows with Rotation and Curvature

    NASA Technical Reports Server (NTRS)

    Olsen, Michael E.

    2016-01-01

    Predictions from Reynolds-stress and triple-product turbulence models are compared for flows with significant rotational effects. The Driver spinning-cylinder flowfield and the Zaets rotating-pipe case are to be investigated at a minimum.

  17. Analysis of trends of water quality and streamflow in the Blackstone, Branch, Pawtuxet, and Pawcatuck Rivers, Massachusetts and Rhode Island, 1979 to 2015

    USGS Publications Warehouse

    Savoie, Jennifer G.; Mullaney, John R.; Bent, Gardner C.

    2017-02-21

    Trends in long-term water-quality and streamflow data from six water-quality-monitoring stations within three major river basins in Massachusetts and Rhode Island that flow into Narragansett Bay and Little Narragansett Bay were evaluated for water years 1979–2015. In this study, conducted by the U.S. Geological Survey in cooperation with the Rhode Island Department of Environmental Management, the Rhode Island Water Resources Board, and the U.S. Environmental Protection Agency, water-quality and streamflow data were evaluated with a Weighted Regressions on Time, Discharge, and Season smoothing method, which removes the effects of year-to-year variation in water-quality conditions due to variations in streamflow (discharge). Trends in annual mean, annual median, annual maximum, and annual 7-day minimum flows at four continuous streamgages were evaluated by using a time-series smoothing method for water years 1979–2015. Water quality at all monitoring stations changed over the study period. Decreasing trends in flow-normalized nutrient concentrations and loads were observed during the period at most monitoring stations for total nitrogen, nitrite plus nitrate, and total phosphorus. Average flow-normalized loads for water years 1979–2015 decreased in the Blackstone River by up to 46 percent in total nitrogen, 17 percent in nitrite plus nitrate, and 69 percent in total phosphorus. The other rivers also had decreasing flow-normalized trends in nutrient concentrations and loads, except for the Pawtuxet River, which had an increasing trend in nitrite plus nitrate. Increasing trends in flow-normalized chloride concentrations and loads were observed during the study period at all of the rivers, with increases of more than 200 percent in the Blackstone River. Small increasing trends in annual mean daily streamflow were observed in 3 of the 4 rivers, with increases of 1.2 to 11 percent; however, the trends were not significant. 
All 4 rivers had decreases in streamflow for the annual 7-day minimums, but only 3 of the 4 rivers had decreases that were significant (34 to 54 percent). The Branch River had decreasing annual mean daily streamflow (7.5 percent) and the largest decrease in the annual 7-day minimum streamflow. The Blackstone and Pawtuxet Rivers had the largest increases in annual maximum daily flows but had decreases in the annual 7-day minimum flows.

  18. Estimated probability of postwildfire debris flows in the 2012 Whitewater-Baldy Fire burn area, southwestern New Mexico

    USGS Publications Warehouse

    Tillery, Anne C.; Matherne, Anne Marie; Verdin, Kristine L.

    2012-01-01

    In May and June 2012, the Whitewater-Baldy Fire burned approximately 1,200 square kilometers (300,000 acres) of the Gila National Forest, in southwestern New Mexico. The burned landscape is now at risk of damage from postwildfire erosion, such as that caused by debris flows and flash floods. This report presents a preliminary hazard assessment of the debris-flow potential from 128 basins burned by the Whitewater-Baldy Fire. A pair of empirical hazard-assessment models developed by using data from recently burned basins throughout the intermountain Western United States was used to estimate the probability of debris-flow occurrence and volume of debris flows along the burned area drainage network and for selected drainage basins within the burned area. The models incorporate measures of areal burned extent and severity, topography, soils, and storm rainfall intensity to estimate the probability and volume of debris flows following the fire. In response to the 2-year-recurrence, 30-minute-duration rainfall, modeling indicated that four basins have high probabilities of debris-flow occurrence (greater than or equal to 80 percent). For the 10-year-recurrence, 30-minute-duration rainfall, an additional 14 basins are included, and for the 25-year-recurrence, 30-minute-duration rainfall, an additional eight basins, 20 percent of the total, have high probabilities of debris-flow occurrence. In addition, probability analysis along the stream segments can identify specific reaches of greatest concern for debris flows within a basin. Basins with a high probability of debris-flow occurrence were concentrated in the west and central parts of the burned area, including tributaries to Whitewater Creek, Mineral Creek, and Willow Creek. Estimated debris-flow volumes ranged from about 3,000-4,000 cubic meters (m3) to greater than 500,000 m3 for all design storms modeled. 
Drainage basins with estimated volumes greater than 500,000 m3 included tributaries to Whitewater Creek, Willow Creek, Iron Creek, and West Fork Mogollon Creek. Drainage basins with estimated debris-flow volumes greater than 100,000 m3 for the 25-year-recurrence event, 24 percent of the basins modeled, also include tributaries to Deep Creek, Mineral Creek, Gilita Creek, West Fork Gila River, Mogollon Creek, and Turkey Creek, among others. Basins with the highest combined probability and volume relative hazard rankings for the 25-year-recurrence rainfall include tributaries to Whitewater Creek, Mineral Creek, Willow Creek, West Fork Gila River, West Fork Mogollon Creek, and Turkey Creek. Debris flows from Whitewater, Mineral, and Willow Creeks could affect the southwestern New Mexico communities of Glenwood, Alma, and Willow Creek. The maps presented herein may be used to prioritize areas where emergency erosion mitigation or other protective measures may be necessary within a 2- to 3-year period of vulnerability following the Whitewater-Baldy Fire. This work is preliminary and is subject to revision. It is being provided because of the need for timely "best science" information. The assessment herein is provided on the condition that neither the U.S. Geological Survey nor the U.S. Government may be held liable for any damages resulting from the authorized or unauthorized use of the assessment.

  19. Performance Mapping Studies in Redox Flow Cells

    NASA Technical Reports Server (NTRS)

    Hoberecht, M. A.; Thaller, L. H.

    1981-01-01

    Pumping power requirements in any flow battery system constitute a direct parasitic energy loss. It is therefore useful to determine the practical lower limit for reactant flow rates. Through the use of a theoretical framework based on electrochemical first principles, two different experimental flow mapping techniques were developed to evaluate and compare electrodes as a function of flow rate. For the carbon felt electrodes presently used in NASA-Lewis Redox cells, a flow rate 1.5 times greater than the stoichiometric rate seems to be the required minimum.
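    The stoichiometric flow rate referenced above follows from Faraday's law: it is the flow that delivers exactly enough reactant to carry the cell current. The sketch below applies the reported 1.5x factor; the concrete numbers (cell current, electron count, concentration) are illustrative assumptions, not values from the study.

```python
F = 96485.0  # Faraday constant, C/mol

def minimum_flow_rate(current_a, n_electrons, conc_mol_per_l, factor=1.5):
    """Reactant flow (L/s) at `factor` times the stoichiometric rate.

    Stoichiometric rate from Faraday's law: Q = I / (n * F * c), the flow
    that supplies reactant at exactly the rate the current consumes it."""
    q_stoich = current_a / (n_electrons * F * conc_mol_per_l)
    return factor * q_stoich

# e.g. a 20 A cell with a one-electron redox couple at 1 mol/L.
q = minimum_flow_rate(20.0, 1, 1.0)   # L/s
```

    Flows much above this minimum buy little additional cell performance while increasing the parasitic pumping loss, which is the trade-off the flow-mapping experiments quantify.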

  20. Dissociation of end systole from end ejection in patients with long-term mitral regurgitation.

    PubMed

    Brickner, M E; Starling, M R

    1990-04-01

    To determine whether left ventricular (LV) end systole and end ejection uncouple in patients with long-term mitral regurgitation, 59 patients (22 control patients with atypical chest pain, 21 patients with aortic regurgitation, and 16 patients with mitral regurgitation) were studied with micromanometer LV catheters and radionuclide angiograms. End systole was defined as the time of occurrence (Tmax) of the maximum time-varying elastance (Emax), and end ejection was defined as the time of occurrence of minimum ventricular volume (minV) and zero systolic flow as approximated by the aortic dicrotic notch (Aodi). The temporal relation between end systole and end ejection in the control patients was Tmax (331 +/- 42 [SD] msec), minV (336 +/- 36 msec), and then, zero systolic flow (355 +/- 23 msec). This temporal relation was maintained in the patients with aortic regurgitation. In contrast, in the patients with mitral regurgitation, the temporal relation was Tmax (266 +/- 49 msec), zero systolic flow (310 +/- 37 msec, p less than 0.01 vs. Tmax), and then, minV (355 +/- 37 msec, p less than 0.001 vs. Tmax and p less than 0.01 vs. Aodi). Additionally, the average Tmax occurred earlier in the patients with mitral regurgitation than in the control patients and patients with aortic regurgitation (p less than 0.01, for both), whereas the average time to minimum ventricular volume was similar in all three patient groups. Moreover, the average time to zero systolic flow also occurred earlier in the patients with mitral regurgitation than in the control patients (p less than 0.01) and patients with aortic regurgitation (p less than 0.05). Because of the dissociation of end systole from minimum ventricular volume in the patients with mitral regurgitation, the end-ejection pressure-volume relations calculated at minimum ventricular volume did not correlate (r = -0.09), whereas those calculated at zero systolic flow did correlate (r = 0.88) with the Emax slope values. 
We conclude that end ejection, defined as minimum ventricular volume, dissociates from end systole in patients with mitral regurgitation because of the shortened time to LV end systole in association with preservation of the time to LV end ejection due to the low impedance to ejection presented by the left atrium. Therefore, pressure-volume relations calculated at minimum ventricular volume might not be useful for assessing LV chamber performance in some patients with mitral regurgitation.

  1. Statistical survey on the magnetic structure in magnetotail current sheets

    NASA Astrophysics Data System (ADS)

    Rong, Z. J.; Wan, W. X.; Shen, C.; Li, X.; Dunlop, M. W.; Petrukovich, A. A.; Zhang, T. L.; Lucek, E.

    2011-09-01

    On the basis of multipoint magnetic observations by Cluster in the region 15-19 RE downtail, the magnetic field structure at the magnetotail current sheet (CS) center is statistically surveyed. It is found that the By component (in GSM coordinates) is distributed mainly within |By| < 5 nT, while the Bz component is mostly positive and distributed mainly within 1-10 nT. The plane of the magnetic field lines (MFLs) is mostly perpendicular to the equatorial plane, with the radius of curvature (Rc) of the MFLs directed earthward and the binormal (perpendicular to the curvature and magnetic field direction) directed azimuthally westward. The curvature radius of MFLs reaches a minimum, Rc,min, at the CS center and is larger than the corresponding local half thickness of the neutral sheet, h. Statistically, it is found that the overall surface of the CS, with the normal pointing basically along the south-north direction, can be approximated as a plane parallel to the equatorial plane, although the local CS may flap and is frequently tilted with respect to the equatorial plane. A tilted CS (normal inclined to the equatorial plane) is apt to be observed near both flanks and is mainly associated with the slippage of magnetic flux tubes. It is statistically verified that the minimum curvature radius, Rc,min, the half thickness of the neutral sheet, h, and the slipping angle of MFLs, δ, in the CS satisfy h = Rc,min cos δ. The current density, with a mean strength of 4-8 nA/m2, basically flows azimuthally and tangentially to the surface of the CS, from the dawn side to the dusk side. There is, however, an obvious dawn-dusk asymmetry of the CS. For magnetic local times (MLT) of ~21:00 to ~01:00, the CS is relatively thinner; the minimum curvature radius of MFLs, Rc,min (0.6-1 RE), and the half thickness of the neutral sheet, h (0.2-0.4 RE), are relatively smaller, and Bz (3-5 nT) and the minimum magnetic field, Bmin (5-7 nT), are weaker.
    It is also found that negative Bz has a higher probability of occurrence and the cross-tail current density jY is dominant (2-4 nA/m2) in comparison with the values near both flanks. This implies that magnetic activity, e.g., magnetic reconnection and current disruption, could be triggered more frequently in the CS at ~21:00 to ~01:00 MLT. Accordingly, if mapped to the auroral ionosphere, substorm onset would be expected to be observed optically with higher probability at ~21:00 to ~01:00 MLT, which agrees well with statistical observations of auroral substorm onset.

  2. Simulation of the regional groundwater-flow system of the Menominee Indian Reservation, Wisconsin

    USGS Publications Warehouse

    Juckem, Paul F.; Dunning, Charles P.

    2015-01-01

    The likely extent of the Neopit wastewater plume was simulated by using the groundwater-flow model and Monte Carlo techniques to evaluate the sensitivity of predictive simulations to a range of model parameter values. Wastewater infiltrated from the currently operating lagoons flows predominantly south toward Tourtillotte Creek. Some of the infiltrated wastewater is simulated as having a low probability of flowing beneath Tourtillotte Creek to the nearby West Branch Wolf River. Results for the probable extent of the wastewater plume are considered to be qualitative because the method only considers advective flow and does not account for processes affecting contaminant transport in porous media. Therefore, results for the probable extent of the wastewater plume are sensitive to the number of particles used to represent flow from the lagoon and the resolution of a synthetic grid used for the analysis. Nonetheless, it is expected that the qualitative results may be of use for identifying potential downgradient areas of concern that can then be evaluated using the quantitative “area contributing recharge to wells” method or traditional contaminant-transport simulations.
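
The particle-counting idea behind this kind of qualitative probability estimate can be sketched in a few lines: run many advective particles under randomly sampled parameter values and report the fraction that crosses a boundary of interest. Everything below (velocity ranges, travel times, the 500 m distance) is a hypothetical illustration, not taken from the USGS model.

```python
import random

def plume_probability(n_particles=1000, seed=42):
    """Fraction of advective particles that travel past a boundary
    (e.g. a creek) under randomly sampled parameters; a crude stand-in
    for Monte Carlo particle tracking. All ranges are hypothetical."""
    rng = random.Random(seed)
    creek_distance = 500.0                      # m, hypothetical
    crossings = 0
    for _ in range(n_particles):
        velocity = rng.uniform(0.05, 0.5)       # m/day, assumed range
        travel_days = rng.uniform(3000.0, 20000.0)
        if velocity * travel_days > creek_distance:
            crossings += 1
    return crossings / n_particles

p = plume_probability()
print(round(p, 3))
```

As the report cautions, such a fraction is sensitive to the number of particles used, which is why the probabilities are treated as qualitative.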

  3. Projection of postgraduate students flow with a smoothing matrix transition diagram of Markov chain

    NASA Astrophysics Data System (ADS)

    Rahim, Rahela; Ibrahim, Haslinda; Adnan, Farah Adibah

    2013-04-01

    This paper presents a case study of modeling postgraduate student flow at the College of Art and Sciences, Universiti Utara Malaysia. First, full-time postgraduate students and the semesters they were in were identified. Administrative data were then used to estimate the transitions between these semesters for the 2001-2005 period. A Markov chain model was developed to calculate 5- and 10-year projections of postgraduate student flow at the college. The optimization question addressed in this study is: 'Which transitions would sustain the desired structure in a dynamic situation, such as the trend toward graduation?' Smoothed transition probabilities are proposed to estimate the 16 × 16 transition probability matrix. The results show that, using smoothed transition probabilities, the projected numbers of postgraduate students enrolled in the respective semesters are closer to the actual numbers than those obtained using conventional steady-state transition probabilities.
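
The projection step of such a Markov chain model can be sketched as repeated multiplication of a head-count vector by a row-stochastic transition matrix. The 3 × 3 matrix below is an invented miniature of the paper's 16 × 16 matrix, purely for illustration.

```python
import numpy as np

# Row-stochastic transition matrix over three states:
# 0 = early semesters, 1 = late semesters, 2 = graduated/left (absorbing).
P = np.array([[0.20, 0.70, 0.10],
              [0.00, 0.60, 0.40],
              [0.00, 0.00, 1.00]])
assert np.allclose(P.sum(axis=1), 1.0)   # each row must sum to 1

x = np.array([100.0, 50.0, 0.0])         # current head count per state
for _ in range(5):                       # project 5 semesters ahead
    x = x @ P                            # x_{t+1} = x_t P
print(x.round(1))
```

The smoothing the paper proposes would replace the raw entries of P with smoothed estimates before projecting; the projection mechanics are unchanged.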

  4. Flow Regime Based Climatologies of Lightning Probabilities for Spaceports and Airports

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Sharp, David; Spratt, Scott; Lafosse, Richard A.

    2008-01-01

    The objective of this work was to provide forecasters with a tool to indicate the warm season climatological probability of one or more lightning strikes within a circle at a site within a specified time interval. This paper described the AMU work conducted in developing flow regime based climatologies of lightning probabilities for the SLF and seven airports in the NWS MLB CWA in east-central Florida. The paper also described the GUI developed by the AMU that is used to display the data for the operational forecasters. There were challenges working with gridded lightning data as well as the code that accompanied the gridded data. The AMU modified the provided code to be able to produce the climatologies of lightning probabilities based on eight flow regimes for 5-, 10-, 20-, and 30-n mi circles centered on eight sites in 1-, 3-, and 6-hour increments.

  5. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.

    PubMed

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A

    2016-08-25

    There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
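
A minimal sketch of the reduction described above, assuming two binary "sites" with invented unary and pairwise energies: the minimum s-t cut of the constructed network (computed here with a small Edmonds-Karp max-flow) equals the minimum total energy, by the max-flow/min-cut theorem.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a capacity dict {u: {v: capacity}}."""
    graph = {u: dict(vs) for u, vs in cap.items()}
    for u in list(cap):
        for v in cap[u]:
            graph.setdefault(v, {}).setdefault(u, 0)  # residual edges
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:      # BFS for an augmenting path
            u = queue.popleft()
            for v, c in graph[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(graph[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            graph[u][v] -= push
            graph[v][u] += push
        flow += push

# Two binary sites A and B with invented energies. By convention, cutting
# s->i assigns one state (cost E_i(0)), cutting i->t the other (E_i(1)),
# and the A-B edge is a submodular coupling paid when the states differ.
cap = {'s': {'A': 4, 'B': 1},   # E_A(0)=4, E_B(0)=1
       'A': {'t': 2, 'B': 3},   # E_A(1)=2, coupling w=3
       'B': {'t': 5},           # E_B(1)=5
       't': {}}
print(max_flow(cap, 's', 't'))  # min-cut value = minimum total energy
```

The cut achieving this value encodes the minimizing state assignment, which is the property the titration-state algorithm exploits.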

  6. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purvine, Emilie AH; Monson, Kyle E.; Jurrus, Elizabeth R.

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of maximum flow-minimum cut graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.

  7. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    PubMed Central

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A.

    2016-01-01

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of “maximum flow-minimum cut” graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered. PMID:27089174

  8. Stochastic mechanics of loose boundary particle transport in turbulent flow

    NASA Astrophysics Data System (ADS)

    Dey, Subhasish; Ali, Sk Zeeshan

    2017-05-01

    In a turbulent wall shear flow, we explore, for the first time, the stochastic mechanics of loose boundary particle transport, with variable particle protrusion arising from different cohesionless particle packing densities. The mean transport probabilities in contact and detachment modes are obtained. The mean transport probabilities in these modes are presented as functions of the Shields number (nondimensional fluid-induced shear stress at the boundary) for different relative particle sizes (ratio of boundary roughness height to target particle diameter) and shear Reynolds numbers (ratio of fluid inertia to viscous damping). The transport probability in contact mode increases with Shields number, attains a peak, and then decreases, while that in detachment mode increases monotonically. For the hydraulically transitional and rough flow regimes, the transport probability curves in contact mode for a given relative particle size greater than or equal to unity attain their peaks at the averaged critical Shields numbers, from where the transport probability curves in detachment mode initiate. At the inception of particle transport, the mean probabilities in both modes increase weakly with shear Reynolds number. Further, for a given particle size, the mean probability in contact mode increases with a decrease in critical Shields number, attains a critical value, and then increases. The mean probability in detachment mode, however, increases with a decrease in critical Shields number.

  9. Estimating detection probability for Canada lynx Lynx canadensis using snow-track surveys in the northern Rocky Mountains, Montana, USA

    Treesearch

    John R. Squires; Lucretia E. Olson; David L. Turner; Nicholas J. DeCesare; Jay A. Kolbe

    2012-01-01

    We used snow-tracking surveys to determine the probability of detecting Canada lynx Lynx canadensis in known areas of lynx presence in the northern Rocky Mountains, Montana, USA during the winters of 2006 and 2007. We used this information to determine the minimum number of survey replicates necessary to infer the presence and absence of lynx in areas of similar lynx...

  10. Volcanic signature of Basin and Range extension on the shrinking Cascade arc, Klamath Falls-Keno area, Oregon

    NASA Astrophysics Data System (ADS)

    Priest, George R.; Hladky, Frank R.; Mertzman, Stanley A.; Murray, Robert B.; Wiley, Thomas J.

    2013-08-01

    Geologic mapping of the Klamath Falls-Keno area revealed the complex relationship between subduction, crustal extension, and magmatic composition of the southern Oregon Cascade volcanic arc. Volcanism in the study area at 7-4 Ma consisted of calc-alkaline basaltic andesite and andesite lava flowing over a relatively flat landscape. Local angular unconformities are evidence that Basin and Range extension began by at least 4 Ma and continues today, with fault blocks tilting at a long-term rate of 2°/Ma to 3°/Ma. Minimum NW-SE extension is 1.5 km over 28 km (about 5%). High-alumina olivine tholeiite (HAOT) or low-K, low-Ti transitional high-alumina olivine tholeiite (LKLT) erupted within and adjacent to the back edge of the calc-alkaline arc as the edge receded westward at a rate of 10 km/Ma at 2.7-0.45 Ma. The volcanic front migrated east much more slowly than the back arc migrated west: 0 km/Ma for 6-0.4 Ma calc-alkaline rocks; 0.7 km/Ma if 6 Ma HAOT-LKLT is included; and 1 km/Ma if highly differentiated 17-30 Ma volcanic rocks of the early Western Cascades are included. Declining convergence probably decreased asthenospheric corner flow, decreasing the width of calc-alkaline and HAOT-LKLT volcanism and the associated heat flow anomaly, the margins of which focused Basin and Range extension and leakage of HAOT-LKLT magma to the surface. This declining corner flow, combined with steepening slab dip, shifted the back arc west. Compensation of extension by volcanic intrusion and extrusion allowed growth of imposing range-front fault scarps only behind the trailing edge of the shrinking arc.

  11. A method of predicting flow rates required to achieve anti-icing performance with a porous leading edge ice protection system

    NASA Technical Reports Server (NTRS)

    Kohlman, D. L.; Albright, A. E.

    1983-01-01

    An analytical method was developed for predicting minimum flow rates required to provide anti-ice protection with a porous leading edge fluid ice protection system. The predicted flow rates compare with an average error of less than 10 percent to six experimentally determined flow rates from tests in the NASA Icing Research Tunnel on a general aviation wing section.

  12. Minimum resolvable power contrast model

    NASA Astrophysics Data System (ADS)

    Qian, Shuai; Wang, Xia; Zhou, Jingjing

    2018-01-01

    Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, whether used alone or in joint assessment, they cannot intuitively describe the overall performance of the system. Therefore, an index is proposed to reflect comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and considers attenuation by the atmosphere, the optical imaging system, and the detector. Combining the signal-to-noise ratio and the MTF yields the Minimum Resolvable Radiation Performance Contrast. Finally, the detection probability model of MRP is given.

  13. The role of magnetic fields in cluster cooling flows

    NASA Technical Reports Server (NTRS)

    Soker, Noam; Sarazin, Craig L.

    1990-01-01

    An investigation is made of the dynamical effects of the intracluster magnetic field in cooling flows; radial inflow and shear can produce a dramatic increase in the field's strength while rendering it more radial. It is found that field reconnection is the most likely dominant loss mechanism, so that buoyancy effects are probably not important. Attention is given to the effect of the magnetic field on thermal instabilities. The most important observable effect of the magnetic field in cooling flows will probably be very strong Faraday rotation of the polarization of radio sources within or behind the cooling flow.

  14. Entropy considerations applied to shock unsteadiness in hypersonic inlets

    NASA Astrophysics Data System (ADS)

    Bussey, Gillian Mary Harding

    The stability of curved or rectangular shocks in hypersonic inlets in response to flow perturbations can be determined analytically from the principle of minimum entropy. Unsteady shock wave motion can have a significant effect on the flow in a hypersonic inlet or combustor. According to the principle of minimum entropy, a stable thermodynamic state is one with the lowest entropy gain. A model based on piston theory and its limits has been developed for applying the principle of minimum entropy to quasi-steady flow. Relations are derived for analyzing the time-averaged entropy gain flux across a shock for quasi-steady perturbations in atmospheric conditions and angle, treated as a perturbation in entropy gain flux from the steady state. Initial results from sweeping a wedge at Mach 10 through several degrees in AEDC's Tunnel 9 indicate that the bow shock becomes unsteady near the predicted normal Mach number. Several curved shocks of varying curvature are compared to a straight shock with the same mean normal Mach number, pressure ratio, or temperature ratio. The present work provides analysis and guidelines for designing an inlet robust to off-design flight or perturbations in the flow conditions an inlet is likely to face. It also suggests that inlets with curved shocks are less robust to off-design flight than those with straight shocks, such as rectangular inlets. Relations for evaluating entropy perturbations for highly unsteady flow across a shock, and limits on their use, were also developed. The normal Mach number at which a shock can be stable to high-frequency upstream perturbations increases as the speed of the shock motion increases and slightly decreases as the perturbation size increases. The present work advances the principle of minimum entropy by providing additional validity for using the theory for time-varying flows and applying it to shocks, specifically those in inlets. 
While this analytic tool is applied in the present work for evaluating the stability of shocks in hypersonic inlets, it can be used for an arbitrary application with a shock.

  15. Probability and volume of potential postwildfire debris flows in the 2012 Waldo Canyon Burn Area near Colorado Springs, Colorado

    USGS Publications Warehouse

    Verdin, Kristine L.; Dupree, Jean A.; Elliott, John G.

    2012-01-01

    This report presents a preliminary emergency assessment of the debris-flow hazards from drainage basins burned by the 2012 Waldo Canyon fire near Colorado Springs in El Paso County, Colorado. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of debris-flow occurrence and potential volume of debris flows along the drainage network of the burned area and to estimate the same for 22 selected drainage basins along U.S. Highway 24 and the perimeter of the burned area. Input data for the models included topographic parameters, soil characteristics, burn severity, and rainfall totals and intensities for a (1) 2-year-recurrence, 1-hour-duration rainfall, referred to as a 2-year storm (29 millimeters); (2) 10-year-recurrence, 1-hour-duration rainfall, referred to as a 10-year storm (42 millimeters); and (3) 25-year-recurrence, 1-hour-duration rainfall, referred to as a 25-year storm (48 millimeters). Estimated debris-flow probabilities at the pour points of the drainage basins of interest ranged from less than 1 to 54 percent in response to the 2-year storm; from less than 1 to 74 percent in response to the 10-year storm; and from less than 1 to 82 percent in response to the 25-year storm. Basins and drainage networks with the highest probabilities tended to be those on the southern and southeastern edge of the burn area where soils have relatively high clay contents and gradients are steep. Nine of the 22 drainage basins of interest have greater than a 40-percent probability of producing a debris flow in response to the 10-year storm. Estimated debris-flow volumes for all rainfalls modeled range from a low of 1,500 cubic meters to a high of greater than 100,000 cubic meters. 
Estimated debris-flow volumes increase with basin size and distance along the drainage network, but some smaller drainages were also predicted to produce substantial volumes of material. The predicted probabilities and some of the volumes predicted for the modeled storms indicate a potential for substantial debris-flow impacts on structures, reservoirs, roads, bridges, and culverts located both within and immediately downstream from the burned area. U.S. Highway 24, on the southern edge of the burn area, is also susceptible to impacts from debris flows.

  16. Postwildfire debris flows hazard assessment for the area burned by the 2011 Track Fire, northeastern New Mexico and southeastern Colorado

    USGS Publications Warehouse

    Tillery, Anne C.; Darr, Michael J.; Cannon, Susan H.; Michael, John A.

    2011-01-01

    In June 2011, the Track Fire burned 113 square kilometers in Colfax County, northeastern New Mexico, and Las Animas County, southeastern Colorado, including the upper watersheds of Chicorica and Raton Creeks. The burned landscape is now at risk of damage from postwildfire erosion, such as that caused by debris flows and flash floods. This report presents a preliminary hazard assessment of the debris-flow potential from basins burned by the Track Fire. A pair of empirical hazard-assessment models developed using data from recently burned basins throughout the intermountain western United States were used to estimate the probability of debris-flow occurrence and volume of debris flows at the outlets of selected drainage basins within the burned area. The models incorporate measures of burn severity, topography, soils, and storm rainfall to estimate the probability and volume of post-fire debris flows. In response to a design storm of 38 millimeters of rain in 30 minutes (10-year recurrence interval), the probability of debris flow estimated for basins burned by the Track Fire ranged between 2 and 97 percent, with probabilities greater than 80 percent identified for the majority of the tributary basins to Raton Creek in Railroad Canyon; six basins that flow into Lake Maloya, including the Segerstrom Creek and Swachheim Creek basins; two tributary basins to Sugarite Canyon; and an unnamed basin on the eastern flank of the burned area. Estimated debris-flow volumes ranged from 30 cubic meters to greater than 100,000 cubic meters. The largest volumes (greater than 100,000 cubic meters) were estimated for the Segerstrom Creek and Swachheim Creek basins, which drain into Lake Maloya. The Combined Relative Debris-Flow Hazard Ranking identifies the Segerstrom Creek and Swachheim Creek basins as having the highest probability of producing the largest debris flows. 
This finding indicates the greatest post-fire debris-flow impacts may be expected to Lake Maloya. In addition, Interstate Highway 25, Raton Creek and the rail line in Railroad Canyon, County road A-27, and State Highway 526 in Sugarite Canyon may also be affected where they cross drainages downstream from recently burned basins. Although this assessment indicates that a rather large debris flow (approximately 42,000 cubic meters) may be generated from the basin above the City of Raton (basin 9) in response to the design storm, the probability of such an event is relatively low (approximately 10 percent). Additional assessment is necessary to determine if the estimated volume of material is sufficient to travel into the City of Raton. In addition, even small debris flows may affect structures at or downstream from basin outlets and increase the threat of flooding downstream by damaging or blocking flood mitigation structures. The maps presented here may be used to prioritize areas where erosion mitigation or other protective measures may be necessary within a 2- to 3-year window of vulnerability following the Track Fire.

  17. Simulating future uncertainty to guide the selection of survey designs for long-term monitoring

    USGS Publications Warehouse

    Garman, Steven L.; Schweiger, E. William; Manier, Daniel J.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    A goal of environmental monitoring is to provide sound information on the status and trends of natural resources (Messer et al. 1991, Theobald et al. 2007, Fancy et al. 2009). When monitoring observations are acquired by measuring a subset of the population of interest, probability sampling as part of a well-constructed survey design provides the most reliable and legally defensible approach to achieve this goal (Cochran 1977, Olsen et al. 1999, Schreuder et al. 2004; see Chapters 2, 5, 6, 7). Previous works have described the fundamentals of sample surveys (e.g. Hansen et al. 1953, Kish 1965). Interest in survey designs and monitoring over the past 15 years has led to extensive evaluations and new developments of sample selection methods (Stevens and Olsen 2004), of strategies for allocating sample units in space and time (Urquhart et al. 1993, Overton and Stehman 1996, Urquhart and Kincaid 1999), and of estimation (Lesser and Overton 1994, Overton and Stehman 1995) and variance properties (Larsen et al. 1995, Stevens and Olsen 2003) of survey designs. Carefully planned, “scientific” (Chapter 5) survey designs have become a standard in contemporary monitoring of natural resources. Based on our experience with the long-term monitoring program of the US National Park Service (NPS; Fancy et al. 2009; Chapters 16, 22), operational survey designs tend to be selected using the following procedures. For a monitoring indicator (i.e. variable or response), a minimum detectable trend requirement is specified, based on the minimum level of change that would result in meaningful change (e.g. degradation). A probability of detecting this trend (statistical power) and an acceptable level of uncertainty (Type I error; see Chapter 2) within a specified time frame (e.g. 10 years) are specified to ensure timely detection. 
Explicit statements of the minimum detectable trend, the time frame for detecting the minimum trend, power, and acceptable probability of Type I error (α) collectively form the quantitative sampling objective.
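
The power calculation implied by such a sampling objective can be approximated by simulation: generate many synthetic monitoring records with a known trend and count how often a simple slope test detects it. The noise level, test statistic, and critical value below are illustrative assumptions, not the NPS procedure.

```python
import random

def trend_power(slope, years=10, sigma=1.0, nsim=500, seed=1):
    """Monte Carlo estimate of the power to detect a linear trend with a
    simple OLS slope t-test (two-sided alpha = 0.05; t_crit ~ 2.306 for
    the default years=10, df=8). Illustrative only: real survey designs
    add revisit panels and multiple variance components."""
    rng = random.Random(seed)
    t = list(range(years))
    tbar = sum(t) / years
    sxx = sum((x - tbar) ** 2 for x in t)
    detections = 0
    for _ in range(nsim):
        y = [slope * x + rng.gauss(0.0, sigma) for x in t]
        ybar = sum(y) / years
        b = sum((x - tbar) * (yi - ybar) for x, yi in zip(t, y)) / sxx
        resid = [yi - ybar - b * (x - tbar) for x, yi in zip(t, y)]
        se = (sum(r * r for r in resid) / (years - 2) / sxx) ** 0.5
        if se > 0 and abs(b / se) > 2.306:
            detections += 1
    return detections / nsim

weak, strong = trend_power(0.0, sigma=0.5), trend_power(0.5, sigma=0.5)
print(weak, strong)
```

With no true trend the detection rate approximates the Type I error rate, and it rises toward 1 as the true slope grows, which is the trade-off a quantitative sampling objective pins down.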

  18. Using Logistic Regression To Predict the Probability of Debris Flows Occurring in Areas Recently Burned By Wildland Fires

    USGS Publications Warehouse

    Rupert, Michael G.; Cannon, Susan H.; Gartner, Joseph E.

    2003-01-01

    Logistic regression was used to predict the probability of debris flows occurring in areas recently burned by wildland fires. Multiple logistic regression is conceptually similar to multiple linear regression because statistical relations between one dependent variable and several independent variables are evaluated. In logistic regression, however, the dependent variable is transformed to a binary variable (debris flow did or did not occur), and the actual probability of the debris flow occurring is statistically modeled. Data from 399 basins located within 15 wildland fires that burned during 2000-2002 in Colorado, Idaho, Montana, and New Mexico were evaluated. More than 35 independent variables describing the burn severity, geology, land surface gradient, rainfall, and soil properties were evaluated. The models were developed as follows: (1) Basins that did and did not produce debris flows were delineated from National Elevation Data using a Geographic Information System (GIS). (2) Data describing the burn severity, geology, land surface gradient, rainfall, and soil properties were determined for each basin. These data were then downloaded to a statistics software package for analysis using logistic regression. (3) Relations between the occurrence/non-occurrence of debris flows and burn severity, geology, land surface gradient, rainfall, and soil properties were evaluated and several preliminary multivariate logistic regression models were constructed. All possible combinations of independent variables were evaluated to determine which combination produced the most effective model. The multivariate model that best predicted the occurrence of debris flows was selected. (4) The multivariate logistic regression model was entered into a GIS, and a map showing the probability of debris flows was constructed. 
The most effective model incorporates the percentage of each basin with slope greater than 30 percent, percentage of land burned at medium and high burn severity in each basin, particle size sorting, average storm intensity (millimeters per hour), soil organic matter content, soil permeability, and soil drainage. The results of this study demonstrate that logistic regression is a valuable tool for predicting the probability of debris flows occurring in recently-burned landscapes.
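
Once fitted, a multivariate logistic regression model of this kind maps basin attributes to a probability via p = 1/(1 + exp(-z)), with z = b0 + Σ bi·xi. The predictors below mirror some of those named in the abstract, but the coefficient values and basin numbers are invented for illustration and are not the fitted USGS model.

```python
import math

def debris_flow_probability(x, coef, intercept):
    """Logistic response: p = 1 / (1 + exp(-(intercept + sum(b_i * x_i))))."""
    z = intercept + sum(b * v for b, v in zip(coef, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical basin attributes echoing predictors named in the abstract:
basin = [62.0,   # percent of basin with slope greater than 30 percent
         45.0,   # percent burned at medium/high severity
         18.0]   # average storm intensity, mm/h
coef = [0.03, 0.04, 0.05]            # invented coefficients
p = debris_flow_probability(basin, coef, intercept=-4.0)
print(f"debris-flow probability: {p:.2f}")
```

Because the response is monotone in each predictor, steeper, more severely burned basins receive higher probabilities, matching the qualitative pattern the study reports.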

  19. Probability and volume of potential postwildfire debris flows in the 2012 High Park Burn Area near Fort Collins, Colorado

    USGS Publications Warehouse

    Verdin, Kristine L.; Dupree, Jean A.; Elliott, John G.

    2012-01-01

    This report presents a preliminary emergency assessment of the debris-flow hazards from drainage basins burned by the 2012 High Park fire near Fort Collins in Larimer County, Colorado. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of debris-flow occurrence and volume of debris flows along the burned area drainage network and to estimate the same for 44 selected drainage basins along State Highway 14 and the perimeter of the burned area. Input data for the models included topographic parameters, soil characteristics, burn severity, and rainfall totals and intensities for a (1) 2-year-recurrence, 1-hour-duration rainfall (25 millimeters); (2) 10-year-recurrence, 1-hour-duration rainfall (43 millimeters); and (3) 25-year-recurrence, 1-hour-duration rainfall (51 millimeters). Estimated debris-flow probabilities along the drainage network and throughout the drainage basins of interest ranged from 1 to 84 percent in response to the 2-year-recurrence, 1-hour-duration rainfall; from 2 to 95 percent in response to the 10-year-recurrence, 1-hour-duration rainfall; and from 3 to 97 percent in response to the 25-year-recurrence, 1-hour-duration rainfall. Basins and drainage networks with the highest probabilities tended to be those on the eastern edge of the burn area where soils have relatively high clay contents and gradients are steep. Estimated debris-flow volumes range from a low of 1,600 cubic meters to a high of greater than 100,000 cubic meters. Estimated debris-flow volumes increase with basin size and distance along the drainage network, but some smaller drainages were also predicted to produce substantial volumes of material. 
The predicted probabilities and some of the volumes predicted for the modeled storms indicate a potential for substantial debris-flow impacts on structures, roads, bridges, and culverts located both within and immediately downstream from the burned area. Colorado State Highway 14 is also susceptible to impacts from debris flows.

  20. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the minimum number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of a target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
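
    The joint detection step admits a compact illustration. Under the common simplifying assumption that sensors detect independently (a sketch only; the paper's ϵ-detection formulation builds on its own probabilistic sensing model), the probability that at least one sensor detects the target is one minus the product of the individual miss probabilities:

```python
def joint_detection_probability(probs):
    """Probability that at least one of several independent sensors
    detects the target, given each sensor's detection probability.

    Assumes independent sensors -- an illustrative simplification,
    not the paper's exact sensing model.
    """
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)  # probability that every sensor misses
    return 1.0 - miss
```

    For example, two sensors that each detect with probability 0.5 jointly detect with probability 0.75, which is why ϵ-detection coverage can be achieved with fewer nodes than the 0/1 model suggests.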

  1. Minimum Income Allocation System (RMI): a longitudinal view.

    PubMed

    Cordazzo, Philippe

    2005-10-01

    In 2000, for the first time, the number of minimum income allocation system (RMI) recipients decreased. In 2001, this drop in the number of recipients began to stabilize, and the number started to increase again in 2002. The author observed a stabilization of the number of new recipients, whereas the number of exits decreased. This situation differs across local administrative areas (départements). The probability of RMI entry is higher for populations living in the south and southeast of France. RMI recipients of the more recent cohorts leave more quickly and in proportionally greater numbers than do the recipients of the older cohorts. This phenomenon is alarming because the exits occur massively during the first 2 years spent in the RMI system and because the probability of leaving then decreases sharply. The author has thus observed that a significant portion of the recipients (28%) is still present after 5 years or more in the RMI system.

  2. The Minimum Impulse Thruster

    NASA Technical Reports Server (NTRS)

    Parker, J. Morgan; Wilson, Michael J.

    2005-01-01

    The Minimum Impulse Thruster (MIT) was developed to improve the state-of-the-art minimum impulse capability of hydrazine monopropellant thrusters. Specifically, a new fast-response solenoid valve was developed, capable of responding to a much shorter electrical pulse width, thereby reducing the propellant flow time and the minimum impulse bit. The new valve was combined with the Aerojet MR-103, 0.2 lbf (0.9 N) thruster and put through an extensive Delta-qualification test program, resulting in a factor-of-5 reduction in the minimum impulse bit, from roughly 1.1 milli-lbf-seconds (5 millinewton-seconds) to approximately 0.22 milli-lbf-seconds (1 mN-s). To maintain its extensive heritage, the thruster itself was left unchanged. The Minimum Impulse Thruster provides mission and spacecraft designers new design options for precision pointing and precision translation of spacecraft.

  3. Entanglement-enhanced Neyman-Pearson target detection using quantum illumination

    NASA Astrophysics Data System (ADS)

    Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.

    2017-08-01

    Quantum illumination (QI) provides entanglement-based target detection---in an entanglement-breaking environment---whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture---based on sum-frequency generation (SFG) and feedforward (FF) processing---for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic---detection probability versus false-alarm probability---for optimum QI target detection under the Neyman-Pearson criterion.
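
    The Neyman-Pearson machinery itself can be illustrated with a classical toy problem (this is not the FF-SFG quantum receiver, just the criterion's mechanics): detecting a known mean shift d in unit-variance Gaussian noise, where fixing the false-alarm probability determines the decision threshold and hence the detection probability, tracing out the receiver operating characteristic:

```python
from statistics import NormalDist

_phi = NormalDist()  # standard normal distribution

def detection_probability(p_false_alarm, d):
    """One ROC point for a Neyman-Pearson test of a mean shift d in
    unit-variance Gaussian noise (an illustrative classical stand-in
    for the quantum receiver): fix the false-alarm probability, derive
    the threshold, and return the resulting detection probability."""
    threshold = _phi.inv_cdf(1.0 - p_false_alarm)
    return 1.0 - _phi.cdf(threshold - d)
```

    Sweeping the false-alarm probability from 0 to 1 traces the full ROC curve; with no signal (d = 0) the curve degenerates to the diagonal, and stronger signals bow the curve toward the top-left corner.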

  4. Effects of ice formation on hydrology and water quality in the lower Bradley River, Alaska; implications for salmon incubation habitat

    USGS Publications Warehouse

    Rickman, Ronald L.

    1998-01-01

    A minimum flow of 40 cubic feet per second is required in the lower Bradley River, near Homer, Alaska, from November 2 to April 30 to ensure adequate habitat for salmon incubation. The study that determined this minimum flow did not account for the effects of ice formation on habitat. The limiting factor for determining the minimum acceptable flow appears to be stream-water velocity. The minimum short-term flow needed to ensure adequate salmon incubation habitat when ice is present is about 30 cubic feet per second. For long-term flows, 40 cubic feet per second is adequate when ice is present. Long-term minimum discharge needed to ensure adequate incubation habitat--which is based on mean velocity alone--is as follows: 40 cubic feet per second when ice is forming; 35 cubic feet per second for stable and eroding ice conditions; and 30 cubic feet per second for ice-free conditions. The effects of long-term streamflow less than 40 cubic feet per second on fine-sediment deposition and dissolved-oxygen interchange could not be extrapolated from the data. Hydrologic properties and water-quality data were measured in winter only from March 1993 to April 1998 at six transects in the lower Bradley River under three phases of icing: forming, stable, and eroding. Discharge in the lower Bradley River ranged from 33.3 to 73.0 cubic feet per second during all phases of ice formation and ice conditions, which ranged from ice free to 100 percent ice cover. Hydrostatic head was adequate for habitat protection for all ice phases and discharges. Mean stream velocity was adequate for all but one ice-forming episode. Velocity distribution within each transect varied significantly from one sampling period to the next. No relation was found between ice phase, discharge, and wetted perimeter. Intragravel-water temperature was slightly warmer than surface-water temperature. Surface- and intragravel-water dissolved-oxygen levels were adequate for all ice phases and discharges.
No apparent relation was found between dissolved-oxygen levels and streamflow or ice conditions. Fine-sediment deposition was greatest at the downstream end of the study reach because of low shear velocities and tide-induced deposition. Dissolved-oxygen interchange was adequate for all discharges and ice conditions. Stranding potential of salmon fry was found to be low throughout the study reach. Minimum flows from the fish-water bypass needed to maintain 40 cubic feet per second in the lower Bradley River are estimated.

  5. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barrier, average width, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as x⃗) knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x⃗. To solve this problem, two major paths could be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (existing in traditional adjustment procedures based on chi-square minimization) and propose alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we are calling Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) in the whole energy range from thermal, resonance and continuum range for all nuclear reaction models at these energies. Algorithms will be presented based on Monte-Carlo sampling and Markov chains.
    The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation as well as multigroup cross section data assimilation will be presented.
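
    The sampling idea behind BMC can be sketched with a toy importance-sampling example (all distributions here are illustrative Gaussians, not the paper's nuclear-reaction models): draw parameters from the prior, weight each draw by the likelihood of the observed data, and read posterior summaries off the weighted samples, with no chi-square minimization or linearization involved:

```python
import math
import random

def bmc_posterior_mean(observation, n_samples=50000, seed=1):
    """Toy Bayesian Monte Carlo: sample a parameter theta from a
    N(0, 1) prior, weight each sample by a N(theta, 1) likelihood of
    the observation, and return the weighted (posterior) mean.
    Illustrative only -- real BMC samples physics-model parameters."""
    rng = random.Random(seed)
    total_w = 0.0
    total_wx = 0.0
    for _ in range(n_samples):
        theta = rng.gauss(0.0, 1.0)                      # prior sample
        w = math.exp(-0.5 * (observation - theta) ** 2)  # likelihood weight
        total_w += w
        total_wx += w * theta
    return total_wx / total_w
```

    For this conjugate Gaussian-Gaussian toy case the analytic posterior mean is half the observation, so the Monte-Carlo estimate can be checked directly against the closed-form answer, which is exactly the "reference calculation" role the abstract assigns to BMC.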

  6. Laser beam micro-milling of nickel alloy: dimensional variations and RSM optimization of laser parameters

    NASA Astrophysics Data System (ADS)

    Ahmed, Naveed; Alahmari, Abdulrahman M.; Darwish, Saied; Naveed, Madiha

    2016-12-01

    Micro-channels are considered as the integral part of several engineering devices such as micro-channel heat exchangers, micro-coolers, micro-pulsating heat pipes and micro-channels used in gas turbine blades for aerospace applications. In such applications, a fluid flow is required to pass through certain micro-passages such as micro-grooves and micro-channels. The fluid flow characteristics (flow rate, turbulence, pressure drop and fluid dynamics) are mainly established based on the size and accuracy of micro-passages. Variations (oversizing and undersizing) in micro-passage geometry directly affect the fluid flow characteristics. In this study, micro-channels of several sizes are fabricated in the well-known aerospace nickel alloy Inconel 718 through laser beam micro-milling. The variations in geometrical characteristics of different-sized micro-channels are studied under the influences of different parameters of an Nd:YAG laser. In order to have a minimum variation in the machined geometries of each size of micro-channel, a multi-objective optimization of laser parameters has been carried out utilizing the response surface methodology approach. The objective was set to achieve the targeted top widths and depths of micro-channels with minimum taper on the micro-channel sidewalls. The optimized sets of laser parameters proposed for each size of micro-channel can be used to fabricate the micro-channels in Inconel 718 with a minimum amount of geometrical variation.

  7. A network flow model for load balancing in circuit-switched multicomputers

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1990-01-01

    In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum-cost flows and correctly routed messages. To solve a given load balancing problem, a minimum-cost flow algorithm is applied to the network. This permits efficient determination of a maximum contention-free matching of sources to sinks, which in turn indicates how much of the given imbalance can be eliminated without contention.
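
    The matching objective can be illustrated with a toy brute-force version (the paper solves it properly with a minimum-cost flow algorithm on network models of the mesh and hypercube; the cost matrix below is purely illustrative, standing in for link-contention costs):

```python
from itertools import permutations

def min_cost_matching(cost):
    """Brute-force minimum-cost matching of overloaded nodes (sources)
    to underloaded nodes (sinks). cost[i][j] is an illustrative
    contention/routing cost of shipping excess load from source i to
    sink j. Real systems use a minimum-cost-flow algorithm; exhaustive
    search is only feasible for toy sizes."""
    n = len(cost)
    best_cost = None
    best_assign = None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if best_cost is None or c < best_cost:
            best_cost, best_assign = c, perm
    return best_cost, best_assign
```

    The min-cost flow formulation scales to realistic machine sizes because it avoids enumerating the factorially many matchings while guaranteeing the same optimum.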

  8. FLO1K, global maps of mean, maximum and minimum annual streamflow at 1 km resolution from 1960 through 2015

    NASA Astrophysics Data System (ADS)

    Barbarossa, Valerio; Huijbregts, Mark A. J.; Beusen, Arthur H. W.; Beck, Hylke E.; King, Henry; Schipper, Aafke M.

    2018-03-01

    Streamflow data are highly relevant for a variety of socio-economic as well as ecological analyses or applications, but a high-resolution global streamflow dataset is still lacking. We created FLO1K, a consistent streamflow dataset at a resolution of 30 arc seconds (~1 km) and global coverage. FLO1K comprises mean, maximum and minimum annual flow for each year in the period 1960-2015, provided as spatially continuous gridded layers. We mapped streamflow by means of artificial neural networks (ANNs) regression. An ensemble of ANNs was fitted on monthly streamflow observations from 6600 monitoring stations worldwide, i.e., minimum and maximum annual flows represent the lowest and highest mean monthly flows for a given year. As covariates we used the upstream-catchment physiography (area, surface slope, elevation) and year-specific climatic variables (precipitation, temperature, potential evapotranspiration, aridity index and seasonality indices). Confronting the maps with independent data indicated good agreement (R2 values up to 91%). FLO1K delivers essential data for freshwater ecology and water resources analyses at a global scale and yet high spatial resolution.
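
    The annual statistics defined above follow directly from the monthly series; a minimal sketch of that definition:

```python
def annual_flow_stats(monthly_means):
    """FLO1K-style annual statistics from 12 mean monthly flows:
    the annual mean flow, and the minimum/maximum annual flow defined
    (as in the abstract) as the lowest/highest mean monthly flow of
    that year."""
    assert len(monthly_means) == 12, "expects one mean flow per month"
    return (sum(monthly_means) / 12.0,
            min(monthly_means),
            max(monthly_means))
```
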

  9. Potential postwildfire debris-flow hazards: a prewildfire evaluation for the Sandia and Manzano Mountains and surrounding areas, central New Mexico

    USGS Publications Warehouse

    Tillery, Anne C.; Haas, Jessica R.; Miller, Lara W.; Scott, Joe H.; Thompson, Matthew P.

    2014-01-01

    Wildfire can drastically increase the probability of debris flows, a potentially hazardous and destructive form of mass wasting, in landscapes that have otherwise been stable throughout recent history. Although there is no way to know the exact location, extent, and severity of wildfire, or the subsequent rainfall intensity and duration before it happens, probabilities of fire and debris-flow occurrence for different locations can be estimated with geospatial analysis and modeling efforts. The purpose of this report is to provide information on which watersheds might constitute the most serious potential debris-flow hazards in the event of a large-scale wildfire and subsequent rainfall in the Sandia and Manzano Mountains. Potential probabilities and estimated volumes of postwildfire debris flows in the unburned Sandia and Manzano Mountains and surrounding areas were estimated using empirical debris-flow models developed by the U.S. Geological Survey in combination with fire behavior and burn probability models developed by the U.S. Department of Agriculture Forest Service. The locations of the greatest debris-flow hazards correlate with the areas of steepest slopes and simulated crown-fire behavior. The four subbasins with the highest computed debris-flow probabilities (greater than 98 percent) were all in the Manzano Mountains, two flowing east and two flowing west. Volumes in sixteen subbasins were greater than 50,000 cubic meters and most of these were in the central Manzanos and the west-facing slopes of the Sandias. Five subbasins on the west-facing slopes of the Sandia Mountains, four of which have downstream reaches that lead into the outskirts of the City of Albuquerque, are among subbasins in the 98th percentile of integrated relative debris-flow hazard rankings. The bulk of the remaining subbasins in the 98th percentile of integrated relative debris-flow hazard rankings are located along the highest and steepest slopes of the Manzano Mountains.
One of the subbasins is several miles upstream from the community of Tajique and another is several miles upstream from the community of Manzano, both on the eastern slopes of the Manzano Mountains. This prewildfire assessment approach is valuable to resource managers because the analysis of the debris-flow threat is made before a wildfire occurs, which facilitates prewildfire management, planning, and mitigation. In northern New Mexico, widespread watershed restoration efforts are being carried out to safeguard vital watersheds against the threat of catastrophic wildfire. This study was initiated to help select ideal locations for the restoration efforts that could have the best return on investment.

  10. 40 CFR 63.7741 - What are the installation, operation, and maintenance requirements for my monitors?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... paragraphs (a)(1)(i) through (iv) of this section. (i) Locate the flow sensor and other necessary equipment... sensor with a minimum measurement sensitivity of 2 percent of the flow rate. (iii) Conduct a flow sensor... paragraphs (a)(2)(i) through (vi) of this section. (i) Locate the pressure sensor(s) in or as close as...

  11. Low-flow study for southwest Ohio streams

    USGS Publications Warehouse

    Webber, Earl E.; Mayo, Ronald I.

    1971-01-01

    Low-flow discharges at 60 sites on streams in the Little Miami River, Mill Creek, Great Miami River and Wabash River basins are presented in this report. The average annual minimum flows in cubic feet per second (cfs) for a 7-day period at a 10-year frequency and a 1-day period at a 30-year frequency are computed for each of the 60 sites.
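
    A minimal sketch of how such low-flow statistics can be computed (the exact frequency-analysis method used in the report is not specified here; the Weibull plotting position is one standard rank-based estimate, shown as an illustrative assumption):

```python
def seven_day_minimum(daily_flows):
    """Lowest 7-day moving-average flow in a record of daily flows
    (cfs). The annual series of these values feeds the frequency
    analysis behind statistics like the 7Q10."""
    return min(sum(daily_flows[i:i + 7]) / 7.0
               for i in range(len(daily_flows) - 6))

def weibull_recurrence(annual_minima, return_period=10.0):
    """Rank-based low-flow estimate: return the annual minimum whose
    Weibull plotting-position return period T = (n + 1) / rank (rank 1
    = driest year) is closest to the requested return period. One of
    several standard estimators, used here for illustration."""
    ranked = sorted(annual_minima)  # ascending: driest year first
    n = len(ranked)
    best_rank = min(range(1, n + 1),
                    key=lambda m: abs((n + 1) / m - return_period))
    return ranked[best_rank - 1]
```

    In practice agencies fit a probability distribution (often log-Pearson Type III) to the annual minima rather than reading a single ranked value, but the rank-based form shows what a "7-day, 10-year" statistic measures.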

  12. 40 CFR 63.7741 - What are the installation, operation, and maintenance requirements for my monitors?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... paragraphs (a)(1)(i) through (iv) of this section. (i) Locate the flow sensor and other necessary equipment... sensor with a minimum measurement sensitivity of 2 percent of the flow rate. (iii) Conduct a flow sensor... paragraphs (a)(2)(i) through (vi) of this section. (i) Locate the pressure sensor(s) in or as close as...

  13. 40 CFR 63.7741 - What are the installation, operation, and maintenance requirements for my monitors?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... paragraphs (a)(1)(i) through (iv) of this section. (i) Locate the flow sensor and other necessary equipment... sensor with a minimum measurement sensitivity of 2 percent of the flow rate. (iii) Conduct a flow sensor... paragraphs (a)(2)(i) through (vi) of this section. (i) Locate the pressure sensor(s) in or as close as...

  14. 40 CFR 63.1385 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... applicable emission limits: (1) Method 1 (40 CFR part 60, appendix A) for the selection of the sampling port location and number of sampling ports; (2) Method 2 (40 CFR part 60, appendix A) for volumetric flow rate.... Each run shall consist of a minimum run time of 2 hours and a minimum sample volume of 60 dry standard...

  15. Evaluation of an active humidification system for inspired gas.

    PubMed

    Roux, Nicolás G; Plotnikow, Gustavo A; Villalba, Darío S; Gogniat, Emiliano; Feld, Vivivana; Ribero Vairo, Noelia; Sartore, Marisa; Bosso, Mauro; Scapellato, José L; Intile, Dante; Planells, Fernando; Noval, Diego; Buñirigo, Pablo; Jofré, Ricardo; Díaz Nielsen, Ernesto

    2015-03-01

    The effectiveness of active humidification systems (AHS) in patients already weaned from mechanical ventilation and with an artificial airway has not been very well described. The objective of this study was to evaluate the performance of an AHS in chronically tracheostomized and spontaneously breathing patients. Measurements were quantified at three levels of temperature (T°) of the AHS (level I, low; level II, middle; and level III, high) and at different flow levels (20 to 60 L/min). Statistical analysis of repeated measurements was performed using analysis of variance and significance was set at P<0.05. While the lowest temperature setting (level I) did not condition gas to the minimum recommended values for any of the flows that were used, the medium temperature setting (level II) only conditioned gas with flows of 20 and 30 L/min. Finally, at the highest temperature setting (level III), every flow reached the minimum recommended absolute humidity (AH) of 30 mg/L. According to our results, to obtain appropriate relative humidity, AH, and gas temperature, one should have a device that maintains water temperature at least at 53°C for flows between 20 and 30 L/min, or at 61°C at any flow rate.
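
    Why water temperature governs delivered humidity can be sketched with the standard Magnus approximation for saturation vapour pressure (the constants below are the common Magnus parameterization, an assumption for illustration, not values from this study): air can only carry the recommended 30 mg/L of water if it is saturated at roughly 30°C or warmer, so the heater must run well above that to compensate for cooling along the circuit.

```python
import math

def saturated_absolute_humidity(temp_c):
    """Absolute humidity (g/m^3, equivalently mg/L) of fully saturated
    air at temp_c, using the Magnus approximation for saturation
    vapour pressure (illustrative standard constants, not from the
    study)."""
    e_s = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # hPa
    return 216.7 * e_s / (temp_c + 273.15)
```
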

  16. Low-flow profiles of the upper Oconee River and tributaries in Georgia

    USGS Publications Warehouse

    Carter, R.F.; Hopkins, E.H.; Perlman, H.A.

    1988-01-01

    Low-flow information is provided for use in an evaluation of the capacity of streams to permit withdrawals or to accept waste loads without exceeding the limits of State water quality standards. The purpose of this report is to present the results of a compilation of available low-flow data in the form of tables and '7Q10 flow profiles' (the minimum average flow for 7 consecutive days with a 10-year recurrence interval, plotted against distance along a stream channel) for all stream reaches of the Upper Oconee River and tributaries in Georgia where sufficient data of acceptable accuracy are available. Drainage area profiles are included for all stream basins larger than 5 sq mi, except for those in a few remote areas. This report is the second in a series of reports that will cover all stream basins north of the Fall Line in Georgia. It includes the Oconee River basin down to and including Camp Creek at stream mile 134.53, Town Creek in Baldwin and Hancock Counties down to County Road 213-141, and Buffalo Creek in Hancock County down to the Hancock-Washington County line. Flow records were not adjusted for diversions or other factors that cause measured flows to represent other than natural flow conditions. The 7-day minimum flow profile was omitted for stream reaches where natural flow was known to be altered significantly. (Lantz-PTT)

  17. Optimizing congestion and emissions via tradable credit charge and reward scheme without initial credit allocations

    NASA Astrophysics Data System (ADS)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang

    2017-01-01

    This paper investigates a revenue-neutral tradable credit charge and reward scheme, without initial credit allocations, that can reassign network traffic flow patterns to optimize congestion and emissions. First, we prove the existence of the proposed schemes and further decentralize the minimum-emission flow pattern to user equilibrium. Moreover, we design a solution method for the proposed credit scheme for the minimum-emission problem. Second, we investigate the revenue-neutral tradable credit charge and reward scheme without initial credit allocations for bi-objectives to obtain the Pareto system-optimum flow patterns of congestion and emissions, and show that the corresponding solutions are located in the polyhedron constituted by a system of inequalities and equalities. Last, a numerical example based on a simple traffic network is adopted to obtain the proposed credit schemes and verify that they are revenue-neutral.

  18. Two-IMU FDI performance of the sequential probability ratio test during shuttle entry

    NASA Technical Reports Server (NTRS)

    Rich, T. M.

    1976-01-01

    Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
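
    Wald's SPRT underlying the test can be sketched for Gaussian observations (thresholds from the classical Wald approximations; the means, variances, and error rates below are illustrative, not the shuttle IMU failure model):

```python
import math

def sprt(observations, mu0=0.0, mu1=1.0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for the mean of
    unit-variance Gaussian samples: H0 (mean mu0) vs H1 (mean mu1).
    alpha/beta are the target false-alarm and miss probabilities.
    Returns ('H0' | 'H1' | 'continue', number of samples consumed)."""
    upper = math.log((1.0 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1.0 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # Log-likelihood ratio increment for N(mu1, 1) vs N(mu0, 1).
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1))
        if llr >= upper:
            return 'H1', n
        if llr <= lower:
            return 'H0', n
    return 'continue', len(observations)
```

    The sequential character is what matters for failure detection/isolation: the test stops as soon as the accumulated evidence crosses either threshold, so clear failures are flagged after only a few samples.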

  19. Multi-bottle, no compressor, mean pressure control system for a Stirling engine

    DOEpatents

    Corey, John A.

    1990-01-01

    The invention relates to an apparatus for mean-pressure control of a Stirling engine without the need for a compressor. The invention includes a multi-tank system with at least one high-pressure tank and one low-pressure tank. Gas flows through the maximum-pressure supply line from the engine to the high-pressure tank when a first valve is opened, until the maximum pressure of the engine drops below that of the high-pressure tank, opening an inlet regulator to permit gas flow from the engine to the low-pressure tank. When gas flows toward the engine, it flows through the minimum-pressure supply line from the low-pressure tank when a second valve is opened, until the tank reaches the engine's minimum pressure level, at which time the outlet regulator opens, permitting gas to be supplied from the high-pressure tank to the engine. Check valves between the two tanks prevent any backflow of gas.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maze, Grace M.

    STREAM II is the aqueous transport model of the Weather Information Display (WIND) emergency response system at Savannah River Site. It is used to calculate transport in the event of a chemical or radiological spill into the waterways on the Savannah River Site. Improvements were made to the code (STREAM II V7) to include flow from all site tributaries in the Savannah River total flow and to utilize a 4-digit year input. The predicted downstream concentrations using V7 were generally on the same order of magnitude as V6, with slightly lower concentrations and quicker arrival times when all onsite stream flows are contributing to the Savannah River flow. The downstream arrival time at the Savannah River Water Plant ranges from no change to an increase of 8.77%, with minimum changes typically in March/April and maximum changes typically in October/November. The downstream concentrations are generally no more than 15% lower using V7, with the maximum percent change in January through April and minimum changes in June/July.

  1. Introduction of a National Minimum Wage Reduced Depressive Symptoms in Low-Wage Workers: A Quasi-Natural Experiment in the UK.

    PubMed

    Reeves, Aaron; McKee, Martin; Mackenbach, Johan; Whitehead, Margaret; Stuckler, David

    2017-05-01

    Does increasing incomes improve health? In 1999, the UK government implemented minimum wage legislation, increasing hourly wages to at least £3.60. This policy experiment created intervention and control groups that can be used to assess the effects of increasing wages on health. Longitudinal data were taken from the British Household Panel Survey. We compared the health effects of higher wages on recipients of the minimum wage with otherwise similar persons who were likely unaffected because (1) their wages were between 100 and 110% of the eligibility threshold or (2) their firms did not increase wages to meet the threshold. We assessed the probability of mental ill health using the 12-item General Health Questionnaire. We also assessed changes in smoking, blood pressure, as well as hearing ability (control condition). The intervention group, whose wages rose above the minimum wage, experienced lower probability of mental ill health compared with both control group 1 and control group 2. This improvement represents 0.37 of a standard deviation, comparable with the effect of antidepressants (0.39 of a standard deviation) on depressive symptoms. The intervention group experienced no change in blood pressure, hearing ability, or smoking. Increasing wages significantly improves mental health by reducing financial strain in low-wage workers. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.

  2. Cluster-based control of a separating flow over a smoothly contoured ramp

    NASA Astrophysics Data System (ADS)

    Kaiser, Eurika; Noack, Bernd R.; Spohn, Andreas; Cattafesta, Louis N.; Morzyński, Marek

    2017-12-01

    The ability to manipulate and control fluid flows is of great importance in many scientific and engineering applications. The proposed closed-loop control framework addresses a key issue of model-based control: The actuation effect often results from slow dynamics of strongly nonlinear interactions which the flow reveals at timescales much longer than the prediction horizon of any model. Hence, we employ a probabilistic approach based on a cluster-based discretization of the Liouville equation for the evolution of the probability distribution. The proposed methodology frames high-dimensional, nonlinear dynamics into low-dimensional, probabilistic, linear dynamics which considerably simplifies the optimal control problem while preserving nonlinear actuation mechanisms. The data-driven approach builds upon a state space discretization using a clustering algorithm which groups kinematically similar flow states into a low number of clusters. The temporal evolution of the probability distribution on this set of clusters is then described by a control-dependent Markov model. This Markov model can be used as a predictor for the ergodic probability distribution for a particular control law. This probability distribution approximates the long-term behavior of the original system on which basis the optimal control law is determined. We examine how the approach can be used to improve the open-loop actuation in a separating flow dominated by Kelvin-Helmholtz shedding. For this purpose, the feature space, in which the model is learned, and the admissible control inputs are tailored to strongly oscillatory flows.
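
    The cluster-based Markov modeling step (after clustering has assigned a label to each flow snapshot) can be sketched as estimating a transition matrix and its long-run distribution. This is a control-independent sketch; the paper's model additionally depends on the control input:

```python
def transition_matrix(labels, n_clusters):
    """Maximum-likelihood Markov transition matrix estimated from a
    sequence of cluster labels (one label per flow snapshot)."""
    counts = [[0] * n_clusters for _ in range(n_clusters)]
    for a, b in zip(labels, labels[1:]):
        counts[a][b] += 1
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

def ergodic_distribution(P, iterations=500):
    """Long-run (ergodic) probability distribution over clusters,
    obtained by power iteration of pi <- pi P. Assumes the chain is
    ergodic so the iteration converges."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

    In the control framework, one such Markov model is identified per admissible control input, and the control law is chosen so that the predicted ergodic distribution concentrates on the desirable clusters.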

  3. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network

    PubMed Central

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.

    2015-01-01

    Wireless sensor networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for wireless sensor networks (WSNs) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To conquer the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of Bayes' rule. The object detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, minimizing energy consumption. Next, the Polynomial Regression Function is applied so that the target objects of similar events considered by different sensors are combined. They are based on the minimum and maximum values of object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. The energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users while reducing the communication overhead. PMID:26426701

  4. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network.

    PubMed

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N

    2015-01-01

    Wireless Sensor Networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSNs) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect objects of similar events (i.e., temperature, pressure, flow) into specific regions by applying Bayes' rule. The detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, minimizing energy consumption. Next, a polynomial regression function is applied to combine the target-object events observed by different sensors; these are based on the minimum and maximum values of the object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users while reducing communication overhead.
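
As a rough illustration of the two ingredients the abstract names (Bayes-rule region assignment and polynomial-regression aggregation), here is a hedged sketch on made-up sensor readings. The Gaussian likelihoods, priors, and regression setup are assumptions for illustration; the BNEPD paper's exact formulation is not spelled out in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical temperature readings from 60 nodes near two event regions.
readings = np.concatenate([rng.normal(20, 1, 30), rng.normal(35, 1, 30)])

# --- Bayes rule: posterior region membership for each node's reading. ---
priors = np.array([0.5, 0.5])
region_means, sigma = np.array([20.0, 35.0]), 1.0

def gaussian(x, mu):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

likelihood = np.stack([gaussian(readings, m) for m in region_means], axis=1)
posterior = likelihood * priors
posterior /= posterior.sum(axis=1, keepdims=True)
region = posterior.argmax(axis=1)  # nodes grouped into event regions

# --- Polynomial regression: fit a compact model of the aggregated event
#     trend so the sink receives coefficients instead of raw samples. ---
t = np.arange(10.0)
event = 0.5 * t ** 2 - t + 3 + 0.1 * rng.normal(size=10)  # noisy trend
coeffs = np.polyfit(t, event, deg=2)  # 3 numbers summarize 10 readings
```

The energy argument is that transmitting a handful of regression coefficients costs far less than relaying every raw sample to the sink.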

  5. Quantifying avian predation on fish populations: integrating predator-specific deposition probabilities in tag-recovery studies

    USGS Publications Warehouse

    Hostetter, Nathan J.; Evans, Allen F.; Cramer, Bradley M.; Collis, Ken; Lyons, Donald E.; Roby, Daniel D.

    2015-01-01

    Accurate assessment of specific mortality factors is vital to prioritize recovery actions for threatened and endangered species. For decades, tag recovery methods have been used to estimate fish mortality due to avian predation. Predation probabilities derived from fish tag recoveries on piscivorous waterbird colonies typically reflect minimum estimates of predation due to an unknown and unaccounted-for fraction of tags that are consumed but not deposited on-colony (i.e., deposition probability). We applied an integrated tag recovery modeling approach in a Bayesian context to estimate predation probabilities that accounted for predator-specific tag detection and deposition probabilities in a multiple-predator system. Studies of PIT tag deposition were conducted across three bird species nesting at seven different colonies in the Columbia River basin, USA. Tag deposition probabilities differed significantly among predator species (Caspian terns Hydroprogne caspia: deposition probability = 0.71, 95% credible interval [CRI] = 0.51–0.89; double-crested cormorants Phalacrocorax auritus: 0.51, 95% CRI = 0.34–0.70; California gulls Larus californicus: 0.15, 95% CRI = 0.11–0.21) but showed little variation across trials within a species or across years. Data from a 6-year study (2008–2013) of PIT-tagged juvenile Snake River steelhead Oncorhynchus mykiss (listed as threatened under the Endangered Species Act) indicated that colony-specific predation probabilities ranged from less than 0.01 to 0.17 and varied by predator species, colony location, and year. Integrating the predator-specific deposition probabilities increased the predation probabilities by a factor of approximately 1.4 for Caspian terns, 2.0 for double-crested cormorants, and 6.7 for California gulls compared with traditional minimum predation rate methods, which do not account for deposition probabilities. 
Results supported previous findings on the high predation impacts from strictly piscivorous waterbirds nesting in the Columbia River estuary (i.e., terns and cormorants), but our findings also revealed greater impacts of a generalist predator species (i.e., California gulls) than were previously documented. Approaches used in this study allow for direct comparisons among multiple fish mortality factors and considerably improve the reliability of tag recovery models for estimating predation probabilities in multiple-predator systems.
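
The reported correction factors follow directly from dividing a minimum (recovery-based) predation rate by the deposition probability. A point-estimate sketch, with hypothetical tag counts and detection probability; the study itself estimates these quantities jointly in a Bayesian model with credible intervals:

```python
# Deposition-adjusted predation probability: recovered tags understate
# consumption because only a fraction of eaten tags lands on-colony.
def adjusted_predation(tags_recovered, tags_available,
                       detection_prob, deposition_prob):
    """Point estimate; detection and deposition counts are hypothetical."""
    minimum_rate = tags_recovered / (tags_available * detection_prob)
    return minimum_rate / deposition_prob

# Hypothetical numbers: 30 of 10,000 available tags recovered, on-colony
# detection 0.9, gull deposition probability 0.15 (the study's estimate).
minimum = 30 / (10_000 * 0.9)
adjusted = adjusted_predation(30, 10_000, 0.9, 0.15)
factor = adjusted / minimum  # correction factor is simply 1 / deposition
```

With deposition = 0.15 the factor is 1/0.15 ≈ 6.7, matching the gull correction reported above.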

  6. Colonial waterbird predation on Lost River and Shortnose suckers in the Upper Klamath Basin

    USGS Publications Warehouse

    Evans, Allen F.; Hewitt, David A.; Payton, Quinn; Cramer, Bradley M.; Collis, Ken; Roby, Daniel D.

    2016-01-01

    We evaluated predation on Lost River Suckers Deltistes luxatus and Shortnose Suckers Chasmistes brevirostris by American white pelicans Pelecanus erythrorhynchos and double-crested cormorants Phalacrocorax auritus nesting at mixed-species colonies in the Upper Klamath Basin of Oregon and California during 2009–2014. Predation was evaluated by recovering (detecting) PIT tags from tagged fish on bird colonies and calculating minimum predation rates, as the percentage of available suckers consumed, adjusted for PIT tag detection probabilities but not deposition probabilities (i.e., probability an egested tag was deposited on- or off-colony). Results indicate that impacts of avian predation varied by sucker species, age-class (adult, juvenile), bird colony location, and year, demonstrating dynamic predator–prey interactions. Tagged suckers ranging in size from 72 to 730 mm were susceptible to cormorant or pelican predation; all but the largest Lost River Suckers were susceptible to bird predation. Minimum predation rate estimates ranged annually from <0.1% to 4.6% of the available PIT-tagged Lost River Suckers and from <0.1% to 4.2% of the available Shortnose Suckers, and predation rates were consistently higher on suckers in Clear Lake Reservoir, California, than on suckers in Upper Klamath Lake, Oregon. There was evidence that bird predation on juvenile suckers (species unknown) in Upper Klamath Lake was higher than on adult suckers in Upper Klamath Lake, where minimum predation rates ranged annually from 5.7% to 8.4% of available juveniles. Results suggest that avian predation is a factor limiting the recovery of populations of Lost River and Shortnose suckers, particularly juvenile suckers in Upper Klamath Lake and adult suckers in Clear Lake Reservoir. 
Additional research is needed to measure predator-specific PIT tag deposition probabilities (which, based on other published studies, could increase predation rates presented herein by a factor of roughly 2.0) and to better understand biotic and abiotic factors that regulate sucker susceptibility to bird predation.

  7. Unraveling the relationship between arterial flow and intra-aneurysmal hemodynamics.

    PubMed

    Morales, Hernán G; Bonnefous, Odile

    2015-02-26

    Arterial flow rate affects intra-aneurysmal hemodynamics, but the nature of this relationship is unclear. This uncertainty hinders comparisons among studies, including clinical evaluations such as pre- and post-treatment assessments, since arterial flow rates may differ at each acquisition. The purposes of this work are as follows: (1) To study how intra-aneurysmal hemodynamics changes within the full physiological range of arterial flow rates. (2) To provide characteristic curves of intra-aneurysmal velocity, wall shear stress (WSS) and pressure as functions of the arterial flow rate. Fifteen image-based aneurysm models were studied using computational fluid dynamics (CFD) simulations. The full range of physiological arterial flow rates reported in the literature was covered by 11 pulsatile simulations. For each aneurysm, the spatiotemporal-averaged blood flow velocity, WSS and pressure were calculated. Spatiotemporal-averaged velocity inside the aneurysm increases linearly as a function of the mean arterial flow (minimum R(2)>0.963). Spatiotemporal-averaged WSS and pressure at the aneurysm wall can be represented by quadratic functions of the arterial flow rate (minimum R(2)>0.996). Quantitative characterizations of spatiotemporal-averaged velocity, WSS and pressure inside cerebral aneurysms can be obtained with respect to the arterial flow rate. These characteristic curves provide more information on the relationship between arterial flow and aneurysm hemodynamics since the full range of arterial flow rates is considered. With these curves, it is possible to compare experimental studies and clinical evaluations when different flow conditions are used. Copyright © 2015 Elsevier Ltd. All rights reserved.
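
The characteristic-curve idea (velocity roughly linear, WSS roughly quadratic in flow rate) can be illustrated with a simple least-squares fit. The synthetic data, coefficients, and units below are invented stand-ins, not the CFD results:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for spatiotemporal-averaged quantities over the
# physiological range of mean arterial flow rates (hypothetical units),
# one point per pulsatile simulation (11 cases, as in the study design).
q = np.linspace(1.0, 6.0, 11)
velocity = 0.8 * q + 0.05 * rng.normal(size=11)           # ~linear in flow
wss = 0.3 * q ** 2 + 0.5 * q + 0.1 * rng.normal(size=11)  # ~quadratic in flow

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

lin = np.polyfit(q, velocity, 1)    # characteristic curve: velocity(q)
quad = np.polyfit(q, wss, 2)        # characteristic curve: WSS(q)
r2_velocity = r_squared(velocity, np.polyval(lin, q))
r2_wss = r_squared(wss, np.polyval(quad, q))
```

Once such curves are fitted per aneurysm, two acquisitions made at different flow rates can be compared by evaluating both on a common flow rate.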

  8. Effects of flow alterations on trout, angling, and recreation in the Chattahoochee River between Buford Dam and Peachtree Creek

    USGS Publications Warehouse

    Nestler, John M.; Milhouse, Robert T.; Troxel, Jay; Fritschen, Janet A.

    1985-01-01

    In 1974 county governments in the Atlanta vicinity realized that demands on the Chattahoochee River for water supply plus the streamflow required for water quality nearly equaled the minimum flow in the river. Increased demands for water supply in the following years could not be supplied under the then-existing flow regime in the river. In response to the anticipated shortage of water, the Atlanta Regional Commission, a multicounty agency responsible for comprehensive regional planning in the Atlanta region, was contracted to prepare water demand projections to the year 2010 and identify alternatives for meeting projected water demands. The results of this study are published in an extensive final report, the Metropolitan Atlanta Area Water Resources Management Study (1981). Requests for copies should be directed to the District Engineer, Savannah District. Many of the identified alternatives to increase future water supply for the Atlanta area would result in modifications to the present flow regime within the Chattahoochee River between Buford Dam (river mile 348.3) and its confluence with Peachtree Creek (river mile 300.5). The present preferred alternative is construction of a reregulation dam at about river mile 342. The proposed reregulation dam would release a much more constant flow than the peaking flows presently released from Buford Dam (generally, a maximum release of approximately 9000 cfs or minimum release of about 550 cfs) by storing the generation releases from Buford Dam for gradual release during non-generation periods. The anticipated minimum release from the reregulation dam would be approximately 1050 cfs (based on contractual obligations to the Southeast Power Administration to supply a minimum of 11 hours of peaking power per week from Buford Dam). 
The average annual release from the proposed reregulation dam into the Chattahoochee River would be approximately 2000 cfs (based on USGS flow records) and the median release would be approximately 1500 cfs (value obtained from Savannah District). The proposed reregulation dam would have sufficient storage to provide some opportunity for flow management to optimize uses other than water supply and water quality. Flow modifications (and resultant water quality changes) within this reach of the Chattahoochee River to meet increased demands for water supply may have an effect on other beneficial uses of this important natural resource. In addition to supplying a significant proportion of the water supply for metropolitan Atlanta and providing for water quality, the Chattahoochee River also is used extensively for recreation and supports a valuable trout fishery. Altered flows in the channel to meet water supply needs may have an impact on river recreation and trout habitat.

  9. The X-Ray Lightcurve of Eta Carinae: Refinement of the Orbit and Evidence for Phase Dependent Mass Loss

    NASA Technical Reports Server (NTRS)

    Corcoran, M. F.; Ishibashi, K.; Swank, J. H.; Petre, R.; White, Nicholas E. (Technical Monitor)

    2000-01-01

    We solve the RXTE X-ray lightcurve of the extremely luminous and massive star eta Carinae with a colliding wind emission model to refine the ground-based orbital elements. The sharp decline to X-ray minimum at the end of 1997 fixes the date of the last periastron passage at 1997.95 +/- 0.05, not 1998.13 as derived from ground-based radial velocities. This helps resolve a discrepancy between the ground-based radial velocities and spatially-resolved velocity measures obtained by STIS. The X-ray data are consistent with a mass function f(M) approx. = 1.5, lower than the value f(M) approx. = 7.5 previously reported, so that the masses of eta Carinae and the companion are M(sub eta) greater than or = 80 solar mass and M(sub c) approx. 30 solar mass respectively. In addition the X-ray data suggest that the mass loss rate from eta Carinae is generally less than 3 x 10(exp -4) solar mass/yr, about a factor of 5 lower than that derived from some observations in other wavebands. We could not match the duration of the X-ray minimum with any standard colliding wind model in which the wind is spherically symmetric and the mass loss rate is constant. However we show that we can match the variations around X-ray minimum if we include an increase of a factor of approx. 20 in the mass loss rate from eta Carinae for approximately 80 days following periastron. If real, this excess mass loss would be the first evidence of enhanced mass flow off the primary when the two stars are close (presumably driven by tidal interactions). Our interpretation of the X-ray data suggests that the ASCA and RXTE X-ray spectra near the X-ray minimum are significantly contaminated by unresolved hard emission (E greater than or = 2 keV) from some other nearby source, probably associated with scattering of the colliding wind emission by circumstellar dust. Based on the X-ray fluxes the distance to eta Carinae is 2300 pc with formal uncertainties of only approx. 10%.

  10. Design flow duration curves for environmental flows estimation in Damodar River Basin, India

    NASA Astrophysics Data System (ADS)

    Verma, Ravindra Kumar; Murthy, Shankar; Verma, Sangeeta; Mishra, Surendra Kumar

    2017-06-01

    In this study, environmental flows (EFs) are estimated for six watersheds of Damodar River Basin (DRB) using flow duration curves (FDCs) derived using two approaches: (a) period of record and (b) stochastic approaches for daily, 7-, 30-, 60-day moving averages, and 7-daily mean annual flows observed at Tenughat dam, Konar dam, Maithon dam, Panchet dam, Damodar bridge, and Burnpur during 1981-2010 and at Phusro during 1988-2010. For stochastic FDCs, 7-day FDCs for 10-, 20-, 50- and 100-year return periods were derived for extraction of discharge values at every 5% probability of exceedance. FDCs derived using the first approach show high probability of exceedance (5-75%) for the same discharge values. Furthermore, discharge values of the 60-day mean are higher than those derived using daily, 7-, and 30-day mean values. The discharge values of 95% probability of exceedance (Q95) derived from 7Q10 (ranging from 2.04 to 5.56 cumec) and 7Q100 (ranging from 3.4 to 31.48 cumec) FDCs using the second approach are found more appropriate as EFs during drought/low-flow and normal precipitation years.
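
A period-of-record flow duration curve and the Q95 low-flow index used above can be computed by sorting the record and assigning exceedance probabilities. The synthetic lognormal flows and the Weibull plotting position m/(n+1) are illustrative assumptions, not the Damodar records or the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily flows (cumec) standing in for a 30-year observed record.
flows = rng.lognormal(mean=2.0, sigma=0.8, size=30 * 365)

# Period-of-record flow duration curve: sort descending, then assign each
# rank m the exceedance probability m/(n+1) (Weibull plotting position).
sorted_q = np.sort(flows)[::-1]
n = len(sorted_q)
exceedance = np.arange(1, n + 1) / (n + 1)

def q_at(p):
    """Discharge equalled or exceeded p percent of the time."""
    return np.interp(p / 100.0, exceedance, sorted_q)

q95 = q_at(95)   # common low-flow / environmental-flow index
q50 = q_at(50)   # median flow
```

Reading the curve at 95% exceedance gives the Q95 value proposed as an environmental flow; the stochastic variant would first derive return-period FDCs from annual 7-day series.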

  11. Lava flow hazards-An impending threat at Miyakejima volcano, Japan

    NASA Astrophysics Data System (ADS)

    Cappello, Annalisa; Geshi, Nobuo; Neri, Marco; Del Negro, Ciro

    2015-12-01

    The majority of the historic eruptions recorded at Miyakejima volcano were fissure eruptions that occurred on the flanks of the volcano. During the last 1100 years, 17 fissure eruptions have been reported with a mean interval of about 76-78 years. In the last century, the mean interval between fissure eruptions decreased to 21-22 years, increasing significantly the threat of lava flow inundations to people and property. Here we quantify the lava flow hazards posed by effusive eruptions in Miyakejima by combining field data, numerical simulations and probability analysis. Our analysis is the first to assess both the spatiotemporal probability of vent opening, which highlights the areas most likely to host a new eruption, and the lava flow hazard, which shows the probabilities of lava-flow inundation in the next 50 years. Future eruptive vents are expected in the vicinity of the Hatchodaira caldera, radiating from the summit of the volcano toward the coasts. Areas more likely to be threatened by lava flows are Ako and Kamitsuki villages, as well as Miike port and Miyakejima airport. Thus, our results can be useful for risk evaluation, investment decisions, and emergency response preparation.

  12. Selected low-flow frequency statistics for continuous-record streamgages in Georgia, 2013

    USGS Publications Warehouse

    Gotvald, Anthony J.

    2016-04-13

    This report presents the annual and monthly minimum 1- and 7-day average streamflows with the 10-year recurrence interval (1Q10 and 7Q10) for 197 continuous-record streamgages in Georgia. Streamgages used in the study included active and discontinued stations having a minimum of 10 complete climatic years of record as of September 30, 2013. The 1Q10 and 7Q10 flow statistics were computed for 85 streamgages on unregulated streams with minimal diversions upstream, 43 streamgages on regulated streams, and 69 streamgages known, or considered, to be affected by varying degrees of diversions upstream. Descriptive information for each of these streamgages, including the U.S. Geological Survey (USGS) station number, station name, latitude, longitude, county, drainage area, and period of record analyzed also is presented. Kendall’s tau nonparametric test was used to determine the statistical significance of trends in annual and monthly minimum 1-day and 7-day average flows for the 197 streamgages. Significant negative trends in the minimum annual 1-day and 7-day average streamflow were indicated for 77 of the 197 streamgages. Many of these significant negative trends are due to the period of record ending during one of the recent droughts in Georgia, particularly those streamgages with record through the 2013 water year. Long-term unregulated streamgages with 70 or more years of record indicate significant negative trends in the annual minimum 7-day average flow for central and southern Georgia. Watersheds for some of these streamgages have experienced minimal human impact, thus indicating that the significant negative trends observed in flows at the long-term streamgages may be influenced by changing climatological conditions. A Kendall-tau trend analysis of the annual air temperature and precipitation totals for Georgia indicated no significant trends. 
A comprehensive analysis of causes of the trends in annual and monthly minimum 1-day and 7-day average flows in central and southern Georgia is outside the scope of this study. Further study is needed to determine some of the causes, including both climatological and human impacts, of the significant negative trends in annual minimum 1-day and 7-day average flows in central and southern Georgia. To assess the changes in the annual 1Q10 and 7Q10 statistics over time for long-term continuous streamgages with significant trends in record, the annual 1Q10 and 7Q10 statistics were computed on a decadal accumulated basis for 39 streamgages having 40 or more years of record that indicated a significant trend. Records from most of the streamgages showed a decline in 7Q10 statistics for the decades of 1980–89, 1990–99, and 2000–09 because of the recent droughts in Georgia. Twenty-four of the 39 streamgages had complete records from 1980 to 2010, and records from 23 of these gages exhibited a decline in the 7Q10 statistics during this period, ranging from –6.3 to –76.2 percent with a mean of –27.3 percent. No attempts were made during this study to adjust streamflow records or statistical analyses on the basis of trends. The monthly and annual 1Q10 and 7Q10 flow statistics for the entire period of record analyzed in the study are incorporated into the USGS StreamStatsDB, which is a database accessible to users through the recently released USGS StreamStats application for Georgia. StreamStats is a Web-based geographic information system that provides users with access to an assortment of analytical tools that are useful for water-resources planning and management, and for engineering design applications, such as the design of bridges. StreamStats allows users to easily obtain streamflow statistics, basin characteristics, and other information for user-selected streamgages.
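
Kendall's tau, the trend statistic used above, reduces to counting concordant versus discordant pairs of (year, flow) observations. A minimal tau-a sketch (no tie correction, and without the significance test's p-value) on synthetic declining low flows; the trend slope and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic annual 7-day minimum flows with an imposed downward trend.
years = np.arange(1950, 2014)
low_flows = 100.0 - 0.4 * (years - 1950) + 5.0 * rng.normal(size=len(years))

def kendall_tau(x, y):
    """Kendall's tau-a via pairwise concordance (assumes no ties)."""
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        # +1 for each concordant pair (i, j>i), -1 for each discordant pair.
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return s / (n * (n - 1) / 2)

tau = kendall_tau(years, low_flows)  # negative => declining minimum flows
```

Because the statistic depends only on pair orderings, it is robust to the skewed distributions typical of low-flow series, which is why it is preferred over ordinary least-squares slopes in this setting.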

  13. Effects of weather on survival in populations of boreal toads in Colorado

    USGS Publications Warehouse

    Scherer, R. D.; Muths, E.; Lambert, B.A.

    2008-01-01

    Understanding the relationships between animal population demography and the abiotic and biotic elements of the environments in which they live is a central objective in population ecology. For example, correlations between weather variables and the probability of survival in populations of temperate zone amphibians may be broadly applicable to several species if such correlations can be validated for multiple situations. This study focuses on the probability of survival and evaluates hypotheses based on six weather variables in three populations of Boreal Toads (Bufo boreas) from central Colorado over eight years. In addition to suggesting a relationship between some weather variables and survival probability in Boreal Toad populations, this study uses robust methods and highlights the need for demographic estimates that are precise and have minimal bias. Capture-recapture methods were used to collect the data, and the Cormack-Jolly-Seber model in program MARK was used for analysis. The top models included minimum daily winter air temperature, and the sum of the model weights for these models was 0.956. Weaker support was found for the importance of snow depth and the amount of environmental moisture in winter in modeling survival probability. Minimum daily winter air temperature was positively correlated with the probability of survival in Boreal Toads at other sites in Colorado and has been identified as an important covariate in studies in other parts of the world. If air temperatures are an important component of survival for Boreal Toads or other amphibians, changes in climate may have profound impacts on populations. Copyright 2008 Society for the Study of Amphibians and Reptiles.

  14. Changes in tropical precipitation cluster size distributions under global warming

    NASA Astrophysics Data System (ADS)

    Neelin, J. D.; Quinn, K. M.

    2016-12-01

    The total amount of precipitation integrated across a tropical storm or other precipitation feature (contiguous clusters of precipitation exceeding a minimum rain rate) is a useful measure of the aggregate size of the disturbance. To establish baseline behavior in current climate, the probability distribution of cluster sizes from multiple satellite retrievals and National Centers for Environmental Prediction (NCEP) reanalysis is compared to those from Coupled Model Intercomparison Project (CMIP5) models and the Geophysical Fluid Dynamics Laboratory high-resolution atmospheric model (HIRAM-360 and -180). With the caveat that a minimum rain rate threshold is important in the models (which tend to overproduce low rain rates), the models agree well with observations in leading properties. In particular, scale-free power law ranges in which the probability drops slowly with increasing cluster size are well modeled, followed by a rapid drop in probability of the largest clusters above a cutoff scale. Under the RCP 8.5 global warming scenario, the models indicate substantial increases in probability (up to an order of magnitude) of the largest clusters by the end of century. For models with continuous time series of high resolution output, there is substantial spread on when these probability increases for the largest precipitation clusters should be detectable, ranging from detectable within the observational period to statistically significant trends emerging only in the second half of the century. Examination of NCEP reanalysis and SSMI/SSMIS series of satellite retrievals from 1979 to present does not yield reliable evidence of trends at this time. The results suggest improvements in inter-satellite calibration of the SSMI/SSMIS retrievals could aid future detection.
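
Extracting cluster sizes from a thresholded rain-rate field is the core bookkeeping step behind these distributions. A one-dimensional sketch (real retrievals are 2-D fields requiring connected-component labeling, and the exponential toy data will not reproduce the observed power-law range or cutoff):

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D toy rain-rate field; clusters = contiguous runs above a threshold.
rain = rng.exponential(scale=1.0, size=100_000)
wet = rain > 2.0  # minimum rain-rate threshold (results are sensitive to it)

# Contiguous cluster sizes via run-length encoding of the wet/dry mask.
edges = np.diff(wet.astype(int))
starts = np.where(edges == 1)[0] + 1
ends = np.where(edges == -1)[0] + 1
if wet[0]:
    starts = np.r_[0, starts]
if wet[-1]:
    ends = np.r_[ends, len(wet)]
sizes = ends - starts  # one entry per cluster

# Empirical probability that a cluster exceeds size s (survival function);
# plotted on log-log axes this reveals power-law ranges and the cutoff.
def survival(s):
    return np.mean(sizes > s)
```

In the study, "size" is the rain rate integrated over the cluster rather than a count of wet cells, but the cluster identification step is the same.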

  15. Probability of detection of defects in coatings with electronic shearography

    NASA Astrophysics Data System (ADS)

    Maddux, Gary A.; Horton, Charles M.; Lansing, Matthew D.; Gnacek, William J.; Newton, Patrick L.

    1994-07-01

    The goal of this research was to utilize statistical methods to evaluate the probability of detection (POD) of defects in coatings using electronic shearography. The coating system utilized in the POD studies was to be the paint system currently utilized on the external casings of the NASA Space Transportation System (STS) Revised Solid Rocket Motor (RSRM) boosters. The population of samples was to be large enough to determine the minimum defect size for 90 percent probability of detection of 95 percent confidence POD on these coatings. Also, the best methods to excite coatings on aerospace components to induce deformations for measurement by electronic shearography were to be determined.

  16. Probability of detection of defects in coatings with electronic shearography

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Horton, Charles M.; Lansing, Matthew D.; Gnacek, William J.; Newton, Patrick L.

    1994-01-01

    The goal of this research was to utilize statistical methods to evaluate the probability of detection (POD) of defects in coatings using electronic shearography. The coating system utilized in the POD studies was to be the paint system currently utilized on the external casings of the NASA Space Transportation System (STS) Revised Solid Rocket Motor (RSRM) boosters. The population of samples was to be large enough to determine the minimum defect size for 90 percent probability of detection of 95 percent confidence POD on these coatings. Also, the best methods to excite coatings on aerospace components to induce deformations for measurement by electronic shearography were to be determined.

  17. Streamflow characteristics and trends along Soldier Creek, Northeast Kansas

    USGS Publications Warehouse

    Juracek, Kyle E.

    2017-08-16

    Historical data for six selected U.S. Geological Survey streamgages along Soldier Creek in northeast Kansas were used in an assessment of streamflow characteristics and trends. This information is required by the Prairie Band Potawatomi Nation for the effective management of tribal water resources, including drought contingency planning. Streamflow data for the period of record at each streamgage were used to assess annual mean streamflow, annual mean base flow, mean monthly flow, annual peak flow, and annual minimum flow. Annual mean streamflows along Soldier Creek were characterized by substantial year-to-year variability with no pronounced long-term trends. On average, annual mean base flow accounted for about 20 percent of annual mean streamflow. Mean monthly flows followed a general seasonal pattern that included peak values in spring and low values in winter. Annual peak flows, which were characterized by considerable year-to-year variability, were most likely to occur in May and June and least likely to occur during November through February. With the exception of a weak yet statistically significant increasing trend at the Soldier Creek near Topeka, Kansas, streamgage, there were no pronounced long-term trends in annual peak flows. Annual 1-day, 30-day, and 90-day mean minimum flows were characterized by considerable year-to-year variability with no pronounced long-term trend. During an extreme drought, as was the case in the mid-1950s, there may be zero flow in Soldier Creek continuously for a period of one to several months.
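
The annual N-day minimum flows assessed above (and underlying statistics like 7Q10 in other records in this listing) are simply the smallest N-day moving average within each year. A sketch on synthetic seasonal daily flows; the seasonal shape and noise parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Ten synthetic years of daily flows (cfs) with a winter/summer cycle.
days = np.arange(365)
seasonal = 200 + 150 * np.cos(2 * np.pi * (days - 30) / 365)
flows = np.array([seasonal * rng.lognormal(0.0, 0.3, 365) for _ in range(10)])

def annual_nday_minimum(daily, n=7):
    """Minimum n-day moving average for one year of daily flows."""
    kernel = np.ones(n) / n
    return np.convolve(daily, kernel, mode="valid").min()

min7 = np.array([annual_nday_minimum(year) for year in flows])  # one per year
```

Fitting a frequency distribution to the `min7` series and reading off the 10-year recurrence value would give the 7Q10 statistic.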

  18. Numerical optimization using flow equations.

    PubMed

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  19. Numerical optimization using flow equations

    NASA Astrophysics Data System (ADS)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
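
The central idea in the two records above, extrema of the target functional appearing as fixed points tracked along a homotopy flow, can be shown on a one-variable toy problem. This is a generic homotopy-continuation sketch under invented functions; it omits the paper's maximum-entropy construction and continuous prior update:

```python
# Target functional and an "easy" starting functional for the homotopy.
f = lambda x: 0.25 * x ** 4 - x            # minimum at x = 1
fp, fpp = lambda x: x ** 3 - 1, lambda x: 3 * x ** 2
gp, gpp = lambda x: x, lambda x: 1.0       # g(x) = x^2 / 2, minimum at 0

# Homotopy h_t = (1 - t) g + t f.  Differentiating the stationarity
# condition h_t'(x(t)) = 0 with respect to t gives the flow equation
#     dx/dt = (g'(x) - f'(x)) / ((1 - t) g''(x) + t f''(x)),
# whose solution tracks the extremum from g's minimum to f's minimum.
x, steps = 0.0, 10_000
for i in range(steps):
    t = i / steps
    x += (gp(x) - fp(x)) / ((1 - t) * gpp(x) + t * fpp(x)) / steps

# A few Newton steps polish the tracked fixed point of the flow.
for _ in range(5):
    x -= fp(x) / fpp(x)
```

The forward-Euler integration and Newton polish stand in for whatever integrator one prefers; the point is that the optimum is never searched for directly, only continued from an easy problem.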

  20. Emergency Assessment of Debris-Flow Hazards from Basins Burned by the Padua Fire of 2003, Southern California

    USGS Publications Warehouse

    Cannon, Susan H.; Gartner, Joseph E.; Rupert, Michael G.; Michael, John A.

    2004-01-01

    Results of a preliminary assessment of the probability of debris-flow activity and estimates of peak discharges that can potentially be generated by debris flows issuing from basins burned by the Padua Fire of October 2003 in southern California in response to 25-year, 10-year, and 2-year recurrence, 1-hour duration rain storms are presented. The resulting probability maps are based on the application of a logistic multiple-regression model (Cannon and others, 2004) that describes the percent chance of debris-flow production from an individual basin as a function of burned extent, soil properties, basin gradients, and storm rainfall. The resulting peak discharge maps are based on application of a multiple-regression model (Cannon and others, 2004) that can be used to estimate debris-flow peak discharge at a basin outlet as a function of basin gradient, burn extent, and storm rainfall. Probabilities of debris-flow occurrence for the Padua Fire range between 0 and 99% and estimates of debris-flow peak discharges range between 1,211 and 6,096 ft3/s (34 to 173 m3/s). These maps are intended to identify those basins that are most prone to the largest debris-flow events and provide information for the preliminary design of mitigation measures and for the planning of evacuation timing and routes.
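
The logistic multiple-regression form behind such probability maps is a sigmoid of a linear combination of basin and storm predictors. The coefficients and predictor scaling below are invented for illustration and are not the fitted values of Cannon and others (2004):

```python
import numpy as np

# Logistic model: percent chance of debris-flow production from a basin
# as a sigmoid of burned extent, gradient, soils, and storm rainfall.
# All coefficients are hypothetical, chosen only to produce sensible output.
def debris_flow_probability(burned_fraction, mean_gradient_pct,
                            clay_fraction, storm_rain_mm):
    z = (-4.0 + 3.5 * burned_fraction + 0.05 * mean_gradient_pct
         + 2.0 * clay_fraction + 0.06 * storm_rain_mm)
    return 1.0 / (1.0 + np.exp(-z))

p_low = debris_flow_probability(0.2, 15, 0.1, 10)   # mild burn, light storm
p_high = debris_flow_probability(0.9, 40, 0.3, 35)  # severe burn, intense storm
```

Evaluating the fitted model for each burned basin under the 2-, 10-, and 25-year design storms is what produces the basin-by-basin probability maps described above.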

  1. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool segments not just the brain but also provides accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. 
Significance. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977

  2. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. 
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  3. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi; Hixon, Duane

    1993-01-01

    The work done under this project was documented in detail in the Ph.D. dissertation of Dr. Duane Hixon. The objectives of the research project were to evaluate the generalized minimum residual method (GMRES) as a tool for accelerating 2-D and 3-D unsteady flow computations, and to evaluate the suitability of the GMRES algorithm for unsteady flows computed on parallel computer architectures.
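For reference, a minimal textbook (unrestarted) GMRES looks like the following; this is a generic sketch, not the accelerated scheme developed in the dissertation:

```python
import numpy as np

def gmres(A, b, x0=None, tol=1e-10, max_iter=50):
    """Minimal unrestarted GMRES: at step k, minimize the residual norm over
    x0 + the k-dimensional Krylov subspace built by Arnoldi iteration."""
    n = len(b)
    x0 = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((n, max_iter + 1))        # orthonormal Krylov basis
    H = np.zeros((max_iter + 1, max_iter))  # upper Hessenberg matrix
    Q[:, 0] = r0 / beta
    x = x0
    for k in range(max_iter):
        w = A @ Q[:, k]                     # Arnoldi: expand the basis
        for j in range(k + 1):              # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        # Small least-squares problem: min || beta*e1 - H_k y ||
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        x = x0 + Q[:, :k + 1] @ y
        if np.linalg.norm(b - A @ x) < tol or H[k + 1, k] < 1e-14:
            return x                        # converged, or happy breakdown
        Q[:, k + 1] = w / H[k + 1, k]
    return x
```

In unsteady flow solvers the attraction is that each pseudo-time step only needs matrix-vector products, which parallelize well.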

  4. Can Low Frequency Measurements Be Good Enough? - A Statistical Assessment of Citizen Hydrology Streamflow Observations

    NASA Astrophysics Data System (ADS)

    Davids, J. C.; Rutten, M.; Van De Giesen, N.

    2016-12-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and relatively accurate but expensive monitoring equipment at limited numbers of sites. Consequently, the spatial coverage of the data is limited and costs are high. Achieving adequate maintenance of sophisticated monitoring equipment often exceeds local technical and resource capacity, and permanently deployed monitoring equipment is susceptible to vandalism, theft, and other hazards. Rather than using expensive, vulnerable installations at a few points, SmartPhones4Water (S4W), a form of Citizen Hydrology, leverages widely available mobile technology to gather hydrologic data at many sites in a manner that is repeatable and scalable. However, there is currently a limited understanding of the impact of decreased observational frequency on the accuracy of key streamflow statistics like minimum flow, maximum flow, and runoff. As a first step towards evaluating the tradeoffs between traditional continuous monitoring approaches and emerging Citizen Hydrology methods, we randomly selected 50 active U.S. Geological Survey (USGS) streamflow gauges in California. We used historical 15 minute flow data from 01/01/2008 through 12/31/2014 to develop minimum flow, maximum flow, and runoff values (7 year total) for each gauge. In order to mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, along with their respective distributions, from 50 subsample iterations with four different subsampling intervals (i.e. daily, three day, weekly, and monthly). Based on our results we conclude that, depending on the types of questions being asked, and the watershed characteristics, Citizen Hydrology streamflow measurements can provide useful and accurate information. 
Depending on watershed characteristics, minimum flows were reasonably estimated with subsample intervals ranging from daily to monthly. However, maximum flows in most cases were poorly characterized, even at daily subsample intervals. In general, runoff volumes were accurately estimated from daily, three day, weekly, and even in some cases, monthly observations.
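The subsampling experiment can be sketched as follows; the resampling details (random starting offsets standing in for the authors' bootstrap-with-replacement procedure) are simplifying assumptions:

```python
import random

def subsampled_stats(daily_flows, interval, n_iter=50, seed=0):
    """Mimic low-frequency citizen observations of a continuous record: draw
    n_iter random starting offsets (a simplified stand-in for the authors'
    bootstrap procedure), keep every `interval`-th observation, and recompute
    the statistics of interest from each thinned series."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_iter):
        start = rng.randrange(interval)
        thin = daily_flows[start::interval]
        stats.append({
            "min": min(thin),
            "max": max(thin),
            "runoff": sum(thin) * interval,  # rescale to an equivalent total volume
        })
    return stats
```

Comparing each statistic's spread across iterations with the full-record value shows which statistics survive thinning; as the abstract reports, extremes degrade faster than totals.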

  5. A ram-pressure threshold for star formation

    NASA Astrophysics Data System (ADS)

    Whitworth, A. P.

    2016-05-01

    In turbulent fragmentation, star formation occurs in condensations created by converging flows. The condensations must be sufficiently massive, dense and cool to be gravitationally unstable, so that they start to contract; and they must then radiate away thermal energy fast enough for self-gravity to remain dominant, so that they continue to contract. For the metallicities and temperatures in local star-forming clouds, this second requirement is only met robustly when the gas couples thermally to the dust, because this delivers the capacity to radiate across the full bandwidth of the continuum, rather than just in a few discrete spectral lines. This translates into a threshold for vigorous star formation, which can be written as a minimum ram pressure P_CRIT ~ 4 × 10⁻¹¹ dyne. P_CRIT is independent of temperature, and corresponds to flows with molecular hydrogen number density n_H2,FLOW and velocity v_FLOW satisfying n_H2,FLOW v_FLOW² ≳ 800 cm⁻³ (km s⁻¹)². This in turn corresponds to a minimum molecular hydrogen column density for vigorous star formation, N_H2,CRIT ~ 4 × 10²¹ cm⁻² (Σ_CRIT ~ 100 M⊙ pc⁻²), and a minimum visual extinction A_V,CRIT ~ 9 mag. The characteristic diameter and line density for a star-forming filament when this threshold is just exceeded - a sweet spot for local star formation regions - are 2R_FIL ~ 0.1 pc and μ_FIL ~ 13 M⊙ pc⁻¹. The characteristic diameter and mass for a prestellar core condensing out of such a filament are 2R_CORE ~ 0.1 pc and M_CORE ~ 1 M⊙. We also show that fragmentation of a shock-compressed layer is likely to commence while the convergent flows creating the layer are still ongoing, and we stress that, under this circumstance, the phenomenology and characteristic scales for fragmentation of the layer are fundamentally different from those derived traditionally for pre-existing layers.
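As a back-of-envelope plausibility check on the quoted numbers (our own arithmetic, not from the paper): converting the flow condition into a ram pressure, assuming a mean gas mass of about 2.8 proton masses per H2 molecule (hydrogen plus helium), recovers a threshold near 4 × 10⁻¹¹ dyne:

```python
# Back-of-envelope consistency check. The mean mass of ~2.8 m_H per H2
# molecule (to include helium) is our assumption, not stated in the abstract.
M_H = 1.67e-24                 # proton mass, grams
MU_PER_H2 = 2.8                # assumed total gas mass per H2, in units of m_H
N_V2 = 800.0                   # threshold: n_H2 * v^2 in cm^-3 (km/s)^2
KM_PER_S = 1.0e5               # 1 km/s in cm/s

P_crit = MU_PER_H2 * M_H * N_V2 * KM_PER_S ** 2   # ram pressure, dyne cm^-2
# P_crit comes out near 4e-11 dyne cm^-2, consistent with the quoted P_CRIT
```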

  6. Analyses of flood-flow frequency for selected gaging stations in South Dakota

    USGS Publications Warehouse

    Benson, R.D.; Hoffman, E.B.; Wipf, V.J.

    1985-01-01

    Analyses of flood flow frequency were made for 111 continuous-record gaging stations in South Dakota with 10 or more years of record. The analyses were developed using the log-Pearson Type III procedure recommended by the U.S. Water Resources Council. The procedure characterizes flood occurrence at a single site as a sequence of annual peak flows. The magnitudes of the annual peak flows are assumed to be independent random variables following a log-Pearson Type III probability distribution, which defines the probability that any single annual peak flow will exceed a specified discharge. By considering only annual peak flows, the flood-frequency analysis becomes the estimation of the log-Pearson annual-probability curve using the record of annual peak flows at the site. The recorded data are divided into two classes: systematic and historic. The systematic record includes all annual peak flows determined in the process of conducting a systematic gaging program at a site. In this program, the annual peak flow is determined for every year of the program. The systematic record is intended to constitute an unbiased and representative sample of the population of all possible annual peak flows at the site. In contrast to the systematic record, the historic record consists of annual peak flows that would not have been determined except for evidence indicating their unusual magnitude. Flood information acquired from historical sources almost invariably refers to floods of noteworthy, and hence extraordinary, size. Although historic records form a biased and unrepresentative sample, they can be used to supplement the systematic record. (Author's abstract)
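The log-Pearson Type III fit can be sketched as follows; this is a moments-based illustration using the Wilson-Hilferty frequency-factor approximation, not the full Water Resources Council procedure with historic-record weighting:

```python
import math
import statistics

def lp3_flood_quantile(annual_peaks, prob_exceed=0.01):
    """Illustrative log-Pearson Type III fit by the method of moments in log10
    space. The frequency factor K uses the Wilson-Hilferty approximation; the
    full Bulletin-17-style procedure (skew weighting, historic data) is omitted."""
    n = len(annual_peaks)
    logs = [math.log10(q) for q in annual_peaks]
    m = statistics.mean(logs)
    s = statistics.stdev(logs)
    # Sample skew coefficient of the log flows
    g = n * sum((x - m) ** 3 for x in logs) / ((n - 1) * (n - 2) * s ** 3)
    z = statistics.NormalDist().inv_cdf(1 - prob_exceed)  # standard normal deviate
    if abs(g) < 1e-6:
        K = z                                  # zero skew: reduces to lognormal
    else:
        K = (2 / g) * ((1 + g * z / 6 - g * g / 36) ** 3 - 1)  # Wilson-Hilferty
    return 10 ** (m + K * s)
```

Calling it with `prob_exceed=0.01` gives the 100-year flood estimate; `0.5` gives the median annual flood.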

  7. 40 CFR Table 3 to Subpart Hhh of... - Operating Parameters To Be Monitored and Minimum Measurement and Recording Frequencies

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... rural HMIWI HMIWI a with dry scrubber followed by fabric filter HMIWI a with wet scrubber HMIWI a with dry scrubber followed by fabric filter and wet scrubber Maximum operating parameters: Maximum charge... mercury (Hg) sorbent flow rate Hourly Once per hour ✔ ✔ Minimum pressure drop across the wet scrubber or...

  8. Subsonic flight test evaluation of a performance seeking control algorithm on an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Orme, John S.

    1992-01-01

    The subsonic flight test evaluation phase of the NASA F-15 (powered by F100 engines) performance seeking control program was completed for single-engine operation at part- and military-power settings. The subsonic performance seeking control algorithm optimizes the quasi-steady-state performance of the propulsion system for three modes of operation: the minimum fuel flow mode minimizes fuel consumption, the minimum temperature mode minimizes fan turbine inlet temperature, and the maximum thrust mode maximizes thrust at military power. Decreases in thrust-specific fuel consumption of 1 to 2 percent were measured in the minimum fuel flow mode; these fuel savings are significant, especially for supersonic cruise aircraft. Decreases of up to approximately 100 degrees R in fan turbine inlet temperature were measured in the minimum temperature mode. Temperature reductions of this magnitude would more than double turbine life if inlet temperature were the only life factor. Measured thrust increases of up to approximately 15 percent in the maximum thrust mode cause substantial increases in aircraft acceleration. The system dynamics of the closed-loop algorithm operation were good. The subsonic flight phase has validated the performance seeking control technology, which can significantly benefit the next generation of fighter and transport aircraft.

  9. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    The effect of traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time-series analysis methods; however, all of them have shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probability of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton Interior-Point Method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to have good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, which makes it suitable for real-time traffic flow prediction. PMID:27872637
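The core of the method, predicting the target road's next flow from upstream flows weighted by transfer probabilities, can be sketched as follows (the data layout and function names are our own; the paper additionally fits parameters with the Newton Interior-Point Method):

```python
def transfer_probabilities(transition_counts):
    """Estimate one upstream road's transfer probabilities from historical
    turning counts (hypothetical layout: counts into each downstream road)."""
    total = sum(transition_counts.values())
    return {road: c / total for road, c in transition_counts.items()}

def predict_flow(upstream, target):
    """Flow equation sketch: predicted flow on `target` at the next time step
    is the sum of upstream flows weighted by each road's transfer probability
    toward `target`. `upstream` is a list of (flow, transfer_probs) pairs."""
    return sum(flow * probs.get(target, 0.0) for flow, probs in upstream)
```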

  10. Teleportation of Three-Qubit State via Six-qubit Cluster State

    NASA Astrophysics Data System (ADS)

    Yu, Li-zhi; Sun, Shao-xin

    2015-05-01

    A scheme of probabilistic teleportation is proposed in which a six-qubit non-maximally entangled cluster state serves as the quantum channel for teleporting an unknown three-qubit entangled state. Based on the outcomes of three Bell state measurements (BSM), the receiver Bob can reconstruct the initial state with a certain probability by introducing an auxiliary particle and applying the appropriate transformation. We found that the probability of successful transmission depends on the smallest absolute value of the coefficients of the six-qubit cluster state.

  11. Probabilistically Perfect Cloning of Two Pure States: Geometric Approach.

    PubMed

    Yerokhin, V; Shehu, A; Feldman, E; Bagan, E; Bergou, J A

    2016-05-20

    We solve the long-standing problem of making n perfect clones from m copies of one of two known pure states with minimum failure probability in the general case where the known states have arbitrary a priori probabilities. The solution emerges from a geometric formulation of the problem. This formulation reveals that cloning converges to state discrimination followed by state preparation as the number of clones goes to infinity. The convergence exhibits a phenomenon analogous to a second-order symmetry-breaking phase transition.

  12. Minimum Bayes risk image correlation

    NASA Technical Reports Server (NTRS)

    Minter, T. C., Jr.

    1980-01-01

    In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
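The fast Fourier implementation rests on the fact that circular cross-correlation is a pointwise product in the frequency domain; a minimal sketch of generic FFT correlation (without the Bayes-risk-trained filter itself):

```python
import numpy as np

def matched_filter_correlate(image, template):
    """Fast Fourier implementation of correlation: circular cross-correlation
    equals IFFT of (image spectrum x conjugate of template spectrum). The peak
    location of the output gives the best-matching offset (the 'fix')."""
    F = np.fft.fft2(image)
    T = np.fft.fft2(template, s=image.shape)  # zero-pad template to image size
    return np.real(np.fft.ifft2(F * np.conj(T)))
```

In the paper's setting the template would be the learned matched filter rather than a raw image patch, but the FFT mechanics are the same.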

  13. 40 CFR 63.11646 - What are my compliance requirements?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... with Method 29 must collect a minimum sample volume of 0.85 dry standard cubic meters (30 dry standard... weight measurement device, mass flow meter, or densitometer and volumetric flow meter to measure ore...) Measure the weight of concentrate (produced by electrowinning, Merrill Crowe process, gravity feed, or...

  14. 40 CFR 63.11646 - What are my compliance requirements?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... with Method 29 must collect a minimum sample volume of 0.85 dry standard cubic meters (30 dry standard... weight measurement device, mass flow meter, or densitometer and volumetric flow meter to measure ore...) Measure the weight of concentrate (produced by electrowinning, Merrill Crowe process, gravity feed, or...

  15. 40 CFR 63.11646 - What are my compliance requirements?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... with Method 29 must collect a minimum sample volume of 0.85 dry standard cubic meters (30 dry standard... weight measurement device, mass flow meter, or densitometer and volumetric flow meter to measure ore...) Measure the weight of concentrate (produced by electrowinning, Merrill Crowe process, gravity feed, or...

  16. 40 CFR 63.11646 - What are my compliance requirements?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... with Method 29 must collect a minimum sample volume of 0.85 dry standard cubic meters (30 dry standard... weight measurement device, mass flow meter, or densitometer and volumetric flow meter to measure ore...) Measure the weight of concentrate (produced by electrowinning, Merrill Crowe process, gravity feed, or...

  17. STRUCTURAL CAPABILITIES OF NO-DIG MANHOLE REHABILITATION (WE&RF Report INFR1R12)

    EPA Science Inventory

    Failure of a manhole may have catastrophic consequences such as a sinkhole. At a minimum, wastewater flow will be blocked and flow upstream of the manhole will backup, causing a sanitary sewer overflow (SSO). Accordingly, the structural condition of a manhole is an important perf...

  18. 78 FR 1765 - Requirements for Chemical Oxygen Generators Installed on Transport Category Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-09

    ... the supplemental oxygen supply can also complicate activating the oxygen flow, since that is generally... oxygen quantity requirements of Sec. 25.1443, Minimum mass flow of supplemental oxygen. E. Related...-0812; Notice No. 13-01] RIN 2120-AK14 Requirements for Chemical Oxygen Generators Installed on...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loef, P.A.; Smed, T.; Andersson, G.

    The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady state voltage stability limit. In this paper a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the developed algorithm are the small amount of computation time needed, and that it only requires information available from an ordinary program for power flow calculations. Furthermore, the proposed method fully utilizes the sparsity of the power flow Jacobian matrix and hence the memory requirements for the computation are low. These advantages are preserved when applied to various submatrices of the Jacobian matrix, which can be useful in constructing special voltage stability indices. The developed algorithm was applied to small test systems as well as to a large (real size) system with over 1000 nodes, with satisfactory results.
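A generic way to obtain the minimum singular value and its right singular vector is inverse power iteration on JᵀJ; the dense sketch below conveys the idea, while the paper's contribution is doing this quickly by exploiting sparsity and ordinary power-flow data:

```python
import numpy as np

def min_singular_value(J, iters=100, seed=0):
    """Estimate the smallest singular value (and right singular vector) of J
    by inverse power iteration on J^T J. A generic dense sketch, not the
    paper's sparse fast algorithm: each solve amplifies the component of v
    along the direction with the smallest singular value."""
    rng = np.random.default_rng(seed)
    JtJ = J.T @ J
    v = rng.standard_normal(J.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.linalg.solve(JtJ, v)   # inverse iteration step
        v /= np.linalg.norm(v)
    return np.linalg.norm(J @ v), v   # sigma_min and its right singular vector
```

A small sigma_min signals proximity to the steady-state voltage stability limit, which is how the index is used above.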

  20. News and Views: Kleopatra a pile of rubble, shedding moons; Did plasma flow falter to stretch solar minimum? Amateurs hit 20 million variable-star observations; Climate maths; Planetary priorities; New roles in BGA

    NASA Astrophysics Data System (ADS)

    2011-04-01

    Metallic asteroid 216 Kleopatra is shaped like a dog's bone and has two tiny moons - which came from the asteroid itself - according to a team of astronomers from France and the US, who also measured its surprisingly low density and concluded that it is a collection of rubble. The recent solar minimum was longer and lower than expected, with a low polar field and an unusually large number of days with no sunspots visible. Models of the magnetic field and plasma flow within the Sun suggest that fast, then slow meridional flow could account for this pattern. Variable stars are a significant scientific target for amateur astronomers. The American Association of Variable Star Observers runs the world's largest database of variable star observations, from volunteers, and reached 20 million observations in February.

  1. Environmental flows in the context of unconventional natural gas development in the Marcellus Shale

    DOE PAGES

    Buchanan, Brian P.; Auerbach, Daniel A.; McManamay, Ryan A.; ...

    2017-01-04

    Quantitative flow-ecology relationships are needed to evaluate how water withdrawals for unconventional natural gas development may impact aquatic ecosystems. Addressing this need, we studied current patterns of hydrologic alteration in the Marcellus Shale region and related the estimated flow alteration to fish community measures. We then used these empirical flow-ecology relationships to evaluate alternative surface water withdrawals and environmental flow rules. Reduced high-flow magnitude, dampened rates of change, and increased low-flow magnitudes were apparent regionally, but changes in many of the flow metrics likely to be sensitive to withdrawals also showed substantial regional variation. Fish community measures were significantly related to flow alteration, including declines in species richness with diminished annual runoff, winter low-flow, and summer median-flow. In addition, the relative abundance of intolerant taxa decreased with reduced winter high-flow and increased flow constancy, while fluvial specialist species decreased with reduced winter and annual flows. Stream size strongly mediated both the impact of withdrawal scenarios and the protection afforded by environmental flow standards. Under the most intense withdrawal scenario, 75% of reference headwaters and creeks (drainage areas < 99 km²) experienced at least 78% reduction in summer flow, whereas little change was predicted for larger rivers. Moreover, the least intense withdrawal scenario still reduced summer flows by at least 21% for 50% of headwaters and creeks. The observed 90th quantile flow-ecology relationships indicate that such alteration could reduce species richness by 23% or more. Seasonally varying environmental flow standards and high fixed minimum flows protected the most streams from hydrologic alteration, but common minimum flow standards left numerous locations vulnerable to substantial flow alteration. 
This study clarifies how additional water demands in the region may adversely affect freshwater biological integrity. Furthermore, the results make clear that policies to limit or prevent water withdrawals from smaller streams can reduce the risk of ecosystem impairment.

  2. Environmental flows in the context of unconventional natural gas development in the Marcellus Shale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchanan, Brian P.; Auerbach, Daniel A.; McManamay, Ryan A.

    Quantitative flow-ecology relationships are needed to evaluate how water withdrawals for unconventional natural gas development may impact aquatic ecosystems. Addressing this need, we studied current patterns of hydrologic alteration in the Marcellus Shale region and related the estimated flow alteration to fish community measures. We then used these empirical flow-ecology relationships to evaluate alternative surface water withdrawals and environmental flow rules. Reduced high-flow magnitude, dampened rates of change, and increased low-flow magnitudes were apparent regionally, but changes in many of the flow metrics likely to be sensitive to withdrawals also showed substantial regional variation. Fish community measures were significantly related to flow alteration, including declines in species richness with diminished annual runoff, winter low-flow, and summer median-flow. In addition, the relative abundance of intolerant taxa decreased with reduced winter high-flow and increased flow constancy, while fluvial specialist species decreased with reduced winter and annual flows. Stream size strongly mediated both the impact of withdrawal scenarios and the protection afforded by environmental flow standards. Under the most intense withdrawal scenario, 75% of reference headwaters and creeks (drainage areas < 99 km²) experienced at least 78% reduction in summer flow, whereas little change was predicted for larger rivers. Moreover, the least intense withdrawal scenario still reduced summer flows by at least 21% for 50% of headwaters and creeks. The observed 90th quantile flow-ecology relationships indicate that such alteration could reduce species richness by 23% or more. Seasonally varying environmental flow standards and high fixed minimum flows protected the most streams from hydrologic alteration, but common minimum flow standards left numerous locations vulnerable to substantial flow alteration. 
This study clarifies how additional water demands in the region may adversely affect freshwater biological integrity. Furthermore, the results make clear that policies to limit or prevent water withdrawals from smaller streams can reduce the risk of ecosystem impairment.

  3. Potential postwildfire debris-flow hazards - a prewildfire evaluation for the Sandia and Manzano Mountains and surrounding areas, central New Mexico

    Treesearch

    Anne C. Tillery; Jessica R. Haas; Lara W. Miller; Joe H. Scott; Matthew P. Thompson

    2014-01-01

    Wildfire can drastically increase the probability of debris flows, a potentially hazardous and destructive form of mass wasting, in landscapes that have otherwise been stable throughout recent history. Although there is no way to know the exact location, extent, and severity of wildfire, or the subsequent rainfall intensity and duration before it happens, probabilities...

  4. Assessment of risk to Boeing commercial transport aircraft from carbon fibers. [fiber release from graphite/epoxy materials]

    NASA Technical Reports Server (NTRS)

    Clarke, C. A.; Brown, E. L.

    1980-01-01

    The possible effects of free carbon fibers on aircraft avionic equipment operation, removal costs, and safety were investigated. Possible carbon fiber flow paths, flow rates, and transfer functions into the Boeing 707, 727, 737, 747 aircraft and potentially vulnerable equipment were identified. Probabilities of equipment removal and probabilities of aircraft exposure to carbon fiber were derived.

  5. Unbiased multi-fidelity estimate of failure probability of a free plane jet

    NASA Astrophysics Data System (ADS)

    Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin

    2017-11-01

    Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions on the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
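The importance-sampling half of the framework can be illustrated with a one-dimensional stand-in (here the biasing distribution is a hand-chosen shifted Gaussian; in the multifidelity framework it would be constructed from low-fidelity model runs):

```python
import math
import random

def rare_event_probability(threshold=2.5, n=2000, seed=1):
    """Importance-sampling sketch for a rare event: estimate P[X > t] for
    X ~ N(0, 1) by sampling from the shifted proposal N(t, 1) and reweighting.
    The hand-picked shift stands in for the low-fidelity-derived biasing
    distribution of the cited framework; it keeps the example self-contained."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)              # draw from the biasing density
        if x > threshold:                          # indicator of failure
            # weight = nominal density / biasing density = exp(t^2/2 - t*x)
            total += math.exp(threshold ** 2 / 2 - threshold * x)
    return total / n
```

Because almost every proposal sample lands near the failure boundary, far fewer evaluations are needed than with plain Monte Carlo, which is the point of biasing toward the failure region.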

  6. Hydrogeologic Unit Flow Characterization Using Transition Probability Geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, N L; Walker, J R; Carle, S F

    2003-11-21

    This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has several advantages over traditional indicator kriging methods, including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining-upwards sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow (HUF) package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids. An application of the technique involving probabilistic capture zone delineation for the Aberjona Aquifer in Woburn, MA, is included.
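The role of the transition-probability matrix can be illustrated with a one-dimensional Markov-chain sketch (the facies names and probabilities are invented; full transition-probability simulation also handles conditioning data and 3-D anisotropy, which this omits):

```python
import random

def simulate_facies(transition, n_cells, start="sand", seed=0):
    """1-D sketch of transition-probability simulation: generate a vertical
    column of facies from a Markov chain of upward transition probabilities.
    Juxtapositional tendencies (e.g. fining-upward sequences) are encoded
    directly in the transition matrix, which is what makes the framework
    more intuitive than indicator kriging."""
    rng = random.Random(seed)
    column = [start]
    for _ in range(n_cells - 1):
        probs = transition[column[-1]]        # row of the transition matrix
        column.append(rng.choices(list(probs), weights=list(probs.values()))[0])
    return column
```

Each realization of such a column would then be converted to unit elevations and thicknesses for a flow package, as the abstract describes for HUF.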

  7. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  8. Low-flow profiles of the upper Savannah and Ogeechee Rivers and tributaries in Georgia

    USGS Publications Warehouse

    Carter, R.F.; Hopkins, E.H.; Perlman, H.A.

    1988-01-01

    Low flow information is provided for use in an evaluation of the capacity of streams to permit withdrawals or to accept waste loads without exceeding the limits of State water quality standards. The purpose of this report is to present the results of a compilation of available low flow data in the form of tables and '7Q10 flow profiles' (the minimum average flow for 7 consecutive days with a 10-yr recurrence interval, plotted against distance along a stream channel) for all stream reaches of the Upper Savannah and Ogeechee Rivers and tributaries where sufficient data of acceptable accuracy are available. Drainage area profiles are included for all stream basins larger than 5 sq mi, except for those in a few remote areas. This report is the third in a series of reports that will cover all stream basins north of the Fall Line in Georgia. It includes the Georgia part of the Savannah River basin from its headwaters down to and including McBean Creek, and Brier Creek from its headwaters down to and including Boggy Gut Creek. It also includes the Ogeechee River from its headwaters down to and including Big Creek, and Rocky Comfort Creek (tributary to Ogeechee River) down to the Glascock-Jefferson County line. Flow records were not adjusted for diversions or other factors that cause measured flows to represent other than natural flow conditions. The 7-day minimum flow profile was omitted for stream reaches where natural flow was known to be altered significantly. (Lantz-PTT)
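    The 7Q10 statistic defined above can be sketched numerically: take each year's minimum 7-day moving-average flow, then estimate the value with a 10-year recurrence. This toy version uses an empirical quantile on synthetic data rather than the fitted frequency curves used in practice.

```python
import numpy as np

def seven_q_ten(daily_flows_by_year):
    """Empirical 7Q10 sketch: for each year take the minimum 7-day
    moving-average flow, then take the value with non-exceedance
    probability 0.1 over the annual minima. Real analyses fit a
    distribution (e.g., log-Pearson III); an empirical quantile is
    used here for brevity."""
    annual_min7 = []
    for flows in daily_flows_by_year:
        f = np.asarray(flows, dtype=float)
        ma7 = np.convolve(f, np.ones(7) / 7.0, mode="valid")
        annual_min7.append(ma7.min())
    return float(np.quantile(annual_min7, 0.1))

# Synthetic record: 20 years of 365 daily flows with seasonal lows.
rng = np.random.default_rng(0)
years = [50 + 40 * np.sin(np.linspace(0, 2 * np.pi, 365)) ** 2
         + rng.normal(0, 2, 365) for _ in range(20)]
print(round(seven_q_ten(years), 2))
```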

  9. Extrapolating regional probability of drying of headwater streams using discrete observations and gauging networks

    NASA Astrophysics Data System (ADS)

    Beaufort, Aurélien; Lamouroux, Nicolas; Pella, Hervé; Datry, Thibault; Sauquet, Eric

    2018-05-01

    Headwater streams represent a substantial proportion of river systems and many of them have intermittent flows due to their upstream position in the network. These intermittent rivers and ephemeral streams have recently seen a marked increase in interest, especially to assess the impact of drying on aquatic ecosystems. The objective of this paper is to quantify how discrete (in space and time) field observations of flow intermittence help to extrapolate over time the daily probability of drying (defined at the regional scale). Two empirical models based on linear or logistic regressions have been developed to predict the daily probability of intermittence at the regional scale across France. Explanatory variables were derived from available daily discharge and groundwater-level data of a dense gauging/piezometer network, and models were calibrated using discrete series of field observations of flow intermittence. The robustness of the models was tested using an independent, dense regional dataset of intermittence observations and observations of the year 2017 excluded from the calibration. The resulting models were used to extrapolate the daily regional probability of drying in France: (i) over the period 2011-2017 to identify the regions most affected by flow intermittence; (ii) over the period 1989-2017, using a reduced input dataset, to analyse temporal variability of flow intermittence at the national level. The two empirical regression models performed equally well between 2011 and 2017. The accuracy of predictions depended on the number of continuous gauging/piezometer stations and intermittence observations available to calibrate the regressions. Regions with the highest performance were located in sedimentary plains, where the monitoring network was dense and where the regional probability of drying was the highest. Conversely, the worst performances were obtained in mountainous regions. 
Finally, temporal projections (1989-2016) suggested the highest probabilities of intermittence (> 35 %) in 1989-1991, 2003 and 2005. A high density of intermittence observations improved the information provided by gauging stations and piezometers to extrapolate the temporal variability of intermittent rivers and ephemeral streams.
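    A minimal sketch of the logistic-regression idea above (regressing observed drying on a gauged low-flow indicator), assuming synthetic data and a hand-rolled fit rather than the authors' model or dataset:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Tiny logistic-regression fit by gradient descent, mirroring the
    idea of regressing a regional drying probability on gauged low-flow
    indicators. X: (n, p) predictors, y: 0/1 outcomes."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Synthetic predictor: "fraction of gauges below their 10th-percentile
# discharge" driving whether regional drying was observed that day.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, (500, 1))
y = (rng.uniform(0, 1, 500)
     < 1 / (1 + np.exp(-(4 * x[:, 0] - 2)))).astype(float)
w = fit_logistic(x, y)
print(predict_proba(w, np.array([[0.9]]))[0]
      > predict_proba(w, np.array([[0.1]]))[0])
```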

  10. Optimizing Natural Gas Networks through Dynamic Manifold Theory and a Decentralized Algorithm: Belgium Case Study

    NASA Astrophysics Data System (ADS)

    Koch, Caleb; Winfrey, Leigh

    2014-10-01

    Natural gas is a major energy source in Europe, yet political instabilities have the potential to disrupt access and supply. Energy resilience is an increasingly essential consideration and begins with transmission network design. This study proposes a new way of thinking about modelling natural gas flow: rather than relying on classical economic models, the problem is cast in terms of time-dependent Hamiltonian dynamics. Traditional natural gas constraints, including inelastic demand and maximum/minimum pipe flows, are expressed as energy functions and built directly into the dynamics of each pipe flow. As time progresses in the model, the flow rates relax toward the minimum-energy, and thus optimal, configuration. The most important result of this study is the use of dynamical principles to ensure that the output of natural gas at demand nodes remains constant, which is essential for country-to-country natural gas transmission. Another important step is formulating the dynamics of each flow as a decentralized algorithm. Decentralized regulation has solved congestion problems for internet data flow, traffic flow, and epidemiology, and, as demonstrated in this study, can solve the problem of natural gas congestion. A mathematical description is provided of how decentralized regulation leads to globally optimized network flow. Finally, the dynamical principles and decentralized algorithm are applied to a case study of the Fluxys Belgium Natural Gas Network.
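    The relax-to-minimum-energy idea can be sketched on a two-pipe toy network: cast the flows as a gradient dynamical system whose energy is minimized while total delivery at the demand node stays fixed. The quadratic energy, resistances, and demand below are illustrative stand-ins, not the paper's Hamiltonian formulation.

```python
import numpy as np

# Two parallel pipes feeding one demand node. Energy E(q) =
# 1/2 * sum_i r_i * q_i^2; projecting the gradient onto the constraint
# surface q1 + q2 = D plays the role of a built-in demand constraint.
r = np.array([1.0, 4.0])      # pipe "resistances" (illustrative)
D = 10.0                      # inelastic demand at the sink

def relax(q0, dt=0.01, steps=5000):
    q = np.asarray(q0, dtype=float)
    for _ in range(steps):
        g = r * q                 # dE/dq
        g -= g.mean()             # projection: keeps q1 + q2 constant
        q -= dt * g
    return q

q = relax([5.0, 5.0])
print(np.round(q, 3))   # flow splits inversely with resistance
```

At the fixed point the marginal energies equalize (r1*q1 = r2*q2), so the split is (8, 2) for these resistances.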

  11. Assault frequency and preformation probability of the α emission process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H. F.; Royer, G.; Li, J. Q.

    2011-08-15

    A study of the assault frequency and preformation factor of the α-decay description is performed from the experimental α-decay constant and the penetration probabilities calculated from the generalized liquid-drop model (GLDM) potential barriers. To determine the assault frequency, a quantum-mechanical method using a harmonic oscillator is introduced and leads to values of around 10²¹ s⁻¹, similar to the ones calculated within the classical method. The preformation probability is around 10⁻¹-10⁻². The results for even-even Po isotopes are discussed for illustration. While the assault frequency presents only a shallow minimum in the vicinity of the magic neutron number 126, the preformation factor and especially the penetration probability diminish strongly around N=126.
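    The three quantities above combine multiplicatively into the decay constant, λ = P₀·ν·P_pen, from which a half-life follows. A sketch with order-of-magnitude placeholder values (the penetration probability is hypothetical, not a GLDM result):

```python
import math

# lambda = P0 * nu * P_pen: preformation probability, assault
# frequency, and barrier penetration probability. Values are
# order-of-magnitude placeholders in the ranges quoted above.
P0 = 1e-2        # preformation probability (~1e-1 to 1e-2)
nu = 1e21        # assault frequency, s^-1
P_pen = 1e-30    # penetration probability (hypothetical)

lam = P0 * nu * P_pen          # decay constant, s^-1
half_life = math.log(2) / lam  # T_1/2 = ln 2 / lambda
print(f"{half_life:.3e} s")    # ~6.9e10 s, roughly 2200 years
```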

  12. A numerical study of mixing in stationary, nonpremixed, turbulent reacting flows

    NASA Astrophysics Data System (ADS)

    Overholt, Matthew Ryan

    1998-10-01

    In this work a detailed numerical study is made of a statistically-stationary, non-premixed, turbulent reacting model flow known as Periodic Reaction Zones. The mixture fraction-progress variable approach is used, with a mean gradient in the mixture fraction and a model, single-step, reversible, finite-rate thermochemistry, yielding both stationary and local extinction behavior. The passive scalar is studied first, using a statistical forcing scheme to achieve stationarity of the velocity field. Multiple independent direct numerical simulations (DNS) are performed for a wide range of Reynolds numbers with a number of results including a bilinear model for scalar mixing jointly conditioned on the scalar and x2-component of velocity, Gaussian scalar probability density function tails which were anticipated to be exponential, and the quantification of the dissipation of scalar flux. A new deterministic forcing scheme for DNS is then developed which yields reduced fluctuations in many quantities and a more natural evolution of the velocity fields. This forcing method is used for the final portion of this work. DNS results for Periodic Reaction Zones are compared with the Conditional Moment Closure (CMC) model, the Quasi-Equilibrium Distributed Reaction (QEDR) model, and full probability density function (PDF) simulations using the Euclidean Minimum Spanning Tree (EMST) and the Interaction by Exchange with the Mean (IEM) mixing models. It is shown that CMC and QEDR results based on the local scalar dissipation match DNS wherever local extinction is not present. However, due to the large spatial variations of scalar dissipation, and hence local Damkohler number, local extinction is present even when the global Damkohler number is twenty-five times the critical value for extinction. Finally, in the PDF simulations the EMST mixing model closely reproduces CMC and DNS results when local extinction is not present, whereas the IEM model results in large error.
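    The IEM ("interaction by exchange with the mean") mixing model named above relaxes each notional particle's scalar toward the local mean, dφ/dt = -½ C_φ ω (φ - ⟨φ⟩). A minimal particle implementation (C_φ and the mixing frequency ω are illustrative):

```python
import numpy as np

def iem_step(phi, omega=1.0, c_phi=2.0, dt=0.01):
    """One IEM update for an ensemble of particle scalars `phi`:
    deviations from the ensemble mean decay at rate 0.5*c_phi*omega."""
    return phi - 0.5 * c_phi * omega * (phi - phi.mean()) * dt

rng = np.random.default_rng(3)
phi = rng.normal(0.0, 1.0, 10000)
m0 = phi.mean()
for _ in range(1000):
    phi = iem_step(phi)
print(round(float(phi.std()), 3))  # variance decays; the mean is conserved
```

A known limitation, consistent with the comparison in the abstract, is that IEM shrinks all particles toward the mean uniformly and so preserves the shape of the scalar PDF, unlike mixing models such as EMST that act locally in composition space.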

  13. Estimating the probability that the Taser directly causes human ventricular fibrillation.

    PubMed

    Sun, H; Haemmerich, D; Rahko, P S; Webster, J G

    2010-04-01

    This paper describes the first methodology and results for estimating the order of magnitude of the probability of Tasers directly causing human ventricular fibrillation (VF). The probability of an X26 Taser causing human VF was estimated using: (1) current density near the human heart estimated by using 3D finite-element (FE) models; (2) prior data on the maximum dart-to-heart distances that caused VF in pigs; (3) minimum skin-to-heart distances measured in erect humans by echocardiography; and (4) the dart landing distribution estimated from police reports. The estimated mean probability of human VF was 0.001 for data from a pig having a chest wall resected to the ribs and 0.000006 for data from a pig with no resection when inserting a blunt probe. The VF probability for a given dart location decreased with the dart-to-heart horizontal distance (radius) on the skin surface.
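    The probability chain above can be caricatured as a Monte Carlo over dart landing positions: draw a landing point from a spatial distribution and count VF only when it falls within a critical radius of the point over the heart. The Gaussian landing spread and the 2 cm critical radius are hypothetical stand-ins for the police-report distribution and the pig/echocardiography data, not the paper's values.

```python
import numpy as np

def p_vf(n=200_000, sigma=15.0, r_crit=2.0, seed=7):
    """Fraction of simulated dart landings within `r_crit` cm of the
    point over the heart, with an isotropic Gaussian landing spread of
    `sigma` cm on the chest. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0, sigma, n)
    y = rng.normal(0, sigma, n)
    return float(np.mean(np.hypot(x, y) < r_crit))

print(p_vf())   # a small probability, on the order of 1e-2 or less
```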

  14. Multiple neutral density measurements in the lower thermosphere with cold-cathode ionization gauges

    NASA Astrophysics Data System (ADS)

    Lehmacher, G. A.; Gaulden, T. M.; Larsen, M. F.; Craven, J. D.

    2013-01-01

    Cold-cathode ionization gauges were used for rocket-borne measurements of total neutral density and temperature in the aurorally forced lower thermosphere between 90 and 200 km. A commercial gauge was adapted as a low-cost instrument with a spherical antechamber for measurements in molecular flow conditions. Three roll-stabilized payloads on different trajectories each carried two instruments for measurements near the ram flow direction along the respective upleg and downleg segments of a flight path, and six density profiles were obtained within a period of 22 min covering spatial separations up to 200 km. The density profiles were integrated below 125 km to yield temperatures. The mean temperature structure was similar for all six profiles with two mesopause minima near 110 and 101 km; however, for the downleg profiles, the upper minimum was warmer and the lower minimum was colder by 20-30 K, indicating significant variability over horizontal scales of 100-200 km. The upper temperature minimum coincided with maximum horizontal wind speeds, exceeding 170 m/s.
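    "Integrating density to yield temperature" relies on hydrostatic balance plus the ideal-gas law: P(z) = P_top + g∫ρ dz', then T = PM/(ρR). A sketch on a synthetic isothermal column (mean molar mass, gravity, and the top-of-profile anchor temperature are illustrative assumptions, not the flight values):

```python
import numpy as np

R, M, g = 8.314, 0.0288, 9.5   # J/mol/K, kg/mol (lower thermosphere), m/s^2

def temperature_profile(z, rho, T_top=500.0):
    """z ascending [m], rho [kg/m^3]; anchor with an assumed T_top,
    then integrate the hydrostatic equation downward (trapezoid rule)
    and convert pressure to temperature with the ideal-gas law."""
    P = np.zeros_like(rho)
    P[-1] = rho[-1] * R * T_top / M               # anchor pressure at top
    for i in range(len(z) - 2, -1, -1):
        dz = z[i + 1] - z[i]
        P[i] = P[i + 1] + g * 0.5 * (rho[i] + rho[i + 1]) * dz
    return P * M / (rho * R)

# Isothermal test atmosphere: rho ~ exp(-z/H) with H = R*T/(M*g),
# so the retrieval should recover T0 everywhere.
T0 = 400.0
H = R * T0 / (M * g)
z = np.linspace(90e3, 125e3, 351)
rho = 1e-5 * np.exp(-(z - z[0]) / H)
T = temperature_profile(z, rho, T_top=T0)
print(round(float(T[0]), 1))   # recovers ~400 K low in the column
```

The influence of the assumed anchor temperature decays with depth as density grows, which is why such retrievals are most reliable well below the top of the profile.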

  15. Probability density function approach for compressible turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.

    1994-01-01

    The objective of the present work is to extend the probability density function (PDF) turbulence model to compressible reacting flows. The probability density functions of the species mass fractions and enthalpy are obtained by solving a PDF evolution equation using a Monte Carlo scheme. The PDF solution procedure is coupled with a compressible finite-volume flow solver which provides the velocity and pressure fields. A modeled PDF equation for compressible flows, capable of treating flows with shock waves and suitable to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed. Two supersonic diffusion flames are studied using the proposed PDF model and the results are compared with experimental data; marked improvements over solutions without PDF are observed.

  16. Minimum Flows and Levels Method of the St. Johns River Water Management District, Florida, USA

    NASA Astrophysics Data System (ADS)

    Neubauer, Clifford P.; Hall, Greeneville B.; Lowe, Edgar F.; Robison, C. Price; Hupalo, Richard B.; Keenan, Lawrence W.

    2008-12-01

    The St. Johns River Water Management District (SJRWMD) has developed a minimum flows and levels (MFLs) method that has been applied to rivers, lakes, wetlands, and springs. The method is primarily focused on ecological protection to ensure systems meet or exceed minimum eco-hydrologic requirements. MFLs are not calculated from past hydrology. Information from elevation transects is typically used to determine MFLs. Multiple MFLs define a minimum hydrologic regime to ensure that high, intermediate, and low hydrologic conditions are protected. MFLs are often expressed as statistics of long-term hydrology incorporating magnitude (flow and/or level), duration (days), and return interval (years). Timing and rates of change, the two other critical hydrologic components, should be sufficiently natural. The method is an event-based, non-equilibrium approach. The method is used in a regulatory water management framework to ensure that surface and groundwater withdrawals do not cause significant harm to the water resources and ecology of the above referenced system types. MFLs are implemented with hydrologic water budget models that simulate long-term system hydrology. The method enables a priori hydrologic assessments that include the cumulative effects of water withdrawals. Additionally, the method can be used to evaluate management options for systems that may be over-allocated or for eco-hydrologic restoration projects. The method can be used outside of the SJRWMD. However, the goals, criteria, and indicators of protection used to establish MFLs are system-dependent. Development of regionally important criteria and indicators of protection may be required prior to use elsewhere.
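    The magnitude/duration/return-interval structure of an MFL can be sketched as an event test on a daily series: does the level stay at or above a threshold for a required number of consecutive days, in enough years to meet the return interval? This is only an illustration of the statistic's shape; actual MFL criteria are system-specific, and the thresholds below are invented.

```python
import numpy as np

def mfl_exceeded(levels, threshold, duration, window_years, years):
    """True if the event (level >= `threshold` for >= `duration`
    consecutive days) recurs at least once per `window_years` over the
    record. `levels` is a daily series; `years` labels each day."""
    ok = np.asarray(levels) >= threshold
    event_years = set()
    run = 0
    for i, flag in enumerate(ok):
        run = run + 1 if flag else 0
        if run >= duration:
            event_years.add(years[i])
    n_span = years[-1] - years[0] + 1
    return len(event_years) / n_span >= 1.0 / window_years

# 10 synthetic years; the event (>= 2.0 m for 30 days) occurs in the
# 8 "wet" years and fails in the 2 "dry" ones.
rng = np.random.default_rng(5)
levels, yrs = [], []
for y in range(2000, 2010):
    base = 2.2 if y % 5 != 0 else 1.5
    levels += list(base + rng.normal(0, 0.05, 365))
    yrs += [y] * 365
print(mfl_exceeded(levels, 2.0, 30, 2, yrs))   # True: 8/10 years >= 1/2
```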

  17. Evaluation of an Active Humidification System for Inspired Gas

    PubMed Central

    Roux, Nicolás G.; Villalba, Darío S.; Gogniat, Emiliano; Feld, Vivivana; Ribero Vairo, Noelia; Sartore, Marisa; Bosso, Mauro; Scapellato, José L.; Intile, Dante; Planells, Fernando; Noval, Diego; Buñirigo, Pablo; Jofré, Ricardo; Díaz Nielsen, Ernesto

    2015-01-01

    Objectives The effectiveness of active humidification systems (AHS) in patients already weaned from mechanical ventilation and with an artificial airway has not been well described. The objective of this study was to evaluate the performance of an AHS in chronically tracheostomized and spontaneously breathing patients. Methods Measurements were quantified at three temperature (T°) levels of the AHS (level I, low; level II, middle; and level III, high) and at different flow levels (20 to 60 L/min). Statistical analysis of repeated measurements was performed using analysis of variance, and significance was set at P<0.05. Results While the lowest temperature setting (level I) did not condition gas to the minimum recommended values for any of the flows that were used, the medium temperature setting (level II) only conditioned gas with flows of 20 and 30 L/min. Finally, at the highest temperature setting (level III), every flow reached the minimum recommended absolute humidity (AH) of 30 mg/L. Conclusion According to our results, to obtain appropriate relative humidity, AH, and T° of gas, one should have a device that maintains water T° at least at 53℃ for flows between 20 and 30 L/min, or at a T° of 61℃ at any flow rate. PMID:25729499
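    The 30 mg/L threshold above can be related to gas temperature with standard psychrometrics: saturation vapor pressure from the Magnus formula, then the ideal-gas law for water vapor density. A sketch (the formula constants are the usual Magnus coefficients, not values from the study):

```python
import math

def absolute_humidity(temp_c, rh=1.0):
    """Absolute humidity [mg/L] of gas at `temp_c` degrees Celsius and
    relative humidity `rh`, via the Magnus saturation-pressure formula
    and the ideal-gas law for water vapor."""
    es = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # hPa
    e = rh * es * 100.0                                        # Pa
    # rho_v = e * M_w / (R * T); g/m^3 is numerically equal to mg/L
    return e * 18.015 / (8.314 * (temp_c + 273.15))

print(round(absolute_humidity(37.0), 1))   # ~44 mg/L: saturated gas at body T
print(absolute_humidity(30.0) >= 30.0)     # saturated 30 C gas just meets 30 mg/L
```

This is why the gas leaving the humidifier must be saturated at roughly 30 ℃ or above to carry the recommended minimum of 30 mg/L.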

  18. Canonical fluid thermodynamics. [variational principles of stability for compressible adiabatic flow

    NASA Technical Reports Server (NTRS)

    Schmid, L. A.

    1974-01-01

    The space-time integral of the thermodynamic pressure plays in a certain sense the role of the thermodynamic potential for compressible adiabatic flow. The stability criterion can be converted into a variational minimum principle by requiring the molar free-enthalpy and temperature to be generalized velocities. In the fluid context, the definition of proper-time differentiation involves the fluid velocity expressed in terms of three particle identity parameters. The pressure function is then converted into a functional which is the Lagrangian density of the variational principle. Being also a minimum principle, the variational principle provides a means for comparing the relative stability of different flows. For boundary conditions with a high degree of symmetry, as in the case of a uniformly expanding spherical gas box, the most stable flow is a rectilinear flow for which the world-trajectory of each particle is a straight line. Since the behavior of the interior of a freely expanding cosmic cloud may be expected to be similar to that of the fluid in the spherical box of gas, this suggests that the cosmic principle is a consequence of the laws of thermodynamics, rather than just an ad hoc postulate.

  19. 40 CFR Table 3 to Subpart Ddddd of... - Operating Limits for Boilers and Process Heaters With Mercury Emission Limits and Boilers and...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... minimum pressure drop and liquid flow-rate at or above the operating levels established during the... leak detection system alarm does not sound more than 5 percent of the operating time during a 6-month... control Maintain the minimum sorbent or carbon injection rate at or above the operating levels established...

  20. 40 CFR Table 3 to Subpart Ddddd of... - Operating Limits for Boilers and Process Heaters With Mercury Emission Limits and Boilers and...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... minimum pressure drop and liquid flow-rate at or above the operating levels established during the... leak detection system alarm does not sound more than 5 percent of the operating time during a 6-month... control Maintain the minimum sorbent or carbon injection rate at or above the operating levels established...

  1. Postwildfire debris-flow hazard assessment of the area burned by the 2013 West Fork Fire Complex, southwestern Colorado

    USGS Publications Warehouse

    Verdin, Kristine L.; Dupree, Jean A.; Stevens, Michael R.

    2013-01-01

    This report presents a preliminary emergency assessment of the debris-flow hazards from drainage basins burned by the 2013 West Fork Fire Complex near South Fork in southwestern Colorado. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of debris-flow occurrence, potential volume of debris flows, and the combined debris-flow hazard ranking along the drainage network within and just downstream from the burned area, and to estimate the same for 54 drainage basins of interest within the perimeter of the burned area. Input data for the debris-flow models included topographic variables, soil characteristics, burn severity, and rainfall totals and intensities for a (1) 2-year-recurrence, 1-hour-duration rainfall, referred to as a 2-year storm; (2) 10-year-recurrence, 1-hour-duration rainfall, referred to as a 10-year storm; and (3) 25-year-recurrence, 1-hour-duration rainfall, referred to as a 25-year storm. Estimated debris-flow probabilities at the pour points of the 54 drainage basins of interest ranged from less than 1 to 65 percent in response to the 2-year storm; from 1 to 77 percent in response to the 10-year storm; and from 1 to 83 percent in response to the 25-year storm. Twelve of the 54 drainage basins of interest have a 30-percent probability or greater of producing a debris flow in response to the 25-year storm. Estimated debris-flow volumes for all rainfalls modeled range from a low of 2,400 cubic meters to a high of greater than 100,000 cubic meters. Estimated debris-flow volumes increase with basin size and distance along the drainage network, but some smaller drainages also were predicted to produce substantial debris flows. One of the 54 drainage basins of interest had the highest combined hazard ranking, while 9 other basins had the second highest combined hazard ranking. 
Of these 10 basins with the 2 highest combined hazard rankings, 7 basins had predicted debris-flow volumes exceeding 100,000 cubic meters, while 3 had predicted probabilities of debris flows exceeding 60 percent. The 10 basins with high combined hazard ranking include 3 tributaries in the headwaters of Trout Creek, 4 tributaries to the West Fork San Juan River, Hope Creek draining toward a county road on the eastern edge of the burn, Lake Fork draining to U.S. Highway 160, and Leopard Creek on the northern edge of the burn. The probabilities and volumes for the modeled storms indicate a potential for debris-flow impacts on structures, reservoirs, roads, bridges, and culverts located within and immediately downstream from the burned area. U.S. Highway 160, on the eastern edge of the burn area, also is susceptible to impacts from debris flows.
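    The combined hazard ranking idea, classing probability and volume separately and summing the classes, can be sketched as follows; the class breaks are hypothetical, not the thresholds used in the report.

```python
def hazard_rank(probability_pct, volume_m3):
    """Illustrative combined debris-flow hazard ranking: class the
    probability and the volume separately, then sum the classes."""
    p_class = sum(probability_pct > t for t in (20, 40, 60, 80))   # 0-4
    v_class = sum(volume_m3 > t for t in (1e3, 1e4, 1e5))          # 0-3
    return p_class + v_class                                       # 0-7

print(hazard_rank(65, 120_000))   # high probability + large volume -> 6
print(hazard_rank(5, 2_400))      # low on both axes -> 1
```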

  2. Restoring a flow regime through the coordinated operation of a multireservoir system: The case of the Zambezi River basin

    NASA Astrophysics Data System (ADS)

    Tilmant, A.; Beevers, L.; Muyunda, B.

    2010-07-01

    Large storage facilities in hydropower-dominated river basins have traditionally been designed and managed to maximize revenues from energy generation. In an attempt to mitigate the externalities downstream due to a reduction in flow fluctuation, minimum flow requirements have been imposed to reservoir operators. However, it is now recognized that a varying flow regime including flow pulses provides the best conditions for many aquatic ecosystems. This paper presents a methodology to derive a trade-off relationship between hydropower generation and ecological preservation in a system with multiple reservoirs and stochastic inflows. Instead of imposing minimum flow requirements, the method brings more flexibility to the allocation process by building upon environmental valuation studies to derive simple demand curves for environmental goods and services, which are then used in a reservoir optimization model together with the demand for energy. The objective here is not to put precise monetary values on environmental flows but to see the marginal changes in release policies should those values be considered. After selecting appropriate risk indicators for hydropower generation and ecological preservation, the trade-off curve provides a concise way of exploring the extent to which one of the objectives must be sacrificed in order to achieve more of the other. The methodology is illustrated with the Zambezi River basin where large man-made reservoirs have disrupted the hydrological regime.

  3. Tuberculosis in a South African prison – a transmission modelling analysis

    PubMed Central

    Johnstone-Robertson, Simon; Lawn, Stephen D; Welte, Alex; Bekker, Linda-Gail; Wood, Robin

    2015-01-01

    Background Prisons are recognised internationally as institutions with very high tuberculosis (TB) burdens where transmission is predominantly determined by contact between infectious and susceptible prisoners. A recent South African court case described the conditions under which prisoners awaiting trial were kept. With the use of these data, a mathematical model was developed to explore the interactions between incarceration conditions and TB control measures. Methods Cell dimensions, cell occupancy, lock-up time, TB incidence and treatment delays were derived from court evidence and judicial reports. Using the Wells-Riley equation and probability analyses of contact between prisoners, we estimated the current TB transmission probability within prison cells, and estimated transmission probabilities of improved levels of case finding in combination with implementation of national and international minimum standards for incarceration. Results Levels of overcrowding (230%) in communal cells and poor TB case finding result in an annual TB transmission risk of 90%. Implementing current national or international cell occupancy recommendations would reduce TB transmission probabilities by 30% and 50%, respectively. Improved passive case finding, modest ventilation increase or decreased lock-up time would have minimal impact on transmission if introduced individually. However, active case finding together with implementation of minimum national and international standards of incarceration could reduce transmission by 50% and 94%, respectively. Conclusions Current conditions of detention for awaiting-trial prisoners are highly conducive for spread of drug-sensitive and drug-resistant TB. Combinations of simple well-established scientific control measures should be implemented urgently. PMID:22272961
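    The Wells-Riley equation used in the analysis gives the infection probability for susceptible occupants sharing air with infectors; a sketch with hypothetical parameter values (not the court-case data):

```python
import math

def wells_riley(I, q, p, t, Q):
    """Wells-Riley infection probability: P = 1 - exp(-I*q*p*t/Q),
    with I infectors, q quanta generation rate (quanta/h), p breathing
    rate (m^3/h), t exposure time (h), Q room ventilation (m^3/h)."""
    return 1.0 - math.exp(-I * q * p * t / Q)

# One night's lock-up in a poorly ventilated communal cell vs the
# same cell with doubled ventilation (all numbers hypothetical):
p_base = wells_riley(I=2, q=1.25, p=0.5, t=8, Q=50.0)
p_vent = wells_riley(I=2, q=1.25, p=0.5, t=8, Q=100.0)
print(round(p_base, 3), round(p_vent, 3))  # 0.181 0.095
```

Because the exponent scales with exposure time and inversely with ventilation, the very long lock-up times described in the case compound into near-certain annual transmission unless several parameters are improved at once, which is the pattern the model results show.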

  4. Simple graph models of information spread in finite populations

    PubMed Central

    Voorhees, Burton; Ryder, Bergerud

    2015-01-01

    We consider several classes of simple graphs as potential models for information diffusion in a structured population. These include biased cycles, dual circular flows, partial bipartite graphs and what we call ‘single-link’ graphs. In addition to fixation probabilities, we study structure parameters for these graphs, including eigenvalues of the Laplacian, conductances, communicability and expected hitting times. In several cases, values of these parameters are related, most strongly so for partial bipartite graphs. A measure of directional bias in cycles and circular flows arises from the non-zero eigenvalues of the antisymmetric part of the Laplacian and another measure is found for cycles as the value of the transition probability for which hitting times going in either direction of the cycle are equal. A generalization of circular flow graphs is used to illustrate the possibility of tuning edge weights to match pre-specified values for graph parameters; in particular, we show that generalizations of circular flows can be tuned to have fixation probabilities equal to the Moran probability for a complete graph by tuning vertex temperature profiles. Finally, single-link graphs are introduced as an example of a graph involving a bottleneck in the connection between two components and these are compared to the partial bipartite graphs. PMID:26064661
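    The Moran probability that the tuned circular flows are matched to has a closed form on a complete graph; a minimal sketch:

```python
def moran_fixation(r, N):
    """Fixation probability of a single mutant of relative fitness r
    in a Moran process on a complete graph of N individuals:
    rho = (1 - 1/r) / (1 - 1/r**N), with the neutral limit 1/N."""
    if r == 1.0:
        return 1.0 / N
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** N)

print(round(moran_fixation(1.1, 20), 4))
print(moran_fixation(1.0, 20))   # neutral mutant: 1/N = 0.05
```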

  5. PDF approach for compressible turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.

    1993-01-01

    The objective of the present work is to develop a probability density function (pdf) turbulence model for compressible reacting flows for use with a CFD flow solver. The probability density function of the species mass fraction and enthalpy are obtained by solving a pdf evolution equation using a Monte Carlo scheme. The pdf solution procedure is coupled with a compressible CFD flow solver which provides the velocity and pressure fields. A modeled pdf equation for compressible flows, capable of capturing shock waves and suitable to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed, and an averaging procedure is developed to provide smooth Monte-Carlo solutions to ensure convergence. Two supersonic diffusion flames are studied using the proposed pdf model and the results are compared with experimental data; marked improvements over CFD solutions without pdf are observed. Preliminary applications of pdf to 3D flows are also reported.

  6. Using Logistic Regression to Predict the Probability of Debris Flows in Areas Burned by Wildfires, Southern California, 2003-2006

    USGS Publications Warehouse

    Rupert, Michael G.; Cannon, Susan H.; Gartner, Joseph E.; Michael, John A.; Helsel, Dennis R.

    2008-01-01

    Logistic regression was used to develop statistical models that can be used to predict the probability of debris flows in areas recently burned by wildfires by using data from 14 wildfires that burned in southern California during 2003-2006. Twenty-eight independent variables describing the basin morphology, burn severity, rainfall, and soil properties of 306 drainage basins located within those burned areas were evaluated. The models were developed as follows: (1) Basins that did and did not produce debris flows soon after the 2003 to 2006 fires were delineated from data in the National Elevation Dataset using a geographic information system; (2) Data describing the basin morphology, burn severity, rainfall, and soil properties were compiled for each basin. These data were then input to a statistics software package for analysis using logistic regression; and (3) Relations between the occurrence or absence of debris flows and the basin morphology, burn severity, rainfall, and soil properties were evaluated, and five multivariate logistic regression models were constructed. All possible combinations of independent variables were evaluated to determine which combinations produced the most effective models, and the multivariate models that best predicted the occurrence of debris flows were identified. Percentage of high burn severity and 3-hour peak rainfall intensity were significant variables in all models. Soil organic matter content and soil clay content were significant variables in all models except Model 5. Soil slope was a significant variable in all models except Model 4. The most suitable model can be selected from these five models on the basis of the availability of independent variables in the particular area of interest and field checking of probability maps. 
The multivariate logistic regression models can be entered into a geographic information system, and maps showing the probability of debris flows can be constructed in recently burned areas of southern California. This study demonstrates that logistic regression is a valuable tool for developing models that predict the probability of debris flows occurring in recently burned landscapes.
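    Applying a fitted logistic model to a basin reduces to the logit transform, P = 1/(1 + exp(-(b0 + Σ b_i x_i))). The coefficients below are hypothetical placeholders, not those of the published models, but the variables mirror the significant ones named above (percentage of high burn severity, 3-hour peak rainfall intensity, soil organic matter, clay content).

```python
import math

# Hypothetical coefficients for illustration only.
COEFS = {"intercept": -4.0, "pct_high_burn": 0.05,
         "rain_i3_mm_h": 0.10, "organic_pct": -0.3, "clay_pct": 0.04}

def debris_flow_probability(pct_high_burn, rain_i3_mm_h, organic_pct, clay_pct):
    """Evaluate the logistic model for one basin's predictor values."""
    z = (COEFS["intercept"]
         + COEFS["pct_high_burn"] * pct_high_burn
         + COEFS["rain_i3_mm_h"] * rain_i3_mm_h
         + COEFS["organic_pct"] * organic_pct
         + COEFS["clay_pct"] * clay_pct)
    return 1.0 / (1.0 + math.exp(-z))

# Severely burned basin in an intense storm vs a mild case:
print(round(debris_flow_probability(80, 30, 1.0, 20), 2))   # 0.97
print(round(debris_flow_probability(10, 5, 4.0, 10), 2))    # 0.02
```

Mapping the probability over a burned area is then just evaluating this function on each basin's GIS-derived predictors.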

  7. Forest practices and stream flow in western Oregon.

    Treesearch

    R. Dennis. Harr

    1976-01-01

    Forest management activities, including roadbuilding, clearcut logging, and broadcast burning, can change certain portions of the forest hydrologic cycle. Watershed studies and other hydrologic research in the Coast and western Cascade Ranges of Oregon have shown that these changes may increase annual water yield up to 62 centimeters, double minimum flows in summer,...

  8. Laryngeal Aerodynamics in Healthy Older Adults and Adults with Parkinson's Disease

    ERIC Educational Resources Information Center

    Matheron, Deborah; Stathopoulos, Elaine T.; Huber, Jessica E.; Sussman, Joan E.

    2017-01-01

    Purpose: The present study compared laryngeal aerodynamic function of healthy older adults (HOA) to adults with Parkinson's disease (PD) while speaking at a comfortable and increased vocal intensity. Method: Laryngeal aerodynamic measures (subglottal pressure, peak-to-peak flow, minimum flow, and open quotient [OQ]) were compared between HOAs and…

  9. 50 CFR 679.93 - Amendment 80 Program recordkeeping, permits, monitoring, and catch accounting.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE (CONTINUED... space to accommodate a minimum of 10 observer sampling baskets. This space must be within or adjacent to... observers assigned to the vessel. (8) Belt and flow operations. The vessel operator stops the flow of fish...

  10. 40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the raw exhaust flow rate based on the measured intake air molar flow rate and the chemical balance..., fuel rate measurements, and fuel properties, consistent with good engineering judgment. (b) Determine...) and dilute exhaust corrected for any removed water. (c) Use good engineering judgment to develop your...

  11. 43 CFR 418.18 - Diversions at Derby Dam.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Operations and Management § 418.18 Diversions at Derby Dam. (a) Diversions of Truckee River water at Derby Dam must be managed to maintain minimum terminal flow to Lahontan Reservoir or the Carson River except... achieve an average terminal flow of 20 cfs or less during times when diversions to Lahontan Reservoir are...

  12. 46 CFR 98.25-40 - Valves, fittings, and accessories.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., United States of America Standard 300-pound standard minimum, fitted with suitable soft gasket material... shut-off valves located as close to the tank as possible. (d) Excess flow valves where required by this section shall close automatically at the rated flow of vapor or liquid as specified by the manufacturer...

  13. Organic Over-the-Horizon Targeting for the 2025 Surface Fleet

    DTIC Science & Technology

    2015-06-01

    [Fragment of the report's acronym list:] … Detection; Phit = Probability of Hit; Pk = Probability of Kill; PLAN = People's Liberation Army Navy; PMEL = Pacific Marine Environmental Laboratory; … probability of hit (Phit). 2. Top-Level Functional Flow Block Diagram: With the high-level functions of the project's systems of systems properly

  14. The flame structure and vorticity generated by a chemically reacting transverse jet

    NASA Technical Reports Server (NTRS)

    Karagozian, A. R.

    1986-01-01

    An analytical model describing the behavior of a turbulent fuel jet injected normally into a cross flow is developed. The model places particular emphasis on the contrarotating vortex pair associated with the jet, and predicts the flame length and shape based on entrainment of the oxidizer by the fuel jet. Effects of buoyancy and density variations in the flame are neglected in order to isolate the effects of large-scale mixing. The results are compared with a simulation of the transverse reacting jet in a liquid (acid-base) system. For a wide range of ratios of the cross flow to jet velocity, the model predicts flame length quite well. In particular, the observed transitional behavior in the flame length for cross-flow-to-jet velocity ratios between 0.0 and 0.1, with an approximate minimum at a ratio of 0.05, is reproduced very clearly by the present model. The transformation in flow structure that accounts for this minimum arises from the differing components of vorticity dominant in the near-field and far-field regions of the jet.

  15. The use of an in situ portable flume to examine the effect of flow properties on the capture probability of juvenile Atlantic salmon

    NASA Astrophysics Data System (ADS)

    Roy, M. L.; Roy, A. G.; Grant, J. W.

    2013-12-01

    For stream fish, flow properties have been shown to influence energy expenditure and habitat selection. Flow properties also directly influence the velocity of drifting prey items, and therefore the probability that fish capture prey. Flow properties might additionally affect prey trajectories, which can become more unpredictable with increased turbulence. In this study, we combined field and experimental approaches to examine the foraging behaviour and position choice of juvenile Atlantic salmon under various flow conditions. We used an in situ portable flume, which consists of a transparent enclosure (observation section) equipped with hinged doors upstream that funnel water inside and modify flow properties. Portable flumes have been developed and used to simulate benthic invertebrate drift and sediment transport, but have not previously been used to examine fish behaviour. Specifically, we tested the predictions that 1) capture probability declines with turbulence, 2) the number of attacks and the proportion of time spent on the substrate decrease with turbulence, and 3) parr preferentially select focal positions with lower turbulence than random locations across the observation section. The portable flume allowed us to create four flow treatments along a gradient of mean downstream velocity and turbulence. Fish were fed brine shrimp and filmed through translucent panels using a submerged camera. Twenty-three juvenile salmon were captured and subjected to each flow treatment in 20-minute feeding trials. Our results showed high inter-individual variability in foraging success and time budget within each flow treatment, associated with levels of velocity and turbulence. However, the average prey capture probability for the two lower velocity treatments was higher than that for the two higher velocity treatments.
An inverse relationship between flow velocity and prey capture probability was observed and might have resulted from a reduction in prey detection distance. Fish preferentially selected focal positions in moderate-velocity, low-turbulence areas and avoided the most turbulent locations. Similarly, selection of downward average velocity and avoidance of upward velocity might be associated with the ease of maintaining position. Considering their streamlined, hydrodynamically efficient shape, average vertical velocity might be an important feature driving microhabitat selection. Our results do not rule out an effect of turbulence on fish foraging but rather highlight the need to investigate this question further across a wider range of hydraulic conditions, in order to eventually implement a turbulence-dependent prey capture function for mechanistic foraging models.

  16. Distribution pattern of public transport passenger in Yogyakarta, Indonesia

    NASA Astrophysics Data System (ADS)

    Narendra, Alfa; Malkhamah, Siti; Sopha, Bertha Maya

    2018-03-01

    The arrival and departure distribution pattern of Trans Jogja bus passengers is one of the fundamental models for simulation. The purpose of this paper is to build models of passenger flows. This research used passenger data from January to May 2014. No policy change to the operating system has since affected the nature of this pattern: the roads, buses, land uses, schedule, and people are still essentially the same. The data were categorized by direction, day, and location, and each category was fitted to several well-known discrete distributions. The candidate distributions were compared by their AIC and BIC values, the model with the smallest values being chosen; the negative binomial distribution had the smallest AIC and BIC. Probability mass function (PMF) plots of the categorical models were compared to derive a generic model from the categorical negative binomial distribution models. The accepted generic negative binomial distribution has a size (dispersion) parameter of 0.7064 and a mean (mu) of 1.4504. The minimum and maximum passenger counts of the distribution are 0 and 41.
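
    The record's model-selection step — fitting candidate discrete distributions by maximum likelihood and comparing their AIC/BIC — can be sketched as below. The counts are synthetic, drawn from a negative binomial using the abstract's reported parameters (size ≈ 0.7064, mean ≈ 1.4504, an interpretation of the abstract's wording); everything else is an illustrative assumption.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic passenger counts from a negative binomial with the abstract's
# reported parameters; scipy parametrizes by (r, p) with p = r / (r + mu).
r_true, mu_true = 0.7064, 1.4504
p_true = r_true / (r_true + mu_true)
counts = stats.nbinom.rvs(r_true, p_true, size=2000, random_state=rng)

def aic(log_lik, k):
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    return k * np.log(n) - 2 * log_lik

# Poisson fit: the MLE of the rate is just the sample mean (1 parameter).
ll_pois = stats.poisson.logpmf(counts, counts.mean()).sum()

# Negative binomial fit: maximize the likelihood over (r, p) (2 parameters).
def nb_nll(theta):
    r, p = theta
    return -stats.nbinom.logpmf(counts, r, p).sum()

fit = minimize(nb_nll, x0=[1.0, 0.5], method="L-BFGS-B",
               bounds=[(1e-3, 50.0), (1e-3, 0.999)])
ll_nb = -fit.fun

n = len(counts)
print("Poisson AIC:", aic(ll_pois, 1), "BIC:", bic(ll_pois, 1, n))
print("NegBin  AIC:", aic(ll_nb, 2), "BIC:", bic(ll_nb, 2, n))
```

    On overdispersed count data like these (variance well above the mean), the negative binomial wins on both criteria despite its extra parameter, which mirrors the paper's selection.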

  17. YELLOWSTONE MAGMATIC-HYDROTHERMAL SYSTEM, U. S. A.

    USGS Publications Warehouse

    Fournier, R.O.; Pitt, A.M.; ,

    1985-01-01

    At Yellowstone National Park, the deep permeability and fluid circulation are probably controlled and maintained by repeated brittle fracture of rocks in response to local and regional stress. Focal depths of earthquakes beneath the Yellowstone caldera suggest that the transition from brittle fracture to quasi-plastic flow takes place at about 3 to 4 km. The maximum temperature likely to be attained by the hydrothermal system is 350 to 450 °C, the convective thermal output is about 5.5 × 10⁹ watts, and the minimum average thermal flux is about 1,800 mW/m² throughout 2,500 km². The average thermal gradient between the heat source and the convecting hydrothermal system must be at least 700 to 1,000 °C/km. Crystallization and partial cooling of about 0.082 km³ of basalt or 0.10 km³ of rhyolite annually could furnish the heat discharged in the hot-spring system. The Yellowstone magmatic-hydrothermal system as a whole appears to be cooling down, in spite of a relatively large rate of inflation of the Yellowstone caldera.
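
    The abstract's heat-budget claim is easy to check to order of magnitude. The sketch below uses generic assumed thermal properties for basalt (density ≈ 2700 kg/m³, latent heat of crystallization ≈ 4 × 10⁵ J/kg, specific heat ≈ 1000 J/(kg·K), partial cooling of ~400 K) — these values are assumptions, not numbers from the paper.

```python
# Order-of-magnitude check: can crystallizing and partially cooling
# ~0.082 km^3 of basalt per year supply roughly 5.5e9 W?
# Thermal properties below are generic assumed values, not from the paper.
volume_m3 = 0.082e9            # 0.082 km^3 expressed in m^3
density = 2700.0               # kg/m^3, typical basalt
latent_heat = 4.0e5            # J/kg released on crystallization
specific_heat = 1000.0         # J/(kg K)
cooling = 400.0                # K of partial cooling (assumed)
seconds_per_year = 3.156e7

mass_per_year = volume_m3 * density                      # kg/yr
energy_per_kg = latent_heat + specific_heat * cooling    # J/kg
power_watts = mass_per_year * energy_per_kg / seconds_per_year

print(f"{power_watts:.2e} W")  # → 5.61e+09 W, close to the quoted 5.5e9 W
```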

  18. Modelling of Field-Reversed Configuration Experiment with Large Safety Factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinhauer, L; Guo, H; Hoffman, A

    2005-11-28

    The Translation-Confinement-Sustainment facility has been operated in the 'translation-formation' mode, in which a plasma is ejected at high speed from a θ-pinch-like source into a confinement chamber where it settles into a field-reversed-configuration state. Measurements of the poloidal and toroidal field have been the basis of modeling to infer the safety factor. It is found that the edge safety factor exceeds two, and that there is strong forward magnetic shear. The high q arises because the large elongation compensates for the modest ratio of toroidal-to-poloidal field in the plasma. This is the first known instance of a very high-β plasma with a safety factor greater than unity. Two-fluid modeling of the measurements also indicates several other significant features: a broad 'transition layer' at the plasma boundary with probable line-tying effects, complex high-speed flows, and the appearance of a two-fluid minimum-energy state in the plasma core. All these features may contribute to both the stability and good confinement of the plasma.

  19. Minimum error discrimination between similarity-transformed quantum states

    NASA Astrophysics Data System (ADS)

    Jafarizadeh, M. A.; Sufiani, R.; Mazhari Khiavi, Y.

    2011-07-01

    Using the well-known necessary and sufficient conditions for minimum error discrimination (MED), we extract an equivalent form for the MED conditions. In fact, by replacing the inequalities corresponding to the MED conditions with an equivalent but more suitable and convenient identity, the problem of mixed state discrimination with optimal success probability is solved. Moreover, we show that the mentioned optimality conditions can be viewed as a Helstrom family of ensembles under some circumstances. Using the given identity, MED between N similarity-transformed equiprobable quantum states is investigated. In the case that the unitary operators generate a set of irreducible representations, the optimal set of measurements and corresponding maximum success probability of discrimination can be determined precisely. In particular, it is shown that for equiprobable pure states, the optimal measurement strategy is the square-root measurement (SRM), whereas for the mixed states, SRM is not optimal. In the case that the unitary operators are reducible, there is no closed-form formula in the general case, but the procedure can be applied separately in each particular case. Finally, we give the maximum success probability of optimal discrimination for some important examples of mixed quantum states, such as generalized Bloch sphere m-qubit states, spin-j states, particular nonsymmetric qudit states, etc.
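
    The square-root measurement mentioned in this record is short to implement numerically. The sketch below is a hypothetical example (two equiprobable pure qubit states with overlap 1/√2, not a case from the paper): it builds the SRM from the average density matrix and evaluates its success probability, which for this symmetric pair coincides with the Helstrom bound.

```python
import numpy as np

def srm_success_probability(states):
    """Success probability of the square-root measurement (SRM) for
    equiprobable pure states given as rows of a complex array."""
    states = np.asarray(states, dtype=complex)
    n = len(states)
    # Average density matrix rho = (1/N) sum_i |psi_i><psi_i|
    rho = sum(np.outer(s, s.conj()) for s in states) / n
    # Pseudo-inverse square root of rho via its eigendecomposition
    w, v = np.linalg.eigh(rho)
    inv_sqrt = sum(np.outer(v[:, k], v[:, k].conj()) / np.sqrt(w[k])
                   for k in range(len(w)) if w[k] > 1e-12)
    # P = (1/N^2) sum_i |<psi_i| rho^{-1/2} |psi_i>|^2
    return sum(abs(s.conj() @ inv_sqrt @ s) ** 2 for s in states) / n**2

# Two equiprobable pure states with overlap 1/sqrt(2): |0> and |+>.
psi0 = np.array([1.0, 0.0])
psi_plus = np.array([1.0, 1.0]) / np.sqrt(2)
p = srm_success_probability([psi0, psi_plus])
print(p)  # ≈ 0.8536, the Helstrom bound (1 + 1/sqrt(2)) / 2
```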

  20. Minimum time search in uncertain dynamic domains with complex sensorial platforms.

    PubMed

    Lanillos, Pablo; Besada-Portas, Eva; Lopez-Orozco, Jose Antonio; de la Cruz, Jesus Manuel

    2014-08-04

    The minimum time search in uncertain domains is a searching task, which appears in real world problems such as natural disasters and sea rescue operations, where a target has to be found, as soon as possible, by a set of sensor-equipped searchers. The automation of this task, where the time to detect the target is critical, can be achieved by new probabilistic techniques that directly minimize the Expected Time (ET) to detect a dynamic target using the observation probability models and actual observations collected by the sensors on board the searchers. The selected technique, described in algorithmic form in this paper for completeness, has only been previously partially tested with an ideal binary detection model, in spite of being designed to deal with complex non-linear/non-differential sensorial models. This paper covers the gap, testing its performance and applicability over different searching tasks with searchers equipped with different complex sensors. The sensorial models under test vary from stepped detection probabilities to continuous/discontinuous differentiable/non-differentiable detection probabilities dependent on distance, orientation, and structured maps. The analysis of the simulated results of several static and dynamic scenarios performed in this paper validates the applicability of the technique with different types of sensor models.

  1. Minimum Time Search in Uncertain Dynamic Domains with Complex Sensorial Platforms

    PubMed Central

    Lanillos, Pablo; Besada-Portas, Eva; Lopez-Orozco, Jose Antonio; de la Cruz, Jesus Manuel

    2014-01-01

    The minimum time search in uncertain domains is a searching task, which appears in real world problems such as natural disasters and sea rescue operations, where a target has to be found, as soon as possible, by a set of sensor-equipped searchers. The automation of this task, where the time to detect the target is critical, can be achieved by new probabilistic techniques that directly minimize the Expected Time (ET) to detect a dynamic target using the observation probability models and actual observations collected by the sensors on board the searchers. The selected technique, described in algorithmic form in this paper for completeness, has only been previously partially tested with an ideal binary detection model, in spite of being designed to deal with complex non-linear/non-differential sensorial models. This paper covers the gap, testing its performance and applicability over different searching tasks with searchers equipped with different complex sensors. The sensorial models under test vary from stepped detection probabilities to continuous/discontinuous differentiable/non-differentiable detection probabilities dependent on distance, orientation, and structured maps. The analysis of the simulated results of several static and dynamic scenarios performed in this paper validates the applicability of the technique with different types of sensor models. PMID:25093345

  2. Minimum error discrimination between similarity-transformed quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jafarizadeh, M. A.; Institute for Studies in Theoretical Physics and Mathematics, Tehran 19395-1795; Research Institute for Fundamental Sciences, Tabriz 51664

    2011-07-15

    Using the well-known necessary and sufficient conditions for minimum error discrimination (MED), we extract an equivalent form for the MED conditions. In fact, by replacing the inequalities corresponding to the MED conditions with an equivalent but more suitable and convenient identity, the problem of mixed state discrimination with optimal success probability is solved. Moreover, we show that the mentioned optimality conditions can be viewed as a Helstrom family of ensembles under some circumstances. Using the given identity, MED between N similarity-transformed equiprobable quantum states is investigated. In the case that the unitary operators generate a set of irreducible representations, the optimal set of measurements and corresponding maximum success probability of discrimination can be determined precisely. In particular, it is shown that for equiprobable pure states, the optimal measurement strategy is the square-root measurement (SRM), whereas for the mixed states, SRM is not optimal. In the case that the unitary operators are reducible, there is no closed-form formula in the general case, but the procedure can be applied separately in each particular case. Finally, we give the maximum success probability of optimal discrimination for some important examples of mixed quantum states, such as generalized Bloch sphere m-qubit states, spin-j states, particular nonsymmetric qudit states, etc.

  3. Evidence for chaos in an experimental time series from serrated plastic flow

    NASA Astrophysics Data System (ADS)

    Venkadesan, S.; Valsakumar, M. C.; Murthy, K. P. N.; Rajasekar, S.

    1996-07-01

    An experimental time series from a tensile test of an Al-Mg alloy in the serrated plastic flow domain is analyzed for signature of chaos. We employ state space reconstruction by embedding of time delay vectors. The minimum embedding dimension is found to be 4 and the largest Lyapunov exponent is positive, thereby providing prima facie evidence for chaos in an experimental time series of serrated plastic flow data.
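
    The state-space reconstruction step used in this record — embedding the scalar series in time-delay vectors — is a few lines of numpy. The series, delay, and the embedding dimension 4 (the value reported in the abstract) are used here on an invented toy signal, not the tensile-test data.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Return the matrix of time-delay vectors
    [x(t), x(t + tau), ..., x(t + (dim-1) tau)] for a scalar series x."""
    x = np.asarray(x)
    n_vectors = len(x) - (dim - 1) * tau
    if n_vectors <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[k * tau : k * tau + n_vectors]
                            for k in range(dim)])

# Toy series standing in for the serrated-flow stress signal
# (a noisy oscillation; the real data would come from the tensile test).
t = np.linspace(0, 40 * np.pi, 4000)
series = np.sin(t) + 0.1 * np.random.default_rng(2).normal(size=t.size)

emb = delay_embed(series, dim=4, tau=10)
print(emb.shape)  # → (3970, 4)
```

    Each row of `emb` is one point of the reconstructed attractor; Lyapunov-exponent estimators then operate on nearest-neighbour divergence in this space.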

  4. Hydrogeologic setting and hydrologic data of the Smoke Creek Desert basin, Washoe County, Nevada, and Lassen County, California, water years 1988-90

    USGS Publications Warehouse

    Maurer, D.K.

    1993-01-01

    Smoke Creek Desert is a potential source of water for urban development in Washoe County, Nevada. Hydrogeologic data were collected from 1988 to 1990 to learn more about surface- and ground-water flow in the basin. Impermeable rocks form a boundary to ground-water flow on the east side of the basin and at unknown depths at the base of the flow system. Permeable volcanic rocks on the west and north sides of the basin represent a previously unrecognized aquifer and provide potential avenues for interbasin flow. Geophysical data indicate that basin-fill sediments are about 2,000 feet thick near the center of the basin. The geometry of the aquifers, however, remains largely unknown. Measurements of water levels, pressure head, flow rate, water temperature, and specific conductance at 19 wells show little change from 1988 to 1990. Chemically, ground water begins as a dilute sodium and calcium bicarbonate water in the mountain blocks, changes to a slightly saline sodium bicarbonate solution beneath the alluvial fans, and becomes a briny sodium chloride water near the playa. Concentrations of several inorganic constituents in the briny water near the playa commonly exceed Nevada drinking-water standards. Ground water in the Honey Lake basin and Smoke Creek Desert basin has similar stable-isotope composition, except near Sand Pass. If interbasin flow takes place, it likely occurs at depths greater than 400-600 feet beneath Sand Pass or through volcanic rocks to the north of Sand Pass. Measurements of streamflow indicate that about 2,800 acre-feet/year discharged from volcanic rocks to streamflow and a minimum of 7,300 acre-feet/year infiltrated and recharged unconsolidated sediments near Smoke, Buffalo, and Squaw Creeks during the period of study. 
Also about 1,500 acre-feet per year was lost to evapotranspiration along the channel of Smoke Creek, and about 1,680 acre-feet per year of runoff from Smoke, Buffalo, and Squaw Creeks was probably lost to evaporation from the playa.

  5. Nonlocality and the critical Reynolds numbers of the minimum state magnetohydrodynamic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Ye; Oughton, Sean

    2011-07-15

    Magnetohydrodynamic (MHD) systems can be strongly nonlinear (turbulent) when their kinetic and magnetic Reynolds numbers are high, as is the case in many astrophysical and space plasma flows. Unfortunately these high Reynolds numbers are typically much greater than those currently attainable in numerical simulations of MHD turbulence. A natural question to ask is how can researchers be sure that their simulations have reproduced all of the most influential physics of the flows and magnetic fields? In this paper, a metric is defined to indicate whether the necessary physics of interest has been captured. It is found that current computing resources will typically not be sufficient to achieve this minimum state metric.

  6. Determining the Optimal Solution for Quadratically Constrained Quadratic Programming (QCQP) on Energy-Saving Generation Dispatch Problem

    NASA Astrophysics Data System (ADS)

    Lesmana, E.; Chaerani, D.; Khansa, H. N.

    2018-03-01

    Energy-Saving Generation Dispatch (ESGD) is a scheme made by the Chinese government in an attempt to minimize the CO2 emissions produced by power plants. The scheme responds to global warming, which is primarily caused by excess CO2 in Earth's atmosphere: while the need for electricity is absolute, the power plants producing it are mostly thermal plants that emit large amounts of CO2. Many approaches to fulfilling this scheme have been proposed; one of them formulates it as a Minimum Cost Flow problem, which results in a Quadratically Constrained Quadratic Programming (QCQP) form. In this paper, the ESGD problem with Minimum Cost Flow in QCQP form is solved using Lagrange's multiplier method.
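
    The Lagrange-multiplier approach can be illustrated on the classical equality-constrained core of dispatch: minimize the total quadratic cost Σ(a_i P_i² + b_i P_i) subject to ΣP_i = D. Stationarity of the Lagrangian gives equal incremental costs, 2a_iP_i + b_i = λ, which yields a closed form. The coefficients below are invented toy values, and this sketch deliberately omits the quadratic constraints that make the full ESGD problem a QCQP.

```python
def dispatch(a, b, demand):
    """Minimize sum(a_i P_i^2 + b_i P_i) subject to sum(P_i) = demand,
    via the Lagrange multiplier (equal incremental cost) condition
    2 a_i P_i + b_i = lam  =>  P_i = (lam - b_i) / (2 a_i)."""
    inv = [1.0 / (2.0 * ai) for ai in a]
    lam = (demand + sum(bi / (2.0 * ai) for ai, bi in zip(a, b))) / sum(inv)
    return lam, [(lam - bi) / (2.0 * ai) for ai, bi in zip(a, b)]

# Toy two-generator system (coefficients are invented for illustration).
a = [0.01, 0.02]   # quadratic cost coefficients
b = [2.0, 3.0]     # linear cost coefficients
lam, P = dispatch(a, b, demand=100.0)
print(lam, P)  # every unit's marginal cost 2*a_i*P_i + b_i equals lam
```

    The cheaper-at-the-margin generator takes the larger share of the load; adding the quadratic emission constraints turns this into the QCQP the paper solves.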

  7. DNS, LES and Stochastic Modeling of Turbulent Reacting Flows

    DTIC Science & Technology

    1994-03-01

    [OCR fragments of the report's reference list and acknowledgments.] Recoverable citations include Miller, R. S., Frankel, S. H., Madnia, C. K., and Givi, P., "Johnson-Edgeworth Translation for Probability Modeling of Binary Scalar Mixing in Turbulent Flows," and Givi, "Modeling of Isotropic Reacting Turbulence by a Hybrid Mapping-EDQNM …"; the authors are also grateful to Richard Miller for many useful discussions.

  8. Performance seeking control: Program overview and future directions

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Orme, John S.

    1993-01-01

    A flight test evaluation of the performance-seeking control (PSC) algorithm on the NASA F-15 highly integrated digital electronic control research aircraft was conducted for single-engine operation at subsonic and supersonic speeds. The model-based PSC system was developed with three optimization modes: minimum fuel flow at constant thrust, minimum turbine temperature at constant thrust, and maximum thrust at maximum dry and full afterburner throttle settings. Subsonic and supersonic flight testing were conducted at the NASA Dryden Flight Research Facility covering the three PSC optimization modes over the full throttle range. Flight results show substantial benefits. In the maximum thrust mode, thrust increased up to 15 percent at subsonic and 10 percent at supersonic flight conditions. The minimum fan turbine inlet temperature mode reduced temperatures by more than 100 °F at high altitudes. The minimum fuel flow mode decreased fuel consumption by up to 2 percent in the subsonic regime and almost 10 percent supersonically. These results demonstrate that PSC technology can benefit the next generation of fighter or transport aircraft. NASA Dryden is developing an adaptive aircraft performance technology system that is measurement based and uses feedback to ensure optimality. This program will address the technical weaknesses identified in the PSC program and will increase performance gains.

  9. A survey of the role of thermodynamic stability in viscous flow

    NASA Technical Reports Server (NTRS)

    Horne, W. C.; Smith, C. A.; Karamcheti, K.

    1991-01-01

    The stability of near-equilibrium states has been studied as a branch of the general field of nonequilibrium thermodynamics. By treating steady viscous flow as an open thermodynamic system, nonequilibrium principles such as the condition of minimum entropy-production rate for steady, near-equilibrium processes can be used to generate flow distributions from variational analyses. Examples considered in this paper are steady heat conduction, channel flow, and unconstrained three-dimensional flow. The entropy-production-rate condition has also been used for hydrodynamic stability criteria, and calculations of the stability of a laminar wall jet support this interpretation.

  10. Inflation and dark energy from the Brans-Dicke theory

    NASA Astrophysics Data System (ADS)

    Artymowski, Michał; Lalak, Zygmunt; Lewicki, Marek

    2015-06-01

    We consider the Brans-Dicke theory motivated by the f(R) = R + αR^n − βR^(2−n) model to obtain a stable minimum of the Einstein frame scalar potential of the Brans-Dicke field. As a result we obtain an inflationary scalar potential with a non-zero residual vacuum energy, which may be a source of dark energy. In addition we discuss the probability of quantum tunnelling from the minimum of the potential. Our results can easily be made consistent with PLANCK or BICEP2 data for appropriate choices of the values of n and ω.

  11. Probabilities of having minimum amounts of available soil water at wheat planting

    USDA-ARS?s Scientific Manuscript database

    Winter wheat (Triticum aestivum L.)-fallow (WF) remains a prominent cropping system throughout the Central Great Plains despite documentation confirming the inefficiency of precipitation storage during the second summer fallow period. Wheat yield is greatly influenced by available soil water at plan...

  12. Minimum Action Path Theory Reveals the Details of Stochastic Transitions Out of Oscillatory States

    NASA Astrophysics Data System (ADS)

    de la Cruz, Roberto; Perez-Carrasco, Ruben; Guerrero, Pilar; Alarcon, Tomas; Page, Karen M.

    2018-03-01

    Cell state determination is the outcome of intrinsically stochastic biochemical reactions. Transitions between such states are studied as noise-driven escape problems in the chemical species space. Escape can occur via multiple possible multidimensional paths, with probabilities depending nonlocally on the noise. Here we characterize the escape from an oscillatory biochemical state by minimizing the Freidlin-Wentzell action, deriving from it the stochastic spiral exit path from the limit cycle. We also use the minimized action to infer the escape time probability density function.

  13. Minimum Action Path Theory Reveals the Details of Stochastic Transitions Out of Oscillatory States.

    PubMed

    de la Cruz, Roberto; Perez-Carrasco, Ruben; Guerrero, Pilar; Alarcon, Tomas; Page, Karen M

    2018-03-23

    Cell state determination is the outcome of intrinsically stochastic biochemical reactions. Transitions between such states are studied as noise-driven escape problems in the chemical species space. Escape can occur via multiple possible multidimensional paths, with probabilities depending nonlocally on the noise. Here we characterize the escape from an oscillatory biochemical state by minimizing the Freidlin-Wentzell action, deriving from it the stochastic spiral exit path from the limit cycle. We also use the minimized action to infer the escape time probability density function.

  14. A simple device for measuring the minimum current velocity to maintain semi-buoyant fish eggs in suspension

    USGS Publications Warehouse

    Mueller, Julia S.; Cheek, Brandon D.; Chen, Qingman; Groeschel, Jillian R.; Brewer, Shannon K.; Grabowski, Timothy B.

    2013-01-01

    Pelagic broadcast spawning cyprinids are common to Great Plains rivers and streams. This reproductive guild produces non-adhesive semi-buoyant eggs that require sufficient current velocity to remain in suspension during development. Although studies have shown that there may be a minimum velocity needed to keep the eggs in suspension, this velocity has not been estimated directly nor has the influence of physicochemical factors on egg buoyancy been determined. We developed a simple, inexpensive flow chamber that allowed for evaluation of the minimum current velocity needed to keep semi-buoyant eggs in suspension at any time frame during egg development. The device described here has the capability of testing the minimum current velocity needed to keep semi-buoyant eggs in suspension at a wide range of physicochemical conditions. We used gellan beads soaked in freshwater for 0, 24, and 48 hrs as egg surrogates and evaluated minimum current velocities necessary to keep them in suspension at different combinations of temperature (20.0 ± 1.0° C, 25.0 ± 1.0° C, and 28.0 ± 1.0° C) and total dissolved solids (TDS; 1,000 mg L-1, 3,000 mg L-1, and 6,000 mg L-1). We found that our methodology generated consistent, repeatable results within treatment groups. The current velocities needed to keep the gellan beads in suspension ranged from 0.001 to 0.026 and were negatively correlated with soak time and TDS and positively correlated with temperature. The flow chamber is a viable approach for evaluating minimum current velocities needed to keep the eggs of pelagic broadcast spawning cyprinids in suspension during development.

  15. The influence of climate variables on dengue in Singapore.

    PubMed

    Pinto, Edna; Coelho, Micheline; Oliver, Leuda; Massad, Eduardo

    2011-12-01

    In this work we correlated dengue cases with climatic variables for the city of Singapore. This was done through a Poisson Regression Model (PRM) that considers dengue cases as the dependent variable and the climatic variables (rainfall, maximum and minimum temperature, and relative humidity) as independent variables. We also used Principal Components Analysis (PCA) to choose the variables that influence the increase in the number of dengue cases in Singapore, where PC₁ (Principal component 1) is represented by temperature and rainfall and PC₂ (Principal component 2) is represented by relative humidity. We calculated the probability of occurrence of new cases of dengue and the relative risk of occurrence of dengue cases influenced by climatic variables. The months from July to September showed the highest probabilities of the occurrence of new cases of the disease throughout the year. This was based on an analysis of time series of maximum and minimum temperature. An interesting result was that for every 2-10°C of variation of the maximum temperature, there was an average increase of 22.2-184.6% in the number of dengue cases. For the minimum temperature, we observed that for the same variation, there was an average increase of 26.1-230.3% in the number of dengue cases from April to August. Precipitation and relative humidity were discarded from the Poisson Regression Model after correlation analysis because they did not correlate well with the dengue cases. Additionally, the relative risk of the occurrence of cases of the disease under the influence of temperature variation ranged from 1.2 to 2.8 for maximum temperature and from 1.3 to 3.3 for minimum temperature. Therefore, the variable temperature (maximum and minimum) was the best predictor for the increased number of dengue cases in Singapore.
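
    The Poisson regression at the heart of this record can be sketched in a few lines. The data below are synthetic (an assumed temperature range and an assumed "true" coefficient, not the Singapore data), and the fit uses iteratively reweighted least squares rather than whatever software the authors used.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a log-link Poisson regression by iteratively reweighted
    least squares (Fisher scoring): beta += (X' W X)^-1 X' (y - mu),
    with W = diag(mu) since the Poisson variance equals the mean."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())  # start at the intercept-only fit for stability
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        beta = beta + np.linalg.solve((X.T * mu) @ X, X.T @ (y - mu))
    return beta

rng = np.random.default_rng(3)
n = 2000
# Synthetic maximum temperatures in degrees C (assumed range, for illustration).
temp = rng.uniform(20, 35, n)
X = np.column_stack([np.ones(n), temp])

# Assumed "true" model: log E[cases] = 0.5 + 0.1 * temp
beta_true = np.array([0.5, 0.1])
cases = rng.poisson(np.exp(X @ beta_true))

beta_hat = poisson_irls(X, cases)
# Each +1 degree C multiplies the expected case count by exp(beta_1).
print(beta_hat, np.exp(beta_hat[1]))
```

    The exponentiated slope is the multiplicative change in expected cases per degree, which is how percentage increases like those quoted in the abstract are derived.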

  16. Geochemistry of the Springfield Plateau aquifer of the Ozark Plateaus Province in Arkansas, Kansas, Missouri and Oklahoma, USA

    USGS Publications Warehouse

    Adamski, J.C.

    2000-01-01

    Geochemical data indicate that the Springfield Plateau aquifer, a carbonate aquifer of the Ozark Plateaus Province in the central USA, has two distinct hydrochemical zones. Within each hydrochemical zone, water from springs is geochemically and isotopically different from water from wells. Geochemical data indicate that spring water generally interacts less with the surrounding rock and has a shorter residence time, probably as a result of flowing along discrete fractures and solution openings, than water from wells. Water type throughout most of the aquifer was calcium bicarbonate, indicating that carbonate-rock dissolution is the primary geochemical process occurring in the aquifer. Concentrations of calcium, bicarbonate, dissolved oxygen, and tritium indicate that most ground water in the aquifer recharged rapidly and is relatively young (less than 40 years). In general, field-measured properties, concentrations of many chemical constituents, and calcite saturation indices were greater in samples from the northern part of the aquifer (hydrochemical zone A) than in samples from the southern part of the aquifer (hydrochemical zone B). Factors affecting differences in the geochemical composition of ground water between the two zones are difficult to identify, but could be related to differences in chert content and possibly primary porosity, solubility of the limestone, and the amount and type of cementation between zone A and zone B. In addition, specific conductance, pH, alkalinity, concentrations of many chemical constituents, and calcite saturation indices were greater in samples from wells than in samples from springs in each hydrochemical zone. In contrast, concentrations of dissolved oxygen, nitrite plus nitrate, and chloride generally were greater in samples from springs than in samples from wells. Water from springs generally flows rapidly through large conduits with minimal water-rock interaction. 
Water from wells flows through small fractures, which restrict flow and increase water-rock interactions. As a result, springs tend to be more susceptible to surface contamination than wells. The results of this study have important implications for the geochemical and hydrogeological processes of similar carbonate aquifers in other geographical locations. Copyright (C) 2000 John Wiley and Sons, Ltd.

  17. Testing founder effect speciation: Divergence population genetics of the Spoonbills Platalea regia and Pl. minor (Threskiornithidae, Aves)

    USGS Publications Warehouse

    Yeung, Carol K.L.; Tsai, Pi-Wen; Chesser, R. Terry; Lin, Rong-Chien; Yao, Cheng-Te; Tian, Xiu-Hua; Li, Shou-Hsien

    2011-01-01

    Although founder effect speciation has been a popular theoretical model for the speciation of geographically isolated taxa, its empirical importance has remained difficult to evaluate due to the intractability of past demography, which in a founder effect speciation scenario would involve a speciational bottleneck in the emergent species and the complete cessation of gene flow following divergence. Using regression-weighted approximate Bayesian computation, we tested the validity of these two fundamental conditions of founder effect speciation in a pair of sister species with disjunct distributions: the royal spoonbill Platalea regia in Australasia and the black-faced spoonbill Pl. minor in eastern Asia. When compared with genetic polymorphism observed at 20 nuclear loci in the two species, simulations showed that the founder effect speciation model had an extremely low posterior probability (1.55 × 10⁻⁸) of producing the extant genetic pattern. In contrast, speciation models that allowed for postdivergence gene flow were much more probable (posterior probabilities were 0.37 and 0.50 for the bottleneck with gene flow and the gene flow models, respectively) and postdivergence gene flow persisted for a considerable period of time (more than 80% of the divergence history in both models) following initial divergence (median = 197,000 generations, 95% credible interval [CI]: 50,000-478,000, for the bottleneck with gene flow model; and 186,000 generations, 95% CI: 45,000-477,000, for the gene flow model). Furthermore, the estimated population size reduction in Pl. regia to 7,000 individuals (median, 95% CI: 487-12,000, according to the bottleneck with gene flow model) was unlikely to have been severe enough to be considered a bottleneck. Therefore, these results do not support founder effect speciation in Pl. regia but indicate instead that the divergence between Pl. regia and Pl. minor was probably driven by selection despite continuous gene flow. 
In this light, we discuss the potential importance of evolutionarily labile traits with significant fitness consequences, such as migratory behavior and habitat preference, in facilitating divergence of the spoonbills.
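
    The model-comparison logic behind approximate Bayesian computation can be illustrated with a toy rejection-ABC sketch; the two candidate models, the summary statistic, the tolerance, and the equal priors below are illustrative assumptions, not the spoonbill analysis (which used a regression-weighted variant over 20 loci):

    ```python
    import random

    # Toy rejection ABC: simulate a summary statistic under two competing
    # models and approximate each model's posterior probability by its share
    # of accepted simulations. Models, tolerance, and priors are assumptions.
    random.seed(1)
    observed = 0.0
    tolerance = 0.5

    def simulate(model):
        # Model "A" generates summaries near 0, model "B" near 2 (assumed).
        center = 0.0 if model == "A" else 2.0
        return random.gauss(center, 1.0)

    accepted = {"A": 0, "B": 0}
    for _ in range(20_000):
        m = random.choice(["A", "B"])  # equal prior model probabilities
        if abs(simulate(m) - observed) < tolerance:
            accepted[m] += 1

    total = accepted["A"] + accepted["B"]
    posterior_A = accepted["A"] / total  # model A is favored here
    ```

    The abstract's extremely low posterior for the founder-effect model corresponds to that model almost never reproducing summaries close to the observed genetic pattern.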

  18. Interpretation of impeller flow calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuzson, J.

    1993-09-01

    Most available computer programs are analysis programs rather than design programs, so the intervention of the designer is indispensable. Guidelines are needed to evaluate the degree of fluid-mechanic perfection of a design that has been compromised for practical reasons. A new way of plotting the computer output is proposed here which illustrates the energy distribution throughout the flow. The consequences of deviating from the optimal flow pattern are discussed and specific cases are reviewed. A criterion is derived for the existence of a jet/wake flow pattern and for the minimum wake mixing loss.

  19. Assessing the direct effects of streamflow on recreation: a literature review

    USGS Publications Warehouse

    Brown, Thomas C.; Taylor, Jonathan G.; Shelby, Bo

    1991-01-01

    A variety of methods have been used to learn about the relation between streamflow and recreation quality. Regardless of method, nearly all studies found a similar nonlinear relation of recreation to flow, with quality increasing with flow to a point, and then decreasing for further increases in flow. Points of minimum, optimum, and maximum flow differ across rivers and activities. Knowledge of the effects of streamflow on recreation, for the variety of relevant activities and skill levels, is an important ingredient in the determination of wise streamflow policies.

  20. 40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... chemical balance terms as given in § 1065.655(e). You may determine the raw exhaust flow rate based on the measured intake air and dilute exhaust molar flow rates and the dilute exhaust chemical balance terms as... air, fuel rate measurements, and fuel properties, consistent with good engineering judgment. (b...

  1. 40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... chemical balance terms as given in § 1065.655(e). You may determine the raw exhaust flow rate based on the measured intake air and dilute exhaust molar flow rates and the dilute exhaust chemical balance terms as... air, fuel rate measurements, and fuel properties, consistent with good engineering judgment. (b...

  2. 40 CFR 1065.546 - Verification of minimum dilution ratio for PM batch sampling.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... chemical balance terms as given in § 1065.655(e). You may determine the raw exhaust flow rate based on the measured intake air and dilute exhaust molar flow rates and the dilute exhaust chemical balance terms as... air, fuel rate measurements, and fuel properties, consistent with good engineering judgment. (b...

  3. Informed Decision Making Process for Managing Environmental Flows in Small River Basins

    NASA Astrophysics Data System (ADS)

    Padikkal, S.; Rema, K. P.

    2013-03-01

    Numerous examples exist worldwide of partial or complete alteration to the natural flow regime of river systems as a consequence of large scale water abstraction from upstream reaches. The effects may not be conspicuous in the case of very large rivers, but the ecosystems of smaller rivers or streams may be completely destroyed over a period of time. While restoration of the natural flow regime may not be possible, at present there is increased effort to implement restoration by regulating environmental flow. This study investigates the development of an environmental flow management model at an icon site in the small river basin of Bharathapuzha, southwest India. To determine optimal environmental flow regimes, a historic flow model based on data assimilated since 1978 was developed; it indicated that a satisfactory minimum flow depth for river ecosystem sustenance is 0.907 m (28.8 m3/s), a value also obtained from the hydraulic model. However, as three of the reservoirs were already operational at this time, a flow depth of 0.922 m is considered a more viable estimate. Analysis of daily stream flow in 1997-2006 indicated adequate flow regimes during the monsoons in June-November, but that sections of the river dried out in December-May with alarming water quality conditions near the river mouth. Furthermore, the preferred minimum `dream' flow regime expressed by stakeholders of the region is a water depth of 1.548 m, which exceeds 50 % of the flood discharge in July. Water could potentially be conserved for environmental flow purposes by (1) the de-siltation of existing reservoirs or (2) reducing water spillage in the transfer between river basins. Ultimately, environmental flow management of the region requires the establishment of a co-ordinated management body and the regular assimilation of water-flow information from which science-based decisions can be made, ensuring that both economic and environmental concerns are adequately addressed.

  4. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
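
    The Monte Carlo baseline that the FPE methodology is compared against can be sketched with a much simpler flow formula; the toy below propagates an uncertain Manning roughness through a steady uniform-flow velocity relation (not the full Saint-Venant solver), with illustrative values assumed for hydraulic radius, slope, and the roughness range:

    ```python
    import random
    import statistics

    # Toy MC ensemble: sample an uncertain Manning roughness n and compute
    # the resulting velocity ensemble's mean and variance. Hydraulic radius,
    # slope, and the n-range are illustrative assumptions.
    def manning_velocity(n, radius=1.5, slope=1e-3):
        # Manning's equation for mean velocity: V = (1/n) R^(2/3) S^(1/2)
        return (1.0 / n) * radius ** (2.0 / 3.0) * slope ** 0.5

    random.seed(0)
    sample = [manning_velocity(random.uniform(0.02, 0.04)) for _ in range(10_000)]
    mean_v = statistics.fmean(sample)   # ensemble average velocity
    var_v = statistics.variance(sample)  # ensemble variance
    ```

    The FPE approach delivers these same ensemble statistics from a single solve of the evolutionary probability distribution, rather than from thousands of repeated simulations.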

  5. What Are the ACT College Readiness Benchmarks? Information Brief

    ERIC Educational Resources Information Center

    ACT, Inc., 2013

    2013-01-01

    The ACT College Readiness Benchmarks are the minimum ACT® college readiness assessment scores required for students to have a high probability of success in credit-bearing college courses--English Composition, social sciences courses, College Algebra, or Biology. This report identifies the College Readiness Benchmarks on the ACT Compass scale…

  6. Algebra 2u, Mathematics (Experimental): 5216.26.

    ERIC Educational Resources Information Center

    Crawford, Glenda

    The sixth in a series of six guidebooks on minimum course content for second-year algebra, this booklet presents an introduction to sequences, series, permutations, combinations, and probability. Included are arithmetic and geometric progressions and problems solved by counting and factorials. Overall course goals are specified, a course outline is…

  7. Regional flood probabilities

    USGS Publications Warehouse

    Troutman, Brent M.; Karlinger, Michael R.

    2003-01-01

    The T‐year annual maximum flood at a site is defined to be the streamflow that has probability 1/T of being exceeded in any given year, and for a group of sites the corresponding regional flood probability (RFP) is the probability that at least one site will experience a T‐year flood in any given year. The RFP depends on the number of sites of interest and on the spatial correlation of flows among the sites. We present a Monte Carlo method for obtaining the RFP and demonstrate that spatial correlation estimates used in this method may be obtained with rank-transformed data and therefore that knowledge of the at‐site peak flow distribution is not necessary. We examine the extent to which the estimates depend on specification of a parametric form for the spatial correlation function, which is known to be nonstationary for peak flows. It is shown in a simulation study that use of a stationary correlation function to compute RFPs yields satisfactory estimates for certain nonstationary processes. Application of asymptotic extreme value theory is examined, and a methodology for separating channel network and rainfall effects on RFPs is suggested. A case study is presented using peak flow data from the state of Washington. For 193 sites in the Puget Sound region it is estimated that a 100‐year flood will occur on the average every 4.5 years.
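
    For intuition, the RFP under the simplifying assumption of spatially independent sites has a closed form, 1 - (1 - 1/T)^n, which serves only as an upper bound; positive spatial correlation pulls the true RFP down toward the study's Puget Sound estimate of roughly one regional 100-year flood every 4.5 years:

    ```python
    # Independence upper-bound sketch of the regional flood probability.
    # The study's Monte Carlo method accounts for spatial correlation,
    # which this deliberately ignores.
    def rfp_independent(n_sites, t_years):
        """P(at least one of n_sites sees a T-year flood in a given year)."""
        return 1.0 - (1.0 - 1.0 / t_years) ** n_sites

    p = rfp_independent(193, 100.0)  # about 0.86 for the case-study region
    recurrence = 1.0 / p             # average years between regional events
    ```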

  8. Determination of critical epitope of PcMab-47 against human podocalyxin.

    PubMed

    Itai, Shunsuke; Yamada, Shinji; Kaneko, Mika K; Kato, Yukinari

    2018-07-01

    Podocalyxin (PODXL) is a type I transmembrane protein, which is highly glycosylated. PODXL is expressed in some types of human cancer tissues including oral, breast, and lung cancer tissues and may promote tumor growth, invasion, and metastasis. We previously produced PcMab-47, a novel anti-PODXL monoclonal antibody (mAb) which reacts with endogenous PODXL-expressing cancer cell lines and normal cells independently of glycosylation in Western blot, flow cytometry, and immunohistochemical analysis. In this study, we used enzyme-linked immunosorbent assay (ELISA), flow cytometry, and immunohistochemical analysis to determine the epitope of PcMab-47. The minimum epitope of PcMab-47 was found to be Asp207, His208, Leu209, and Met210. A blocking peptide containing this minimum epitope completely neutralized PcMab-47 reaction against oral cancer cells by flow cytometry and immunohistochemical analysis. These findings could lead to the production of more functional anti-PODXL mAbs, which are advantageous for antitumor activities.

  9. On the Relation Between Spotless Days and the Sunspot Cycle

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2005-01-01

    Spotless days are examined as a predictor for the size and timing of a sunspot cycle. For cycles 16-23 the first spotless day for a new cycle, which occurs during the decline of the old cycle, is found to precede minimum amplitude for the new cycle by approximately 34 mo, having a range of 25-40 mo. Reports indicate that the first spotless day for cycle 24 occurred in January 2004, suggesting that minimum amplitude for cycle 24 should be expected before April 2007, probably sometime during the latter half of 2006. If true, then cycle 23 will be classified as a cycle of shorter period, implying further that cycle 24 likely will be a cycle of larger than average minimum and maximum amplitudes and faster than average rise, peaking sometime in 2010.

  10. Process for heating coal-oil slurries

    DOEpatents

    Braunlin, W.A.; Gorski, A.; Jaehnig, L.J.; Moskal, C.J.; Naylor, J.D.; Parimi, K.; Ward, J.V.

    1984-01-03

    Controlling gas to slurry volume ratio to achieve a gas holdup of about 0.4 when heating a flowing coal-oil slurry and a hydrogen containing gas stream allows operation with virtually any coal to solvent ratio and permits operation with efficient heat transfer and satisfactory pressure drops. The critical minimum gas flow rate for any given coal-oil slurry will depend on numerous factors such as coal concentration, coal particle size distribution, composition of the solvent (including recycle slurries), and type of coal. Further system efficiency can be achieved by operating with multiple heating zones to provide a high heat flux when the apparent viscosity of the gas saturated slurry is highest. Operation with gas flow rates below the critical minimum results in system instability indicated by temperature excursions in the fluid and at the tube wall, by a rapid increase and then decrease in overall pressure drop with decreasing gas flow rate, and by increased temperature differences between the temperature of the bulk fluid and the tube wall. At the temperatures and pressures used in coal liquefaction preheaters the coal-oil slurry and hydrogen containing gas stream behaves essentially as a Newtonian fluid at shear rates in excess of 150 sec⁻¹. The gas to slurry volume ratio should also be controlled to assure that the flow regime does not shift from homogeneous flow to non-homogeneous flow. Stable operations have been observed with a maximum gas holdup as high as 0.72. 29 figs.

  11. Process for heating coal-oil slurries

    DOEpatents

    Braunlin, Walter A.; Gorski, Alan; Jaehnig, Leo J.; Moskal, Clifford J.; Naylor, Joseph D.; Parimi, Krishnia; Ward, John V.

    1984-01-03

    Controlling gas to slurry volume ratio to achieve a gas holdup of about 0.4 when heating a flowing coal-oil slurry and a hydrogen containing gas stream allows operation with virtually any coal to solvent ratio and permits operation with efficient heat transfer and satisfactory pressure drops. The critical minimum gas flow rate for any given coal-oil slurry will depend on numerous factors such as coal concentration, coal particle size distribution, composition of the solvent (including recycle slurries), and type of coal. Further system efficiency can be achieved by operating with multiple heating zones to provide a high heat flux when the apparent viscosity of the gas saturated slurry is highest. Operation with gas flow rates below the critical minimum results in system instability indicated by temperature excursions in the fluid and at the tube wall, by a rapid increase and then decrease in overall pressure drop with decreasing gas flow rate, and by increased temperature differences between the temperature of the bulk fluid and the tube wall. At the temperatures and pressures used in coal liquefaction preheaters the coal-oil slurry and hydrogen containing gas stream behaves essentially as a Newtonian fluid at shear rates in excess of 150 sec⁻¹. The gas to slurry volume ratio should also be controlled to assure that the flow regime does not shift from homogeneous flow to non-homogeneous flow. Stable operations have been observed with a maximum gas holdup as high as 0.72.

  12. Economic policy and the double burden of malnutrition: cross-national longitudinal analysis of minimum wage and women's underweight and obesity.

    PubMed

    Conklin, Annalijn I; Ponce, Ninez A; Crespi, Catherine M; Frank, John; Nandi, Arijit; Heymann, Jody

    2018-04-01

    To examine changes in minimum wage associated with changes in women's weight status. Longitudinal study of legislated minimum wage levels (per month, purchasing power parity-adjusted, 2011 constant US dollar values) linked to anthropometric and sociodemographic data from multiple Demographic and Health Surveys (2000-2014). Separate multilevel models estimated associations of a $10 increase in monthly minimum wage with the rate of change in underweight and obesity, conditioning on individual and country confounders. Post-estimation analysis computed predicted mean probabilities of being underweight or obese associated with higher levels of minimum wage at study start and end. Twenty-four low-income countries. Adult non-pregnant women (n 150 796). Higher minimum wages were associated (OR; 95 % CI) with reduced underweight in women (0·986; 0·977, 0·995); a decrease that accelerated over time (P-interaction=0·025). Increasing minimum wage was associated with higher obesity (1·019; 1·008, 1·030), but did not alter the rate of increase in obesity prevalence (P-interaction=0·8). A $10 rise in monthly minimum wage was associated (prevalence difference; 95 % CI) with an average decrease of about 0·14 percentage points (-0·14; -0·23, -0·05) for underweight and an increase of about 0·1 percentage points (0·12; 0·04, 0·20) for obesity. The present longitudinal multi-country study showed that a $10 rise in monthly minimum wage significantly accelerated the decline in women's underweight prevalence, but had no association with the pace of growth in obesity prevalence. Thus, modest rises in minimum wage may be beneficial for addressing the protracted underweight problem in poor countries, especially South Asia and parts of Africa.

  13. Flow-duration-frequency behaviour of British rivers based on annual minima data

    NASA Astrophysics Data System (ADS)

    Zaidman, Maxine D.; Keller, Virginie; Young, Andrew R.; Cadman, Daniel

    2003-06-01

    A comparison of different probability distribution models for describing the flow-duration-frequency behaviour of annual minima flow events in British rivers is reported. Twenty-five catchments were included in the study, each having stable and natural flow records of at least 30 years in length. Time series of annual minima D-day average flows were derived for each record using durations ( D) of 1, 7, 30, 60, 90, and 365 days and used to construct low flow frequency curves. In each case the Gringorten plotting position formula was used to determine probabilities (of non-exceedance). Four distribution types—Generalised Extreme Value (GEV), Generalised Logistic (GL), Pearson Type-3 (PE3) and Generalised Pareto (GP)—were used to model the probability distribution function for each site. L-moments were used to parameterise individual models, whilst goodness-of-fit tests were used to assess their match to the sample data. The study showed that where short durations (i.e. 60 days or less) were considered, high storage catchments tended to be best represented by GL and GEV distribution models whilst low storage catchments were best described by PE3 or GEV models. However, these models produced reasonable results only within a limited range (e.g. models for high storage catchments did not produce sensible estimates of return periods where the prescribed flow was less than 10% of the mean flow). For annual minima series derived using long duration flow averages (e.g. more than 90 days), GP and GEV models were generally more applicable. The study suggests that longer duration minima do not conform to the same distribution types as short durations, and that catchment properties can influence the type of distribution selected.
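
    The Gringorten plotting position formula used in the study assigns a non-exceedance probability p_i = (i - 0.44) / (n + 0.12) to the i-th smallest of n annual minima:

    ```python
    # Gringorten plotting positions for ranked annual minima (ascending
    # order): rank 1 is the lowest flow on record.
    def gringorten_positions(n):
        return [(i - 0.44) / (n + 0.12) for i in range(1, n + 1)]

    # Example: a 30-year annual-minima series would be sorted ascending and
    # paired with these probabilities to build a low-flow frequency curve.
    p = gringorten_positions(30)
    ```

    The resulting (flow, probability) pairs are the sample points to which the GEV, GL, PE3, and GP candidate distributions were fitted via L-moments.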

  14. Optimum measurement for unambiguously discriminating two mixed states: General considerations and special cases

    NASA Astrophysics Data System (ADS)

    Herzog, Ulrike; Bergou, János A.

    2006-04-01

    Based on our previous publication [U. Herzog and J. A. Bergou, Phys. Rev. A 71, 050301(R)(2005)] we investigate the optimum measurement for the unambiguous discrimination of two mixed quantum states that occur with given prior probabilities. Unambiguous discrimination of nonorthogonal states is possible in a probabilistic way, at the expense of a nonzero probability of inconclusive results, where the measurement fails. Along with a discussion of the general problem, we give an example illustrating our method of solution. We also provide general inequalities for the minimum achievable failure probability and discuss in more detail the necessary conditions that must be fulfilled when its absolute lower bound, proportional to the fidelity of the states, can be reached.
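
    The fidelity-proportional lower bound mentioned in the abstract can be stated explicitly; following the cited Phys. Rev. A 71, 050301(R) result, for two mixed states with prior probabilities η₁, η₂ the failure probability Q of any unambiguous discrimination measurement obeys

    ```latex
    % Lower bound on the failure probability of unambiguous discrimination
    % of two mixed states \rho_1, \rho_2 with priors \eta_1, \eta_2:
    Q \;\ge\; 2\sqrt{\eta_1 \eta_2}\, F(\rho_1, \rho_2),
    \qquad
    F(\rho_1, \rho_2) = \operatorname{Tr}\sqrt{\sqrt{\rho_1}\,\rho_2\,\sqrt{\rho_1}},
    ```

    and, as the abstract notes, this absolute lower bound is attainable only when additional necessary conditions on the states and priors are fulfilled.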

  15. The minimum record time for PIV measurement in a vessel agitated by a Rushton turbine

    NASA Astrophysics Data System (ADS)

    Šulc, Radek; Ditl, Pavel; Fořt, Ivan; Jašíkova, Darina; Kotek, Michal; Kopecký, Václav; Kysela, Bohuš

    In PIV studies published in the literature focusing on the investigation of the flow field in an agitated vessel, the record time ranges from tenths of a second to several seconds. The aim of this work was to determine the minimum record time for PIV measurement in a vessel agitated by a Rushton turbine that is necessary to obtain relevant results for the velocity field. The velocity fields were measured in a fully baffled cylindrical flat-bottom vessel 400 mm in inner diameter agitated by a Rushton turbine 133 mm in diameter using 2-D Time Resolved Particle Image Velocimetry in the impeller Reynolds number range from 50 000 to 189 000. This Re range secures fully-developed turbulent flow of the agitated liquid. Three liquids of different viscosities were used as the agitated liquid. On the basis of the analysis of the radial and axial components of the mean and fluctuation velocities measured outside the impeller region, it was found that the dimensionless minimum record time is independent of the impeller Reynolds number and equals N·tRmin = 103 ± 19.
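
    The reported scaling N·tRmin = 103 ± 19 directly yields the minimum record time in seconds for any impeller speed N in revolutions per second; the 5 rev/s example below is an assumed operating point, not a value from the study:

    ```python
    # Minimum PIV record time from the dimensionless result N * t_Rmin = 103,
    # with the +/- 19 uncertainty carried along for illustration.
    def min_record_time(n_rev_per_s, coefficient=103.0):
        return coefficient / n_rev_per_s

    t_nominal = min_record_time(5.0)  # 20.6 s at an assumed 5 rev/s
    t_low = min_record_time(5.0, 103.0 - 19.0)
    t_high = min_record_time(5.0, 103.0 + 19.0)
    ```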

  16. Energetics of swimming by the ferret: consequences of forelimb paddling.

    PubMed

    Fish, Frank E; Baudinette, Russell V

    2008-06-01

    The domestic ferret (Mustela putorius furo) swims by alternate strokes of the forelimbs. This pectoral paddling is rare among semi-aquatic mammals. The energetic implications of swimming by pectoral paddling were examined by kinematic analysis and measurement of oxygen consumption. Ferrets maintained a constant stroke frequency, but increased swimming speed by increasing stroke amplitude. The ratio of swimming velocity to foot stroke velocity was low, indicating a low propulsive efficiency. Metabolic rate increased linearly with increasing speed. The cost of transport decreased with increasing swimming speed to a minimum of 3.59 ± 0.28 J N⁻¹ m⁻¹ at U = 0.44 m s⁻¹. The minimum cost of transport for the ferret was greater than values for semi-aquatic mammals using hind-limb paddling, but lower than the minimum cost of transport for the closely related quadrupedally paddling mink. Differences in energetic performance may be due to the amount of muscle recruited for propulsion and to the interrelationship of hydrodynamic drag and interference between flow over the body surface and flow induced by the propulsive appendages.
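
    The weight-specific cost of transport reported here (J N⁻¹ m⁻¹) is metabolic power divided by the product of body weight and swimming speed; the sketch below shows the calculation with placeholder values, not the ferret data from the study:

    ```python
    # Weight-specific cost of transport: COT = P / (m * g * v), in
    # joules per newton per metre. Inputs below are illustrative.
    def cost_of_transport(power_w, mass_kg, speed_m_s, g=9.81):
        return power_w / (mass_kg * g * speed_m_s)

    # e.g. a hypothetical 1 kg swimmer using 15 W at 0.44 m/s:
    cot = cost_of_transport(15.0, 1.0, 0.44)
    ```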

  17. Concentrated energy addition for active drag reduction in hypersonic flow regime

    NASA Astrophysics Data System (ADS)

    Ashwin Ganesh, M.; John, Bibin

    2018-01-01

    Numerical optimization of a hypersonic drag reduction technique based on concentrated energy addition is presented in this study. A reduction in wave drag is realized through concentrated energy addition in the hypersonic flowfield upstream of the blunt body. For the exhaustive optimization presented in this study, an in-house high-precision inviscid flow solver was developed. Studies focused on the identification of the "optimum energy addition location" have revealed the existence of multiple minimum drag points. The wave drag coefficient is observed to drop from 0.85 to 0.45 when 50 Watts of energy is added to an energy bubble of 1 mm radius located 74.7 mm upstream of the stagnation point. A direct proportionality has been identified between energy bubble size and wave drag coefficient. Dependence of the drag coefficient on the upstream added energy magnitude is also revealed. Of the observed multiple minimum drag points, the energy deposition point (EDP) that offers minimum wave drag just after a sharp drop in drag is proposed as the optimal energy addition location.

  18. Emergency assessment of postwildfire debris-flow hazards for the 2011 Motor Fire, Sierra and Stanislaus National Forests, California

    USGS Publications Warehouse

    Cannon, Susan H.; Michael, John A.

    2011-01-01

    This report presents an emergency assessment of potential debris-flow hazards from basins burned by the 2011 Motor fire in the Sierra and Stanislaus National Forests, Calif. Statistical-empirical models are used to estimate the probability and volume of debris flows that may be produced from burned drainage basins as a function of different measures of basin burned extent, gradient, and soil physical properties, and in response to a 30-minute-duration, 10-year-recurrence rainstorm. Debris-flow probability and volume estimates are then combined to form a relative hazard ranking for each basin. This assessment provides critical information for issuing warnings, locating and designing mitigation measures, and planning evacuation timing and routes within the first two years following the fire.
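
    Statistical-empirical postwildfire debris-flow probability models of this kind are commonly logistic in form, P = e^x / (1 + e^x), where x is a linear combination of burned extent, gradient, soil properties, and storm rainfall; the sketch below shows only that generic form, with no claim to the fitted coefficients of this assessment:

    ```python
    import math

    # Generic logistic probability model used in statistical-empirical
    # debris-flow hazard assessments. The linear predictor x would be built
    # from basin burned extent, gradient, soil, and rainfall terms; the
    # coefficients of the actual assessment are not reproduced here.
    def debris_flow_probability(x):
        return math.exp(x) / (1.0 + math.exp(x))
    ```

    Each basin's probability estimate is then combined with a modeled debris-flow volume to produce the relative hazard ranking described in the abstract.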

  19. Hydrologic considerations for estimation of storage-capacity requirements of impounding and side-channel reservoirs for water supply in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2001-01-01

    This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. 
The effects of evaporation on side-channel reservoir storage requirements are incorporated into the storage-requirement estimates. The effects of an instream-flow requirement equal to the 80-percent-duration flow are also incorporated into the storage-requirement estimates.
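
    The core mass-curve idea behind such storage estimates can be sketched with the classic sequent-peak calculation: the required capacity is the largest cumulative deficit of inflow against demand. This is a simplified illustration of the general principle, not the report's nonsequential-mass-curve procedure:

    ```python
    # Sequent-peak sketch: track the running storage deficit under a constant
    # demand and return its maximum, the required reservoir capacity.
    def required_storage(inflows, demand):
        deficit = 0.0
        max_deficit = 0.0
        for q in inflows:
            # deficit grows when demand exceeds inflow, never goes negative
            deficit = max(0.0, deficit + demand - q)
            max_deficit = max(max_deficit, deficit)
        return max_deficit

    # e.g. four periods of inflow against a demand of 3 units per period:
    capacity = required_storage([5.0, 1.0, 1.0, 5.0], 3.0)
    ```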

  20. Convection-Enhanced Transport into Open Cavities: Effect of Cavity Aspect Ratio.

    PubMed

    Horner, Marc; Metcalfe, Guy; Ottino, J M

    2015-09-01

    Recirculating fluid regions occur in the human body both naturally and pathologically. Diffusion is commonly considered the predominant mechanism for mass transport into a recirculating flow region. While this may be true for steady flows, one must also consider the possibility of convective fluid exchange when the outer (free stream) flow is transient. In the case of an open cavity, convective exchange occurs via the formation of lobes at the downstream attachment point of the separating streamline. Previous studies revealed the effect of forcing amplitude and frequency on material transport rates into a square cavity (Horner in J Fluid Mech 452:199-229, 2002). This paper summarizes the effect of cavity aspect ratio on exchange rates. The transport process is characterized using both computational fluid dynamics modeling and dye-advection experiments. Lagrangian analysis of the computed flow field reveals the existence of turnstile lobe transport for this class of flows. Experiments show that material exchange rates do not vary linearly as a function of the cavity aspect ratio (A = W/H). Rather, optima are predicted for A ≈ 2 and A ≈ 2.73, with a minimum occurring at A ≈ 2.5. The minimum occurs at the point where the cavity flow structure bifurcates from a single recirculating flow cell into two corner eddies. These results have significant implications for mass transport environments where the geometry of the flow domain evolves with time, such as coronary stents and growing aneurysms. Indeed, device designers may be able to take advantage of the turnstile-lobe transport mechanism to tailor deposition rates near newly implanted medical devices.

  1. Modeled streamflow metrics on small, ungaged stream reaches in the Upper Colorado River Basin

    USGS Publications Warehouse

    Reynolds, Lindsay V.; Shafroth, Patrick B.

    2016-01-20

    Modeling streamflow is an important approach for understanding landscape-scale drivers of flow and estimating flows where there are no streamgage records. In this study conducted by the U.S. Geological Survey in cooperation with Colorado State University, the objectives were to model streamflow metrics on small, ungaged streams in the Upper Colorado River Basin and identify streams that are potentially threatened with becoming intermittent under drier climate conditions. The Upper Colorado River Basin is a region that is critical for water resources and is also projected to experience large future shifts toward a drier climate. A random forest modeling approach was used to model the relationship between streamflow metrics and environmental variables. Flow metrics were then projected to ungaged reaches in the Upper Colorado River Basin using environmental variables for each stream, represented as raster cells, in the basin. Last, the projected random forest models were used to highlight streams with a minimum-flow coefficient of variation greater than 61.84 percent and a specific mean daily flow less than 0.096; these streams are suggested to be the most likely to shift to intermittent flow regimes under drier climate conditions. Map projection products can help scientists, land managers, and policymakers understand current hydrology in the Upper Colorado River Basin and make informed decisions regarding water resources. With knowledge of which streams are likely to undergo significant drying in the future, managers and scientists can plan for stream-dependent ecosystems and human water users.
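The screening rule described in this abstract reduces to a simple two-condition test on each reach's modeled metrics. The sketch below applies the two thresholds stated in the text (61.84 percent minimum-flow coefficient of variation, 0.096 specific mean daily flow); the reach names and metric values are hypothetical examples, not data from the study.

```python
# Hedged sketch of the intermittency-screening rule from the abstract.
# Thresholds come from the text; the reach values below are hypothetical.

def is_threatened(min_flow_cv_pct, specific_mean_daily_flow):
    """Flag a reach whose minimum-flow coefficient of variation exceeds
    61.84 percent AND whose specific mean daily flow is below 0.096."""
    return min_flow_cv_pct > 61.84 and specific_mean_daily_flow < 0.096

reaches = {
    "reach_a": (75.0, 0.05),   # highly variable, low flow
    "reach_b": (40.0, 0.20),   # stable, well-watered
    "reach_c": (70.0, 0.15),   # variable but ample flow
}
flags = {name: is_threatened(cv, q) for name, (cv, q) in reaches.items()}
```

Note that both conditions must hold: a variable but well-watered reach (reach_c) is not flagged.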

  2. Assessing Airflow Sensitivity to Healthy and Diseased Lung Conditions in a Computational Fluid Dynamics Model Validated In Vitro.

    PubMed

    Sul, Bora; Oppito, Zachary; Jayasekera, Shehan; Vanger, Brian; Zeller, Amy; Morris, Michael; Ruppert, Kai; Altes, Talissa; Rakesh, Vineet; Day, Steven; Robinson, Risa; Reifman, Jaques; Wallqvist, Anders

    2018-05-01

    Computational models are useful for understanding respiratory physiology. Crucial to such models are the boundary conditions specifying the flow conditions at truncated airway branches (terminal flow rates). However, most studies make assumptions about these values, which are difficult to obtain in vivo. We developed a computational fluid dynamics (CFD) model of airflows for steady expiration to investigate how terminal flows affect airflow patterns in respiratory airways. First, we measured in vitro airflow patterns in a physical airway model, using particle image velocimetry (PIV). The measured and computed airflow patterns agreed well, validating our CFD model. Next, we used the lobar flow fractions from a healthy or chronic obstructive pulmonary disease (COPD) subject as constraints to derive different terminal flow rates (i.e., three healthy and one COPD) and computed the corresponding airflow patterns in the same geometry. To assess airflow sensitivity to the boundary conditions, we used the correlation coefficient of the shape similarity (R) and the root-mean-square of the velocity magnitude difference (Drms) between two velocity contours. Airflow patterns in the central airways were similar across healthy conditions (minimum R, 0.80) despite variations in terminal flow rates but markedly different for COPD (minimum R, 0.26; maximum Drms, ten times that of healthy cases). In contrast, those in the upper airway were similar for all cases. Our findings quantify how variability in terminal and lobar flows contributes to airflow patterns in respiratory airways. They highlight the importance of using lobar flow fractions to examine physiologically relevant airflow characteristics.
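The two sensitivity metrics named in the abstract, the correlation coefficient of shape similarity (R) and the root-mean-square velocity-magnitude difference (Drms), can be sketched as follows. The definitions used here (Pearson correlation and pointwise RMS difference over flattened contour samples) are standard assumptions consistent with the abstract, not the authors' exact formulas.

```python
# Hedged sketch: similarity metrics between two velocity-magnitude contours,
# assuming Pearson correlation for R and pointwise RMS difference for Drms.
from math import sqrt

def pearson_r(x, y):
    """Correlation coefficient of shape similarity between two
    flattened velocity-magnitude samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def d_rms(x, y):
    """Root-mean-square of the pointwise velocity-magnitude difference."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
```

Identical contours give R = 1 and Drms = 0; contours with the same shape but scaled magnitudes keep R = 1 while Drms grows, which is why the study reports both metrics.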

  3. 45 CFR 155.1210 - Maintenance of records.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) of this section include, at a minimum, the following: (1) Information concerning management and..., including cash flow statements, and accounts receivable and matters pertaining to the costs of operations...

  4. Emergency assessment of post-fire debris-flow hazards for the 2013 Powerhouse fire, southern California

    USGS Publications Warehouse

    Staley, Dennis M.; Smoczyk, Gregory M.; Reeves, Ryan R.

    2013-01-01

    Wildfire dramatically alters the hydrologic response of a watershed such that even modest rainstorms can produce dangerous flash floods and debris flows. Existing empirical models were used to predict the probability and magnitude of debris-flow occurrence in response to a 10-year recurrence interval rainstorm for the 2013 Powerhouse fire near Lancaster, California. Overall, the models predict a relatively low probability of debris-flow occurrence in response to the design storm. However, volumetric predictions suggest that any debris flows that do occur may entrain a significant volume of material, with 44 of the 73 basins identified as having potential debris-flow volumes between 10,000 and 100,000 cubic meters. These results suggest that even though the likelihood of debris flow is relatively low, the consequences of post-fire debris-flow initiation within the burn area may be significant for downstream populations, infrastructure, and wildlife and water resources. Given these findings, we recommend that residents, emergency managers, and public works departments pay close attention to weather forecasts and National Weather Service-issued Debris Flow and Flash Flood Outlooks, Watches, and Warnings, and that residents adhere to any evacuation orders.

  5. Estimation of Leakage Potential of Selected Sites in Interstate and Tri-State Canals Using Geostatistical Analysis of Selected Capacitively Coupled Resistivity Profiles, Western Nebraska, 2004

    USGS Publications Warehouse

    Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.

    2009-01-01

    With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values over depth levels whose layer thickness increased geometrically with depth, which biased the mean-resistivity values toward the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method to compute a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model. The minimum-unadjusted method considers the effects of homogeneous confining units.
The minimum-adjusted method also is developed to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004. A reevaluation of these sites using the mean, minimum-unadjusted, and minimum-adjusted methods was performed to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, for which the minimum-unadjusted and minimum-adjusted methods accounted for the confining-unit effect. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than those estimated by the mean method. For most sites, the local heterogeneity adjustment procedure of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.

  6. Uncertainties in predicting debris flow hazards following wildfire [Chapter 19

    Treesearch

    Kevin D. Hyde; Karin Riley; Cathelijne Stoof

    2017-01-01

    Wildfire increases the probability of debris flows posing hazardous conditions where values‐at‐risk exist downstream of burned areas. Conditions and processes leading to postfire debris flows usually follow a general sequence defined here as the postfire debris flow hazard cascade: biophysical setting, fire processes, fire effects, rainfall, debris flow, and values‐at‐...

  7. How Life History Can Sway the Fixation Probability of Mutants

    PubMed Central

    Li, Xiang-Yi; Kurokawa, Shun; Giaimo, Stefano; Traulsen, Arne

    2016-01-01

    In this work, we study the effects of demographic structure on evolutionary dynamics when selection acts on reproduction, survival, or both. In contrast to the previously discovered pattern that the fixation probability of a neutral mutant decreases while the population becomes younger, we show that a mutant with a constant selective advantage may have a maximum or a minimum of the fixation probability in populations with an intermediate fraction of young individuals. This highlights the importance of life history and demographic structure in studying evolutionary dynamics. We also illustrate the fundamental differences between selection on reproduction and selection on survival when age structure is present. In addition, we evaluate the relative importance of size and structure of the population in determining the fixation probability of the mutant. Our work lays the foundation for also studying density- and frequency-dependent effects in populations when demographic structures cannot be neglected. PMID:27129737

  8. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form c x^k (1 - x)^m, and the MMSE estimates are shown to be linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  9. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
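The linearity result in these two abstracts follows from beta-binomial conjugacy: with a Beta(a, b) prior on the rate and binomially distributed jump counts, the posterior mean is linear in the number of observed jumps. The sketch below shows this standard textbook fact; it is an illustration of the mechanism, not the authors' recursive estimator.

```python
# Hedged sketch: the posterior-mean (MMSE) estimate of a jump rate p under
# a Beta(a, b) prior with binomially distributed jump counts. Standard
# beta-binomial conjugacy; an illustration, not the paper's exact estimator.

def mmse_rate_estimate(a, b, jumps, trials):
    """Posterior mean of p ~ Beta(a, b) after observing `jumps` jumps
    in `trials` opportunities. The estimate is linear in `jumps`."""
    return (a + jumps) / (a + b + trials)
```

Because the estimate is an affine function of the jump count, the optimal unconstrained MMSE estimator coincides with the best linear estimator, which is why the abstracts report insignificant differences between the two.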

  10. A computer program for estimating instream travel times and concentrations of a potential contaminant in the Yellowstone River, Montana

    USGS Publications Warehouse

    McCarthy, Peter M.

    2006-01-01

    The Yellowstone River is very important in a variety of ways to the residents of southeastern Montana; however, it is especially vulnerable to spilled contaminants. In 2004, the U.S. Geological Survey, in cooperation with Montana Department of Environmental Quality, initiated a study to develop a computer program to rapidly estimate instream travel times and concentrations of a potential contaminant in the Yellowstone River using regression equations developed in 1999 by the U.S. Geological Survey. The purpose of this report is to describe these equations and their limitations, describe the development of a computer program to apply the equations to the Yellowstone River, and provide detailed instructions on how to use the program. This program is available online at [http://pubs.water.usgs.gov/sir2006-5057/includes/ytot.xls]. The regression equations provide estimates of instream travel times and concentrations in rivers where little or no contaminant-transport data are available. Equations were developed and presented for the most probable flow velocity and the maximum probable flow velocity. These velocity estimates can then be used to calculate instream travel times and concentrations of a potential contaminant. The computer program was developed so estimation equations for instream travel times and concentrations can be solved quickly for sites along the Yellowstone River between Corwin Springs and Sidney, Montana. The basic types of data needed to run the program are spill data, streamflow data, and data for locations of interest along the Yellowstone River. Data output from the program includes spill location, river mileage at specified locations, instantaneous discharge, mean-annual discharge, drainage area, and channel slope. Travel times and concentrations are provided for estimates of the most probable velocity of the peak concentration and the maximum probable velocity of the peak concentration. 
    Verification of estimates of instream travel times and concentrations for the Yellowstone River requires information about the flow velocity throughout the 520 mi of river in the study area. Dye-tracer studies would provide the best data about flow velocities and would provide the best verification of instream travel times and concentrations estimated from this computer program; however, data from such studies do not currently (2006) exist, and new studies would be expensive and time-consuming. An alternative approach used in this study for verification of instream travel times is based on the use of flood-wave velocities determined from recorded streamflow hydrographs at selected mainstem streamflow-gaging stations along the Yellowstone River. The ratios of flood-wave velocity to the most probable velocity for the base flow estimated from the computer program are within the accepted range of 2.5 to 4.0 and indicate that flow velocities estimated from the computer program are reasonable for the Yellowstone River. The ratios of flood-wave velocity to the maximum probable velocity are within a range of 1.9 to 2.8 and indicate that the maximum probable flow velocities estimated from the computer program, which correspond to the shortest travel times and maximum probable concentrations, are conservative and reasonable for the Yellowstone River.
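The verification logic above is a range check on a velocity ratio. A minimal sketch, with the accepted ranges taken from the text and purely hypothetical velocity values:

```python
# Hedged sketch of the report's ratio-based verification check.
# Accepted ranges (2.5-4.0 and 1.9-2.8) are from the text; the velocity
# values below are hypothetical, not data from the report.

def ratio_in_range(flood_wave_v, estimated_v, lo, hi):
    """Return (passes, ratio) for the flood-wave/estimated-velocity ratio."""
    r = flood_wave_v / estimated_v
    return lo <= r <= hi, r

# Most probable velocity check (accepted range 2.5 to 4.0).
ok_most, r_most = ratio_in_range(9.0, 3.0, 2.5, 4.0)
# Maximum probable velocity check (accepted range 1.9 to 2.8).
ok_max, r_max = ratio_in_range(9.0, 4.2, 1.9, 2.8)
```

A ratio inside the accepted band indicates the program's velocity estimate is consistent with the observed flood-wave behavior at that station.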

  11. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

    2018-03-01

    The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies that protect LTs against a fixed number of link failures, such as single-link failure. However, few studies involve failure-probability constraints when building LTs. It is worth noting that the links of an LT play roles of differing importance under failure scenarios. When calculating the failure probability of an LT, the importance of each of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under both an independent failure model and a shared-risk-link-group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.

  12. Flow visualization techniques in the Airborne Laser Laboratory program

    NASA Technical Reports Server (NTRS)

    Walterick, R. E.; Vankuren, J. T.

    1980-01-01

    A turret/fairing assembly for laser applications was designed and tested. Wind tunnel testing was conducted using flow visualization techniques. The techniques used have included the methods of tufting, encapsulated liquid crystals, oil flow, sublimation and schlieren and shadowgraph photography. The results were directly applied to the design of fairing shapes for minimum drag and reduced turret buffet. In addition, the results are of primary importance to the study of light propagation paths in the near flow field of the turret cavity. Results indicate that the flow in the vicinity of the turret is an important factor for consideration in the design of suitable turret/fairing or aero-optic assemblies.

  13. A pore-network model for foam formation and propagation in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharabaf, H.; Yortsos, Y.C.

    1996-12-31

    We present a pore-network model, based on a pores-and-throats representation of the porous medium, to simulate the generation and mobilization of foams in porous media. The model allows various parameters and processes, treated empirically in current models, to be quantified and interpreted. Contrary to previous works, we also consider a dynamic (invasion) process in addition to a static one. We focus on the properties of the displacement, the onset of foam flow and mobilization, the foam texture, and the sweep efficiencies obtained. The model simulates an invasion process in which gas invades a porous medium occupied by a surfactant solution. The controlling parameter is the snap-off probability, which in turn determines the foam quality for various size distributions of pores and throats. For the front to advance, the applied pressure gradient needs to be sufficiently high to displace a series of lamellae along a minimum-capillary-resistance (threshold) path. We determine this path using a novel algorithm. The fraction of flowing lamellae, X_f (and, consequently, the fraction of trapped lamellae, X_t), which are currently empirical, are also calculated. The model allows the delineation of conditions under which high-quality (strong) or low-quality (weak) foams form. In either case, the sweep efficiencies in displacements in various media are calculated. In particular, the invasion by foam of low-permeability layers during injection into a heterogeneous system is demonstrated.

  14. Knudsen paradox in granular gases and the roles of thermal and athermal walls

    NASA Astrophysics Data System (ADS)

    Gupta, Ronak; Alam, Meheboob

    2017-11-01

    The well-known 'Knudsen paradox' (which refers to the decrease of the mass-flow rate of a gas with increasing Knudsen number Kn, reaching a minimum at Kn ~ O(1) and increasing logarithmically with Kn as Kn → ∞) is revisited using the direct simulation Monte Carlo (DSMC) method. It is shown that the Knudsen paradox survives in the acceleration-driven Poiseuille flow of a granular gas in contact with thermal walls. This result is in contradiction with recent molecular dynamics simulations (Alam et al., J. Fluid Mech., vol. 782, 2015, pp. 99-126) that revealed the absence of the Knudsen minimum in granular Poiseuille flow. The above conundrum is resolved by distinguishing between 'thermal' and 'athermal' walls, and it is shown that, for both molecular and granular gases, the momentum transfer to athermal walls is much lower than that to thermal walls, which is directly responsible for the 'anomalous' flow-rate variation with Kn. In the continuum limit of Kn → 0, the athermal walls are found to be closely related to 'non-flux/adiabatic' walls. The underlying mechanistic arguments lead to Maxwell's slip boundary condition, and a possible characterization of athermal walls in terms of an effective specularity coefficient is discussed.

  15. Potatoes and Trout: Maintaining Robust Agriculture and a Healthy Trout Fishery in the Central Sands of Wisconsin

    NASA Astrophysics Data System (ADS)

    Fienen, M. N.; Bradbury, K. R.; Kniffin, M.; Barlow, P. M.; Krause, J.; Westenbroek, S.; Leaf, A.

    2015-12-01

    The well-drained sandy soil in the Wisconsin Central Sands is ideal for growing potatoes, corn, and other vegetables. A shallow sand and gravel aquifer provides abundant water for agricultural irrigation but also supplies critical base flow to cold-water trout streams. These needs compete with one another, and stakeholders from various perspectives are collaborating to seek solutions. Stakeholders were engaged in providing and verifying data to guide construction of a groundwater flow model which was used with linear and sequential linear programming to evaluate optimal tradeoffs between agricultural pumping and ecologically based minimum base flow values. The connection between individual irrigation wells as well as industrial and municipal supply and streamflow depletion can be evaluated using the model. Rather than addressing 1000s of wells individually, a variety of well management groups were established through k-means clustering. These groups are based on location, potential impact, water-use categories, depletion potential, and other factors. Through optimization, pumping rates were reduced to attain mandated minimum base flows. This formalization enables exploration of possible solutions for the stakeholders, and provides a tool which is transparent and forms a basis for discussion and negotiation.
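The well-grouping step in the abstract above uses k-means clustering so that thousands of wells can be managed as a small number of groups. A minimal pure-Python sketch of Lloyd's algorithm on a single attribute follows; the study clustered on several factors (location, impact, water-use category, depletion potential), so the single hypothetical "depletion potential" attribute here is a simplification, and k >= 2 is assumed.

```python
# Hedged sketch: 1-D k-means (Lloyd's algorithm) grouping wells by a single
# hypothetical depletion-potential value. The actual study clustered on
# several factors; this illustrates only the grouping mechanism. Assumes k >= 2.

def kmeans_1d(values, k, iters=50):
    """Cluster scalar values into k groups; return (centroids, groups)."""
    lo, hi = min(values), max(values)
    # Spread initial centroids evenly across the data range.
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        # Move each centroid to the mean of its group (keep it if empty).
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids, groups

depletion = [0.9, 1.0, 1.1, 4.8, 5.0, 5.2]  # hypothetical well attributes
centroids, groups = kmeans_1d(depletion, 2)
```

Once wells are grouped, the optimization can assign one pumping-reduction decision per group instead of one per well, shrinking the linear-programming problem dramatically.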

  16. Laryngeal Aerodynamics in Healthy Older Adults and Adults With Parkinson's Disease.

    PubMed

    Matheron, Deborah; Stathopoulos, Elaine T; Huber, Jessica E; Sussman, Joan E

    2017-03-01

    The present study compared laryngeal aerodynamic function of healthy older adults (HOA) to adults with Parkinson's disease (PD) while speaking at a comfortable and increased vocal intensity. Laryngeal aerodynamic measures (subglottal pressure, peak-to-peak flow, minimum flow, and open quotient [OQ]) were compared between HOAs and individuals with PD who had a diagnosis of hypophonia. Increased vocal intensity was elicited via monaurally presented multitalker background noise. At a comfortable speaking intensity, HOAs and individuals with PD produced comparable vocal intensity, rates of vocal fold closure, and minimum flow. HOAs used smaller OQs, higher subglottal pressure, and lower peak-to-peak flow than individuals with PD. Both groups increased speaking intensity when speaking in noise to the same degree. However, HOAs produced increased intensity with greater driving pressure, faster vocal fold closure rates, and smaller OQs than individuals with PD. Monaural background noise elicited equivalent vocal intensity increases in HOAs and individuals with PD. Although both groups used laryngeal mechanisms as expected to increase sound pressure level, they used these mechanisms to different degrees. The HOAs appeared to have better control of the laryngeal mechanism to make changes to their vocal intensity.

  17. Browns Ferry-1 single-loop operation tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    March-Leuba, J.; Wood, R.T.; Otaduy, P.J.

    1985-09-01

    This report documents the results of the stability tests performed on February 9, 1985, at the Browns Ferry Nuclear Power Plant Unit 1 under single-loop operating conditions. The observed increase in neutron noise during single-loop operation is solely due to an increase in flow noise. The Browns Ferry-1 reactor has been found to be stable in all modes of operation attained during the present tests. The most unstable test plateau corresponded to minimum recirculation pump speed in single-loop operation (test BFTP3). This operating condition had the minimum flow and maximum power-to-flow ratio. The estimated decay ratio in this plateau is 0.53. The decay ratio decreased as the flow was increased during single-loop operation (down to 0.34 for test plateau BFTP6). This observation implies that the core-wide reactor stability follows the same trends in single-loop as it does in two-loop operation. Finally, no local or higher mode instabilities were found in the data taken from local power range monitors. The decay ratios estimated from the local power range monitors were not significantly different from those estimated from the average power range monitors.

  18. Postwildfire debris-flow hazard assessment of the area burned by the 2012 Little Bear Fire, south-central New Mexico

    USGS Publications Warehouse

    Tillery, Anne C.; Matherne, Anne Marie

    2013-01-01

    A preliminary assessment of the debris-flow potential from 56 drainage basins burned by the Little Bear Fire in south-central New Mexico in June 2012 was developed. The Little Bear Fire burned approximately 179 square kilometers (km2) (44,330 acres), including about 143 km2 (35,300 acres) of National Forest System lands of the Lincoln National Forest. Within the Lincoln National Forest, about 72 km2 (17,664 acres) of the White Mountain Wilderness were burned. The burn area also included about 34 km2 (8,500 acres) of private lands. Burn severity was high or moderate on 53 percent of the burn area. The area burned is at risk of substantial postwildfire erosion, such as that caused by debris flows and flash floods. A postwildfire debris-flow hazard assessment of the area burned by the Little Bear Fire was performed by the U.S. Geological Survey in cooperation with the U.S. Department of Agriculture Forest Service, Lincoln National Forest. A set of two empirical hazard-assessment models, developed by using data from recently burned drainage basins throughout the intermountain Western United States, was used to estimate the probability of debris-flow occurrence and the volume of debris flows along the burn area drainage network and for selected drainage basins within the burn area. The models incorporate measures of areal burn extent and severity, topography, soils, and storm rainfall intensity to estimate the probability and volume of debris flows following the fire. Relative hazard rankings of postwildfire debris flows were produced by summing the estimated probability and volume rankings to illustrate those areas with the highest potential occurrence of debris flows with the largest volumes.
The probability that a drainage basin could produce debris flows and the volume of a possible debris flow at the basin outlet were estimated for three design storms: (1) a 2-year-recurrence, 30-minute-duration rainfall of 27 millimeters (mm) (a 50 percent chance of occurrence in any given year); (2) a 10-year-recurrence, 30-minute-duration rainfall of 42 mm (a 10 percent chance of occurrence in any given year); and (3) a 25-year-recurrence, 30-minute-duration rainfall of 51 mm (a 4 percent chance of occurrence in any given year). Thirty-nine percent of the 56 drainage basins modeled have a high (greater than 80 percent) probability of debris flows in response to the 2-year design storm; 80 percent of the modeled drainage basins have a high probability of debris flows in response to the 25-year design storm. For debris-flow volume, 7 percent of the modeled drainage basins have an estimated debris-flow volume greater than 100,000 cubic meters (m3) in response to the 2-year design storm; 9 percent of the drainage basins are included in the greater than 100,000 m3 category for both the 10-year and the 25-year design storms. Drainage basins in the greater than 100,000 m3 volume category also received the highest combined hazard ranking. The maps presented herein may be used to prioritize areas where emergency erosion mitigation or other protective measures may be needed prior to rainstorms within these drainage basins, their outlets, or areas downstream from these drainage basins within the 2- to 3-year period of vulnerability. This work is preliminary and is subject to revision. The assessment herein is provided on the condition that neither the U.S. Geological Survey nor the U.S. Government may be held liable for any damages resulting from the authorized or unauthorized use of the assessment.
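The design-storm descriptions above all follow the standard convention that a storm with recurrence interval T years has a 100/T percent chance of occurring in any given year (2-year storm, 50 percent; 10-year, 10 percent; 25-year, 4 percent). A one-line sketch of that convention:

```python
# Hedged sketch: annual exceedance chance for a design storm with a given
# recurrence interval, matching the percentages quoted in the abstract.

def annual_chance_pct(recurrence_interval_years):
    """Annual chance of occurrence, in percent, for a T-year storm."""
    return 100.0 / recurrence_interval_years
```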

  19. Numerical Studies of a Supersonic Fluidic Diverter Actuator for Flow Control

    NASA Technical Reports Server (NTRS)

    Gokoglu, Suleyman A.; Kuczmarski, Maria A.; Culley, Dennis E.; Raghu, Surya

    2010-01-01

    The analysis of the internal flow structure and performance of a specific fluidic diverter actuator, previously studied by time-dependent numerical computations for subsonic flow, is extended to include operation with supersonic actuator exit velocities. This understanding will aid in the development of fluidic diverters with minimum pressure losses and advanced designs of flow control actuators. The self-induced oscillatory behavior of the flow is successfully predicted, and the calculated oscillation frequencies with respect to flow rate are in excellent agreement with our experimental measurements. The oscillation frequency increases with Mach number, but its dependence on flow rate changes from the subsonic to the transonic to the supersonic regime. The delay time for the initiation of oscillations depends on the flow rate and the acoustic speed in the gaseous medium for subsonic flow, but is unaffected by the flow rate for supersonic conditions.

  20. Anomalous maximum and minimum for the dissociation of a geminate pair in energetically disordered media

    NASA Astrophysics Data System (ADS)

    Govatski, J. A.; da Luz, M. G. E.; Koehler, M.

    2015-01-01

    We study the geminate-pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy that reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation in energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.

  1. Conversion and matched filter approximations for serial minimum-shift keyed modulation

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Ryan, C. R.; Stilwell, J. H.

    1982-01-01

Serial minimum-shift keyed (MSK) modulation, a technique for generating and detecting MSK using series filtering, is ideally suited for high data rate applications provided the required conversion and matched filters can be closely approximated. Low-pass implementations of these filters as parallel in-phase- and quadrature-mixer structures are characterized in this paper in terms of signal-to-noise ratio (SNR) degradation from ideal and envelope deviation. Several hardware implementation techniques utilizing microwave devices or lumped elements are presented. Optimization of parameter values results in realizations whose SNR degradation is less than 0.5 dB at error probabilities of 10^-6.

  2. Final Independent External Peer Review Report, Cache la Poudre at Greeley, Colorado General Investigation Feasibility Study

    DTIC Science & Technology

    2014-06-06

    Adaptive Management Plan NED national economic development NEPA National Environmental Policy Act NER National Ecosystem Restoration NFIP... management and flow maintenance (e.g., flood water height, channel and culvert sizing) are based on high water events (i.e., FEMA base flood – 1% or 100...Minimum 15 years of experience in economics X Minimum 15 years of experience in flood risk management analysis and benefits calculations X Direct

  3. Analysis of Fractional Flow for Transient Two-Phase Flow in Fractal Porous Medium

    NASA Astrophysics Data System (ADS)

    Lu, Ting; Duan, Yonggang; Fang, Quantang; Dai, Xiaolu; Wu, Jinsui

    2016-03-01

Prediction of fractional flow in fractal porous medium is important for reservoir engineering and chemical engineering as well as hydrology. A physical conceptual fractional flow model of transient two-phase flow is developed in fractal porous medium based on the fractal characteristics of pore-size distribution and on the approximation that the porous medium consists of a bundle of tortuous capillaries. The analytical expression for fractional flow of the wetting phase is presented, and the proposed expression is a function of structural parameters (such as tortuosity fractal dimension, pore fractal dimension, maximum and minimum diameters of capillaries) and fluid properties (such as contact angle, viscosity and interfacial tension) in fractal porous medium. The sensitive parameters that influence fractional flow and its derivative are formulated, and their impacts on fractional flow are discussed.
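The capillary-bundle picture can be made concrete with a simplified sketch. Assuming the standard fractal number-size law N(≥D) = (Dmax/D)^Df for the pore-size distribution and Hagen-Poiseuille flow (q ∝ D^4) in each tube, with equal pressure gradient and viscosity in every capillary (simplifications relative to the full model in the abstract; the function name and cutoff-diameter parameterization are illustrative), the fraction of flow carried by capillaries below a cutoff diameter has a closed form:

```python
def wetting_fractional_flow(d_min, d_max, d_cut, df):
    """Fraction of total volumetric flow carried by capillaries with
    diameter <= d_cut in a fractal bundle.

    The number density of tubes scales as D**-(df + 1) (from
    N(>=D) = (d_max / D)**df) and each tube carries q ~ D**4, so the
    cumulative flow scales as D**(4 - df); the ratio below is valid
    for df < 4.  Viscosity contrast and transient effects from the
    full model are deliberately neglected in this sketch.
    """
    if not (d_min <= d_cut <= d_max):
        raise ValueError("cutoff must lie between d_min and d_max")
    e = 4.0 - df
    return (d_cut**e - d_min**e) / (d_max**e - d_min**e)
```

If the wetting phase is taken to occupy the smallest capillaries up to d_cut, this ratio plays the role of the wetting-phase fractional flow.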

  4. Radiant energy receiver having improved coolant flow control means

    DOEpatents

    Hinterberger, H.

    1980-10-29

    An improved coolant flow control for use in radiant energy receivers of the type having parallel flow paths is disclosed. A coolant performs as a temperature dependent valve means, increasing flow in the warmer flow paths of the receiver, and impeding flow in the cooler paths of the receiver. The coolant has a negative temperature coefficient of viscosity which is high enough such that only an insignificant flow through the receiver is experienced at the minimum operating temperature of the receiver, and such that a maximum flow is experienced at the maximum operating temperature of the receiver. The valving is accomplished by changes in viscosity of the coolant in response to the coolant being heated and cooled. No remotely operated valves, comparators or the like are needed.

  5. A parametric study of the microwave plasma-assisted combustion of premixed ethylene/air mixtures

    NASA Astrophysics Data System (ADS)

    Fuh, Che A.; Wu, Wei; Wang, Chuji

    2017-11-01

A parametric study of microwave argon plasma assisted combustion (PAC) of premixed ethylene/air mixtures was carried out using visual imaging, optical emission spectroscopy and cavity ringdown spectroscopy as diagnostic tools. The parameters investigated included the plasma feed gas flow rate, the plasma power, the fuel equivalence ratio and the total flow rate of the fuel/air mixture. The combustion enhancement effects were characterized by the minimum ignition power, the flame length and the fuel efficiency of the combustor. It was found that: (1) increasing the plasma feed gas flow rate resulted in a decrease in the flame length, an increase in the minimum ignition power for near stoichiometric fuel equivalence ratios and a corresponding decrease in the minimum ignition power for ultra-lean and rich fuel equivalence ratios; (2) at a constant plasma power, increasing the total flow rate of the ethylene/air mixture from 1.0 slm to 1.5 slm resulted in an increase in the flame length and a reduction in the fuel efficiency; (3) increasing the plasma power resulted in a slight increase in flame length as well as improved fuel efficiency with fewer C2(d) and CH(A) radicals present downstream of the flame; (4) increasing the fuel equivalence ratio caused an increase in flame length but at a reduced fuel efficiency when plasma power was kept constant; and (5) the ground state OH(X) number density was on the order of 10^15 molecules/cm^3 and was observed to drop downstream along the propagation axis of the flame for all parameters investigated. Results suggest that each of the parameters independently influences the PAC processes.

  6. Ground water stratification and delivery of nitrate to an incised stream under varying flow conditions.

    PubMed

    Böhlke, J K; O'Connell, Michael E; Prestegaard, Karen L

    2007-01-01

    Ground water processes affecting seasonal variations of surface water nitrate concentrations were investigated in an incised first-order stream in an agricultural watershed with a riparian forest in the coastal plain of Maryland. Aquifer characteristics including sediment stratigraphy, geochemistry, and hydraulic properties were examined in combination with chemical and isotopic analyses of ground water, macropore discharge, and stream water. The ground water flow system exhibits vertical stratification of hydraulic properties and redox conditions, with sub-horizontal boundaries that extend beneath the field and adjacent riparian forest. Below the minimum water table position, ground water age gradients indicate low recharge rates (2-5 cm yr(-1)) and long residence times (years to decades), whereas the transient ground water wedge between the maximum and minimum water table positions has a relatively short residence time (months to years), partly because of an upward increase in hydraulic conductivity. Oxygen reduction and denitrification in recharging ground waters are coupled with pyrite oxidation near the minimum water table elevation in a mottled weathering zone in Tertiary marine glauconitic sediments. The incised stream had high nitrate concentrations during high flow conditions when much of the ground water was transmitted rapidly across the riparian zone in a shallow oxic aquifer wedge with abundant outflow macropores, and low nitrate concentrations during low flow conditions when the oxic wedge was smaller and stream discharge was dominated by upwelling from the deeper denitrified parts of the aquifer. Results from this and similar studies illustrate the importance of near-stream geomorphology and subsurface geology as controls of riparian zone function and delivery of nitrate to streams in agricultural watersheds.

  7. Ground water stratification and delivery of nitrate to an incised stream under varying flow conditions

    USGS Publications Warehouse

    Böhlke, J.K.; O'Connell, M. E.; Prestegaard, K.L.

    2007-01-01

Ground water processes affecting seasonal variations of surface water nitrate concentrations were investigated in an incised first-order stream in an agricultural watershed with a riparian forest in the coastal plain of Maryland. Aquifer characteristics including sediment stratigraphy, geochemistry, and hydraulic properties were examined in combination with chemical and isotopic analyses of ground water, macropore discharge, and stream water. The ground water flow system exhibits vertical stratification of hydraulic properties and redox conditions, with sub-horizontal boundaries that extend beneath the field and adjacent riparian forest. Below the minimum water table position, ground water age gradients indicate low recharge rates (2-5 cm yr^-1) and long residence times (years to decades), whereas the transient ground water wedge between the maximum and minimum water table positions has a relatively short residence time (months to years), partly because of an upward increase in hydraulic conductivity. Oxygen reduction and denitrification in recharging ground waters are coupled with pyrite oxidation near the minimum water table elevation in a mottled weathering zone in Tertiary marine glauconitic sediments. The incised stream had high nitrate concentrations during high flow conditions when much of the ground water was transmitted rapidly across the riparian zone in a shallow oxic aquifer wedge with abundant outflow macropores, and low nitrate concentrations during low flow conditions when the oxic wedge was smaller and stream discharge was dominated by upwelling from the deeper denitrified parts of the aquifer. Results from this and similar studies illustrate the importance of near-stream geomorphology and subsurface geology as controls of riparian zone function and delivery of nitrate to streams in agricultural watersheds. © ASA, CSSA, SSSA.

  8. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
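The two-step idea in this abstract, simulate minimum separations and then fit an extreme-value model to them, can be sketched with a toy geometry (a spacecraft at the centre of a unit cube with uniformly scattered debris, standing in for the actual orbital propagation; the Weibull tail model is the asymptotic law for minima of positive distances, fitted here by a simple log-log regression):

```python
import math
import random

random.seed(42)

def min_debris_distance(n_debris):
    """One Monte Carlo trial: spacecraft at the centre of a unit cube,
    debris objects placed uniformly at random; return the distance to
    the nearest debris object (a toy stand-in for sampling the orbital
    motion at a random time)."""
    best = float("inf")
    for _ in range(n_debris):
        dx = random.random() - 0.5
        dy = random.random() - 0.5
        dz = random.random() - 0.5
        best = min(best, math.sqrt(dx * dx + dy * dy + dz * dz))
    return best

# Empirical set of minimum distances, as in the paper's method.
samples = sorted(min_debris_distance(50) for _ in range(2000))

def prob_closer_than(r):
    """Empirical probability that the nearest debris is within r."""
    return sum(1 for d in samples if d < r) / len(samples)

# Fit the Weibull minimum model F(r) = 1 - exp(-(r/b)**a) by linear
# regression of ln(-ln(1 - F)) on ln(r) over the body of the sample.
xs, ys = [], []
for i, d in enumerate(samples):
    F = (i + 0.5) / len(samples)
    if 0.01 < F < 0.9:
        xs.append(math.log(d))
        ys.append(math.log(-math.log(1.0 - F)))
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
shape = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```

The fitted shape and scale then give smooth close-approach probabilities in the tail, where raw empirical counts are noisy.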

  9. Modelling of spatial contaminant probabilities of occurrence of chlorinated hydrocarbons in an urban aquifer.

    PubMed

    Greis, Tillman; Helmholz, Kathrin; Schöniger, Hans Matthias; Haarstrick, Andreas

    2012-06-01

In this study, a 3D urban groundwater model is presented which serves for the calculation of multispecies contaminant transport in the subsurface on the regional scale. The total model consists of two submodels, the groundwater flow and reactive transport model, and is validated against field data. The model equations are solved applying the finite element method. A sensitivity analysis is carried out to perform parameter identification of flow, transport and reaction processes. Building on the latter, stochastic variation of flow, transport, and reaction input parameters and Monte Carlo simulation are used in calculating probabilities of pollutant occurrence in the domain. These probabilities could help identify future contamination hotspots and quantify the associated damages. Application and validation are exemplarily shown for a contaminated site in Braunschweig (Germany), where a vast plume of chlorinated ethenes pollutes the groundwater. With respect to field application, the modelling methods proved to be feasible and helpful tools for assessing monitored natural attenuation (MNA) and the risk that might be reduced by remediation actions.
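The probability-of-occurrence step can be illustrated with a deliberately tiny stand-in model: stochastically vary uncertain inputs and count the fraction of Monte Carlo runs in which the simulated concentration exceeds a threshold. The 1-D plume, the parameter ranges, and the names below are all invented for the sketch; the study itself used a calibrated 3-D finite element flow and reactive transport model.

```python
import math
import random

random.seed(0)

def concentration(x, source, decay, dispersion):
    """Toy 1-D steady-state plume: exponential attenuation with
    distance from the source (illustrative only)."""
    return source * math.exp(-decay * x / dispersion)

def prob_exceeds(x, threshold, n_runs=5000):
    """Fraction of Monte Carlo runs whose concentration at location x
    exceeds the threshold, under made-up uniform input uncertainties."""
    hits = 0
    for _ in range(n_runs):
        source = random.uniform(50.0, 150.0)      # source strength
        decay = random.uniform(0.01, 0.1)         # first-order decay
        dispersion = random.uniform(5.0, 20.0)    # mixing length scale
        if concentration(x, source, decay, dispersion) > threshold:
            hits += 1
    return hits / n_runs
```

Mapping `prob_exceeds` over a spatial grid yields exactly the kind of probability field the abstract proposes for prioritizing contamination hotspots.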

  10. Time-dependent rheological behavior of natural polysaccharide xanthan gum solutions in interrupted shear and step-incremental/reductional shear flow fields

    NASA Astrophysics Data System (ADS)

    Lee, Ji-Seok; Song, Ki-Won

    2015-11-01

The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), step-shear flow behaviors of a concentrated xanthan gum model solution have been experimentally investigated in interrupted shear flow fields with various combinations of shear rates, shearing times and rest times, and in step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress increases sharply until reaching the maximum stress at an initial stage of shearing, and then a stress decay towards a steady state is observed as the shearing time is increased in both start-up shear flow fields. The shear stress drops suddenly immediately after the imposed shear rate is stopped, and then slowly decays during the rest time. (ii) As the rest time increases, the difference in the maximum stress values between the two start-up shear flow fields decreases, whereas the shearing time exerts only a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay behavior towards a steady state as the shearing time increases in each step shear flow region. The time needed to reach the maximum stress value is shortened as the step-increased shear rate becomes larger. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces a stress growth behavior towards an equilibrium state as the shearing time increases in each step shear flow region. The time needed to reach the minimum stress value is lengthened as the step-decreased shear rate becomes smaller.

  11. Design Enhancements of the Two-Dimensional, Dual Throat Fluidic Thrust Vectoring Nozzle Concept

    NASA Technical Reports Server (NTRS)

    Flamm, Jeffrey D.; Deere, Karen A.; Mason, Mary L.; Berrier, Bobby L.; Johnson, Stuart K.

    2006-01-01

A Dual Throat Nozzle fluidic thrust vectoring technique that achieves higher thrust-vectoring efficiencies than other fluidic techniques, without sacrificing thrust efficiency, has been developed at NASA Langley Research Center. The nozzle concept was designed with the aid of the structured-grid, Reynolds-averaged Navier-Stokes computational fluid dynamics code PAB3D. This new concept combines the thrust efficiency of sonic-plane skewing with increased thrust-vectoring efficiencies obtained by maximizing pressure differentials in a separated cavity located downstream of the nozzle throat. By injecting secondary flow asymmetrically at the upstream minimum area, a new aerodynamic minimum area is formed downstream of the geometric minimum and the sonic line is skewed, thus vectoring the exhaust flow. The nozzle was tested in the NASA Langley Research Center Jet Exit Test Facility. Internal nozzle performance characteristics were defined for nozzle pressure ratios up to 10, with a range of secondary injection flow rates up to 10 percent of the primary flow rate. Most of the data included in this paper show the effect of secondary injection rate at a nozzle pressure ratio of 4. The effects of modifying cavity divergence angle, convergence angle, and cavity shape on internal nozzle performance were investigated, as were the effects of injection geometry, hole or slot. In agreement with computationally predicted data, experimental data verified that decreasing cavity divergence angle had a negative impact and increasing cavity convergence angle had a positive impact on thrust vector angle and thrust efficiency. A curved cavity apex provided improved thrust ratios at some injection rates. However, overall nozzle performance suffered with no secondary injection. Injection holes were more efficient than the injection slot over the range of injection rates, but the slot generated larger thrust vector angles for injection rates less than 4 percent of the primary flow rate.

  12. A spectral radius scaling semi-implicit iterative time stepping method for reactive flow simulations with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin

    2018-09-01

    A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LUSGS scheme has been augmented to account for viscous/diffusive and reactive terms and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, have been investigated in auto-ignition and laminar premixed and nonpremixed flames of three representative fuels, e.g., hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in various tests considered. For the test flames considered, the semi-implicit scheme achieves second order of accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme indicating the accuracy gain when the reaction and transport terms are solved coupled. Results also show that the relative efficiency of different schemes depends on fuel mechanisms and test flames. When the minimum time scale in reactive flows is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost in sub-iterations for convergence within each time step and in the integration for chemistry substep. Then, the capability of the compressible reacting flow solver and the proposed semi-implicit scheme is demonstrated for capturing the hydrogen detonation waves. 
Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.
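The scalar Jacobian approximation in this scheme hinges on the minimum species destruction timescale. A minimal sketch of that quantity, with variable names and the destruction-rate bookkeeping assumed for illustration rather than taken from the paper:

```python
def min_destruction_timescale(concentrations, destruction_rates, floor=1e-30):
    """Smallest chemical timescale tau_i = c_i / d_i over all species,
    where c_i is the species concentration and d_i >= 0 is the
    magnitude of its destruction (consumption) rate.  The floor guards
    against division by zero for species that are not being consumed."""
    taus = [c / max(d, floor)
            for c, d in zip(concentrations, destruction_rates)
            if c > 0.0]
    return min(taus)

# Example: two species; the first is consumed fastest relative to its
# concentration, so it sets the limiting chemical timescale.
tau = min_destruction_timescale([1.0e-3, 2.0e-3], [1.0, 0.1])
```

In the abstract's terms, a scalar matrix built from this timescale replaces the full chemical Jacobian, which is what allows time steps far larger than the fastest reaction scales.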

  13. Selected low-flow frequency statistics for continuous-record streamgage locations in Maryland, 2010

    USGS Publications Warehouse

    Doheny, Edward J.; Banks, William S.L.

    2010-01-01

According to a 2008 report by the Governor's Advisory Committee on the Management and Protection of the State's Water Resources, Maryland's population grew by 35 percent between 1970 and 2000, and is expected to increase by an additional 27 percent between 2000 and 2030. Because domestic water demand generally increases in proportion to population growth, Maryland will be facing increased pressure on water resources over the next 20 years. Water-resources decisions should be based on sound, comprehensive, long-term data and low-flow frequency statistics from all available streamgage locations with unregulated streamflow and adequate record lengths. To provide the Maryland Department of the Environment with tools for making future water-resources decisions, the U.S. Geological Survey initiated a study in October 2009 to compute low-flow frequency statistics for selected streamgage locations in Maryland with 10 or more years of continuous streamflow records. This report presents low-flow frequency statistics for 114 continuous-record streamgage locations in Maryland. The computed statistics presented for each streamgage location include the mean 7-, 14-, and 30-consecutive-day minimum daily low-flow discharges for recurrence intervals of 2, 10, and 20 years, and are based on approved streamflow records that include a minimum of 10 complete climatic years of record as of June 2010. Descriptive information for each of these streamgage locations, including the station number, station name, latitude, longitude, county, physiographic province, and drainage area, also is presented. The statistics are planned for incorporation into StreamStats, which is a U.S.
Geological Survey Web application for obtaining stream information, and is being used by water-resource managers and decision makers in Maryland to address water-supply planning and management, water-use appropriation and permitting, wastewater and industrial discharge permitting, and setting minimum required streamflows to protect freshwater biota and ecosystems.
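The n-consecutive-day minimum flows behind these statistics are moving-average minima computed per climatic year. A sketch of that computation (function names are illustrative; the frequency-distribution fitting that turns the annual series into 2-, 10-, and 20-year recurrence values is not shown):

```python
def n_day_min(daily_flows, n):
    """Minimum n-day moving average of daily discharge within one
    climatic year of record."""
    if len(daily_flows) < n:
        raise ValueError("need at least n daily values")
    window = sum(daily_flows[:n])
    best = window
    for i in range(n, len(daily_flows)):
        window += daily_flows[i] - daily_flows[i - n]  # slide the window
        best = min(best, window)
    return best / n

def annual_low_flow_series(yearly_records, n=7):
    """One n-day minimum per climatic year; fitting a frequency
    distribution to this series yields statistics such as the
    7-day, 10-year low flow."""
    return [n_day_min(year, n) for year in yearly_records]
```

For example, on a record that rises steadily from 1 to 30 units, the 7-day minimum is the mean of the first week's flows.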

  14. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  15. Probability distributions of hydraulic conductivity for the hydrogeologic units of the Death Valley regional ground-water flow system, Nevada and California

    USGS Publications Warehouse

    Belcher, Wayne R.; Sweetkind, Donald S.; Elliott, Peggy E.

    2002-01-01

The use of geologic information such as lithology and rock properties is important to constrain conceptual and numerical hydrogeologic models. This geologic information is difficult to apply explicitly to numerical modeling and analyses because it tends to be qualitative rather than quantitative. This study uses a compilation of hydraulic-conductivity measurements to derive estimates of the probability distributions for several hydrogeologic units within the Death Valley regional ground-water flow system, a geologically and hydrologically complex region underlain by basin-fill sediments and volcanic, intrusive, sedimentary, and metamorphic rocks. Probability distributions of hydraulic conductivity for general rock types have been studied previously; however, this study provides more detailed definition of hydrogeologic units based on lithostratigraphy, lithology, alteration, and fracturing, and compares the probability distributions to the aquifer test data. Results suggest that these probability distributions can be used for studies involving, for example, numerical flow modeling, recharge, evapotranspiration, and rainfall runoff, both for the hydrogeologic units in the region and for similar rock types elsewhere. Within the study area, fracturing appears to have the greatest influence on the hydraulic conductivity of carbonate bedrock hydrogeologic units. Similar to earlier studies, we find that alteration and welding in the Tertiary volcanic rocks greatly influence hydraulic conductivity. As alteration increases, hydraulic conductivity tends to decrease. Increasing degrees of welding appear to increase hydraulic conductivity because welding increases the brittleness of the volcanic rocks, thus increasing the amount of fracturing.
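Hydraulic conductivity is conventionally modelled as lognormal, so one plausible way to turn a compilation of measurements into a probability distribution per hydrogeologic unit is to fit the mean and standard deviation of ln K. This is a generic sketch of that convention, not the authors' exact procedure:

```python
import math
import statistics

def fit_lognormal(k_values):
    """Fit a lognormal distribution to hydraulic-conductivity
    measurements: mean and sample standard deviation of ln K."""
    logs = [math.log(k) for k in k_values]
    return statistics.mean(logs), statistics.stdev(logs)

def prob_k_below(k, mu, sigma):
    """P(K < k) under the fitted lognormal, via the standard
    normal CDF expressed with math.erf."""
    z = (math.log(k) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Example: measurements placed symmetrically in log space around K = 1.
mu, sigma = fit_lognormal([math.exp(v) for v in (-2.0, -1.0, 0.0, 1.0, 2.0)])
```

Fitted (mu, sigma) pairs per unit are exactly the kind of quantitative input a stochastic flow model or recharge study can sample from.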

  16. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
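The core of a minimum action method is easy to demonstrate on a toy problem: discretize a path with endpoints pinned at two attractors and run gradient descent on the discretized Freidlin-Wentzell/Onsager-Machlup action. The sketch below uses a one-dimensional double-well Langevin system as a stand-in; it does not attempt to reproduce the quasi-geostrophic setting, and the discretization choices are assumptions of the sketch.

```python
def drift(x):
    """Gradient drift b(x) = -V'(x) for the double well V(x) = (x^2 - 1)^2 / 4,
    with attractors at x = -1 and x = +1."""
    return -(x ** 3 - x)

def action(path, dt):
    """Discretized action S = 0.5 * sum (xdot - b)^2 * dt along the path."""
    s = 0.0
    for i in range(len(path) - 1):
        xm = 0.5 * (path[i] + path[i + 1])
        xdot = (path[i + 1] - path[i]) / dt
        s += 0.5 * (xdot - drift(xm)) ** 2 * dt
    return s

def minimize_action(n=40, T=20.0, iters=2000, lr=0.05, eps=1e-5):
    """Coordinate-wise gradient descent on the action, endpoints pinned
    at the two attractors.  Returns the path and initial/final actions."""
    dt = T / (n - 1)
    path = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]  # straight initial guess
    s_init = action(path, dt)
    for _ in range(iters):
        for j in range(1, n - 1):
            # Only the two segments adjacent to path[j] depend on it,
            # so a local finite-difference gradient is cheap.
            def local(xj):
                s = 0.0
                for a, b in ((path[j - 1], xj), (xj, path[j + 1])):
                    xm = 0.5 * (a + b)
                    xdot = (b - a) / dt
                    s += 0.5 * (xdot - drift(xm)) ** 2 * dt
                return s
            g = (local(path[j] + eps) - local(path[j] - eps)) / (2.0 * eps)
            path[j] -= lr * g
    return path, s_init, action(path, dt)

path, s_init, s_min = minimize_action()
```

The minimizer is the most probable transition path in the small-noise limit; for a gradient system like this one its action approaches twice the barrier height as T grows.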

  17. Classification of resistance to passive motion using minimum probability of error criterion.

    PubMed

    Chan, H C; Manry, M T; Kondraske, G V

    1987-01-01

Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity), from the torque versus angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum probability of error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects were processed and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
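With normally distributed features, the minimum-probability-of-error rule is the Bayes classifier: choose the class with the largest posterior. A sketch with independent Gaussian features follows; the single-feature class statistics below are invented for illustration and are not the paper's eight Legendre-polynomial features.

```python
import math

def gaussian_log_likelihood(x, means, stds):
    """Log-likelihood of feature vector x under independent Gaussians."""
    s = 0.0
    for xi, m, sd in zip(x, means, stds):
        s += -0.5 * math.log(2.0 * math.pi * sd * sd) - (xi - m) ** 2 / (2.0 * sd * sd)
    return s

def classify(x, classes, priors):
    """Minimum-probability-of-error (Bayes) decision: maximize
    log prior + log likelihood over the candidate classes."""
    best, best_score = None, -math.inf
    for name, (means, stds) in classes.items():
        score = math.log(priors[name]) + gaussian_log_likelihood(x, means, stds)
        if score > best_score:
            best, best_score = name, score
    return best

# Invented single-feature class statistics for illustration only.
classes = {"normal": ([0.0], [1.0]), "rigidity": ([3.0], [1.0])}
priors = {"normal": 0.5, "rigidity": 0.5}
```

With equal priors and equal variances this reduces to a nearest-mean rule, so the decision boundary for the toy feature sits halfway between the class means.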

  18. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    PubMed

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
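The "random chance" scenario described above is straightforward to simulate: keep drawing information sources until every code in the population has been observed at least once. A minimal sketch, in which the per-source observation probabilities of the codes are a modelling assumption:

```python
import random

random.seed(1)

def steps_to_saturation(code_probs, max_steps=100000):
    """Sample sources until every code has been observed at least once;
    each source reveals code i independently with probability
    code_probs[i].  Returns the sample size at theoretical saturation."""
    seen = set()
    for step in range(1, max_steps + 1):
        for i, p in enumerate(code_probs):
            if random.random() < p:
                seen.add(i)
        if len(seen) == len(code_probs):
            return step
    return max_steps

# Mean required sample size over repeated runs, for codes of varying
# rarity: the rarest code (here p = 0.05) dominates the result, echoing
# the finding that saturation depends on observation probabilities more
# than on the number of codes.
runs = [steps_to_saturation([0.5, 0.2, 0.05]) for _ in range(200)]
mean_sample_size = sum(runs) / len(runs)
```

Swapping the uniform draws for "minimal information" or "maximum information" selection rules reproduces the other two scenarios.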

  19. Diagnosis of brain death by transcranial Doppler sonography.

    PubMed

    Bode, H; Sauer, M; Pringsheim, W

    1988-12-01

    The blood flow velocities in the basal cerebral arteries can be recorded at any age by transcranial Doppler sonography. We examined nine children with either initial or developing clinical signs of brain death. Soon after successful resuscitation increased diastolic flow velocities indicated a probable decrease in cerebrovascular resistance; this was of no particular prognostic importance. As soon as there was a clinical deterioration, there was a reduction in flow velocities with retrograde flow during early diastole, probably due to an increase in cerebrovascular resistance; this indicated a doubtful prognosis. In eight of the nine children with clinical signs of brain death a typical reverberating flow pattern was found, which was characterised by a counterbalancing short forward flow in systole and a short retrograde flow in early diastole. This indicated arrest of cerebral blood flow. One newborn showed normal systolic and end diastolic flow velocities in the basal cerebral arteries for two days despite clinical and electroencephalographic signs of brain death. Shunting of blood through the circle of Willis without effective cerebral perfusion may explain this phenomenon. No patient had the typical reverberating flow pattern without being clinically brain dead. Transcranial Doppler sonography is a reliable technique, which can be used at the bedside for the confirmation or the exclusion of brain death in children in addition to the clinical examination.

  20. Diagnosis of brain death by transcranial Doppler sonography.

    PubMed Central

    Bode, H; Sauer, M; Pringsheim, W

    1988-01-01

    The blood flow velocities in the basal cerebral arteries can be recorded at any age by transcranial Doppler sonography. We examined nine children with either initial or developing clinical signs of brain death. Soon after successful resuscitation increased diastolic flow velocities indicated a probable decrease in cerebrovascular resistance; this was of no particular prognostic importance. As soon as there was a clinical deterioration, there was a reduction in flow velocities with retrograde flow during early diastole, probably due to an increase in cerebrovascular resistance; this indicated a doubtful prognosis. In eight of the nine children with clinical signs of brain death a typical reverberating flow pattern was found, which was characterised by a counterbalancing short forward flow in systole and a short retrograde flow in early diastole. This indicated arrest of cerebral blood flow. One newborn showed normal systolic and end diastolic flow velocities in the basal cerebral arteries for two days despite clinical and electroencephalographic signs of brain death. Shunting of blood through the circle of Willis without effective cerebral perfusion may explain this phenomenon. No patient had the typical reverberating flow pattern without being clinically brain dead. Transcranial Doppler sonography is a reliable technique, which can be used at the bedside for the confirmation or the exclusion of brain death in children in addition to the clinical examination. PMID:3069052

  1. A Water Framework Directive (WFD) compliant determination of ecologically acceptable flows in alpine rivers - a river type specific approach

    NASA Astrophysics Data System (ADS)

    Jäger, Paul; Zitek, Andreas

    2010-05-01

Currently the EU Water Framework Directive (WFD) represents the driving force behind the assessment for rehabilitation and conservation of aquatic resources throughout Europe. Hydropower production, often considered as "green energy", has in the past put significant pressures on river systems, such as fragmentation by weirs, impoundment, hydropeaking and water abstraction. Due to the limited availability of data for determining ecologically acceptable flow for rivers at water abstraction sites, a special monitoring program was conducted in the federal state of Salzburg in Austria from 2006 to 2009. Water abstraction sites at 19 hydropower plants, mostly within the trout region of the River Salzach catchment, were assessed in detail with regard to the effect of water abstraction on fish and macrozoobenthos. Based on a detailed assessment of the specific local hydro-morphological and biological situations, the validity of natural low flow criteria (Absolute Minimum Flow (AMF), the lowest daily average flow ever measured, and Mean Annual Daily Low Flow (MADLF)) as starting points for the determination of an ecologically acceptable flow was tested. It was assessed whether a good ecological status in accordance with the EU-WFD can be maintained at the natural AMF. Additionally, it was tested whether important habitat parameters describing connectivity, river type specific flow variability and river type specific habitats are maintained at this discharge. Habitat modelling was applied in some situations. Hydraulic results showed that at AMF the highest flow velocity classes were lost in most situations. When AMF was significantly undercut, flow velocities between 0.0 - 0.4 m/s became dominant, describing the loss of the river type specific flow character, leading to a loss of river type specific flow variability and habitats and to increased sedimentation of fines.
Furthermore, limits for parameters describing connectivity for fish, such as maximum depth at the pessimum profile and minimum flow velocity in the thalweg, were undercut. Additionally, a significant loss of wetted width relative to the wetted width at MADLF was documented, leading to significantly reduced ecologically available habitat. At AMF the existence of a minimum amount of usable habitat prevented a total loss of adult fish, and a good ecological status was documented by the Fish Index Austria (FIA) in all situations where water abstraction represented the only human pressure and AMF was left in the river as residual flow. The fish-ecological status was significantly worse in river stretches where minimum flow was significantly below the AMF; however, in about one third of these stretches a good ecological status was still documented by fish. Fine-grained habitat structures, expressed by mean choriotope sizes (> 20 cm) and relative roughness, were found to provide enough shelter, especially for brown trout, to maintain a high variance of fish lengths, influencing both the age structure and the biomass. Both variables are highly relevant when calculating the ecological status of rivers using the FIA where brown trout occurs as the only leading species, accompanied only by the bullhead, Cottus gobio L. However, mean fish lengths and weights were significantly smaller at most water abstraction sites. The method currently applied for determining the ecological status by macrozoobenthos failed because it is still calibrated to certain types of water pollution, and flow velocity, the dominating factor in rivers, is not adequately considered. However, a species-specific analysis of the data showed a consistent loss of rheophilic species at water abstraction sites. Based on this, recommendations for a more specific assessment of the ecological status by benthic invertebrates were developed. 
Natural factors such as slope, with significant effects on hydraulic stress (bottom shear stress, maximum flow velocities, etc.), strongly overlaid the effects of water abstraction within the whole dataset. Therefore, adequate consideration of natural factors such as slope, hydraulic stress, and structure parameters such as mean choriotope size, together with a realistic identification of the significant driving pressures (water abstraction, fragmentation, and channelization), proved to be a crucial prerequisite for a meaningful analysis and interpretation of the data and for the determination of efficient restoration measures. In summary, the AMF represents a valid basis for determining the ecologically acceptable flow: in most cases, parameters for connectivity and river-type-specific habitat availability are met at this discharge. However, because this discharge represents a natural catastrophic event, it is recommended to add a dynamic component to this minimum base flow to maintain, at least to some extent, the river-type-specific flow variability, contributing to the maintenance of natural geomorphologic and ecological processes linked to natural flow patterns. In particular, higher discharges able to move substrates and flush fine sediments should be provided in their river-type-specific seasonal dynamics. This seasonal clearing of sediments has been shown to be strongly related to the reproductive success of trout and provides interstitial habitats for invertebrates at ecologically meaningful times of the year. Finally, re-establishment of river connectivity at weirs and morphological restructuring of highly channelized rivers are other important prerequisites for achieving good ecological status in alpine river systems.

  2. Minimum area thresholds for rattlesnakes and colubrid snakes on islands in the Gulf of California, Mexico.

    PubMed

    Meik, Jesse M; Makowsky, Robert

    2018-01-01

    We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting a single species versus coexistence of two or more species relate to hypotheses about the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km²), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km²) was over five times greater than that for supporting two or more species of colubrids (6.7 km²). This large difference in the minimum area required to support more than one species implies that, for islands in the Gulf of California, relative extinction risks are higher for coexistence of multiple rattlesnake species and that competition within and between species of rattlesnakes is likely much more intense than within and between species of colubrids.
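The thresholds in this abstract come from ordinal logistic regression of occupancy against island area. As a minimal sketch of how such a threshold is read off a fitted model (in Python, with entirely hypothetical coefficients BETA and ALPHAS, not the paper's fitted values), the "minimum area threshold" is the area at which the modeled probability of supporting at least k species crosses 0.5:

```python
import math

# Hypothetical ordinal logistic model of island occupancy (0, 1, 2+ species):
# P(occupancy >= k) = 1 / (1 + exp(-(BETA * log10(area) - ALPHAS[k])))
BETA = 1.8                  # hypothetical slope on log10(area in km^2)
ALPHAS = {1: 0.4, 2: 2.9}   # hypothetical cutpoints for >= 1 and >= 2 species

def p_at_least(k, area_km2):
    """Probability that an island of the given area supports >= k species."""
    return 1.0 / (1.0 + math.exp(-(BETA * math.log10(area_km2) - ALPHAS[k])))

def minimum_area(k, p=0.5):
    """Area at which P(occupancy >= k) crosses p: the minimum area threshold."""
    logit = math.log(p / (1.0 - p))
    return 10 ** ((logit + ALPHAS[k]) / BETA)
```

With these made-up coefficients, the single-species threshold lands near 1.7 km² and the two-species threshold is much larger, mirroring the qualitative pattern reported for rattlesnakes.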

  3. Cenozoic volcanic geology and probable age of inception of basin-range faulting in the southeasternmost Chocolate Mountains, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowe, B.M.

    1978-02-01

    A complex sequence of Oligocene-age volcanic and volcaniclastic rocks forms a major volcanic center in the Picacho area of the southeasternmost Chocolate Mountains, Imperial County, California. Basal volcanic rocks consist of lava flows and flow breccia of trachybasalt, pyroxene rhyodacite, and pyroxene dacite (32 My old). These volcanic rocks locally overlie fanglomerate and rest unconformably on pre-Cenozoic basement rocks. South and southeast of a prominent arcuate fault zone in the central part of the area, the rhyolite ignimbrite (26 My old) forms a major ash-flow sheet. In the southwestern part of the Picacho area the rhyolite ignimbrite interfingers with and is overlain by dacite flows and laharic breccia. The rhyolite ignimbrite and the dacite of Picacho Peak are overlapped by lava flows and breccia of pyroxene andesite (25 My old) that locally rest on pre-Cenozoic basement rocks. The volcanic rocks of the Picacho area form a slightly bimodal volcanic suite consisting chiefly of silicic volcanic rocks with subordinate andesite. Late Miocene augite-olivine basalt is most similar in major-element abundances to transitional alkali-olivine basalt of the Basin and Range province. Normal separation faults in the Picacho area trend northwest and north, parallel to major linear mountain ranges in the region. The areal distribution of the 26-My-old rhyolite ignimbrite and the local presence of megabreccia and fanglomerate flanking probable paleohighs suggest that the ignimbrite was erupted over irregular topography controlled by northwest- and north-trending probable basin-range faults. These relations date the inception of faulting in southeasternmost California at pre-26 and probably pre-32 My ago. A transition to basaltic volcanism in the area is dated at 13 My ago. 9 figures, 2 tables.

  4. Preliminary hydrogeologic investigation of the Maxey Flats radioactive waste burial site, Fleming County, Kentucky

    USGS Publications Warehouse

    Zehner, Harold H.

    1979-01-01

    Burial trenches at the Maxey Flats radioactive waste burial site, Fleming County, Ky., cover an area of about 0.03 square mile, and are located on a plateau, about 300 to 400 feet above surrounding valleys. Although surface-water characteristics are known, little information is available regarding the ground-water hydrology of the Maxey Flats area. If transport of radionuclides from the burial site were to occur, water would probably be the principal mechanism of transport by natural means. Most base flow in streams around the burial site is from valley alluvium, and from the mantle of regolith, colluvium, and soil partially covering adjacent hills. Very little base flow is due to ground-water flow from bedrock. Most water in springs is from the mantle, rather than from bedrock. Rock units underlying the Maxey Flats area are, in descending order, the Nancy and Farmers Members of the Borden Formation, Sunbury, Bedford, and Ohio Shales, and upper part of the Crab Orchard Formation. These units are mostly shales, except for the Farmers Member, which is mostly sandstone. Total thickness of the rocks is about 320 feet. All radioactive wastes are buried in the Nancy Member. Most ground-water movement in bedrock probably occurs in fractures. The ground-water system at Maxey Flats is probably unconfined, and recharge occurs by (a) infiltration of rainfall into the mantle, and (b) vertical, unsaturated flow from the saturated regolith on hilltops to saturated zones in the Farmers Member and Ohio Shale. Data are insufficient to determine if saturated zones exist in other rock units. The upper part of the Crab Orchard Formation is probably a hydrologic boundary, with little ground-water flow through the formation. (USGS)

  5. How Will Higher Minimum Wages Affect Family Life and Children's Well-Being?

    PubMed

    Hill, Heather D; Romich, Jennifer

    2018-06-01

    In recent years, new national and regional minimum wage laws have been passed in the United States and other countries. The laws assume that benefits flow not only to workers but also to their children. Adolescent workers will most likely be affected directly given their concentration in low-paying jobs, but younger children may be affected indirectly by changes in parents' work conditions, family income, and the quality of nonparental child care. Research on minimum wages suggests modest and mixed economic effects: Decreases in employment can offset, partly or fully, wage increases, and modest reductions in poverty rates may fade over time. Few studies have examined the effects of minimum wage increases on the well-being of families, adults, and children. In this article, we use theoretical frameworks and empirical evidence concerning the effects on children of parental work and family income to suggest hypotheses about the effects of minimum wage increases on family life and children's well-being.

  6. Emergency assessment of post-fire debris-flow hazards for the 2013 Mountain fire, southern California

    USGS Publications Warehouse

    Staley, Dennis M.; Gartner, Joseph E.; Smoczyk, Greg M.; Reeves, Ryan R.

    2013-01-01

    Wildfire dramatically alters the hydrologic response of a watershed such that even modest rainstorms can produce dangerous flash floods and debris flows. We use empirical models to predict the probability and magnitude of debris flow occurrence in response to a 10-year rainstorm for the 2013 Mountain fire near Palm Springs, California. Overall, the models predict a relatively high probability (60–100 percent) of debris flow for six of the drainage basins in the burn area in response to a 10-year recurrence interval design storm. Volumetric predictions suggest that debris flows that occur may entrain a significant volume of material, with 8 of the 14 basins identified as having potential debris-flow volumes greater than 100,000 cubic meters. These results suggest there is a high likelihood of significant debris-flow hazard within and downstream of the burn area for nearby populations, infrastructure, and wildlife and water resources. Given these findings, we recommend that residents, emergency managers, and public works departments pay close attention to weather forecasts and National Weather Service–issued Debris Flow and Flash Flood Outlooks, Watches and Warnings and that residents adhere to any evacuation orders.

  7. Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error

    PubMed Central

    Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong

    2013-01-01

    A novel wildfire segmentation algorithm is proposed with the help of sample training based 2D histogram θ-division and minimum error. Based on minimum error principle and 2D color histogram, the θ-division methods were presented recently, but application of prior knowledge on them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. Then we define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the suitable channel is selected. To further improve the accuracy, the combination approach is presented with both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments prove its efficiency for wildfire segmentation. PMID:23878526
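As a much-simplified illustration of the minimum-error idea behind the θ-division (a 1D analogue only, not the paper's 2D color-histogram method; the function and data names are hypothetical), sample training amounts to picking the division that minimizes the empirical misclassification of manually labeled fire and background pixels:

```python
def min_error_threshold(fire_vals, background_vals, candidates=range(256)):
    """Pick the intensity threshold that minimizes the empirical division
    error: fire pixels falling below the threshold plus background pixels
    falling at or above it."""
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = (sum(v < t for v in fire_vals) +
               sum(v >= t for v in background_vals))
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err
```

In the paper the search is over a division angle θ of a 2D color histogram rather than a scalar threshold, but the training criterion (minimum error on labeled samples) has the same shape.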

  8. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  9. Streamflow Characteristics of Streams in the Helmand Basin, Afghanistan

    USGS Publications Warehouse

    Williams-Sether, Tara

    2008-01-01

    Statistical summaries of streamflow data for all historical streamflow-gaging stations for the Helmand Basin upstream from the Sistan Wetlands are presented in this report. The summaries for each streamflow-gaging station include (1) manuscript (station description), (2) graph of the annual mean discharge for the period of record, (3) statistics of monthly and annual mean discharges, (4) graph of the annual flow duration, (5) monthly and annual flow duration, (6) probability of occurrence of annual high discharges, (7) probability of occurrence of annual low discharges, (8) probability of occurrence of seasonal low discharges, (9) annual peak discharge and corresponding gage height for the period of record, and (10) monthly and annual mean discharges for the period of record.
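Annual exceedance probabilities like those in items (6)-(8) are commonly computed from ranked discharges with a plotting-position formula. A minimal sketch using the Weibull formula P = m/(n + 1), a standard choice for duration and frequency curves (though not necessarily the exact method used for this report):

```python
def exceedance_probabilities(flows):
    """Weibull plotting positions: P = m / (n + 1), where m is the rank of
    each flow in descending order. Returns (flow, probability) pairs, so the
    largest flow gets the smallest exceedance probability."""
    ranked = sorted(flows, reverse=True)
    n = len(ranked)
    return [(q, m / (n + 1)) for m, q in enumerate(ranked, start=1)]
```

Plotting flow against these probabilities gives the familiar flow-duration (or high/low discharge frequency) curve.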

  10. Disinfection of an advanced primary effluent with peracetic acid and ultraviolet combined treatment: a continuous-flow pilot plant study.

    PubMed

    González, Abelardo; Gehr, Ronald; Vaca, Mabel; López, Raymundo

    2012-03-01

    Disinfection of an advanced primary effluent using a continuous-flow combined peracetic acid/ultraviolet (PAA/UV) radiation system was evaluated. The purpose was to determine whether the maximum microbial content established under Mexican standards for treated wastewaters meant for reuse (less than 240 most-probable-number fecal coliforms (FC)/100 mL) could feasibly be met using either disinfectant individually or the combined PAA/UV system. This meant achieving reductions of up to 5 logs, considering initial concentrations of 6.4 × 10^6 to 5.8 × 10^7 colony-forming units/100 mL. During these experiments, total coliforms (TC) were counted because FC will, at most, equal TC. Peracetic acid disinfection achieved less than 1.5 logs of TC reduction when the C·t product was less than 2.26 mg·min/L; 3.8 logs at a C·t of 4.40 mg·min/L; and 5.9 logs at a C·t of 24.2 mg·min/L. In continuous-flow UV irradiation tests at a low operating flow (21 L/min, conditions which produced an average UV fluence of 13.0 mJ/cm²), the highest TC reduction was close to 2.5 logs. The only condition that produced a disinfection efficiency of approximately 5 logs with both agents used together was the combined process dosing 30 mg PAA/L at a pilot plant flow of 21 L/min and a contact time of 10 minutes, attaining an average C·t product of 24.2 mg·min/L and an average UV fluence of 13 mJ/cm². There was no conclusive evidence of a synergistic effect when both disinfectants were employed in combination as compared to the individual effects achieved when used separately, although this does not take into account the nonlinearity (tailing-off) of the dose-response curve.
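The C·t products quoted above are exposures accumulated from the residual disinfectant concentration over the contact time, which is consistent with a decaying PAA residual rather than a constant dose. A minimal sketch (assuming first-order decay with a hypothetical rate constant, not the study's measured residual curve):

```python
import math

def integrated_ct(c0, k, t_min, dt=0.001):
    """C.t product (mg.min/L) as the integral of the residual concentration
    C(t) = c0 * exp(-k * t) over the contact time, by simple left-endpoint
    quadrature. k is a hypothetical first-order decay rate (1/min)."""
    steps = int(t_min / dt)
    return sum(c0 * math.exp(-k * i * dt) * dt for i in range(steps))

def log_reduction(n_initial, n_final):
    """Log10 reduction in coliform counts."""
    return math.log10(n_initial / n_final)
```

With k = 0 (no decay) the integral reduces to dose × time; a positive k yields the smaller effective C·t values seen in practice.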

  11. 49 CFR 192.383 - Excess flow valve installation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Customer Meters, Service Regulators... psig or greater throughout the year; (2) The operator has prior experience with contaminants in the gas...

  12. Probability mass first flush evaluation for combined sewer discharges.

    PubMed

    Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong

    2010-01-01

    The Korean government has invested considerable effort in constructing sanitation facilities for controlling non-point source pollution, of which the first flush phenomenon is a prime example. To date, however, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads, and it is difficult to assess the optimal flow volume and pollution mass under both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm-runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the gauged storms of the last two years, using probability density functions of rainfall volume to test representativeness, found all gauged storms to be valid representative precipitation events. Both the observed MFFn and the probability MFFn in BE-1 indicated similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and the probability MFFn.
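The figure quoted for BE-1 (roughly 40% of the pollutant mass in the first 20% of runoff) is the usual mass first flush ratio. A minimal sketch of how such a ratio is computed from cumulative volume and mass fractions (a hypothetical helper with no interpolation between samples, not the article's SWMM-based procedure):

```python
def mff_ratio(cum_volume, cum_mass, n=20):
    """Mass first flush ratio MFF_n: the cumulative pollutant mass fraction
    at the point where the cumulative runoff volume fraction first reaches
    n/100, divided by n/100. A ratio greater than 1 indicates a first flush."""
    target = n / 100.0
    for v, m in zip(cum_volume, cum_mass):
        if v >= target:
            return m / target
    raise ValueError("cumulative volume never reaches n percent")
```

For BE-1's reported numbers, 40% of the mass in 20% of the volume gives MFF20 = 0.40 / 0.20 = 2.0.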

  13. Variational energy principle for compressible, baroclinic flow. 2: Free-energy form of Hamilton's principle

    NASA Technical Reports Server (NTRS)

    Schmid, L. A.

    1977-01-01

    The first and second variations are calculated for the irreducible form of Hamilton's Principle that involves the minimum number of dependent variables necessary to describe the kinematics and thermodynamics of inviscid, compressible, baroclinic flow in a specified gravitational field. The form of the second variation shows that, in the neighborhood of a stationary point that corresponds to physically stable flow, the action integral is a complex saddle surface in parameter space. There exists a form of Hamilton's Principle for which a direct solution of a flow problem is possible. This second form is related to the first by a Friedrichs transformation of the thermodynamic variables. This introduces an extra dependent variable, but the first and second variations are shown to have direct physical significance, namely they are equal to the free energy of fluctuations about the equilibrium flow that satisfies the equations of motion. If this equilibrium flow is physically stable, and if a very weak second order integral constraint on the correlation between the fluctuations of otherwise independent variables is satisfied, then the second variation of the action integral for this free energy form of Hamilton's Principle is positive-definite, so the action integral is a minimum and can serve as the basis for a direct trial-and-error solution. The second order integral constraint states that the unavailable energy must be maximum at equilibrium, i.e., the fluctuations must be so correlated as to produce a second order decrease in the total unavailable energy.

  14. Efficiency and large deviations in time-asymmetric stochastic heat engines

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...

    2014-10-24

    In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.

  15. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    NASA Astrophysics Data System (ADS)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection conventionally means minimizing risk given a certain level of return from a set of financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which can be problematic in practice because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots using conditional value-at-risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, and this approach performs better for non-symmetric return distributions. Solutions can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
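For context, CVaR at level α is the expected loss in the worst (1 − α) tail of the return distribution. A minimal empirical sketch over discrete scenarios (the paper's full mean-CVaR optimization with lot constraints and a GA is well beyond this snippet; the function name is illustrative):

```python
def cvar(returns, alpha=0.95):
    """Empirical conditional value-at-risk: the average loss over the worst
    (1 - alpha) fraction of scenarios, where loss is the negative return."""
    losses = sorted((-r for r in returns), reverse=True)
    k = max(1, int(round(len(losses) * (1 - alpha))))
    return sum(losses[:k]) / k
```

Minimizing this tail average over portfolio weights (subject to a target mean return and integer lot sizes) is the combinatorial problem the GA is used to attack.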

  16. Pressure fluctuation generated by the interaction of blade and tongue

    NASA Astrophysics Data System (ADS)

    Zheng, Lulu; Dou, Hua-Shu; Chen, Xiaoping; Zhu, Zuchao; Cui, Baoling

    2018-02-01

    Pressure fluctuation around the tongue has a large effect on the stable operation of a centrifugal pump. In this paper, the Reynolds-averaged Navier-Stokes (RANS) equations and the RNG k-epsilon turbulence model are employed to simulate the flow in a pump. The flow field in the centrifugal pump is computed for a range of flow rates. The simulation results have been compared with the experimental data and good agreement has been achieved. To study the interaction of the tongue with the impeller, fifteen monitor probes are evenly distributed circumferentially at three radii around the tongue. The pressure distribution is investigated at various blade positions as the blade approaches and leaves the tongue region. Results show that the pressure signal fluctuates strongly around the tongue, and more intensely near the tongue surface. At the design condition, the standard deviation of the pressure fluctuation is at its minimum. At large flow rates, the enlarged low-pressure region at the blade trailing edge increases the pressure fluctuation amplitude and the pressure spectra at the monitor probes. The minimum pressure is obtained when the blade faces the tongue. It is found that the amplitude of pressure fluctuation strongly depends on the blade position at large flow rates, and that the pressure fluctuation is caused by the relative movement between the blades and the tongue. At small flow rates, the pattern of pressure fluctuation depends mainly on the structure of the vortex flow at the blade passage exit, in addition to the influence of the relative position between the blade and the tongue.

  17. Global X-ray Spectral Variation of Eta Carinae through the 2003 X-ray Minimum

    NASA Technical Reports Server (NTRS)

    Hamaguchi, K.; Corcoran, M. F.; White, N. E.; Gull, T.; Damineli, A.; Davidson, K.

    2006-01-01

    We report on the results of the X-ray observing campaign of the massive, evolved star Eta Carinae in 2003 around its recent X-ray Minimum, mainly using data from the XMM-Newton observatory. These imaging observations show that the hard X-ray source associated with the Eta Carinae system does not completely disappear in any of the observations during the Minimum. The variation of the spectral shape revealed two emission components. One newly discovered component did not exhibit any variation on kilosecond to year-long timescales in a combined analysis with earlier ASCA and ROSAT data, and might represent the collision of a high-speed outflow from Eta Carinae with ambient gas clouds. The other emission component was strongly variable in flux, but the temperature of the hottest plasma did not vary significantly at any orbital phase. Absorption to the hard emission was about a factor of three larger than the absorption determined from the cutoff of the soft emission, and reached a maximum of approximately 4 × 10^23 cm^-2 before the Minimum. The thermal Fe XXV emission line showed significant excesses on both the red and blue sides of the line outside the Minimum and exhibited a large redward excess during the Minimum. This variation in the line profile probably requires an abrupt change in ionization balance in the shocked gas.

  18. A SURVEY OF METHODS FOR SETTING MINIMUM INSTREAM FLOW STANDARDS IN THE CARIBBEAN BASIN.

    Treesearch

    F. N. SCATENA

    2004-01-01

    To evaluate the current status of instream flow practices in streams that drain into the Caribbean Basin, a voluntary survey of practising water resource managers was conducted. Responses were received from 70% of the potential continental countries, 100% of the islands in the Greater Antilles, and 56% of all the Caribbean island nations. Respondents identified ‘...

  19. Computer simulations of equilibrium magnetization and microstructure in magnetic fluids

    NASA Astrophysics Data System (ADS)

    Rosa, A. P.; Abade, G. C.; Cunha, F. R.

    2017-09-01

    In this work, Monte Carlo and Brownian Dynamics simulations are developed to compute the equilibrium magnetization of a magnetic fluid under the action of a homogeneous applied magnetic field. The particles are free of inertia and modeled as hard spheres of equal diameter. Two different periodic boundary conditions are implemented: the minimum image method and the Ewald summation technique, replicating a finite number of particles throughout the suspension volume. A comparison of the equilibrium magnetization resulting from the minimum image approach and from Ewald sums is performed using Monte Carlo simulations. The Monte Carlo simulations with minimum image and lattice sums are used to investigate suspension microstructure by computing the important radial pair-distribution function g0(r), which measures the probability density of finding a second particle at a distance r from a reference particle. This function provides relevant information on structure formation and its anisotropy through the suspension. The numerical results for g0(r) are compared with theoretical predictions based on quite a different approach in the absence of the field and of dipole-dipole interactions. Very good quantitative agreement is found for a particle volume fraction of 0.15, providing a validation of the present simulations. In general, the investigated suspensions are dominated by structures such as dimer and trimer chains, with trimers having a probability of forming an order of magnitude lower than dimers. Using Monte Carlo with lattice sums, the density distribution function g2(r) is also examined. Whenever this function is different from zero, it indicates structural anisotropy in the suspension. The dependence of the equilibrium magnetization on the applied field, the magnetic particle volume fraction, and the magnitude of the dipole-dipole magnetic interactions is explored for both boundary conditions. 
Results show that at dilute regimes and with moderate dipole-dipole interactions, the standard minimum image method is both accurate and computationally efficient; otherwise, lattice sums of the magnetic particle interactions are required to accelerate convergence of the equilibrium magnetization. The accuracy of the numerical code is also verified quantitatively by comparing the magnetization obtained from the numerical results with asymptotic predictions of high order in the particle volume fraction, in the presence of dipole-dipole interactions. In addition, Brownian Dynamics simulations are used to examine the magnetization relaxation of a ferrofluid and to calculate the magnetic relaxation time as a function of the magnetic particle interaction strength for a given particle volume fraction and a non-dimensional applied field. The simulations of magnetization relaxation show the existence of a critical value of the dipole-dipole interaction parameter. For interaction strengths below the critical value at a given particle volume fraction, the magnetic relaxation time is close to the Brownian relaxation time and the suspension has no appreciable memory. For dipole interaction strengths beyond the critical value, the relaxation time increases exponentially with the strength of the dipole-dipole interaction. Although we have considered equilibrium conditions, the results obtained have far-reaching implications for the analysis of magnetic suspensions under external flow.
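The minimum image convention discussed above replaces each pair separation with its nearest periodic image. A minimal sketch for a cubic box (hypothetical helper names; the actual simulations also involve dipolar interactions and Ewald sums not shown here):

```python
def minimum_image(dx, box):
    """Wrap one component of a separation vector into roughly
    [-box/2, box/2], i.e., take the nearest periodic image."""
    return dx - box * round(dx / box)

def min_image_distance(p1, p2, box):
    """Distance between two particles under the minimum image convention
    in a cubic periodic box of side length 'box'."""
    return sum(minimum_image(a - b, box) ** 2
               for a, b in zip(p1, p2)) ** 0.5
```

Because every interaction is truncated to the nearest image, this approach is cheap but neglects the long-range part of the dipolar sum, which is exactly what the Ewald (lattice-sum) comparison in the paper quantifies.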

  20. Statistical summaries of selected Iowa streamflow data through September 2013

    USGS Publications Warehouse

    Eash, David A.; O'Shea, Padraic S.; Weber, Jared R.; Nguyen, Kevin T.; Montgomery, Nicholas L.; Simonson, Adrian J.

    2016-01-04

    Statistical summaries of streamflow data collected at 184 streamgages in Iowa are presented in this report. All streamgages included for analysis have at least 10 years of continuous record collected before or through September 2013. This report is an update to two previously published reports that presented statistical summaries of selected Iowa streamflow data through September 1988 and September 1996. The statistical summaries include (1) monthly and annual flow durations, (2) annual exceedance probabilities of instantaneous peak discharges (flood frequencies), (3) annual exceedance probabilities of high discharges, and (4) annual nonexceedance probabilities of low discharges and seasonal low discharges. Also presented for each streamgage are graphs of the annual mean discharges, mean annual mean discharges, 50-percent annual flow-duration discharges (median flows), harmonic mean flows, mean daily mean discharges, and flow-duration curves. Two sets of statistical summaries are presented for each streamgage, which include (1) long-term statistics for the entire period of streamflow record and (2) recent-term statistics for or during the 30-year period of record from 1984 to 2013. The recent-term statistics are only calculated for streamgages with streamflow records pre-dating the 1984 water year and with at least 10 years of record during 1984–2013. The streamflow statistics in this report are not adjusted for the effects of water use; although some of this water is used consumptively, most of it is returned to the streams.

  1. Advanced natural laminar flow airfoil with high lift to drag ratio

    NASA Technical Reports Server (NTRS)

    Viken, Jeffrey K.; Pfenninger, Werner; Mcghee, Robert J.

    1986-01-01

An experimental verification of a high performance natural laminar flow (NLF) airfoil for low speed and high Reynolds number applications was completed in the Langley Low Turbulence Pressure Tunnel (LTPT). Theoretical development allowed for the achievement of 0.70 chord laminar flow on both surfaces by the use of accelerated flow as long as tunnel turbulence did not cause upstream movement of transition with increasing chord Reynolds number. With such a rearward pressure recovery, a concave-type deceleration was implemented. Two-dimensional theoretical analysis indicated that a minimum profile drag coefficient of 0.0026 was possible with the desired laminar flow at the design condition. With the three-foot chord two-dimensional model constructed for the LTPT experiment, a minimum profile drag coefficient of 0.0027 was measured at c sub l = 0.41 and Re sub c = 10 x 10 to the 6th power. The low drag bucket was shifted over a considerably large c sub l range by the use of the 12.5 percent chord trailing edge flap. A two-dimensional lift-to-drag ratio (L/D) of 245 was achieved. Surprisingly high c sub l max values were obtained for an airfoil of this type. A 0.20 chord split flap with 60 deg deflection was also implemented to verify the airfoil's lift capabilities. A maximum lift coefficient of 2.70 was attained at Reynolds numbers of 3 and 6 million.

  2. Flow regime alterations under changing climate in two river basins: Implications for freshwater ecosystems

    USGS Publications Warehouse

    Gibson, C.A.; Meyer, J.L.; Poff, N.L.; Hay, L.E.; Georgakakos, A.

    2005-01-01

    We examined impacts of future climate scenarios on flow regimes and how predicted changes might affect river ecosystems. We examined two case studies: Cle Elum River, Washington, and Chattahoochee-Apalachicola River Basin, Georgia and Florida. These rivers had available downscaled global circulation model (GCM) data and allowed us to analyse the effects of future climate scenarios on rivers with (1) different hydrographs, (2) high future water demands, and (3) a river-floodplain system. We compared observed flow regimes to those predicted under future climate scenarios to describe the extent and type of changes predicted to occur. Daily stream flow under future climate scenarios was created by either statistically downscaling GCMs (Cle Elum) or creating a regression model between climatological parameters predicted from GCMs and stream flow (Chattahoochee-Apalachicola). Flow regimes were examined for changes from current conditions with respect to ecologically relevant features including the magnitude and timing of minimum and maximum flows. The Cle Elum's hydrograph under future climate scenarios showed a dramatic shift in the timing of peak flows and lower low flow of a longer duration. These changes could mean higher summer water temperatures, lower summer dissolved oxygen, and reduced survival of larval fishes. The Chattahoochee-Apalachicola basin is heavily impacted by dams and water withdrawals for human consumption; therefore, we made comparisons between pre-large dam conditions, current conditions, current conditions with future demand, and future climate scenarios with future demand to separate climate change effects and other anthropogenic impacts. Dam construction, future climate, and future demand decreased the flow variability of the river. In addition, minimum flows were lower under future climate scenarios. 
These changes could decrease the connectivity of the channel and the floodplain, decrease habitat availability, and potentially lower the ability of the river to assimilate wastewater treatment plant effluent. Our study illustrates the types of changes that river ecosystems might experience under future climates. Copyright © 2005 John Wiley & Sons, Ltd.

  3. A modeling approach to establish environmental flow threshold in ungauged semidiurnal tidal river

    NASA Astrophysics Data System (ADS)

    Akter, A.; Tanim, A. H.

    2018-03-01

Because flow-monitoring data are scarce in ungauged semidiurnal rivers, determining 'environmental flow' (EF) from its key component, 'minimum low flow', is difficult. For EF assessment, this study selected a reach immediately downstream of the Halda-Karnafuli confluence, a unique breeding ground for Indian carp fishes in Bangladesh. In an ungauged tidal river, establishing an EF threshold must contend with ecological paradigms that change with the periodic tides and with hydrologic alterations. This study describes a novel approach through a modeling framework comprising hydrological, hydrodynamic, and habitat-simulation models. EF establishment was conceptualized according to the hydrologic processes of an ungauged semidiurnal tidal regime in four steps. First, a hydrologic model was coupled with a hydrodynamic model to simulate flow, accounting for the effects of land-use change on streamflow, channel seepage losses, friction-dominated tidal decay, and the lack of long-term flow records. Second, to define hydraulic habitat features, a statistical analysis of the derived flow data was performed to identify habitat suitability. Third, hydraulic habitat features were investigated to observe ecological habitat behavior under the identified hydrologic alteration. Finally, a relationship between flow alteration and ecological response was established from the combined habitat suitability index. The resulting EF provides a set of low-flow indices for the desired regime, and the discharge at maximum Weighted Usable Area (WUA) was defined as the EF threshold for the selected reach. A suitable EF regime was obtained within the flow range 25-30.1 m3/s, i.e., around 10-12% of the mean annual runoff of 245 m3/s, and these findings fall within researchers' recommendations for minimum flow requirements. Additionally, tidal characteristics were observed to be the dominant process in the semidiurnal regime.
The model, validated against observations over the study period (2010-2015), can provide guidance for a decision support system (DSS) to maintain the EF range in an ungauged tidal river.
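
    The "discharge at maximum Weighted Usable Area" criterion can be sketched as a simple argmax over a discharge-WUA table from the habitat model (values below are hypothetical):

```python
def ef_threshold(discharge_wua):
    """Return the discharge whose Weighted Usable Area (WUA) is largest.

    discharge_wua: iterable of (discharge_m3s, wua_m2) pairs produced
    by a habitat-simulation model.
    """
    best_discharge, _ = max(discharge_wua, key=lambda pair: pair[1])
    return best_discharge
```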

  4. Generating a Simulated Fluid Flow over a Surface Using Anisotropic Diffusion

    NASA Technical Reports Server (NTRS)

    Rodriguez, David L. (Inventor); Sturdza, Peter (Inventor)

    2016-01-01

A fluid-flow simulation over a computer-generated surface is generated using a diffusion technique. The surface is comprised of a surface mesh of polygons. A boundary-layer fluid property is obtained for a subset of the polygons of the surface mesh. A gradient vector is determined for a selected polygon, the selected polygon belonging to the surface mesh but not one of the subset of polygons. Maximum and minimum diffusion rates are determined along directions derived from the gradient vector corresponding to the selected polygon. A diffusion-path vector is defined between a point in the selected polygon and a neighboring point in a neighboring polygon. An updated fluid property is determined for the selected polygon using a variable diffusion rate, the variable diffusion rate based on the minimum diffusion rate, maximum diffusion rate, and the gradient vector.
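
    One way to realize a direction-dependent rate between the minimum and maximum values is to blend them by the angle between the diffusion-path vector and the gradient; the cosine-squared blend below is an illustrative assumption, not the patented formula:

```python
import math

def variable_diffusion_rate(path_vec, grad_vec, rate_min, rate_max):
    """Blend min/max diffusion rates by direction (assumed form).

    Here diffusion is slowest along the gradient (cos^2 = 1) and
    fastest across it (cos^2 = 0).
    """
    dot = sum(p * g for p, g in zip(path_vec, grad_vec))
    norm = math.hypot(*path_vec) * math.hypot(*grad_vec)
    cos_sq = (dot / norm) ** 2 if norm else 0.0
    return rate_max - (rate_max - rate_min) * cos_sq
```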

  5. Variation principle in calculating the flow of a two-phase mixture in the pipes of the cooling systems in high-rise buildings

    NASA Astrophysics Data System (ADS)

    Aksenov, Andrey; Malysheva, Anna

    2018-03-01

This paper gives an analytical solution to one of the pressing problems of modern hydromechanics and heat engineering: the distribution of gas and liquid phases over the channel cross-section, the thickness of the annular layer, and their relation to the mass content of the gas phase in a gas-liquid flow. The analytical method is based on fundamental principles of theoretical mechanics and thermophysics: the minimum of energy dissipation and the minimum rate of entropy increase, which determine the stability of stationary states and processes. The obtained dependencies disclose the physical laws of the motion of two-phase media and can be used in hydraulic calculations during the design and operation of refrigeration and air conditioning systems.

  6. Styrene recovery from polystyrene by flash pyrolysis in a conical spouted bed reactor.

    PubMed

    Artetxe, Maite; Lopez, Gartzen; Amutio, Maider; Barbarias, Itsaso; Arregi, Aitor; Aguado, Roberto; Bilbao, Javier; Olazar, Martin

    2015-11-01

    Continuous pyrolysis of polystyrene has been studied in a conical spouted bed reactor with the main aim of enhancing styrene monomer recovery. Thermal degradation in a thermogravimetric analyser was conducted as a preliminary study in order to apply this information in the pyrolysis in the conical spouted bed reactor. The effects of temperature and gas flow rate in the conical spouted bed reactor on product yield and composition have been determined in the 450-600°C range by using a spouting velocity from 1.25 to 3.5 times the minimum one. Styrene yield is strongly influenced by both temperature and gas flow rate, with the maximum yield being 70.6 wt% at 500°C and a gas velocity twice the minimum one. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Minimum data requirement for neural networks based on power spectral density analysis.

    PubMed

    Deng, Jiamei; Maass, Bastian; Stobart, Richard

    2012-04-01

One of the most critical challenges ahead for diesel engines is to identify new techniques for fuel economy improvement without compromising emissions regulations. One technique is the precise control of air/fuel ratio, which requires the measurement of instantaneous fuel consumption. Measurement accuracy and repeatability for fuel flow rate are key to successfully controlling the air/fuel ratio and measuring fuel consumption in real time. The volumetric and gravimetric measurement principles are well-known methods for measurement of fuel consumption in internal combustion engines. However, the fuel flow rate measured by these methods is not suitable for either real-time control or real-time measurement purposes because of the intermittent nature of the measurements. This paper describes a technique that can be used to find the minimum data [consisting of data from just 2.5% of the non-road transient cycle (NRTC)] to solve the problem concerning discontinuous data of fuel flow rate measured using an AVL 733S fuel meter for a medium or heavy-duty diesel engine using neural networks. Only torque and speed are used as the input parameters for the fuel flow rate prediction. Power spectral density analysis is used to find the minimum amount of the data. The results show that the nonlinear autoregressive model with exogenous inputs could predict the fuel flow rate successfully with R(2) above 0.96 using 2.5% NRTC data with only torque and speed as inputs.

  8. U S Navy Diving Manual. Volume 2. Mixed-Gas Diving. Revision 1.

    DTIC Science & Technology

    1981-07-01

[Abstract text garbled during extraction; two text columns were interleaved. Recoverable fragments describe important aspects of underwater physics and physiology: an absorbent soaked in a caustic-potash solution that absorbs carbon dioxide; a circuit whose volume between the diver's breathing passages must be kept to a minimum to preclude deadspace and limit caustic fumes; absorbents that react strongly with water to produce caustic fumes and so cannot be used in UBAs; and space around the absorbent bed that reduces the gas flow distance.]

  9. Automated design of minimum drag light aircraft fuselages and nacelles

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Fox, S. R.; Karlin, B. E.

    1982-01-01

    The constrained minimization algorithm of Vanderplaats is applied to the problem of designing minimum drag faired bodies such as fuselages and nacelles. Body drag is computed by a variation of the Hess-Smith code. This variation includes a boundary layer computation. The encased payload provides arbitrary geometric constraints, specified a priori by the designer, below which the fairing cannot shrink. The optimization may include engine cooling air flows entering and exhausting through specific port locations on the body.

  10. Effects of sporadic E-layer characteristics on spread-F generation in the nighttime ionosphere near a northern equatorial anomaly crest during solar minimum

    NASA Astrophysics Data System (ADS)

    Lee, C. C.; Chen, W. S.

    2015-06-01

This study examines how the characteristics of the sporadic E-layer (Es-layer) affect the generation of spread-F in the nighttime ionosphere near the crest of the equatorial ionization anomaly during solar minimum. The data of Es-layer parameters and spread-F are obtained from the Chungli ionograms of 1996. The Es-layer parameters include foEs (critical frequency of Es-layer), fbEs (blanketing frequency of Es-layer), and Δf (≡foEs-fbEs). Results show that the nighttime variations of foEs and fbEs medians (Δf medians) are different from (similar to) that of the occurrence probabilities of spread-F. Because the total number of Es-layer events is greater than that of spread-F events, a comparison between the medians of Es-layer parameters and the occurrence probabilities of spread-F may be inadequate. We therefore categorize the Es-layer and spread-F events into each frequency interval of the Es-layer parameters. For the occurrence probabilities of spread-F versus foEs, an increasing trend is found in post-midnight of all three seasons. The increasing trend also exists in pre-midnight of the J-months and in post-midnight of all seasons, for the occurrence probabilities of spread-F versus Δf. These demonstrate that the spread-F occurrence increases with increasing foEs and/or Δf. Moreover, the increasing trends indicate that polarization electric fields generated in the Es-layer assist in producing spread-F, through the electrodynamical coupling of the Es-layer and F-region. Regarding the occurrence probabilities of spread-F versus fbEs, a significant trend only appears in post-midnight of the E-months. This implies that fbEs might not be a major factor for spread-F formation.
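
    Binning spread-F occurrence against an Es-layer parameter, as done above, amounts to a conditional frequency per interval; a minimal sketch with hypothetical values:

```python
def occurrence_probability(param_values, event_flags, edges):
    """P(event | parameter in [lo, hi)) for each consecutive bin.

    param_values: e.g. foEs readings; event_flags: 1 if spread-F was
    observed for that reading, else 0; edges: bin boundaries.
    """
    probs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = [f for v, f in zip(param_values, event_flags) if lo <= v < hi]
        probs.append(sum(in_bin) / len(in_bin) if in_bin else None)
    return probs
```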

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herberger, Sarah M.; Boring, Ronald L.

Abstract Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors, or success increases the likelihood of subsequent success. Currently, the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' Law and addresses a continuous range of dependence. Methods: Under the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount; maximum negative dependence is the smallest amount by which two events can overlap. When the minimum possible overlap of two events is less than the overlap under independence, negative dependence can occur. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B: the initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0, and when the second event is smaller than the first, the maximum dependence is less than 1, as defined by Bayes' Law. Accordingly, alternative dependence equations are provided, along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events.
Conclusions: THERP dependence has been used ubiquitously for decades, and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
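
    The maximum and maximum-negative dependence described above follow from the Fréchet bounds on a joint probability; a minimal sketch:

```python
def joint_probability_bounds(p_a, p_b):
    """Frechet bounds on P(A and B) given the marginals.

    max_joint: largest possible overlap (maximum positive dependence);
    min_joint: smallest possible overlap (maximum negative dependence);
    independence sits in between at p_a * p_b.
    """
    max_joint = min(p_a, p_b)
    min_joint = max(0.0, p_a + p_b - 1.0)
    return min_joint, max_joint
```

    For rare human failure events (small p_a and p_b) the minimum overlap is 0, and when P(B) < P(A) the largest achievable conditional probability P(B|A) = min(p_a, p_b) / p_a is below 1, matching the paper's observations.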

  12. Geochemical variations during development of the 5.46 Ma Broadwell Mesa basaltic volcanic field, California

    USGS Publications Warehouse

    Buesch, David C.

    2017-01-01

    The 5.46±0.04 Ma Broadwell Mesa basalt and associated basaltic volcanic field in the western Bristol Mountains, California, formed a ~6 km2 volcanic flow field with architecture including numerous lava flows, a ~1.1 km2 lava lake, and a ~0.17 km2 cinder cone. The local number of lava flows varies from one along the margins of the field to as many as 18 that are stacked vertically, onlapped by younger flows, or are laterally adjacent to each other. Geochemical plots of 40 hand samples indicate that all lava flows are basalt and that the field is slightly compositionally zoned. Typically, there is a progressive change in composition in sequentially overlying lava flows, although in some flow sequences, the overlying flow has an “across trend” step in composition, and a few have an “against trend” step in composition. The progressive compositional change indicates that the magmatic composition evolved during the history of the field, and the “across trend” and minor “against trend” steps probably represent periods of crystal fractionation or reinjection of magma during hiatuses in eruptions. The lack of clastic sedimentary rocks or even aeolianite interstratified with the lava flows probably indicates that the Broadwell Mesa volcanic field was short-lived.

  13. Emergency assessment of post-fire debris-flow hazards for the 2013 Springs Fire, Ventura County, California

    USGS Publications Warehouse

    Staley, Dennis M.

    2014-01-01

Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can produce dangerous flash floods and debris flows. In this report, empirical models are used to predict the probability and magnitude of debris-flow occurrence in response to a 10-year rainstorm for the 2013 Springs fire in Ventura County, California. Overall, the models predict a relatively high probability (60–80 percent) of debris flow for 9 of the 99 drainage basins in the burn area in response to a 10-year recurrence interval design storm. Predictions of debris-flow volume suggest that debris flows may entrain a significant volume of material, with 28 of the 99 basins identified as having potential debris-flow volumes greater than 10,000 cubic meters. The results of the relative combined hazard analysis suggest a moderate likelihood of significant debris-flow hazard within and downstream of the burn area for nearby populations, infrastructure, wildlife, and water resources. Given these findings, we recommend that residents, emergency managers, and public works departments pay close attention to weather forecasts and National Weather Service-issued Debris Flow and Flash Flood Outlooks, Watches, and Warnings, and that residents adhere to any evacuation orders.
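
    Empirical post-fire debris-flow likelihood models of this kind are logistic regressions; the generic form is sketched below with hypothetical coefficients (operational predictors involve burn severity, terrain, soils, and rainfall intensity, with coefficients taken from the published fits):

```python
import math

def debris_flow_probability(intercept, coefficients, predictors):
    """Logistic model: P = 1 / (1 + exp(-x)), x = b0 + sum(b_i * x_i)."""
    x = intercept + sum(b * v for b, v in zip(coefficients, predictors))
    return 1.0 / (1.0 + math.exp(-x))
```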

  14. On the radiobiological impact of metal artifacts in head-and-neck IMRT in terms of tumor control probability (TCP) and normal tissue complication probability (NTCP).

    PubMed

    Kim, Yusung; Tomé, Wolfgang A

    2007-11-01

To investigate the effects of distorted head-and-neck (H&N) intensity-modulated radiation therapy (IMRT) dose distributions (hot and cold spots) on normal tissue complication probability (NTCP) and tumor control probability (TCP) due to dental-metal artifacts. Five patients' IMRT treatment plans have been analyzed, employing five different planning image data-sets: (a) uncorrected (UC); (b) homogeneous uncorrected (HUC); (c) sinogram completion corrected (SCC); (d) minimum-value-corrected (MVC); and (e) streak-artifact-reduction including minimum-value-correction (SAR-MVC), which has been taken as the reference data-set. The effects on NTCP and TCP were evaluated using the Lyman-NTCP model and the Logistic-TCP model, respectively. When compared to the predicted NTCP obtained using the reference data-set, the treatment plan based on the original CT data-set (UC) yielded increases in NTCP of 3.2% and 2.0% for the spared parotid gland and the spinal cord, respectively, whereas the treatment plans based on the MVC CT data-set increased NTCP by 1.1% and 0.1% for the spared parotid glands and the spinal cord, respectively. In addition, the MVC correction method showed a reduction in TCP for target volumes (MVC: delta TCP = -0.6% vs. UC: delta TCP = -1.9%) with respect to that of the reference CT data-set. Our results indicate that the presence of dental-metal artifacts in H&N planning CT data-sets has an impact on the estimates of TCP and NTCP. In particular, dental-metal artifacts lead to an increase in NTCP for the spared parotid glands and a slight decrease in TCP for target volumes.
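
    For a uniformly irradiated volume, the Lyman NTCP model used here (and in entry 20 below) reduces to a probit curve in dose; a minimal sketch, with parameter values to be taken from fits such as those reported:

```python
import math

def lyman_ntcp(dose_gy, td50_gy, m):
    """Lyman model for uniform dose: NTCP = Phi(t), where
    t = (D - TD50) / (m * TD50) and Phi is the standard normal CDF."""
    t = (dose_gy - td50_gy) / (m * td50_gy)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

    By construction the model returns 0.5 at D = TD50; the slope parameter m controls how steeply the complication probability rises with dose.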

  15. Flow Regime Based Climatologies of Lightning Probabilities for Spaceports and Airports

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Volmer, Matthew; Sharp, David; Spratt, Scott; Lafosse, Richard A.

    2007-01-01

Objective: provide forecasters with a "first guess" climatological lightning probability tool. The tool focuses on Space Shuttle landings and NWS TAFs, with four circles around each site (5-, 10-, 20-, and 30-n mi radii) and three time intervals (hourly, every 3 hr, and every 6 hr). It is based on NLDN gridded data, flow regime, and the warm-season months of May-September for the years 1989-2004. The gridded data and available code yield squares, not circles. Over 850 spreadsheets were converted into a manageable, user-friendly web-based GUI.

  16. Cross-stream migration of active particles

    NASA Astrophysics Data System (ADS)

    Uspal, William; Katuri, Jaideep; Simmchen, Juliane; Miguel-Lopez, Albert; Sanchez, Samuel

    For natural microswimmers, the interplay of swimming activity and external flow can promote robust directed motion, e.g. propulsion against (upstream rheotaxis) or perpendicular to the direction of flow. These effects are generally attributed to their complex body shapes and flagellar beat patterns. Here, using catalytic Janus particles as a model system, we report on a strong directional response that naturally emerges for spherical active particles in a channel flow. The particles align their propulsion axis to be perpendicular to both the direction of flow and the normal vector of a nearby bounding surface. We develop a deterministic theoretical model that captures this spontaneous transverse orientational order. We show how the directional response emerges from the interplay of external shear flow and swimmer/surface interactions (e.g., hydrodynamic interactions) that originate in swimming activity. Finally, adding the effect of thermal noise, we obtain probability distributions for the swimmer orientation that show good agreement with the experimental probability distributions. Our findings show that the qualitative response of microswimmers to flow is sensitive to the detailed interaction between individual microswimmers and bounding surfaces.

  17. Evidence of climate change impact on stream low flow from the tropical mountain rainforest watershed in Hainan Island, China

    Treesearch

    Z. Zhou; Y. Ouyang; Z. Qiu; G. Zhou; M. Lin; Y. Li

    2017-01-01

    Stream low flow estimates are central to assessing climate change impact, water resource management, and ecosystem restoration. This study investigated the impacts of climate change upon stream low flows from a rainforest watershed in Jianfengling (JFL) Mountain, Hainan Island, China, using the low flow selection method as well as the frequency and probability analysis...

  18. Nature of Fluctuations on Directional Discontinuities Inside a Solar Ejection: Wind and IMP 8 Observations

    NASA Technical Reports Server (NTRS)

    Vasquez, Bernard J.; Farrugia, Charles J.; Markovskii, Sergei A.; Hollweg, Joseph V.; Richardson, Ian G.; Ogilvie, Keith W.; Lepping, Ronald P.; Lin, Robert P.; Larson, Davin; White, Nicholas E. (Technical Monitor)

    2001-01-01

    A solar ejection passed the Wind spacecraft between December 23 and 26, 1996. On closer examination, we find a sequence of ejecta material, as identified by abnormally low proton temperatures, separated by plasmas with typical solar wind temperatures at 1 AU. Large and abrupt changes in field and plasma properties occurred near the separation boundaries of these regions. At the one boundary we examine here, a series of directional discontinuities was observed. We argue that Alfvenic fluctuations in the immediate vicinity of these discontinuities distort minimum variance normals, introducing uncertainty into the identification of the discontinuities as either rotational or tangential. Carrying out a series of tests on plasma and field data including minimum variance, velocity and magnetic field correlations, and jump conditions, we conclude that the discontinuities are tangential. Furthermore, we find waves superposed on these tangential discontinuities (TDs). The presence of discontinuities allows the existence of both surface waves and ducted body waves. Both probably form in the solar atmosphere where many transverse nonuniformities exist and where theoretically they have been expected. We add to prior speculation that waves on discontinuities may in fact be a common occurrence. In the solar wind, these waves can attain large amplitudes and low frequencies. We argue that such waves can generate dynamical changes at TDs through advection or forced reconnection. The dynamics might so extensively alter the internal structure that the discontinuity would no longer be identified as tangential. Such processes could help explain why the occurrence frequency of TDs observed throughout the solar wind falls off with increasing heliocentric distance. The presence of waves may also alter the nature of the interactions of TDs with the Earth's bow shock in so-called hot flow anomalies.

  19. Adaptive Conditioning of Multiple-Point Geostatistical Facies Simulation to Flow Data with Facies Probability Maps

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, M.; Jafarpour, B.

    2013-12-01

Characterization of complex geologic patterns that create preferential flow paths in certain reservoir systems requires higher-order geostatistical modeling techniques. Multipoint statistics (MPS) provides a flexible grid-based approach for simulating such complex geologic patterns from a conceptual prior model known as a training image (TI). In this approach, a stationary TI that encodes the higher-order spatial statistics of the expected geologic patterns is used to represent the shape and connectivity of the underlying lithofacies. While MPS is quite powerful for describing complex geologic facies connectivity, the nonlinear and complex relation between the flow data and facies distribution makes flow data conditioning quite challenging. We propose an adaptive technique for conditioning facies simulation from a prior TI to nonlinear flow data. Non-adaptive strategies for conditioning facies simulation to flow data can involve many forward flow model solutions that are computationally very demanding. To improve the conditioning efficiency, we develop an adaptive sampling approach through a data feedback mechanism based on the sampling history. In this approach, after a short sampling burn-in period during which unconditional samples are generated and passed through an acceptance/rejection test, an ensemble of accepted samples is identified and used to generate a facies probability map. This facies probability map contains the common features of the accepted samples and provides conditioning information about facies occurrence in each grid block, which is used to guide the conditional facies simulation process. As the sampling progresses, the initial probability map is updated according to the collective information about the facies distribution in the chain of accepted samples to increase the acceptance rate and efficiency of the conditioning.
This conditioning process can be viewed as an optimization approach where each new sample is proposed based on the sampling history to improve the data mismatch objective function. We extend the application of this adaptive conditioning approach to the case where multiple training images are proposed to describe the geologic scenario in a given formation. We discuss the advantages and limitations of the proposed adaptive conditioning scheme and use numerical experiments from fluvial channel formations to demonstrate its applicability and performance compared to non-adaptive conditioning techniques.
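
    Read simply, the facies probability map described above is the cell-wise frequency of a facies across the accepted realizations; a minimal sketch with binary (channel / non-channel) realizations:

```python
import numpy as np

def facies_probability_map(accepted_realizations):
    """Cell-wise fraction of accepted samples showing channel facies.

    accepted_realizations: list of equally shaped 0/1 arrays, one per
    accepted sample in the conditioning chain.
    """
    stack = np.stack(accepted_realizations, axis=0)
    return stack.mean(axis=0)
```

    The resulting map can then bias the next conditional MPS simulation toward facies configurations that have already matched the flow data.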

  20. Effect of Cisplatin on Parotid Gland Function in Concomitant Radiochemotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hey, Jeremias; Setz, Juergen; Gerlach, Reinhard

    2009-12-01

Purpose: To determine the influence of concomitant radiochemotherapy with cisplatin on parotid gland tissue complication probability. Methods and Materials: Patients treated with either radiotherapy (n = 61) or concomitant radiochemotherapy with cisplatin (n = 36) for head-and-neck cancer were prospectively evaluated. The dose and volume distributions of the parotid glands were noted in dose-volume histograms. Stimulated salivary flow rates were measured before, during the 2nd and 6th weeks, and at 4 weeks and 6 months after the treatment. The data were fit using the normal tissue complication probability model of Lyman. Complication was defined as a reduction of the salivary flow rate to less than 25% of the pretreatment flow rate. Results: The normal tissue complication probability model parameter TD50 (the dose leading to a complication probability of 50%) was found to be 32.2 Gy at 4 weeks and 32.1 Gy at 6 months for concomitant radiochemotherapy, and 41.1 Gy at 4 weeks and 39.6 Gy at 6 months for radiotherapy. The tolerated dose for concomitant radiochemotherapy was at least 7 to 8 Gy lower than for radiotherapy alone at TD50. Conclusions: In this study, concomitant radiochemotherapy tended to cause a higher probability of parotid gland tissue damage. Advanced radiotherapy planning approaches such as intensity-modulated radiotherapy may be particularly important for parotid sparing in radiochemotherapy because of cisplatin-related increased radiosensitivity of the glands.
