NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne
2016-11-01
Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.
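The fine/coarse transfer at the heart of CGP can be illustrated with a minimal 1D sketch. This is not the authors' implementation; the grid sizes, the injection/interpolation mapping functions, and the stand-in divergence field are all illustrative assumptions:

```python
import numpy as np

def restrict(f):
    """Map a fine-grid field (2n+1 points) to a coarse grid (n+1 points) by injection."""
    return f[::2]

def prolong(c):
    """Map a coarse-grid field back to the fine grid by linear interpolation."""
    f = np.empty(2 * len(c) - 1)
    f[::2] = c
    f[1::2] = 0.5 * (c[:-1] + c[1:])
    return f

def solve_poisson_coarse(rhs, h):
    """Solve -u'' = rhs on the coarse grid with u = 0 at both ends (dense solve for clarity)."""
    n = len(rhs) - 2  # interior unknowns
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.zeros_like(rhs)
    u[1:-1] = np.linalg.solve(A, rhs[1:-1])
    return u

# Fine grid with 17 points, coarse grid with 9 points (one level of coarsening).
x_fine = np.linspace(0.0, 1.0, 17)
divergence_fine = np.sin(np.pi * x_fine)  # stand-in for the fine-grid divergence field
rhs_coarse = restrict(divergence_fine)    # fine -> coarse transfer
h_coarse = x_fine[2] - x_fine[0]
p_coarse = solve_poisson_coarse(rhs_coarse, h_coarse)
p_fine = prolong(p_coarse)                # coarse -> fine transfer of the correction
```

In an actual CGP cycle the prolonged field would enter the fine-grid velocity correction; here the point is only that the elliptic solve is performed on far fewer unknowns than the fine grid carries.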
Armour, Carl L.; Taylor, Jonathan G.
1991-01-01
This paper summarizes results of a survey conducted in 1988 of 57 U.S. Fish and Wildlife Service field offices. The purpose was to document opinions of biologists experienced in applying the Instream Flow Incremental Methodology (IFIM). Responses were received from 35 offices where 616 IFIM applications were reported. The existence of six monitoring studies designed to evaluate the adequacy of flows provided at sites was confirmed. The two principal categories reported as stumbling blocks to the successful application of IFIM were beliefs that the methodology is technically too simplistic or that it is too complex to apply. Recommendations receiving the highest scores for future initiatives to enhance IFIM use were (1) training and workshops for field biologists; and (2) improving suitability index (SI) curves and computer models, and evaluating the relationship of weighted usable area (WUA) to fish responses. The authors concur that emphasis for research should be on addressing technical concerns about SI curves and WUA.
Use of the instream flow incremental methodology: a tool for negotiation
Cavendish, Mary G.; Duncan, Margaret I.
1986-01-01
The resolution of conflicts arising from differing values and water uses requires technical information and negotiating skills. This article outlines the Instream Flow Incremental Methodology (IFIM), developed by the US Fish and Wildlife Service, and demonstrates that its use to quantify flows necessary to protect desired instream values aids negotiation by illustrating areas of agreement and possible compromises between conflicting water interests. Pursuant to a Section 404 permit application to the US Army Corps of Engineers made by City Utilities of Springfield, Missouri, in 1978, IFIM provided the means by which City Utilities, concerned with a secure water supply for a growing population, and those advocating instream values were satisfied that their requirements were met. In tracing the 15-month process, the authors conclude that the application of IFIM and the cooperative stance adopted by the parties involved were the key ingredients of the successful permit application.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
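The 'delta' form described here can be sketched generically: rather than solving A x = b in one shot, one repeatedly solves M Δx = b − A x for an increment, where M is an easily inverted approximation of A. The sketch below is a hypothetical numpy illustration using a Jacobi-like diagonal splitting, not the spatially split approximate-factorization operator of the paper:

```python
import numpy as np

def solve_delta_form(A, b, num_iters=200):
    """Iteratively solve A x = b in incremental ('delta') form:
    M dx = b - A x, then x += dx, with M an easily inverted
    approximation of A (here its diagonal, a Jacobi-like splitting)."""
    M_inv = 1.0 / np.diag(A)          # approximate operator: diag(A)
    x = np.zeros_like(b)
    for _ in range(num_iters):
        residual = b - A @ x          # right-hand side of the delta form
        dx = M_inv * residual         # cheap approximate solve for the increment
        x = x + dx
        if np.linalg.norm(residual) < 1e-12:
            break
    return x

# Diagonally dominant test system (stands in for a sensitivity-equation matrix).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20)) + 30.0 * np.eye(20)
b = rng.standard_normal(20)
x = solve_delta_form(A, b)
```

The attraction of the delta form is that any convergent choice of M drives the residual of the exact system to zero, so the converged x is independent of the approximation used for M.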
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form), together with the well-known spatially split approximate-factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
2006-12-01
92–101. Bovee, K. D. 1982. A guide to stream habitat analysis using the instream flow incremental methodology. Instream Flow Information Paper No...Thames. 1991. Hydrology and the management of watersheds. Iowa State University Press, Ames, IA. Brown, J. K. 1974. Handbook for inventorying downed...woody material. General Technical Report INT-16, U.S. Department of Agriculture, Forest Service. Brown, J. K., R. D. Oberheu, and C. M. Johnston
An incremental strategy for calculating consistent discrete CFD sensitivity derivatives
NASA Technical Reports Server (NTRS)
Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.
1992-01-01
In this preliminary study involving advanced computational fluid dynamic (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
A novel methodology for in-process monitoring of flow forming
NASA Astrophysics Data System (ADS)
Appleby, Andrew; Conway, Alastair; Ion, William
2017-10-01
Flow forming (FF) is an incremental cold working process with near-net-shape forming capability. Failures by fracture due to high deformation can be unexpected and sometimes catastrophic, causing tool damage. If process failures can be identified in real time, an automatic cut-out could prevent costly tool damage. Sound and vibration monitoring is well established and commercially viable in the machining sector to detect current and incipient process failures, but not for FF. A broad-frequency microphone was used to record the sound signature of the manufacturing cycle for a series of FF parts. Parts were flow formed using single and multiple passes, and flaws were introduced into some of the parts to simulate the presence of spontaneously initiated cracks. The results show that this methodology is capable of identifying both introduced defects and spontaneous failures during flow forming. Further investigation is needed to categorise and identify different modes of failure and identify further potential applications in rotary forming.
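As a rough illustration of the kind of in-process monitoring logic described, the sketch below flags windows of a synthetic sound recording whose short-time RMS energy jumps well above baseline. The window length, threshold factor, and injected transient are all hypothetical, and real acoustic failure detection is considerably more involved:

```python
import numpy as np

def detect_events(signal, window, threshold_factor=4.0):
    """Flag windows whose RMS energy exceeds threshold_factor times the
    median window RMS -- a crude stand-in for crack/failure detection."""
    n = len(signal) // window
    rms = np.sqrt(np.mean(signal[: n * window].reshape(n, window) ** 2, axis=1))
    threshold = threshold_factor * np.median(rms)
    return np.flatnonzero(rms > threshold)  # indices of suspect windows

rng = np.random.default_rng(1)
sound = 0.1 * rng.standard_normal(10_000)  # baseline forming noise
sound[6_000:6_050] += 3.0                  # injected transient, e.g. a simulated crack
events = detect_events(sound, window=100)
```

In a production cut-out system the detection would run on streaming audio and trigger a machine stop, but the threshold-on-energy idea is the same.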
Instream Flows Incremental Methodology: Kootenai River, Montana: Final Report 1990-2000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Greg; Skaar, Don; Dalbey, Steve
2002-11-01
Regulated rivers such as the Kootenai River below Libby Dam often exhibit hydrographs and water fluctuation levels that are atypical when compared to non-regulated rivers. These flow regimes often differ from the conditions with which native fish species evolved, and can be important limiting factors in some systems. Fluctuating discharge levels can change the quantity and quality of aquatic habitat for fish. The instream flow incremental methodology (IFIM) is a tool that can help water managers evaluate different discharges in terms of their effects on available habitat for a particular fish species. The U.S. Fish and Wildlife Service developed the IFIM (Bovee 1982) to quantify changes in aquatic habitat with changes in instream flow (Waite and Barnhart 1992; Baldridge and Amos 1981; Gore and Judy 1981; Irvine et al. 1987). IFIM modeling uses hydraulic computer models to relate changes in discharge to changes in physical parameters such as water depth, current velocity, and substrate particle size within the aquatic environment. Habitat utilization curves are developed to describe the physical habitat most needed, preferred, or tolerated by a selected species at various life stages (Bovee and Cochnauer 1977; Raleigh et al. 1984). Through the use of physical habitat simulation computer models, hydraulic and physical variables are simulated for differing flows, and the amount of usable habitat is predicted for the selected species and life stages. The Kootenai River IFIM project was first initiated in 1990, with the collection of habitat utilization and physical hydraulic data through 1996. The physical habitat simulation computer modeling was completed from 1996 through 2000 with assistance from Thomas Payne and Associates. This report summarizes the results of these efforts.
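The core IFIM habitat calculation, weighted usable area (WUA) summed over hydraulic cells, can be sketched as follows. The suitability-curve breakpoints, the product composition rule, and the toy cell data are illustrative assumptions, not values from this study:

```python
import numpy as np

def suitability(value, points, scores):
    """Piecewise-linear suitability index (SI) curve, scaled 0..1."""
    return np.interp(value, points, scores)

def weighted_usable_area(depths, velocities, cell_areas):
    """WUA = sum over cells of cell area x composite suitability
    (here the product of depth and velocity SIs, one common choice)."""
    si_d = suitability(depths, [0.0, 0.3, 1.0, 2.0], [0.0, 0.8, 1.0, 0.2])
    si_v = suitability(velocities, [0.0, 0.2, 0.6, 1.5], [0.1, 1.0, 0.8, 0.0])
    return float(np.sum(cell_areas * si_d * si_v))

# Toy reach of 5 cells at two discharges: higher flow -> deeper, faster cells.
areas = np.array([10.0, 12.0, 8.0, 15.0, 9.0])  # m^2
low = weighted_usable_area(np.array([0.2, 0.3, 0.4, 0.3, 0.2]),
                           np.array([0.1, 0.2, 0.3, 0.2, 0.1]), areas)
high = weighted_usable_area(np.array([0.8, 1.0, 1.2, 1.0, 0.9]),
                            np.array([0.5, 0.7, 0.9, 0.7, 0.6]), areas)
```

Repeating this calculation across a range of simulated discharges produces the WUA-versus-flow relationship that IFIM studies hand to water managers.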
Integrated Aero-Propulsion CFD Methodology for the Hyper-X Flight Experiment
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.; Engelund, Walter C.; Bittner, Robert D.; Dilley, Arthur D.; Jentink, Tom N.; Frendi, Abdelkader
2000-01-01
Computational fluid dynamics (CFD) tools have been used extensively in the analysis and development of the X-43A Hyper-X Research Vehicle (HXRV). A significant element of this analysis is the prediction of integrated vehicle aero-propulsive performance, which includes an integration of aerodynamic and propulsion flow fields. This paper describes analysis tools used and the methodology for obtaining pre-flight predictions of longitudinal performance increments. The use of higher-fidelity methods to examine flow-field characteristics and scramjet flowpath component performance is also discussed. Limited comparisons with available ground test data are shown to illustrate the approach used to calibrate methods and assess solution accuracy. Inviscid calculations to evaluate lateral-directional stability characteristics are discussed. The methodology behind 3D tip-to-tail calculations is described and the impact of 3D exhaust plume expansion in the afterbody region is illustrated. Finally, future technology development needs in the area of hypersonic propulsion-airframe integration analysis are discussed.
Stream habitat analysis using the instream flow incremental methodology
Bovee, Ken D.; Lamb, Berton L.; Bartholow, John M.; Stalnaker, Clair B.; Taylor, Jonathan; Henriksen, Jim
1998-01-01
This document describes the Instream Flow Incremental Methodology (IFIM) in its entirety. It is intended to serve as a comprehensive introductory textbook on IFIM for training courses, as it contains the most complete and comprehensive description of IFIM in existence today. It should also serve as the official published guide to IFIM, counteracting the misconceptions about the methodology that have pervaded the professional literature since the mid-1980s by describing IFIM as its developers envision it. The document is aimed both at decisionmakers involved in the management and allocation of natural resources, for whom it provides an overview, and at those who design and implement studies to inform the decisionmakers. It provides enough background on model concepts, data requirements, calibration techniques, and quality assurance to help the technical user design and implement a cost-effective application of IFIM that will provide policy-relevant information. Chapters cover the basic organization of IFIM and the procedural sequence of applying it, from problem identification through study planning and implementation to problem resolution.
QUANTIFICATION OF INSTREAM FLOW NEEDS OF A WILD AND SCENIC RIVER FOR WATER RIGHTS LITIGATION.
Garn, Herbert S.
1986-01-01
The lower 4 miles of the Red River, a tributary of the Rio Grande in northern New Mexico, was designated as one of the 'instant' components of the National Wild and Scenic River System in 1968. Instream flow requirements were determined by several methods to quantify the claims made by the United States for a federal reserved water right under the Wild and Scenic Rivers Act. The scenic (aesthetic), recreational, and fish and wildlife values are the purposes for which instream flow requirements were claimed. Since water quality is related to these values, instream flows for waste transport and protection of water quality were also included in the claim. The U.S. Fish and Wildlife Service's Instream Flow Incremental Methodology was used to quantify the relationship between various flow regimes and fish habitat. Study results are discussed.
A demonstration of the instream flow incremental methodology, Shenandoah River
Zappia, Humbert; Hayes, Donald C.
1998-01-01
Current and projected demands on the water resources of the Shenandoah River have increased concerns about the potential effect of these demands on the natural integrity of the Shenandoah River system. The Instream Flow Incremental Methodology (IFIM) process attempts to integrate concepts of water-supply planning, analytical hydraulic engineering models, and empirically derived habitat-versus-flow functions to address water-use and instream-flow issues and questions concerning life-stage-specific effects on selected species and the general well-being of aquatic biological populations. The demonstration project also sets the stage for the identification and compilation of the major instream-flow issues in the Shenandoah River Basin, development of the required multidisciplinary technical team to conduct more detailed studies, and development of basin-specific habitat and flow requirements for fish species, species assemblages, and various water uses in the Shenandoah River Basin. This report presents the results of an IFIM demonstration project, conducted on the main stem Shenandoah River in Virginia during 1996 and 1997, using the Physical Habitat Simulation System (PHABSIM) model. Output from PHABSIM is used to address the general flow requirements for water supply and recreation, and habitat for selected life stages of several fish species. The model output is only a small part of the information necessary for effective decision making and management of river resources. The information by itself is usually insufficient for formulation of recommendations regarding instream-flow requirements. Additional information, for example, can be obtained by analysis of habitat time-series data, habitat-duration data, and habitat bottlenecks. Alternative-flow analysis and habitat-duration curves are presented.
Use of Instream Flow Incremental Methodology: introduction to the special issue
Lamb, Berton Lee; Sabaton, C.; Souchon, Y.
2004-01-01
In 1991, Harvey Doerksen was able to write a memoir discussing 20 years of instream flow work (Doerksen 1991). He recalled coming into the field in about 1973, but pointed out that there had been many dedicated professionals working on the front line of what has become known as the environmental flow issue since at least the 1940s. One of the earliest controversies in this new field was about what to call it. Some of the candidate titles included “Stream Resource Maintenance Flow,” “Base Flow,” and “Minimum Flow.” Although some of these terms were already in wide use by the early 1970s, the term “instream flow” was not even listed in the 1973, 1974, or 1975 editions of the Water Resources Research Catalog of keywords (Doerksen 1991: 100). When most of the authors represented in this special issue began their professional careers, the field of instream flow was still seeking a core identity and a set of organizing principles.
NASA Technical Reports Server (NTRS)
Madrid, G. A.; Westmoreland, P. T.
1983-01-01
A progress report is presented on a program to upgrade the existing NASA Deep Space Network with a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal processing centers in Australia, California, and Spain. The methodology for the improvements is oriented towards single-subsystem development with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system- and network-level synthesis and testing, and system verification and validation. The software is now about 65 percent complete, and the methodology used to effect the changes, which will permit enhanced tracking of and communication with spacecraft, has proven effective.
Kieffer, James D.
2017-01-01
Abstract The most utilized method to measure the swimming performance of fishes has been the critical swimming speed (UCrit) test. In this test, the fish is forced to swim against an incrementally increasing flow of water until fatigue. Before the water velocity is increased, the fish swims at that velocity for a specific, pre-arranged time interval. The magnitude of the velocity increments and the time interval for each swimming period can vary across studies, making comparisons between and within species difficult. This issue has been acknowledged in the literature; however, little empirical evidence exists that tests the importance of velocity and time increments on swimming performance in fish. A practical application of fish performance is the design of fishways that enable fish to bypass anthropogenic structures (e.g. dams) that block migration routes, one of the causes of the worldwide decline in sturgeon populations. While fishways will improve sturgeon conservation, they need to be specifically designed to accommodate the swimming capabilities specific to sturgeons, and it is possible that current swimming methodologies have under-estimated the swimming performance of sturgeons. The present study assessed the UCrit of shortnose sturgeon using modified UCrit tests to determine the importance of velocity increment (5 and 10 cm s−1) and time (5, 15 and 30 min) intervals on swimming performance. UCrit was found to be influenced by both time interval and water velocity. UCrit was generally lower in sturgeon when they were swum using 5 cm s−1 rather than 10 cm s−1 increments. Velocity increment influences UCrit more than time interval does. Overall, researchers must consider the impacts of using particular swimming criteria when designing their experiments. PMID:28835841
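The UCrit value itself is conventionally computed with Brett's formula, which the abstract assumes but does not state: Ucrit = Ui + (Ti/Tii) x Uii. A minimal sketch with hypothetical numbers:

```python
def ucrit(last_completed_velocity, velocity_increment,
          time_at_fatigue_velocity, interval_length):
    """Brett-style critical swimming speed:
    Ucrit = Ui + (Ti / Tii) * Uii, where
      Ui  = highest velocity maintained for a full interval (cm/s),
      Uii = velocity increment between steps (cm/s),
      Ti  = time swum at the final (fatigue) velocity (min),
      Tii = prescribed length of each interval (min)."""
    return (last_completed_velocity
            + (time_at_fatigue_velocity / interval_length) * velocity_increment)

# A fish completes 40 cm/s, then fatigues 6 min into a 15-min step toward 50 cm/s.
speed = ucrit(40.0, 10.0, 6.0, 15.0)  # -> 44.0 cm/s
```

The formula makes the study's point concrete: both the increment Uii and the interval Tii enter the computed speed directly, so different protocols yield non-comparable UCrit values.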
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1994-01-01
The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly and accurately). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems be solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two, for which additional systems are solved, the equations and solution procedures are analogous to those for the first-order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
Comparison and Validation of Hydrological E-Flow Methods through Hydrodynamic Modelling
NASA Astrophysics Data System (ADS)
Kuriqi, Alban; Rivaes, Rui; Sordo-Ward, Alvaro; Pinheiro, António N.; Garrote, Luis
2017-04-01
Flow regime determines physical habitat conditions and local biotic configuration. The development of environmental flow guidelines to support river integrity is becoming a major concern in water resources management. In this study, we analysed two sites located in the southern part of Portugal, on the Odelouca and Ocreza Rivers, characterised by the Mediterranean climate. Both rivers are almost in pristine condition, not regulated by dams or other diversion structures. This study presents an analysis of the effect on fish habitat suitability of implementing different hydrological e-flow methods. To conduct this study we employed hydrological e-flow methods recommended by the European Small Hydropower Association (ESHA). The river hydrology assessment was based on approximately 30 years of mean daily flow data provided by the Portuguese Water Information System (SNIRH). The biological data, bathymetry, physical and hydraulic features, and the Habitat Suitability Index for fish species were collected from extensive field work. We followed the Instream Flow Incremental Methodology (IFIM) to assess the flow-habitat relationship, taking into account the habitat suitability of different instream flow releases. Initially, we analysed fish habitat suitability under natural conditions and used it as the reference condition for other scenarios considering the chosen hydrological e-flow methods. We accomplished the habitat modelling through hydrodynamic analysis using the River-2D model. The same methodology was applied to each scenario by considering as input the e-flows obtained from each of the hydrological methods employed in this study. This contribution shows the significance of ecohydrological studies in establishing a foundation for water resources management actions. Keywords: ecohydrology, e-flow, Mediterranean rivers, river conservation, fish habitat, River-2D, hydropower.
Habitat Suitability Index Models: Yellow perch
Krieger, Douglas A.; Terrell, James W.; Nelson, Patrick C.
1983-01-01
A review and synthesis of existing information were used to develop riverine and lacustrine habitat models for yellow perch (Perca flavescens). The models are scaled to produce an index of habitat suitability from 0 (unsuitable habitat) to 1 (optimally suitable habitat) for riverine, lacustrine, and palustrine habitat in the 48 contiguous United States. Habitat Suitability Indexes (HSIs) are designed for use with the Habitat Evaluation Procedures developed by the U.S. Fish and Wildlife Service. Also included are discussions of Suitability Index (SI) curves as used in the Instream Flow Incremental Methodology (IFIM) and of SI curves available for an IFIM analysis of yellow perch habitat.
NASA Technical Reports Server (NTRS)
Kuchemann, Dietrich; Weber, Johanna
1952-01-01
The dependence of the maximum incremental velocities and air forces on a circular cowling on the mass flow and the angle of attack of the oblique flow is determined with the aid of pressure-distribution measurements. The particular cowling tested had been partially investigated in NACA TM 1327.
Bigger is Better, but at What Cost? Estimating the Economic Value of Incremental Data Assets.
Dalessandro, Brian; Perlich, Claudia; Raeder, Troy
2014-06-01
Many firms depend on third-party vendors to supply data for commercial predictive modeling applications. An issue that has received very little attention in the prior research literature is the estimation of a fair price for purchased data. In this work we present a methodology for estimating the economic value of adding incremental data to predictive modeling applications and illustrate it with two case studies. The methodology starts with estimating the effect that incremental data has on model performance in terms of common classification evaluation metrics. This effect is then translated into economic units, which gives an expected economic value that the firm might realize from the acquisition of a particular data asset. With this estimate a firm can then set a data acquisition price that targets a particular return on investment. This article presents the methodology in full detail and illustrates it in the context of two marketing case studies.
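The two-step logic of the methodology, translating a model-performance lift into economic units and then backing out a price consistent with a target ROI, can be sketched with hypothetical numbers (the rates, values, and volumes below are illustrative, not from the case studies):

```python
def incremental_data_value(baseline_rate, augmented_rate, value_per_success,
                           decisions_per_year):
    """Expected annual economic lift from adding a data asset to a model:
    (improvement in success rate) x (value of a success) x (decision volume)."""
    return (augmented_rate - baseline_rate) * value_per_success * decisions_per_year

def max_price_for_roi(annual_lift, target_roi):
    """Highest acquisition price consistent with a target return on investment:
    the price such that annual_lift / price - 1 >= target_roi."""
    return annual_lift / (1.0 + target_roi)

# Hypothetical campaign: conversion rate improves from 1.0% to 1.2%,
# each conversion is worth $50, and 1M prospects are scored per year.
lift = incremental_data_value(0.010, 0.012, 50.0, 1_000_000)
price_cap = max_price_for_roi(lift, target_roi=0.25)
```

Here the 0.2-point rate improvement is worth roughly $100k per year, so a 25% ROI target caps the acquisition price near $80k; in practice the rate improvement would come from a holdout evaluation of the augmented model.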
Physical habitat simulation system reference manual: version II
Milhous, Robert T.; Updike, Marlys A.; Schneider, Diane M.
1989-01-01
There are four major components of a stream system that determine the productivity of the fishery (Karr and Dudley 1978). These are: (1) flow regime; (2) physical habitat structure (channel form, substrate distribution, and riparian vegetation); (3) water quality (including temperature); and (4) energy inputs from the watershed (sediments, nutrients, and organic matter). The complex interaction of these components determines the primary production, secondary production, and fish population of the stream reach. The basic components and interactions needed to simulate fish populations as a function of management alternatives are illustrated in Figure I.1. The assessment process utilizes a hierarchical and modular approach combined with computer simulation techniques. The modular components represent the "building blocks" for the simulation. The quality of the physical habitat is a function of flow and, therefore, varies in quality and quantity over the range of the flow regime. The conceptual framework of the Incremental Methodology and guidelines for its application are described in "A Guide to Stream Habitat Analysis Using the Instream Flow Incremental Methodology" (Bovee 1982). Simulation of physical habitat is accomplished using the physical structure of the stream and streamflow. The modification of physical habitat by temperature and water quality is analyzed separately from physical habitat simulation. Temperature in a stream varies with the seasons, local meteorological conditions, stream network configuration, and the flow regime; thus, the temperature influences on habitat must be analyzed on a stream-system basis. Water quality under natural conditions is strongly influenced by climate and geological materials, with the result that there is considerable natural variation in water quality. When the activities of man are added, the range of possible water quality conditions becomes rather large.
Consequently, water quality must also be analyzed on a stream-system basis. Such analysis is outside the scope of this manual, which concentrates on simulation of physical habitat based on depth, velocity, and a channel index. The results from PHABSIM can be used alone, or a series of habitat time series programs can be used to generate monthly or daily habitat time series from the Weighted Usable Area versus streamflow table produced by the habitat simulation programs together with streamflow time series data. Monthly and daily streamflow time series may be obtained from USGS gages near the study site or as the output of river system management models.
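Generating a habitat time series from a Weighted Usable Area versus streamflow table, as described above, amounts to looking up (interpolating) each flow value in the table. A minimal sketch with a hypothetical table:

```python
import numpy as np

# Hypothetical WUA-versus-discharge table from a habitat simulation run
# (discharge in cfs, WUA in ft^2 per 1000 ft of stream).
discharge_table = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
wua_table = np.array([1200.0, 2500.0, 3100.0, 2600.0, 1400.0])

def habitat_time_series(flows):
    """Translate a streamflow time series into a habitat (WUA) time series
    by linear interpolation in the WUA-versus-discharge table."""
    return np.interp(flows, discharge_table, wua_table)

monthly_flows = np.array([120.0, 300.0, 650.0, 90.0])  # e.g. gaged monthly means
wua_series = habitat_time_series(monthly_flows)
```

The resulting series can then feed the duration and bottleneck analyses the manual mentions; note the table here peaks at an intermediate discharge, which is why both very low and very high flows map to reduced habitat.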
Constructing increment-decrement life tables.
Schoen, R
1975-05-01
A life table model which can recognize increments (or entrants) as well as decrements has proven to be of considerable value in the analysis of marital status patterns, labor force participation patterns, and other areas of substantive interest. Nonetheless, relatively little work has been done on the methodology of increment-decrement (or combined) life tables. The present paper reviews the general, recursive solution of Schoen and Nelson (1974), develops explicit solutions for three cases of particular interest, and compares alternative approaches to the construction of increment-decrement tables.
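An increment-decrement table can be sketched as a multistate projection in which flows run both into and out of a state (e.g. divorced persons re-entering marriage), which an ordinary single-decrement life table cannot represent. The transition probabilities below are hypothetical and age-invariant for brevity, unlike a real table:

```python
import numpy as np

# Hypothetical annual transition probabilities among three marital statuses.
# The divorced -> married flow is the "increment" a plain life table lacks.
P = np.array([
    [0.90, 0.10, 0.00],   # from single: stay, marry, (no direct divorce)
    [0.00, 0.95, 0.05],   # from married: stay, divorce
    [0.00, 0.15, 0.85],   # from divorced: remarry, stay
])

def project(initial, years):
    """Project a cohort's distribution across statuses, one transition per year."""
    state = np.asarray(initial, dtype=float)
    for _ in range(years):
        state = state @ P
    return state

cohort = project([1.0, 0.0, 0.0], years=30)  # everyone starts single
```

A full increment-decrement table would add age-specific rates and mortality as an absorbing state; the matrix recursion above is the computational core that the explicit solutions in the paper formalize.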
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christian, Mark H; Hadjerioua, Boualem; Lee, Kyutae
2015-01-01
The following paper presents the results of an investigation into the impact of the number and placement of Current Meter (CM) flow sensors on the accuracy with which they can predict the overall flow rate. Flow measurement accuracy is of particular importance in multiunit plants because it plays a pivotal role in determining the operational efficiency characteristics of each unit, allowing the operator to select the unit (or combination of units) which most efficiently meets demand. Several case studies have demonstrated that optimization of unit dispatch has the potential to increase plant efficiencies by 1 to 4.4 percent [2][3]. Unfortunately, current industry standards do not have an established methodology to measure the flow rate through hydropower units with short converging intakes (SCI); the only direction provided is that CM sensors should be used. The most common application of CM is horizontal, along a trolley which is incrementally lowered across a measurement cross section. As such, the measurement resolution is defined horizontally by the number of CM and vertically by the number of measurement increments. There has not been any published research on the role of resolution in either direction on the accuracy of flow measurement. The work below investigates the effectiveness of flow measurement in a SCI by performing a case study in which point velocity measurements were extracted from a physical plant and then used to calculate a series of reference flow distributions. These distributions were then used to perform sensitivity studies on the relation between the number of CM and the accuracy with which the flow rate was predicted. The research uncovered that a minimum of 795 plants contain SCI, a quantity which represents roughly 12% of total domestic hydropower capacity.
Regarding measurement accuracy, it was determined that accuracy ceases to increase considerably with further increases in vertical resolution beyond 49 transects. Moreover, the research found that 5 CMs (applied at 49 vertical transects) yielded an average accuracy of 95.6%, and that additional sensors increased accuracy linearly up to 17 CMs, at an average accuracy of 98.5%. Beyond 17 CMs, the incremental gains in accuracy from additional CMs were found to decay exponentially. Future work in this area will investigate the use of computational fluid dynamics to acquire a broader range of flow fields within SCIs.
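The flow-rate calculation from a grid of CM point velocities can be sketched as a two-dimensional quadrature. The grid sizes (49 vertical transects, 5 CMs) mirror the numbers quoted above, but the velocity field itself is synthetic, and the trapezoidal rule is one plausible integration choice, not necessarily the study's.

```python
import numpy as np

def trapz1d(f, x, axis=-1):
    """Composite trapezoidal rule along one axis."""
    f = np.moveaxis(f, axis, -1)
    dx = np.diff(x)
    return np.sum(0.5 * (f[..., :-1] + f[..., 1:]) * dx, axis=-1)

def flow_rate(y, z, v):
    """Q = double integral of v(z, y) over the cross section (m^3/s)."""
    return trapz1d(trapz1d(v, y, axis=1), z)

z = np.linspace(0.0, 10.0, 49)            # 49 vertical transect depths, m
y = np.linspace(0.0, 6.0, 5)              # 5 CM positions across, m
Y, Z = np.meshgrid(y, z)
v = 2.0 * (1.0 - ((Y - 3.0) / 3.0) ** 2)  # synthetic parabolic profile, m/s

print(f"Q ~ {flow_rate(y, z, v):.1f} m^3/s")
```

Repeating this with coarser `y` or `z` grids reproduces the kind of sensitivity study the abstract describes: the integrated Q converges as resolution increases.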
NASA Astrophysics Data System (ADS)
Abdelmoula, Nouha; Harthong, Barthélémy; Imbault, Didier; Dorémus, Pierre
2017-12-01
The multi-particle finite element method involving assemblies of meshed particles interacting through finite-element contact conditions is adopted to study the plastic flow of a granular material with highly deformable elastic-plastic grains. In particular, it is investigated whether the flow rule postulate applies for such materials. Using a spherical stress probing method, the influence of incremental stress on plastic strain increment vectors was assessed for numerical samples compacted along two different loading paths up to different values of relative density. Results show that the numerical samples studied behave reasonably well according to an associated flow rule, except in the vicinity of the loading point where the influence of the stress increment proved to be very significant. A plausible explanation for the non-uniqueness of the direction of plastic flow is proposed, based on the idea that the resistance of the numerical sample to plastic straining can vary by an order of magnitude depending on the direction of the accumulated stress. The above-mentioned dependency of the direction of plastic flow on the direction of the stress increment was related to the difference in strength between shearing and normal stressing at the scale of contact surfaces between particles.
NASA Astrophysics Data System (ADS)
de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.
In previous works a methodology was defined, based on the design of a genetic algorithm GAP and an incremental training technique adapted to learning series of stock market values. The GAP technique is a fusion of GP and GA. The GAP algorithm implements an automatic search for crisp trading rules, taking as training objectives both maximizing the return obtained and minimizing the assumed risk. Applying the proposed methodology, rules were obtained for an eight-year period of the S&P500 index. The achieved adjustment of the return-risk relation generated rules whose returns in the testing period were far superior to those obtained with the usual methodologies, and even clearly superior to Buy&Hold. This work demonstrates that the proposed methodology is valid for different assets in a different market than in previous work.
Incremental wind tunnel testing of high lift systems
NASA Astrophysics Data System (ADS)
Victor, Pricop Mihai; Mircea, Boscoianu; Daniel-Eugeniu, Crunteanu
2016-06-01
Efficiency of trailing edge high lift systems is essential for future long range transport aircraft evolving in the direction of laminar wings, because they have to compensate for the low performance of the leading edge devices. Modern high lift systems are subject to high performance requirements and constrained to simple actuation, combined with a reduced number of aerodynamic elements. Passive or active flow control is thus required for performance enhancement. An experimental investigation of a reduced-kinematics flap combined with passive flow control took place in a low speed wind tunnel. The most important features of the experimental setup are the relatively large size, corresponding to a Reynolds number of about 2 million; the sweep angle of 30 degrees, corresponding to long range airliners with high sweep angle wings; and the large number of flap settings and mechanical vortex generators. The model description, flap settings, methodology, and results are presented.
Stan D. Wullschleger; Samuel B. McLaughlin; Matthew P. Ayres
2004-01-01
Manual and automated dendrometers and thermal dissipation probes were used to measure stem increment and sap flow for loblolly pine (Pinus taeda L.) trees attacked by southern pine beetle (Dendroctonus frontalis Zimm.) in east Tennessee, USA. Season-long measurements with manual dendrometers indicated linear increases in stem...
Blood flow patterns during incremental and steady-state aerobic exercise.
Coovert, Daniel; Evans, LeVisa D; Jarrett, Steven; Lima, Carla; Lima, Natalia; Gurovich, Alvaro N
2017-05-30
Endothelial shear stress (ESS) is a physiological stimulus for vascular homeostasis, highly dependent on blood flow patterns. Exercise-induced ESS might be beneficial to vascular health; however, it is unclear what type of ESS aerobic exercise (AX) produces. The aim of this study is to characterize exercise-induced blood flow patterns during incremental and steady-state AX. We expected blood flow patterns during exercise to be intensity-dependent and bidirectional. Six college-aged students (2 males and 4 females) were recruited to perform two exercise tests on a cycle ergometer. First, an 8-12-min incremental test (Test 1), in which oxygen uptake (VO2), heart rate (HR), blood pressure (BP), and blood lactate (La) were measured at rest and after each 2-min step. Then, at least 48 hr after the first test, a 3-step steady-state exercise test (Test 2) was performed, measuring VO2, HR, BP, and La. The three steps were performed at the following exercise intensities according to La: 0-2 mmol/L, 2-4 mmol/L, and 4-6 mmol/L. During both tests, blood flow patterns were determined by high-definition ultrasound and Doppler on the brachial artery. These measurements allowed determination of blood flow velocities and directions during exercise. In Test 1, VO2, HR, BP, La, and antegrade blood flow velocity increased significantly in an intensity-dependent manner (repeated measures ANOVA, p<0.05). Retrograde blood flow velocity did not change significantly during Test 1. In Test 2, all the previous variables increased significantly in an intensity-dependent manner (repeated measures ANOVA, p<0.05). These results support the hypothesis that exercise-induced ESS may increase in an intensity-dependent way and that blood flow patterns during incremental and steady-state exercise include both antegrade and retrograde flow.
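A common way to turn Doppler velocities into an ESS estimate is the Poiseuille wall-shear formula tau = 8*mu*V/D. This is a hedged sketch of that relationship only; the viscosity and brachial-artery diameter below are assumed textbook values, not measurements from this study.

```python
MU = 0.0035  # blood dynamic viscosity, Pa*s (assumed textbook value)

def wall_shear_stress(mean_velocity_m_s, diameter_m, mu=MU):
    """Poiseuille estimate of wall shear stress in Pa (1 Pa = 10 dyn/cm^2)."""
    return 8.0 * mu * mean_velocity_m_s / diameter_m

d = 0.004  # assumed 4 mm brachial artery diameter
for v in (0.2, 0.4, 0.8):  # rising mean velocities with exercise intensity, m/s
    print(f"v = {v:.1f} m/s -> ESS ~ {wall_shear_stress(v, d):.2f} Pa")
```

Because the formula is linear in velocity, an intensity-dependent rise in antegrade velocity maps directly to an intensity-dependent rise in estimated antegrade ESS, while retrograde velocity contributes shear of the opposite sign.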
NASA Astrophysics Data System (ADS)
DeMarco, Adam Ward
The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Drawing on diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment probability density functions. This comprehensive analysis also employs a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs with the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data. Using this technique, we demonstrate the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern in atmospheric turbulence research. Using robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions across the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdfs. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions.
This approach can characterize increment fields with only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight their potential for future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including wind energy and optical wave propagation.
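The fitting step described above can be sketched with SciPy's NIG implementation: compute increments of a signal at a chosen separation, then estimate the four NIG parameters by maximum likelihood. The toy Brownian-like velocity series below is an assumption standing in for the observational data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
u = np.cumsum(rng.standard_normal(4096))   # toy velocity signal (assumption)

r = 8                                      # separation (scale) in samples
du = u[r:] - u[:-r]                        # velocity increments at scale r
du = (du - du.mean()) / du.std()           # standardize before fitting

# scipy's generic MLE returns the NIG shape pair (a, b) plus loc/scale,
# so the increment pdf at this scale is summarized by four numbers.
a, b, loc, scale = stats.norminvgauss.fit(du)
print(f"NIG fit: a={a:.3f}, b={b:.3f}, loc={loc:.3f}, scale={scale:.3f}")
```

Repeating the fit over a range of separations r gives the scale dependence of the four parameters, which is the compact description of the increment field the abstract refers to.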
Dynamics of intrinsic axial flows in unsheared, uniform magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J. C.; Diamond, P. H.; Xu, X. Q.
2016-05-15
A simple model for the generation and amplification of intrinsic axial flow in a linear device, the controlled shear decorrelation experiment, is proposed. This model proposes and builds upon a novel dynamical symmetry breaking mechanism, using a simple theory of drift wave turbulence in the presence of axial flow shear. This mechanism does not require complex magnetic field structure, such as shear, and thus is also applicable to intrinsic rotation generation in tokamaks at weak or zero magnetic shear, as well as to linear devices. The mechanism is essentially a self-amplification of the mean axial flow profile, i.e., a modulational instability. Hence, the flow development is a form of negative viscosity phenomenon. Unlike conventional mechanisms in which the residual stress produces an intrinsic torque, in this dynamical symmetry breaking scheme the residual stress induces a negative increment to the ambient turbulent viscosity. The axial flow shear is then amplified by this negative viscosity increment. The resulting mean axial flow profile is calculated and discussed by analogy with the problem of turbulent pipe flow. For tokamaks, the negative viscosity is not needed to generate intrinsic rotation; however, the toroidal rotation profile gradient is enhanced by the negative increment in turbulent viscosity.
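The negative-viscosity idea above has a one-line quantitative core: for a mean-flow perturbation of the form δU ∝ exp(γt)·sin(kx), diffusion with total viscosity ν_tot = ν_turb + ν_res gives γ = -ν_tot·k², so a residual-stress increment ν_res < -ν_turb flips damping into growth. The numbers below are purely illustrative.

```python
def growth_rate(nu_turb, nu_res, k):
    """Growth rate of a sinusoidal mean-flow perturbation of wavenumber k
    under total viscosity nu_turb + nu_res (negative rate = damping)."""
    return -(nu_turb + nu_res) * k**2

k = 2.0
print(growth_rate(1.0, -0.4, k))   # nu_tot > 0: perturbation is damped
print(growth_rate(1.0, -1.3, k))   # nu_tot < 0: perturbation is amplified
```

This also captures the tokamak remark at the end of the abstract: a negative increment that leaves ν_tot positive does not self-amplify the flow, but it still weakens the damping and so steepens the rotation profile gradient.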
NASA Technical Reports Server (NTRS)
Korivi, V. M.; Taylor, A. C., III; Newman, P. A.; Hou, G. J.-W.; Jones, H. E.
1992-01-01
An incremental strategy is presented for iteratively solving very large systems of linear equations, which are associated with aerodynamic sensitivity derivatives for advanced CFD codes. It is shown that the left-hand side matrix operator and the well-known factorization algorithm used to solve the nonlinear flow equations can also be used to efficiently solve the linear sensitivity equations. Two airfoil problems are considered as examples: subsonic low Reynolds number laminar flow and transonic high Reynolds number turbulent flow.
A classification scheme for turbulent flows based on their joint velocity-intermittency structure
NASA Astrophysics Data System (ADS)
Keylock, C. J.; Nishimura, K.; Peinke, J.
2011-12-01
Kolmogorov's classic theory for turbulence assumed an independence between velocity increments and the value for the velocity itself. However, this assumption is questionable, particularly in complex geophysical flows. Here we propose a framework for studying velocity-intermittency coupling that is similar in essence to the popular quadrant analysis method for studying near-wall flows. However, we study the dominant (longitudinal) velocity component along with a measure of the roughness of the signal, given mathematically by its series of Hölder exponents. Thus, we permit a possible dependence between velocity and intermittency. We compare boundary layer data obtained in a wind tunnel to turbulent jets and wake flows. These flow classes all have distinct velocity-intermittency characteristics, which cause them to be readily distinguished using our technique. Our method is much simpler and quicker to apply than approaches that condition the velocity increment statistics at some scale, r, on the increment statistics at a neighbouring, larger spatial scale, r+Δ, and the velocity itself. Classification of environmental flows is then possible based on their similarities to the idealised flow classes and we demonstrate this using laboratory data for flow in a parallel-channel confluence where the region of flow recirculation in the lee of the step is discriminated as a flow class distinct from boundary layer, jet and wake flows. Hence, using our method, it is possible to assign a flow classification to complex geophysical, turbulent flows depending upon which idealised flow class they most resemble.
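The quadrant classification described above can be sketched as follows. Note an important assumption: a crude roughness proxy (log absolute local increment) stands in for the pointwise Hölder exponent series, which in practice requires a multifractal/wavelet estimator; the toy signal is also invented.

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.cumsum(rng.standard_normal(10_000))      # toy velocity signal

rough = np.log(np.abs(np.diff(u)) + 1e-12)      # roughness proxy (assumption)
u = u[:-1]                                      # align lengths with the proxy

# Standardize both series, then classify each sample into a quadrant of
# the joint velocity-intermittency plane (cf. near-wall quadrant analysis).
up = (u - u.mean()) / u.std()
ap = (rough - rough.mean()) / rough.std()

quadrants = {
    "Q1 (u'>0, a'>0)": np.mean((up > 0) & (ap > 0)),
    "Q2 (u'<0, a'>0)": np.mean((up < 0) & (ap > 0)),
    "Q3 (u'<0, a'<0)": np.mean((up < 0) & (ap < 0)),
    "Q4 (u'>0, a'<0)": np.mean((up > 0) & (ap < 0)),
}
for q, frac in quadrants.items():
    print(f"{q}: {frac:.2f}")
```

The occupancy fractions of the four quadrants form the signature used to discriminate flow classes: boundary layers, jets, and wakes each populate the plane differently.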
30 CFR 1206.173 - How do I calculate the alternative methodology for dual accounting?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Turtle Mountain Reservation; (N) Ute Mountain Ute Reservation; (O) Uintah and Ouray Reservation; (P) Wind... equation, the increment for dual accounting is the number you take from the applicable Btu range, determined under paragraph (b)(3) of this section, in the following table: BTU range Increment if Lessee has...
30 CFR 1206.173 - How do I calculate the alternative methodology for dual accounting?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Turtle Mountain Reservation; (N) Ute Mountain Ute Reservation; (O) Uintah and Ouray Reservation; (P) Wind... equation, the increment for dual accounting is the number you take from the applicable Btu range, determined under paragraph (b)(3) of this section, in the following table: BTU range Increment if Lessee has...
30 CFR 1206.173 - How do I calculate the alternative methodology for dual accounting?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Ute Reservation; (M) Turtle Mountain Reservation; (N) Ute Mountain Ute Reservation; (O) Uintah and... equation, the increment for dual accounting is the number you take from the applicable Btu range, determined under paragraph (b)(3) of this section, in the following table: BTU range Increment if Lessee has...
30 CFR 1206.173 - How do I calculate the alternative methodology for dual accounting?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) Turtle Mountain Reservation; (N) Ute Mountain Ute Reservation; (O) Uintah and Ouray Reservation; (P) Wind... equation, the increment for dual accounting is the number you take from the applicable Btu range, determined under paragraph (b)(3) of this section, in the following table: BTU range Increment if Lessee has...
A methodology for evaluation of a markup-based specification of clinical guidelines.
Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan
2008-11-06
We introduce a three-phase, nine-step methodology for specification of clinical guidelines (GLs) by expert physicians, clinical editors, and knowledge engineers, and for quantitative evaluation of the specification's quality. We applied this methodology to a particular framework for incremental GL structuring (mark-up) and to GLs in three clinical domains with encouraging results.
Optimal tree increment models for the Northeastern United States
Don C. Bragg
2003-01-01
I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...
Optimal Tree Increment Models for the Northeastern United States
Don C. Bragg
2005-01-01
I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...
Li, J. C.; Diamond, P. H.
2017-03-23
Here, negative compressibility ITG turbulence in a linear plasma device (CSDX) can induce a negative viscosity increment. However, even with this negative increment, we show that the total axial viscosity remains positive definite, i.e., no intrinsic axial flow can be generated by pure ITG turbulence in a straight magnetic field. This differs from the case of electron drift wave (EDW) turbulence, where the total viscosity can turn negative, at least transiently. When the flow gradient is steepened by any drive mechanism, so that the parallel shear flow instability (PSFI) exceeds the ITG drive, the flow profile saturates at a level close to the value above which PSFI becomes dominant. This saturated flow gradient exceeds the PSFI linear threshold, and grows with …
White sturgeon spawning and rearing habitat in the lower Columbia River
Parsley, Michael J.; Beckman, Lance G.
1994-01-01
Estimates of spawning habitat for white sturgeons Acipenser transmontanus in the tailraces of the four dams on the lower 470 km of the Columbia River were obtained by using the Physical Habitat Simulation System of the U.S. Fish and Wildlife Service's Instream Flow Incremental Methodology to identify areas with suitable water depths, water velocities, and substrates. Rearing habitat throughout the lower Columbia River was assessed by using a geographic information system to identify areas with suitable water depths and substrates. The lowering of spring and summer river discharges from hydropower system operation reduces the availability of spawning habitat for white sturgeons. The four dam tailraces in the study area differ in the amount and quality of spawning habitat available at various discharges; the differences are due to channel morphology. The three impoundments and the free-flowing Columbia River downstream from Bonneville Dam provide extensive areas that are physically suitable for rearing young-of-the-year and juvenile white sturgeons.
Health level seven interoperability strategy: big data, incrementally structured.
Dolin, R H; Rogers, B; Jaffe, C
2015-01-01
Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby incrementally structuring data gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral workflows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data, and ultimately a greater flow of structured data, than other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.
Projection methods for incompressible flow problems with WENO finite difference schemes
NASA Astrophysics Data System (ADS)
de Frutos, Javier; John, Volker; Novo, Julia
2016-03-01
Weighted essentially non-oscillatory (WENO) finite difference schemes have been recommended in a competitive study of discretizations for scalar evolutionary convection-diffusion equations [20]. This paper explores the applicability of these schemes for the simulation of incompressible flows. To this end, WENO schemes are used in several non-incremental and incremental projection methods for the incompressible Navier-Stokes equations. Velocity and pressure are discretized on the same grid. A pressure stabilization Petrov-Galerkin (PSPG) type of stabilization is introduced in the incremental schemes to account for the violation of the discrete inf-sup condition. Algorithmic aspects of the proposed schemes are discussed. The schemes are studied on several examples with different features. It is shown that the WENO finite difference idea can be transferred to the simulation of incompressible flows. Some shortcomings of the methods, which are due to the splitting in projection schemes, also become obvious.
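The building block of such schemes is the fifth-order WENO reconstruction. Below is a minimal sketch of the classical scalar, left-biased Jiang-Shu reconstruction at the cell face i+1/2; it is standard textbook material, not code from the paper itself.

```python
import numpy as np

def weno5_face(fm2, fm1, f0, fp1, fp2, eps=1e-6):
    """Reconstruct f at face i+1/2 from five point values centered on i."""
    # Three third-order candidate stencils
    p0 = (2*fm2 - 7*fm1 + 11*f0) / 6.0
    p1 = (-fm1 + 5*f0 + 2*fp1) / 6.0
    p2 = (2*f0 + 5*fp1 - fp2) / 6.0
    # Smoothness indicators penalize oscillatory stencils
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    # Nonlinear weights built from the ideal linear weights (0.1, 0.6, 0.3)
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# On smooth data the nonlinear weights approach the ideal ones
x = np.linspace(0.0, 1.0, 6)
f = np.sin(x)
print(weno5_face(*f[:5]), np.sin((x[2] + x[3]) / 2))
```

In a projection method this reconstruction supplies the convective fluxes of the momentum (prediction) step; the pressure Poisson solve and velocity correction are unchanged by the choice of flux reconstruction.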
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Bui, Trong T.; Garcia, Christian A.; Cumming, Stephen B.
2016-01-01
A pair of compliant trailing edge flaps was flown on a modified GIII airplane. Prior to flight test, multiple analysis tools of various levels of complexity were used to predict the aerodynamic effects of the flaps. Vortex lattice, full potential flow, and full Navier-Stokes aerodynamic analysis software programs were used for prediction, in addition to another program that used empirical data. After the flight-test series, lift and pitching moment coefficient increments due to the flaps were estimated from flight data and compared to the results of the predictive tools. The predicted lift increments matched flight data well for all predictive tools for small flap deflections. All tools over-predicted lift increments for large flap deflections. The potential flow and Navier-Stokes programs predicted pitching moment coefficient increments better than the other tools.
Orion Launch Abort Vehicle Attitude Control Motor Testing
NASA Technical Reports Server (NTRS)
Murphy, Kelly J.; Brauckmann, Gregory J.; Paschal, Keith B.; Chan, David T.; Walker, Eric L.; Foley, Robert; Mayfield, David; Cross, Jared
2011-01-01
Current Orion Launch Abort Vehicle (LAV) configurations use an eight-jet, solid-fueled Attitude Control Motor (ACM) to provide required vehicle control for all proposed abort trajectories. Due to the forward position of the ACM on the LAV, it is necessary to assess the effects of jet-interactions (JI) between the various ACM nozzle plumes and the external flow along the outside surfaces of the vehicle. These JI-induced changes in flight control characteristics must be accounted for in developing ACM operations and LAV flight characteristics. A test program to generate jet interaction aerodynamic increment data for multiple LAV configurations was conducted in the NASA Ames and NASA Langley Unitary Plan Wind Tunnels from August 2007 through December 2009. Using cold air as the simulant gas, powered subscale models were used to generate interaction data at subsonic, transonic, and supersonic test conditions. This paper presents an overview of the complete ACM JI experimental test program for Orion LAV configurations, highlighting ACM system modeling, nozzle scaling assumptions, experimental test techniques, and data reduction methodologies. Lessons learned are discussed, and sample jet interaction data are shown. These data, in conjunction with computational predictions, were used to create the ACM JI increments for all relevant flight databases.
NASA Technical Reports Server (NTRS)
Steele, W. G.; Molder, K. J.; Hudson, S. T.; Vadasy, K. V.; Rieder, P. T.; Giel, T.
2005-01-01
NASA and the U.S. Air Force are working on a joint project to develop a new hydrogen-fueled, full-flow, staged combustion rocket engine. The initial testing and modeling work for the Integrated Powerhead Demonstrator (IPD) project is being performed by NASA Marshall and Stennis Space Centers. A key factor in the testing of this engine is the ability to predict and measure the transient fluid flow during engine start and shutdown phases of operation. A model built by NASA Marshall in the ROCket Engine Transient Simulation (ROCETS) program is used to predict transient engine fluid flows. The model is initially calibrated to data from previous tests on the Stennis E1 test stand and then used to predict the next run. Data from this run can then be used to recalibrate the model, providing a tool to guide the test program in incremental steps and reduce the risk to the prototype engine. In this paper, we define this type of model as a calibrated model. This paper proposes a method to estimate the uncertainty of a model calibrated to a set of experimental test data. The method is similar to that used in the calibration of experimental instrumentation. For the IPD example used in this paper, the model uncertainty is determined for both LOX and LH flow rates using previous data. The model is then shown to predict another similar test run within the uncertainty bounds. The paper summarizes the uncertainty methodology when a model is continually recalibrated with new test data. The methodology is general and can be applied to other calibrated models.
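The calibrated-model uncertainty idea can be sketched in miniature: calibrate a simple model to past test data, take the scatter of the calibration residuals as the model uncertainty, and check that a new run falls inside the band. The one-parameter model and all numbers below are synthetic stand-ins, not IPD test data or the ROCETS model.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 50)                 # time, s
measured = 10.0 * (1 - np.exp(-t)) + rng.normal(0, 0.2, t.size)

# "Calibrate" a toy model Q(t) = A * (1 - exp(-t)) by least squares
basis = 1 - np.exp(-t)
A = (basis @ measured) / (basis @ basis)
residuals = measured - A * basis

# ~95% uncertainty band from the calibration residuals (2 sigma)
u_model = 2.0 * residuals.std(ddof=1)
print(f"A = {A:.2f}, model uncertainty = +/-{u_model:.2f}")

# The next run's prediction is accepted if it lies within the band
new_measurement = 10.0 * (1 - np.exp(-2.0)) + 0.1
assert abs(new_measurement - A * (1 - np.exp(-2.0))) <= u_model
```

Recalibrating with each new run, as the abstract describes, amounts to repeating the least-squares step on the augmented dataset and recomputing the residual band.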
Dental caries increments and related factors in children with type 1 diabetes mellitus.
Siudikiene, J; Machiulskiene, V; Nyvad, B; Tenovuo, J; Nedzelskiene, I
2008-01-01
The aim of this study was to analyse possible associations between caries increments and selected caries determinants in children with type 1 diabetes mellitus and their age- and sex-matched non-diabetic controls, over 2 years. A total of 63 (10-15 years old) diabetic and non-diabetic pairs were examined for dental caries, oral hygiene and salivary factors. Salivary flow rates, buffer effect, concentrations of mutans streptococci, lactobacilli, yeasts, total IgA and IgG, protein, albumin, amylase and glucose were analysed. Means of 2-year decayed/missing/filled surface (DMFS) increments were similar in diabetics and their controls. Over the study period, both unstimulated and stimulated salivary flow rates remained significantly lower in diabetic children compared to controls. No differences were observed in the counts of lactobacilli, mutans streptococci or yeast growth during follow-up, whereas salivary IgA, protein and glucose concentrations were higher in diabetics than in controls throughout the 2-year period. Multivariable linear regression analysis showed that children with higher 2-year DMFS increments were older at baseline and had higher salivary glucose concentrations than children with lower 2-year DMFS increments. Likewise, higher 2-year DMFS increments in diabetics versus controls were associated with greater increments in salivary glucose concentrations in diabetics. Higher increments in active caries lesions in diabetics versus controls were associated with greater increments of dental plaque and greater increments of salivary albumin. Our results suggest that, in addition to dental plaque as a common caries risk factor, diabetes-induced changes in salivary glucose and albumin concentrations are indicative of caries development among diabetics. Copyright 2008 S. Karger AG, Basel.
Semi-Empirical Prediction of Aircraft Low-Speed Aerodynamic Characteristics
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2015-01-01
This paper lays out a comprehensive methodology for computing a low-speed, high-lift polar, without requiring additional details about the aircraft design beyond what is typically available at the conceptual design stage. Introducing low-order, physics-based aerodynamic analyses allows the methodology to be more applicable to unconventional aircraft concepts than traditional, fully-empirical methods. The methodology uses empirical relationships for flap lift effectiveness, chord extension, drag-coefficient increment and maximum lift coefficient of various types of flap systems as a function of flap deflection, and combines these increments with the characteristics of the unflapped airfoils. Once the aerodynamic characteristics of the flapped sections are known, a vortex-lattice analysis calculates the three-dimensional lift, drag and moment coefficients of the whole aircraft configuration. This paper details the results of two validation cases: a supercritical airfoil model with several types of flaps; and a 12-foot, full-span aircraft model with slats and double-slotted flaps.
NASA Technical Reports Server (NTRS)
Griffin, Roy N., Jr.; Holzhauser, Curt A.; Weiberg, James A.
1958-01-01
An investigation was made to determine the lifting effectiveness and flow requirements of blowing over the trailing-edge flaps and ailerons on a large-scale model of a twin-engine, propeller-driven airplane having a high-aspect-ratio, thick, straight wing. With sufficient blowing jet momentum to prevent flow separation on the flap, the lift increment increased for flap deflections up to 80 deg (the maximum tested). This lift increment also increased with increasing propeller thrust coefficient. The blowing jet momentum coefficient required for attached flow on the flaps was not significantly affected by thrust coefficient, angle of attack, or blowing nozzle height.
Bovee, Ken D.
1986-01-01
The Instream Flow Incremental Methodology (IFIM) is a habitat-based tool used to evaluate the environmental consequences of various water and land use practices. As such, knowledge about the conditions that provide favorable habitat for a species, and those that do not, is necessary for successful implementation of the methodology. In the context of IFIM, this knowledge is defined as habitat suitability criteria: characteristic behavioral traits of a species that are established as standards for comparison in the decision-making process. Habitat suitability criteria may be expressed in a variety of types and formats. The type, or category, refers to the procedure used to develop the criteria. Category I criteria are based on professional judgment, with little or no empirical data. Category II criteria have as their source microhabitat data collected at locations where target organisms are observed or collected. These are called "utilization" functions because they are based on observed locations that were used by the target organism. These functions tend to be biased by the environmental conditions that were available to the fish or invertebrates at the time they were observed. Correcting the utilization function for environmental availability creates category III, or "preference," criteria, which tend to be much less site-specific than category II criteria. There are also several ways to express habitat suitability in graphical form. The binary format establishes a suitable range for each variable as it pertains to a life stage of interest, and is presented graphically as a step function. The quality rating for a variable is 1.0 if it falls within the range of the criteria, and 0.0 if it falls outside the range. The univariate curve format establishes both the usable range and the optimum range for each variable, with conditions of intermediate usability expressed along the portion of the curve between the tails and the peak.
Multivariate probability density functions, which can be used to compute suitability for several variables simultaneously, are conveyed as three-dimensional figures with suitability on the z-axis and two independent variables on the x-y plane. These functions are useful for incorporating interactive terms between two or more variables. Such interactions can also be demonstrated using conditional criteria, which are stratified by cover type or substrate size. Conditional criteria may be of any category or format, but are distinguished by having two or more sets of functional relationships for each life stage.
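The way univariate suitability curves feed the IFIM habitat metric can be sketched as follows: each cell's depth and velocity are scored on piecewise-linear suitability curves, the scores are combined (multiplication is one common aggregation choice), and the weighted usable area (WUA) is the suitability-weighted sum of cell areas. The curve breakpoints and cell data below are invented for illustration.

```python
import numpy as np

def si_curve(x, xp, sp):
    """Univariate suitability index: piecewise-linear curve in [0, 1]."""
    return np.interp(x, xp, sp)

# Hypothetical curves: suitability vs depth (m) and velocity (m/s)
depth_pts, depth_si = [0.0, 0.5, 1.5, 3.0], [0.0, 1.0, 1.0, 0.0]
vel_pts, vel_si = [0.0, 0.3, 0.9, 1.5], [0.2, 1.0, 1.0, 0.0]

# Per-cell hydraulic conditions and areas from a (synthetic) habitat model
depth = np.array([0.4, 1.0, 2.0, 2.8])
vel = np.array([0.2, 0.5, 1.0, 1.4])
area = np.array([10.0, 12.0, 8.0, 5.0])  # m^2

composite = si_curve(depth, depth_pts, depth_si) * si_curve(vel, vel_pts, vel_si)
wua = float(np.sum(area * composite))
print(f"WUA = {wua:.2f} m^2")
```

Repeating this calculation across simulated discharges produces the WUA-versus-flow curve that IFIM studies use to compare water management alternatives.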
Ada and the rapid development lifecycle
NASA Technical Reports Server (NTRS)
Deforrest, Lloyd; Gref, Lynn
1991-01-01
JPL is under contract, through NASA, with the US Army to develop a state-of-the-art Command Center System for the US European Command (USEUCOM). The Command Center System will receive, process, and integrate force status information from various sources and provide this integrated information to staff officers and decision makers in a format designed to enhance user comprehension and utility. The system is based on distributed workstation class microcomputers, VAX- and SUN-based data servers, and interfaces to existing military mainframe systems and communication networks. JPL is developing the Command Center System utilizing an incremental delivery methodology called the Rapid Development Methodology with adherence to government and industry standards including the UNIX operating system, X Windows, OSF/Motif, and the Ada programming language. Through a combination of software engineering techniques specific to the Ada programming language and the Rapid Development Approach, JPL was able to deliver capability to the military user incrementally, with quality comparable to, and economies improved over, those of projects developed under more traditional software-intensive system implementation methodologies.
Statistical Properties of Line Centroid Velocity Increments in the rho Ophiuchi Cloud
NASA Technical Reports Server (NTRS)
Lis, D. C.; Keene, Jocelyn; Li, Y.; Phillips, T. G.; Pety, J.
1998-01-01
We present a comparison of histograms of CO (2-1) line centroid velocity increments in the rho Ophiuchi molecular cloud with those computed for spectra synthesized from a three-dimensional, compressible, but non-starforming and non-gravitating hydrodynamic simulation. Histograms of centroid velocity increments in the rho Ophiuchi cloud show clearly non-Gaussian wings, similar to those found in histograms of velocity increments and derivatives in experimental studies of laboratory and atmospheric flows, as well as numerical simulations of turbulence. The magnitude of these wings increases monotonically with decreasing separation, down to the angular resolution of the data. This behavior is consistent with that found in the phase of the simulation which has most of the properties of incompressible turbulence. The time evolution of the magnitude of the non-Gaussian wings in the histograms of centroid velocity increments in the simulation is consistent with the evolution of the vorticity in the flow. However, we cannot exclude the possibility that the wings are associated with the shock interaction regions. Moreover, in an active starforming region like the rho Ophiuchi cloud, the effects of shocks may be more important than in the simulation. However, being able to identify shock interaction regions in the interstellar medium is also important, since numerical simulations show that vorticity is generated in shock interactions.
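The basic quantities in this analysis, line centroid velocities and their increments at a given separation, are straightforward to compute. A toy sketch on synthetic Gaussian spectra (the field and all parameters are invented stand-ins, not the CO (2-1) data or the simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spectra along a 1-D cut: T[i, k] is the line brightness at
# position i in velocity channel v[k] (a toy stand-in for a spectral-line map).
v = np.linspace(-5.0, 5.0, 101)                  # velocity channels (km/s)
vc_true = np.cumsum(rng.normal(0.0, 0.1, 256))   # spatially correlated field
T = np.exp(-0.5 * (v[None, :] - vc_true[:, None]) ** 2)

# Line centroid velocity: intensity-weighted mean velocity of each spectrum.
vc = (T * v).sum(axis=1) / T.sum(axis=1)

def centroid_increments(vc, lag):
    """Centroid velocity increments dv = vc(x + lag) - vc(x)."""
    return vc[lag:] - vc[:-lag]

# Histogram wings are probed by comparing increments at several separations.
for lag in (1, 4, 16):
    dv = centroid_increments(vc, lag)
    print(lag, dv.size, round(float(dv.std()), 3))
```

For this correlated toy field the increment spread grows with separation; the non-Gaussianity reported in the paper would appear as heavy tails in histograms of `dv` at small lags.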
NASA Astrophysics Data System (ADS)
Lee, Ji-Seok; Song, Ki-Won
2015-11-01
The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), step-shear flow behaviors of a concentrated xanthan gum model solution have been experimentally investigated in interrupted shear flow fields with various combinations of shear rates, shearing times, and rest times, and in step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress increases sharply to a maximum at the initial stage of shearing, and then decays towards a steady state as the shearing time increases in both start-up shear flow fields. The shear stress drops suddenly when the imposed shear rate is stopped, and then slowly decays during the rest time. (ii) As the rest time increases, the difference in the maximum stress values between the two start-up shear flow fields decreases, whereas the shearing time exerts only a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay towards a steady state as the shearing time increases in each step shear flow region. The time needed to reach the maximum stress value is shortened as the step-increased shear rate becomes higher. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces a stress growth towards an equilibrium state as the shearing time increases in each step shear flow region. The time needed to reach the minimum stress value is lengthened as the step-decreased shear rate becomes lower.
Zhang, Xi; Xiao, Yanni; Ran, Qian; Liu, Yao; Duan, Qianbi; Duan, Huiling; Ye, Xingde; Li, Zhongjun
2012-01-01
Background Factors affecting the efficacy of platelet and red blood cell (RBC) transfusion in patients undergoing hematopoietic stem cell transplantation (HSCT) have not been studied extensively. We aimed to evaluate platelet and RBC transfusion efficacy by measuring the platelet corrected count increment and the hemoglobin increment, respectively, 24 h after transfusion in 105 patients who received HSCT. Methodology/Principal Findings Using retrospective analysis, we studied whether factors, including gender, time of transplantation, the compatibility of ABO group between HSC donors and recipients, and autologous or allogenic transplantation, influence the efficacy of blood component transfusion. We found that the infection rate of HSCT patients positively correlated with the transfusion amount, and the length of stay in the laminar flow room was associated with transfusion. We found that platelet transfusion performed during HSCT showed significantly better efficacy than that performed before HSCT. The effect of platelet transfusion in auto-transplantation was significantly better than that in allo-transplantation. The efficacy of RBC transfusion during HSCT was significantly lower than that performed before HSCT. The efficacy of RBC transfusion in auto-transplantation was significantly higher than that in allo-transplantation. Allo-transplantation patients who received HSCs from compatible ABO groups showed significantly higher efficacy during both platelet and RBC transfusion. Conclusions We conclude that the efficacy of platelet and RBC transfusions does not correlate with the gender of patients, while it significantly correlates with the time of transplantation, type of transplantation, and ABO compatibility between HSC donors and recipients. During HSCT, the infection rate of patients positively correlates with the transfusion amount of RBCs and platelets. 
The total volume of RBC units transfused positively correlates with the length of the patients’ stay in the laminar flow room. PMID:22701516
Frictional strength and heat flow of southern San Andreas Fault
NASA Astrophysics Data System (ADS)
Zhu, P. P.
2016-01-01
Frictional strength and heat flow of faults are two related subjects in geophysics and seismology. To date, investigations of regional frictional strength and heat flow have remained at the stage of qualitative estimation. This paper concentrates on the regional frictional strength and heat flow of the southern San Andreas Fault (SAF). Based on in situ borehole-measured stress data, and using the method of 3D dynamic faulting analysis, we quantitatively determine the regional normal stress, shear stress, and friction coefficient at various seismogenic depths. These new data indicate that the southern SAF is a weak fault within the depth of 15 km. As depth increases, the regional normal stress, shear stress, and friction coefficient all increase, with the first two increasing faster than the friction coefficient. The regional shear stress increment per kilometer equals 5.75 ± 0.05 MPa/km for depths ≤15 km; the regional normal stress increment per kilometer equals 25.3 ± 0.1 MPa/km for depths ≤15 km. The regional friction coefficient increment per kilometer decreases rapidly from 0.08/km to 0.01/km at depths less than ~3 km; from ~3 to ~5 km it is 0.01/km, and from ~5 to 15 km it is 0.002/km. Previously, frictional strength could only be qualitatively inferred from heat flow measurements. It is difficult to obtain quantitative heat flow data for the SAF because the measured heat flow data exhibit large scatter. However, our quantitative results for frictional strength can be employed to investigate the heat flow in the southern SAF. We use a physical quantity P_f to describe heat flow. It represents the dissipative friction heat power per unit area generated by the relative motion of two tectonic plates accommodated by off-fault deformation. P_f is called "fault friction heat."
On the basis of our determined frictional strength data, utilizing the method of 3D dynamic faulting analysis, we quantitatively determine the regional long-term fault friction heat at various seismogenic depths in the southern SAF. The new data show that as depth increases, regional friction stress increases within the depth of 15 km; its increment per kilometer equals 5.75 ± 0.05 MPa/km. As depth increases, regional long-term fault friction heat increases; its increment per kilometer is equal to 3.68 ± 0.03 mW/m2/km. The values of regional long-term fault friction heat provided by this study are always lower than those from heat flow measurements. The difference between them and the scatter existing in the measured heat flow data are mainly caused by the following processes: (i) heat convection, (ii) heat advection, (iii) stress accumulation, (iv) seismic bursts between short-term lull periods in a long-term period, and (v) influence of seismicity in short-term periods upon long-term slip rate and heat flow. Fault friction heat is a fundamental parameter in research on heat flow.
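The reported per-kilometre gradients can be turned into a small calculator. A sketch assuming zero surface intercepts (the abstract quotes only the increments per kilometre, so the intercepts here are an assumption):

```python
def regional_gradients(depth_km):
    """Evaluate the linear depth gradients reported for the southern SAF
    (valid for depths <= 15 km): shear stress 5.75 MPa/km, normal stress
    25.3 MPa/km, long-term fault friction heat 3.68 mW/m^2/km."""
    if not 0.0 <= depth_km <= 15.0:
        raise ValueError("gradients are quoted for 0-15 km only")
    shear_stress = 5.75 * depth_km    # MPa
    normal_stress = 25.3 * depth_km   # MPa
    friction_heat = 3.68 * depth_km   # mW/m^2
    return shear_stress, normal_stress, friction_heat

tau, sigma_n, p_f = regional_gradients(10.0)
print(round(tau, 2), round(sigma_n, 2), round(p_f, 2))  # 57.5 253.0 36.8
```

Note that with zero intercepts the implied friction coefficient tau/sigma_n is constant (about 0.23) at all depths, whereas the paper reports a depth-dependent friction coefficient; the non-zero intercepts of the real fits carry that information.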
Williamson, S. C.; Bartholow, J. M.; Stalnaker, C. B.
1993-01-01
A conceptual model has been developed to test river regulation concepts by linking physical habitat and water temperature with salmonid population and production in cold water streams. Work is in progress to examine numerous questions as part of flow evaluation and habitat restoration programmes in the Trinity River of California and elsewhere. For instance, how much change in pre-smolt chinook salmon (Oncorhynchus tshawytscha) production in the Trinity River would result from a different annual instream allocation (i.e. up or down from the 271 × 10^6 m^3 released in the late 1980s), and how much change in pre-smolt production would result from a different release pattern (i.e. different from the 8.5 m^3 s^-1 year-round release)? The conceptual model is being used to: design, integrate and improve young-of-year population data collection efforts; test hypotheses that physical habitat significantly influences movement, growth and mortality of salmonid fishes; and analyse the relative severity of limiting factors during each life stage. The conceptual model, in conjunction with previously developed tools in the Instream Flow Incremental Methodology, should provide the means to more effectively manage a fishery resource below a regulated reservoir and to provide positive feedback to planning of annual reservoir operations.
Sutherland, John C.
2017-04-15
Linear dichroism provides information on the orientation of chromophores part of, or bound to, an orientable molecule such as DNA. For molecular alignment induced by hydrodynamic shear, the principal axes orthogonal to the direction of alignment are not equivalent. Thus, the magnitude of the flow-induced change in absorption for light polarized parallel to the direction of flow can be more than a factor of two greater than the corresponding change for light polarized perpendicular to both that direction and the shear axis. The ratio of the two flow-induced changes in absorption, the dichroic increment ratio, is characterized using the orthogonal orientation model, which assumes that each absorbing unit is aligned parallel to one of the principal axes of the apparatus. The absorption of the alignable molecules is characterized by components parallel and perpendicular to the orientable axis of the molecule. The dichroic increment ratio indicates that for the alignment of DNA in rectangular flow cells, average alignment is not uniaxial, but for higher shear, as produced in a Couette cell, it can be. The results from the simple model are identical to those of tensor models for typical experimental configurations. Approaches for measuring the dichroic increment ratio with modern dichrometers are further discussed.
Wind Tunnel Results of Pneumatic Forebody Vortex Control Using Rectangular Slots on a Chined Forebody
NASA Technical Reports Server (NTRS)
Alexander, Michael; Meyn, Larry A.
1994-01-01
A subsonic wind tunnel investigation of pneumatic vortex flow control on a chined forebody using slots was accomplished at a dynamic pressure of 50 psf, resulting in a R(n)/ft of 1.3 x 10(exp 6). Data were acquired at angles of attack ranging from -4 deg to +34 deg at sideslip angles of +0.4 deg and +10.4 deg. The test article used in this study was the 10% scale Fighter Lift and Control (FLAC) advanced diamond-winged, vee-tailed fighter configuration. Three different slot blowing concepts were evaluated: outward, downward, and tangential, with all blowing accomplished asymmetrically. The results of three different mass flows (0.067, 0.13, and 0.26 lbm/s; C(sub mu)'s of less than or equal to 0.006, 0.011, and 0.022, respectively) were analyzed and reported. Test data are presented on the effects of mass flows, slot lengths and positions, and blowing concepts on yawing moment and side force generation. Results from this study indicate that the outward and downward blowing slots developed yawing moment and side force increments in the direction opposite the blowing side, while the tangential blowing slots generated yawing moment and side force increments in the direction towards the blowing side. The outward and downward blowing slots typically produced positive pitching moment increments, while the tangential blowing slots typically generated negative pitching moment increments. The slot blowing nearest the forebody apex was most effective at generating the largest increments; as the slot was moved aft or increased in length, its effectiveness at generating forces and moments diminished.
NASA Technical Reports Server (NTRS)
Kelley, Mark W; Tolhurst, William H JR
1955-01-01
A wind-tunnel investigation was made to determine the effects of ejecting high-velocity air near the leading edge of plain trailing-edge flaps on a 35 degree sweptback wing. The tests were made with flap deflections from 45 degrees to 85 degrees and with pressure ratios across the flap nozzles from sub-critical up to 2.9. A limited study of the effects of nozzle location and configuration on the efficiency of the flap was made. Measurements of the lift, drag, and pitching moment were made for Reynolds numbers from 5.8 to 10.1 x 10(exp 6). Measurements were also made of the weight rate of flow, pressure, and temperature of the air supplied to the flap nozzles. The results show that blowing on the deflected flap produced large flap lift increments. The amount of air required to prevent flow separation on the flap was significantly less than that estimated from published two-dimensional data. When the amount of air ejected over the flap was just sufficient to prevent flow separation, the lift increment obtained agreed well with linear inviscid fluid theory up to flap deflections of 60 degrees. The flap lift increment at 85 degrees flap deflection was about 80 percent of that predicted theoretically. With larger amounts of air blown over the flap, these lift increments could be significantly increased. It was found that the performance of the flap was relatively insensitive to the location of the flap nozzle, to spacers in the nozzle, and to flow disturbances such as those caused by leading-edge slats or discontinuities on the wing or flap surfaces. Analysis of the results indicated that installation of this system on an F-86 airplane is feasible.
NASA Astrophysics Data System (ADS)
Tian, P.; Xu, X.; Pan, C.; Hsu, K. L.; Yang, T.
2016-12-01
Few attempts have been made to investigate the quantitative effects of rainfall on overland flow driven erosion processes and flow hydrodynamics on steep hillslopes under field conditions. Field experiments were performed with six inflow rates (q: 6-36 L min^-1 m^-1), with and without rainfall (60 mm h^-1), on a steep slope (26°) to investigate: (1) the quantitative effects of rainfall on runoff and sediment yield processes, and flow hydrodynamics; (2) the effect of the interaction between rainfall and overland flow on soil loss. Results showed that the rainfall increased runoff coefficients and the fluctuation of temporal variations in runoff. The rainfall significantly increased soil loss (10.6-68.0%), but this increment declined as q increased. When interrill erosion dominated (q = 6 L min^-1 m^-1), the increment in rill erosion was 1.5 times that in interrill erosion, and the effect of the interaction on soil loss was negative. When rill erosion dominated (q = 6-36 L min^-1 m^-1), the increment in interrill erosion was 1.7-8.8 times that in rill erosion, and the effect of the interaction on soil loss became positive. The rainfall was conducive to the development of rills, especially for low inflow rates. The rainfall always decreased interrill flow velocity, decreased rill flow velocity (q = 6-24 L min^-1 m^-1), and enhanced the spatial uniformity of the velocity distribution. Under rainfall disturbance, flow depth, Reynolds number (Re) and resistance were increased but Froude number was reduced, and a lower Re was needed to transform a laminar flow to turbulent flow. The rainfall significantly increased flow shear stress (τ) and stream power (φ), with the most sensitive parameters to sediment yield being τ (R^2 = 0.994) and φ (R^2 = 0.993), respectively, for non-rainfall and rainfall conditions. Compared to non-rainfall conditions, there was a reduction in the critical hydrodynamic parameters of mean flow velocity, τ, and φ by the rainfall.
These findings provide a better understanding on the influence mechanism of rainfall impact on hillslope erosion processes.
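The hydrodynamic parameters named above have standard overland-flow definitions; as a hedged sketch, shear stress is often taken as the depth-slope product τ = ρgh·sin(S) and stream power as φ = τV. The depth, slope, and velocity values below are hypothetical, not data from these experiments:

```python
import math

RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def shear_stress(depth_m, slope_deg):
    """Flow shear stress tau = rho * g * h * sin(S), in Pa."""
    return RHO * G * depth_m * math.sin(math.radians(slope_deg))

def stream_power(tau, velocity_ms):
    """Stream power per unit bed area phi = tau * V, in W/m^2."""
    return tau * velocity_ms

tau = shear_stress(0.003, 26.0)   # ~3 mm deep sheet flow on a 26 deg slope
phi = stream_power(tau, 0.2)      # 0.2 m/s mean flow velocity
print(round(tau, 1), round(phi, 2))
```

Some formulations use tan(S) rather than sin(S) for the energy slope; on a slope as steep as 26° the two differ noticeably, so the choice should follow the study's convention.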
Are large clinical trials in orthopaedic trauma justified?
Sprague, Sheila; Tornetta, Paul; Slobogean, Gerard P; O'Hara, Nathan N; McKay, Paula; Petrisor, Brad; Jeray, Kyle J; Schemitsch, Emil H; Sanders, David; Bhandari, Mohit
2018-04-20
The objective of this analysis is to evaluate the necessity of large clinical trials using FLOW trial data. The FLOW pilot study and definitive trial were factorial trials evaluating the effect of different irrigation solutions and pressures on re-operation. To explore treatment effects over time, we analyzed data from the pilot and definitive trial in increments of 250 patients until the final sample size of 2447 patients was reached. At each increment we calculated the relative risk (RR) and associated 95% confidence interval (CI) for the treatment effect, and compared the results that would have been reported at the smaller enrolments with those seen in the final, adequately powered study. The pilot study analysis of 89 patients and initial incremental enrolments in the FLOW definitive trial favored low pressure compared to high pressure (RR: 1.50, 95% CI: 0.75-3.04; RR: 1.39, 95% CI: 0.60-3.23, respectively), which is in contradiction to the final enrolment, which found no difference between high and low pressure (RR: 1.04, 95% CI: 0.81-1.33). In the soap versus saline comparison, the FLOW pilot study suggested that re-operation rate was similar in both the soap and saline groups (RR: 0.98, 95% CI: 0.50-1.92), whereas the FLOW definitive trial found that the re-operation rate was higher in the soap treatment arm (RR: 1.28, 95% CI: 1.04-1.57). Our findings suggest that studies with smaller sample sizes would have led to erroneous conclusions in the management of open fracture wounds. NCT01069315 (FLOW Pilot Study) Date of Registration: February 17, 2010, NCT00788398 (FLOW Definitive Trial) Date of Registration: November 10, 2008.
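The incremental analysis rests on recomputing the relative risk and its 95% CI at each enrolment cut. For reference, a standard log-scale Wald interval can be sketched as follows (the event counts are hypothetical, not FLOW data):

```python
import math

def relative_risk_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of arm A vs arm B with a 95% Wald CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical interim cut: 30/125 re-operations vs 24/125.
rr, lo, hi = relative_risk_ci(30, 125, 24, 125)
print(f"RR {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR 1.25, 95% CI 0.78-2.01
```

With these small counts the interval spans 1.0, illustrating how an under-powered cut can fail to distinguish a real treatment effect from noise.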
Green, Daniel J; Bilsborough, William; Naylor, Louise H; Reed, Chris; Wright, Jeremy; O'Driscoll, Gerry; Walsh, Jennifer H
2005-01-01
The contribution of endothelium-derived nitric oxide (NO) to exercise hyperaemia remains controversial. Disparate findings may, in part, be explained by different shear stress stimuli as a result of different types of exercise. We have directly compared forearm blood flow (FBF) responses to incremental handgrip and cycle ergometer exercise in 14 subjects (age ± s.e.m.) using a novel software system which calculates conduit artery blood flow continuously across the cardiac cycle by synchronising automated edge-detection and wall tracking of high resolution B-mode arterial ultrasound images and Doppler waveform envelope analysis. Monomethyl arginine (l-NMMA) was infused during repeat bouts of each incremental exercise test to assess the contribution of NO to hyperaemic responses. During handgrip, mean FBF increased with workload (P < 0.01) whereas FBF decreased at lower cycle workloads (P < 0.05), before increasing at 120 W (P < 0.001). Differences in these patterns of mean FBF response to different exercise modalities were due to the influence of retrograde diastolic flow during cycling, which had a relatively larger impact on mean flows at lower workloads. Retrograde diastolic flow was negligible during handgrip. Although mean FBF was lower in response to cycling than handgrip exercise, the impact of l-NMMA was significant during the cycle modality only (P < 0.05), possibly reflecting the importance of an oscillatory antegrade/retrograde flow pattern on shear stress-mediated release of NO from the endothelium. In conclusion, different types of exercise present different haemodynamic stimuli to the endothelium, which may result in differential effects of shear stress on the vasculature. PMID:15513940
Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A
2018-01-01
This study compares conventional grab sampling to incremental sampling methodology (ISM) to characterize metal contamination at a military small-arms range. Grab sample results had large variances, positively skewed non-normal distributions, extreme outliers, and poor agreement between duplicate samples even when samples were co-located within tens of centimeters of each other. The extreme outliers strongly influenced the grab sample means for the primary contaminants lead (Pb) and antimony (Sb). In contrast, median and mean metal concentrations were similar for the ISM samples. ISM significantly reduced measurement uncertainty of estimates of the mean, increasing data quality (e.g., for environmental risk assessments) with fewer samples (e.g., decreasing total project costs). Based on Monte Carlo resampling simulations, grab sampling resulted in highly variable means and upper confidence limits of the mean relative to ISM.
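The variance-reduction argument behind ISM can be reproduced with a toy Monte Carlo resampling, in the spirit of (but not using) the paper's simulations. The lognormal "site" and all sample sizes are invented, and physical compositing is idealized as averaging increments:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical skewed site: lognormal concentrations with rare hot spots.
site = rng.lognormal(mean=3.0, sigma=1.5, size=10_000)

def grab_mean(n):
    """Mean of n discrete grab samples."""
    return rng.choice(site, size=n).mean()

def ism_mean(increments, replicates=3):
    """Mean of ISM replicates, each compositing many increments."""
    return np.mean([rng.choice(site, size=increments).mean()
                    for _ in range(replicates)])

grab = [grab_mean(10) for _ in range(1000)]
ism = [ism_mean(30) for _ in range(1000)]
print("grab mean sd:", round(float(np.std(grab)), 1))
print("ISM mean sd: ", round(float(np.std(ism)), 1))
```

Because each ISM estimate effectively averages 90 increments versus 10 grabs, its sampling distribution of the mean is far tighter, mirroring the reduced measurement uncertainty reported in the study.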
How shear increments affect the flow production branching ratio in CSDX
NASA Astrophysics Data System (ADS)
Li, J. C.; Diamond, P. H.
2018-06-01
The coupling of turbulence-driven azimuthal and axial flows in a linear device absent magnetic shear (Controlled Shear Decorrelation Experiment) is investigated. In particular, we examine the apportionment of Reynolds power between azimuthal and axial flows, and how the azimuthal flow shear affects axial flow generation and saturation by drift wave turbulence. We study the response of the energy branching ratio, i.e., the ratio of axial to azimuthal Reynolds powers, P_z^R/P_y^R, to incremental changes of azimuthal and axial flow shears. We show that increasing azimuthal flow shear decreases the energy branching ratio. When axial flow shear increases, this ratio first increases but then decreases to zero. The axial flow shear saturates below the threshold for parallel shear flow instability. The effects of azimuthal flow shear on the generation and saturation of intrinsic axial flows are analyzed. Azimuthal flow shear slows down the modulational growth of the seed axial flow shear, and thus reduces intrinsic axial flow production. Azimuthal flow shear reduces both the residual Reynolds stress of the axial flow, Π_xz^Res, and the turbulent viscosity, χ_z^DW, by the same factor |⟨v_y⟩'|^-2 Δ_x^-2 L_n^-2 ρ_s^2 c_s^2, where Δ_x is the distance relative to the reference point where ⟨v_y⟩ = 0 in the plasma frame. Therefore, the stationary-state axial flow shear is not affected by azimuthal flow shear to leading order, since ⟨v_z⟩' ~ Π_xz^Res/χ_z^DW.
Chang, Dwayne; Manecksha, Rustom P; Syrrakos, Konstantinos; Lawrentschuk, Nathan
2012-01-01
To investigate the effects of height, external pressure, and bladder fullness on the flow rate in continuous and non-continuous cystoscopy and the automated irrigation fluid pumping system (AIFPS). Each experiment had two 2-litre 0.9% saline bags connected to a continuous cystoscope, a non-continuous cystoscope, or the AIFPS via irrigation tubing. Other equipment included height-adjustable drip poles, uroflowmetry devices, and model bladders. In Experiment 1, saline bags were elevated to measure the increment in flow rate. In Experiment 2, saline bags were placed under external pressures to evaluate the effect on flow rate. In Experiment 3, flow rate changes in response to variable bladder fullness were measured. Elevating saline bags caused an increase in flow rates; however, the increment slowed beyond a height of 80 cm. An increase in external pressure on saline bags elevated flow rates, but inconsistently. A fuller bladder led to a decrease in flow rates. In all experiments, the AIFPS posted consistent flow rates. Traditional irrigation systems were susceptible to changes in the height of the irrigation solution, external pressure application, and bladder fullness, thus creating inconsistent flow rates. The AIFPS produced consistent flow rates and was not affected by any of the factors investigated in the study.
NASA Astrophysics Data System (ADS)
Patel, Jitendra Kumar; Natarajan, Ganesh
2018-05-01
We present an interpolation-free diffuse interface immersed boundary method for multiphase flows with moving bodies. A single fluid formalism using the volume-of-fluid approach is adopted to handle multiple immiscible fluids which are distinguished using the volume fractions, while the rigid bodies are tracked using an analogous volume-of-solid approach that solves for the solid fractions. The solution to the fluid flow equations is carried out using a finite volume-immersed boundary method, with the latter based on a diffuse interface philosophy. In the present work, we assume that the solids are filled with a "virtual" fluid with density and viscosity equal to the largest among all fluids in the domain. The solids are assumed to be rigid and their motion is solved using Newton's second law of motion. The immersed boundary methodology constructs a modified momentum equation that reduces to the Navier-Stokes equations in the fully fluid region and recovers the no-slip boundary condition inside the solids. An implicit incremental fractional-step methodology in conjunction with a novel hybrid staggered/non-staggered approach is employed, wherein a single equation for normal momentum at the cell faces is solved everywhere in the domain, independent of the number of spatial dimensions. The scalars are all solved for at the cell centres, with the transport equations for solid and fluid volume fractions solved using a high-resolution scheme. The pressure is determined everywhere in the domain (including inside the solids) using a variable coefficient Poisson equation. The solution of the momentum, pressure, and solid and fluid volume fraction equations everywhere in the domain circumvents the issue of pressure and velocity interpolation, which is a source of spurious oscillations in sharp interface immersed boundary methods. A well-balanced algorithm with consistent mass/momentum transport ensures robust simulations of high density ratio flows with strong body forces.
The proposed diffuse interface immersed boundary method is shown to be discretely mass-preserving while being temporally second-order accurate, and exhibits nominal second-order accuracy in space. We examine the efficacy of the proposed approach through extensive numerical experiments involving one or more fluids and solids, including two-particle sedimentation in homogeneous and stratified environments. The results from the numerical simulations show that the proposed methodology results in reduced spurious force oscillations in the case of moving bodies while accurately resolving complex flow phenomena in multiphase flows with moving solids. These studies demonstrate that the proposed diffuse interface immersed boundary method, which could be related to a class of penalisation approaches, is a robust and promising alternative to computationally expensive conformal moving mesh algorithms as well as the class of sharp interface immersed boundary methods for multibody problems in multiphase flows.
Three-Phase AC Optimal Power Flow Based Distribution Locational Marginal Price: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Rui; Zhang, Yingchen
2017-05-17
Designing market mechanisms for electricity distribution systems has been a hot topic due to the increased presence of smart loads and distributed energy resources (DERs) in distribution systems. The distribution locational marginal pricing (DLMP) methodology is one of the real-time pricing methods to enable such market mechanisms and provide economic incentives to active market participants. Determining the DLMP is challenging due to high power losses, the voltage volatility, and the phase imbalance in distribution systems. Existing DC Optimal Power Flow (OPF) approaches are unable to model power losses and the reactive power, while single-phase AC OPF methods cannot capture the phase imbalance. To address these challenges, in this paper, a three-phase AC OPF based approach is developed to define and calculate DLMP accurately. The DLMP is modeled as the marginal cost to serve an incremental unit of demand at a specific phase at a certain bus, and is calculated using the Lagrange multipliers in the three-phase AC OPF formulation. Extensive case studies have been conducted to understand the impact of system losses and the phase imbalance on DLMPs as well as the potential benefits of flexible resources.
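The notion of a locational marginal price as the cost of serving one more increment of demand can be illustrated with a deliberately simplified single-bus, lossless merit-order dispatch. The paper's DLMP instead comes from the Lagrange multipliers of a three-phase AC OPF; the unit data below are invented.

```python
# Single-bus, lossless merit-order sketch of a locational marginal price:
# the LMP is the cost of serving one more increment of demand, estimated
# here by a finite difference on the dispatch cost. Toy analogue only.

def dispatch_cost(demand, units):
    """Least-cost dispatch of (marginal_cost, capacity) units in merit order."""
    cost, remaining = 0.0, demand
    for mc, cap in sorted(units):           # cheapest units first
        g = min(cap, remaining)
        cost += mc * g
        remaining -= g
        if remaining <= 0.0:
            break
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return cost

units = [(20.0, 50.0), (35.0, 50.0)]        # ($/MWh, MW), hypothetical
demand, eps = 60.0, 1e-3
lmp = (dispatch_cost(demand + eps, units) - dispatch_cost(demand, units)) / eps
print(round(lmp, 2))  # 35.0: the marginal (more expensive) unit sets the price
```

In the AC OPF setting the same quantity falls out directly as the dual variable (Lagrange multiplier) of the per-phase, per-bus power balance constraint.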
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.
1999-01-01
Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver, AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.
Lamb, Berton Lee; Burkardt, Nina
2008-01-01
When Linda Pilkey-Jarvis and Orrin Pilkey state in their article, "Useless Arithmetic," that "mathematical models are simplified, generalized representations of a process or system," they probably do not mean to imply that these models are simple. Rather, the models are simpler than nature, and that is the heart of the problem with predictive models. We have had a long professional association with the developers and users of one of these simplifications of nature in the form of a mathematical model known as Physical Habitat Simulation (PHABSIM), which is part of the Instream Flow Incremental Methodology (IFIM). The IFIM is a suite of techniques, including PHABSIM, that allows the analyst to incorporate hydrology, hydraulics, habitat, water quality, stream temperature, and other variables into a tradeoff analysis that decision makers can use to design a flow regime to meet management objectives (Stalnaker et al. 1995). Although we are not the developers of the IFIM, we have worked with those who did design it, and we have tried to understand how the IFIM and PHABSIM are actually used in decision making (King, Burkardt, and Clark 2006; Lamb 1989).
Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie
2015-01-01
Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges as the time lag for prediction increases. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks.
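The role of sparsity in pruning irrelevant input sensors can be sketched with a tiny coordinate-descent LASSO. The data and the solver below are illustrative stand-ins, not the paper's sparse-representation formulation.

```python
# Sketch of sparsity-driven input selection for flow prediction, assuming
# invented data: two "sensors" at time t-1 predicting a target flow at t.
# Plain coordinate-descent LASSO; not the paper's actual solver.

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent LASSO: the L1 penalty drives the weights of
    irrelevant input sensors to exactly zero."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual (j excluded)
            rho = sum(X[i][j] * (y[i] - sum(w[k] * X[i][k] for k in range(p))
                                 + w[j] * X[i][j]) for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0  # soft-thresholded away: sensor judged irrelevant
    return w

# target flow depends only on sensor 0; sensor 1 carries small noise
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, 0.0]]
y = [2.0, 4.0, 6.0, 8.0]
w = lasso_cd(X, y, lam=0.1)
print([round(wi, 3) for wi in w])  # the irrelevant sensor's weight is exactly 0.0
```

The exact zeros are what make the learned spatial context interpretable: the surviving nonzero weights identify which sensors across the city actually inform the target.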
Development of weight and cost estimates for lifting surfaces with active controls
NASA Technical Reports Server (NTRS)
Anderson, R. D.; Flora, C. C.; Nelson, R. M.; Raymond, E. T.; Vincent, J. H.
1976-01-01
Equations and methodology were developed for estimating the weight and cost incrementals due to active controls added to the wing and horizontal tail of a subsonic transport airplane. The methods are sufficiently generalized to be suitable for preliminary design. Supporting methodology and input specifications for the weight and cost equations are provided. The weight and cost equations are structured to be flexible in terms of the active control technology (ACT) flight control system specification. In order to present a self-contained package, methodology is also presented for generating ACT flight control system characteristics for the weight and cost equations. Use of the methodology is illustrated.
Towards a Decision Support System for Space Flight Operations
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Hogle, Charles; Ruszkowski, James
2013-01-01
The Mission Operations Directorate (MOD) at the Johnson Space Center (JSC) has put in place a Model Based Systems Engineering (MBSE) technological framework for the development and execution of the Flight Production Process (FPP). This framework has provided much added value and return on investment to date. This paper describes a vision for a model based Decision Support System (DSS) for the development and execution of the FPP and its design and development process. The envisioned system extends the existing MBSE methodology and technological framework which is currently in use. The MBSE technological framework currently in place enables the systematic collection and integration of data required for building an FPP model for a diverse set of missions. This framework includes the technology, people and processes required for rapid development of architectural artifacts. It is used to build a feasible FPP model for the first flight of spacecraft and for recurrent flights throughout the life of the program. This model greatly enhances our ability to effectively engage with a new customer. It provides a preliminary work breakdown structure, data flow information and a master schedule based on its existing knowledge base. These artifacts are then refined and iterated upon with the customer for the development of a robust end-to-end, high-level integrated master schedule and its associated dependencies. The vision is to enhance this framework to enable its application for uncertainty management, decision support and optimization of the design and execution of the FPP by the program. Furthermore, this enhanced framework will enable the agile response and redesign of the FPP based on observed system behavior. 
The discrepancy between the anticipated system behavior and the observed behavior may be due to the processing of tasks internally, or to external factors such as changes in program requirements or conditions associated with other organizations outside of MOD. The paper provides a roadmap for the three increments of this vision: (1) hardware and software system components and interfaces with the NASA ground system, (2) uncertainty management, and (3) re-planning and automated execution. Each of these increments provides value independently, but some may also enable the building of a subsequent increment.
Health risk assessments for alumina refineries.
Donoghue, A Michael; Coffey, Patrick S
2014-05-01
To describe contemporary air dispersion modeling and health risk assessment methodologies applied to alumina refineries and to summarize recent results. Air dispersion models using emission source and meteorological data have been used to assess ground-level concentrations (GLCs) of refinery emissions. Short-term (1-hour and 24-hour average) GLCs and annual average GLCs have been used to assess acute health, chronic health, and incremental carcinogenic risks. The acute hazard index can exceed 1 close to refineries, but it is typically less than 1 at neighboring residential locations. The chronic hazard index is typically substantially less than 1. The incremental carcinogenic risk is typically less than 10⁻⁶. The risks of acute health effects are adequately controlled, and the risks of chronic health effects and incremental carcinogenic risks are negligible around referenced alumina refineries.
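The two screening metrics named above can be sketched as follows; the guideline values and the cancer unit risk are invented for illustration only.

```python
# Sketch of the hazard index and incremental carcinogenic risk metrics.
# Guideline values and the unit risk below are hypothetical.

def hazard_index(glcs, guidelines):
    """Sum of ground-level concentration / guideline ratios; an index
    below 1 indicates adequately controlled non-cancer risk."""
    return sum(c / g for c, g in zip(glcs, guidelines))

def incremental_cancer_risk(annual_glc, unit_risk):
    """Lifetime incremental carcinogenic risk: concentration times unit risk."""
    return annual_glc * unit_risk

hi = hazard_index([2.0, 5.0], [40.0, 100.0])   # two pollutants, ug/m3
risk = incremental_cancer_risk(0.01, 2e-6)     # annual GLC x unit risk
print(hi, risk)  # HI well below 1; risk well below the 1e-6 benchmark
```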
Regional Frequency Computation Users Manual.
1972-07-01
Definitions from the program listing: an increment of flow used to prevent infinite logarithms for events with zero flow; X = mean logarithm of flow events; N = total years of record; S = unbiased... (the remainder of the Fortran subroutine listing is unrecoverable OCR residue).
Thavorn, Kednapa; Kugathasan, Howsikan; Tan, Darrell H S; Moqueet, Nasheed; Baral, Stefan D; Skidmore, Becky; MacFadden, Derek; Simkin, Anna; Mishra, Sharmistha
2018-03-15
Pre-exposure prophylaxis (PrEP) with antiretrovirals is an efficacious and effective intervention to decrease the risk of HIV (human immunodeficiency virus) acquisition. Yet drug and delivery costs prohibit access in many jurisdictions. In the absence of guidelines for the synthesis of economic evaluations, we developed a protocol for a systematic review of economic evaluation studies for PrEP by drawing on best practices in systematic reviews and the conduct and reporting of economic evaluations. We aim to estimate the incremental cost per health outcome of PrEP compared with placebo, no PrEP, or other HIV prevention strategies; assess the methodological variability in, and quality of, economic evaluations of PrEP; estimate the incremental cost per health outcome of different PrEP implementation strategies; and quantify the potential sources of heterogeneity in outcomes. We will systematically search electronic databases (MEDLINE, Embase) and the gray literature. We will include economic evaluation studies that assess both costs and health outcomes of PrEP in HIV-uninfected individuals, without restricting language or year of publication. Two reviewers will independently screen studies using predefined inclusion criteria, extract data, and assess methodological quality using the Philips checklist, Second Panel on the Cost-effectiveness of Health and Medicines, and the International Society for Pharmacoeconomics and Outcomes Research recommendations. Outcomes of interest include incremental costs and outcomes in natural units or utilities, cost-effectiveness ratios, and net monetary benefit. We will perform descriptive and quantitative syntheses using sensitivity analyses of outcomes by population subgroups, HIV epidemic settings, study designs, baseline intervention contexts, key parameter inputs and assumptions, type of outcomes, economic perspectives, and willingness to pay values. 
Findings will guide future economic evaluation of PrEP strategies in terms of methodological and knowledge gaps, and will inform decisions on the efficient integration of PrEP into public health programs across epidemiologic and health system contexts. PROSPERO CRD42016038440.
Health economic assessment: a methodological primer.
Simoens, Steven
2009-12-01
This review article aims to provide an introduction to the methodology of health economic assessment of a health technology. Attention is paid to defining the fundamental concepts and terms that are relevant to health economic assessments. The article describes the methodology underlying a cost study (identification, measurement and valuation of resource use, calculation of costs), an economic evaluation (type of economic evaluation, the cost-effectiveness plane, trial- and model-based economic evaluation, discounting, sensitivity analysis, incremental analysis), and a budget impact analysis. Key references are provided for those readers who wish a more advanced understanding of health economic assessments.
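The incremental analysis described above reduces, in its simplest form, to the incremental cost-effectiveness ratio (ICER). The figures below are hypothetical.

```python
# Minimal incremental analysis: the incremental cost-effectiveness ratio
# (ICER) is the extra cost per extra unit of effect (e.g. per QALY gained).
# All figures below are hypothetical.

def icer(cost_new, eff_new, cost_old, eff_old):
    """Incremental cost divided by incremental effect; infinite when the
    new option is dominated (costlier and no more effective)."""
    d_cost = cost_new - cost_old
    d_eff = eff_new - eff_old
    if d_eff <= 0 and d_cost >= 0:
        return float("inf")
    return d_cost / d_eff

ratio = icer(cost_new=30000.0, eff_new=8.5, cost_old=18000.0, eff_old=8.0)
print(ratio)  # 24000.0 per QALY gained
```

On the cost-effectiveness plane this places the new technology in the north-east quadrant, where the ICER is then compared with a willingness-to-pay threshold.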
Cost-Utility Analysis: Current Methodological Issues and Future Perspectives
Nuijten, Mark J. C.; Dubois, Dominique J.
2011-01-01
The use of cost–effectiveness as final criterion in the reimbursement process for listing of new pharmaceuticals can be questioned from a scientific and policy point of view. There is a lack of consensus on main methodological issues and consequently we may question the appropriateness of the use of cost–effectiveness data in health care decision-making. Another concern is the appropriateness of the selection and use of an incremental cost–effectiveness threshold (Cost/QALY). In this review, we focus mainly on only some key methodological concerns relating to discounting, the utility concept, cost assessment, and modeling methodologies. Finally we will consider the relevance of some other important decision criteria, like social values and equity. PMID:21713127
Design Of Combined Stochastic Feedforward/Feedback Control
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1989-01-01
The methodology accommodates a variety of control structures and design techniques. In this methodology for combined stochastic feedforward/feedback control, the main objectives of the feedforward and feedback control laws are seen clearly. Inclusion of error-integral feedback, dynamic compensation, a rate-command control structure, and the like is an integral element of the methodology. Another advantage of the methodology is the flexibility to develop a variety of techniques for the design of feedback control with arbitrary structures to obtain the feedback controller: these include stochastic output feedback, multiconfiguration control, decentralized control, and frequency-domain and classical control methods. Control modes of the system include capture and tracking of the localizer and glideslope, crab, decrab, and flare. By use of the recommended incremental implementation, the control laws were simulated on a digital computer and connected with a nonlinear digital simulation of the aircraft and its systems.
2014 Review on the Extension of the AMedP-8(C) Methodology to New Agents, Materials, and Conditions
2015-08-01
chemical agents, five biological agents, seven radioisotopes, nuclear fallout, or prompt nuclear effects. Each year since 2009, OTSG has sponsored IDA... evaluated four agents: anthrax, botulinum toxin, sarin (GB), and distilled mustard (HD), first using the default parameters and methods in HPAC and... the IDA team then made incremental changes to the default casualty parameters and methods to control for all known data and methodological
Age diagnosis based on incremental lines in dental cementum: a critical reflection.
Grosskopf, Birgit; McGlynn, George
2011-01-01
Age estimation based on the counting of incremental lines in dental cementum is a method frequently used for the estimation of the age at death for humans in bioarchaeology, and increasingly, forensic anthropology. Assessment of applicability, precision, and method reproducibility continue to be the focus of research in this area, and are occasionally accompanied by significant controversy. Differences in methodological techniques for data collection (e.g. number of sections, factor of magnification for counting or interpreting "outliers") are presented. Potential influences on method reliability are discussed, especially for their applicability in forensic contexts.
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Neuman, S. P.
2016-12-01
Environmental quantities such as log hydraulic conductivity (or transmissivity), Y(x) = ln K(x), and their spatial (or temporal) increments, ΔY, are known to be generally non-Gaussian. Documented evidence of such behavior includes symmetry of increment distributions at all separation scales (or lags) between incremental values of Y, with sharp peaks and heavy tails that decay asymptotically as lag increases. This statistical scaling occurs in porous as well as fractured media characterized by either one or a hierarchy of spatial correlation scales. In hierarchical media one observes a range of additional statistical ΔY scaling phenomena, all of which are captured comprehensively by a novel generalized sub-Gaussian (GSG) model. In this model Y forms a mixture Y(x) = U(x) G(x) of single- or multi-scale Gaussian processes G having random variances, U being a non-negative subordinator independent of G. Elsewhere we developed ways to generate unconditional and conditional random realizations of isotropic or anisotropic GSG fields which can be embedded in numerical Monte Carlo flow and transport simulations. Here we present and discuss expressions for probability distribution functions of Y and ΔY as well as their leading statistical moments. We then focus on a simple flow setting of mean uniform steady state flow in an unbounded, two-dimensional domain, exploring ways in which non-Gaussian heterogeneity affects stochastic flow and transport descriptions. Our expressions represent (a) leading-order autocovariance and cross-covariance functions of hydraulic head, velocity and advective particle displacement as well as (b) analogues of preasymptotic and asymptotic Fickian dispersion coefficients. We compare them with corresponding expressions developed in the literature for Gaussian Y.
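A minimal sampling sketch of the sub-Gaussian mixture Y = U G, assuming (purely for illustration) a lognormal subordinator U, reproduces the heavy-tailed behavior the abstract describes:

```python
import random
import statistics

# Sampling sketch of the generalized sub-Gaussian idea Y(x) = U(x) G(x):
# G is Gaussian, U a non-negative subordinator independent of G. The
# lognormal choice for U and all parameters are illustrative assumptions.
random.seed(1)
n = 20000
g = [random.gauss(0.0, 1.0) for _ in range(n)]
u = [random.lognormvariate(0.0, 0.75) for _ in range(n)]
y = [ui * gi for ui, gi in zip(u, g)]

def excess_kurtosis(xs):
    """Excess kurtosis: > 0 signals the sharp peak and heavy tails that
    distinguish the mixture from a plain Gaussian."""
    m = statistics.fmean(xs)
    s2 = statistics.fmean([(x - m) ** 2 for x in xs])
    m4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return m4 / s2 ** 2 - 3.0

print(round(excess_kurtosis(g), 2), round(excess_kurtosis(y), 2))
# the mixture y is markedly heavier-tailed than the plain Gaussian g
```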
Performance Data Gathering and Representation from Fixed-Size Statistical Data
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Jin, Haoqiang H.; Schmidt, Melisa A.; Kutler, Paul (Technical Monitor)
1997-01-01
The two commonly-used performance data types in the super-computing community, statistics and event traces, are discussed and compared. Statistical data are much more compact but lack the probative power event traces offer. Event traces, on the other hand, are unbounded and can easily fill up the entire file system during program execution. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. Two basic ideas are employed: the use of averages to replace recording data for each instance, and 'formulae' to represent sequences associated with communication and control flow. The user can trade off tracing overhead and trace data size against data quality incrementally. In other words, the user will be able to limit the amount of trace data collected and, at the same time, carry out some of the analysis event traces offer using space-time views. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected with event traces. We found that the trace files thus obtained are, indeed, small, bounded and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at runtime to learn longer sequences.
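The 'formulae' idea, representing a repeated event sequence by a pattern and a count rather than recording every instance, can be sketched as a greedy run detector. The event names are invented; the real system compresses trace records of communication and control flow.

```python
# Sketch of the 'formulae' idea: replace immediate repeats of a sub-sequence
# with a (pattern, count) pair instead of recording every event instance.

def compress(seq):
    """Greedily replace immediate repeats of a sub-sequence with a formula."""
    out, i = [], 0
    while i < len(seq):
        best = (seq[i:i + 1], 1)
        for width in range(1, len(seq) - i + 1):
            pat = seq[i:i + width]
            count = 1
            while seq[i + count * width:i + (count + 1) * width] == pat:
                count += 1
            # prefer genuine repeats that cover more of the sequence
            if count > 1 and count * width > len(best[0]) * best[1]:
                best = (pat, count)
        pat, count = best
        out.append((tuple(pat), count))
        i += count * len(pat)
    return out

events = ["send", "recv", "send", "recv", "send", "recv", "barrier"]
print(compress(events))  # [(('send', 'recv'), 3), (('barrier',), 1)]
```

The compressed form is bounded by the number of distinct patterns rather than the number of events, which is why such trace files stay small and predictable.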
Characterization of fracture aperture for groundwater flow and transport
NASA Astrophysics Data System (ADS)
Sawada, A.; Sato, H.; Tetsu, K.; Sakamoto, K.
2007-12-01
This paper presents experiments and numerical analyses of flow and transport carried out on natural fractures and transparent replicas of fractures. The purpose of this study was to improve the understanding of the role of heterogeneous aperture patterns on channelization of groundwater flow and dispersion in solute transport. The research proceeded as follows: First, a precision plane grinder was applied perpendicular to the fracture plane to characterize the aperture distribution on a natural fracture with a 1 mm increment size. Although both time- and labor-intensive, this approach provided a detailed, three dimensional picture of the pattern of fracture aperture. This information was analyzed to provide quantitative measures for the fracture aperture distribution, including JRC (Joint Roughness Coefficient) and fracture contact area ratio. These parameters were used to develop numerical models with corresponding synthetic aperture patterns. The transparent fracture replica and numerical models were then used to study how transport is affected by the aperture spatial pattern. In the transparent replica, transmitted light intensity measured by a CCD camera was used to image channeling and dispersion due to the fracture aperture spatial pattern. The CCD image data was analyzed to obtain the quantitative fracture aperture and tracer concentration data according to Lambert-Beer's law. The experimental results were analyzed using the numerical models. Comparison of the numerical models to the transparent replica provided information about the nature of channeling and dispersion due to aperture spatial patterns. These results support the development of a methodology for defining a representative fracture aperture of a simplified parallel fracture model for flow and transport in heterogeneous fractures for contaminant transport analysis.
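Converting transmitted-light intensity to aperture via Lambert-Beer's law can be sketched as follows; the calibration constants are illustrative, not the experiment's values.

```python
import math

# Inverting Lambert-Beer's law, I = I0 * 10**(-a * c * b), to recover the
# local fracture aperture b from transmitted-light intensity: a stronger
# attenuation (lower I/I0) implies a thicker dyed water column, i.e. a
# larger aperture. Calibration constants below are hypothetical.

def aperture_from_intensity(i_trans, i_zero, absorptivity, dye_conc):
    """Aperture b such that the dyed water column attenuates I0 down to I."""
    return math.log10(i_zero / i_trans) / (absorptivity * dye_conc)

b = aperture_from_intensity(i_trans=50.0, i_zero=100.0,
                            absorptivity=6.0, dye_conc=0.5)
print(round(b, 4))  # 0.1003, in the length units implied by the calibration
```

Applied pixel by pixel to a CCD image, this mapping yields the aperture field and, with a tracer dye, the concentration field used in the transport analysis.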
Prediction of River Flooding using Geospatial and Statistical Analysis in New York, USA and Kent, UK
NASA Astrophysics Data System (ADS)
Marsellos, A.; Tsakiri, K.; Smith, M.
2014-12-01
River flooding normally occurs during periods of excessive precipitation (e.g., New York, USA; Kent, UK) or ice jams during the winter period (New York, USA). For the prediction and mapping of river flooding, it is necessary to evaluate the spatial distribution of the water (volume) in the river as well as to study the interaction between the climatic and hydrological variables. Two study areas have been analyzed: one on the Mohawk River, New York and one in Kent, United Kingdom (UK). A high resolution Digital Elevation Model (DEM) of the Mohawk River, New York has been used for a GIS flooding simulation to determine the maximum water elevation beyond which the flow can no longer be confined to the trunk stream and flooding may be triggered. The Flooding Trigger Level (FTL) is determined by incremental volumetric and surface calculations from a Triangulated Irregular Network (TIN) with the use of GIS software and LiDAR data. The prediction of flooding in the river can also be improved by the statistical analysis of the hydrological and climatic variables in the Mohawk River and Kent, UK. A methodology of time series analysis has been applied for the decomposition of the hydrological (water flow and ground water data) and climatic data in both locations. The KZ (Kolmogorov-Zurbenko) filter is used for the decomposition of the time series into the long-term, seasonal, and short-term components. The explanation of the long-term component of the water flow using the climatic variables has been improved up to 90% for both locations. Similar analysis has been performed for the prediction of the seasonal and short-term components. This methodology can be applied to river flooding at multiple sites.
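The KZ filter used for the decomposition is simply an iterated moving average; a minimal sketch follows, where the window length, iteration count, and the toy series are illustrative choices.

```python
# The KZ (Kolmogorov-Zurbenko) filter is k iterations of an m-point moving
# average; the smoothed output estimates the long-term component and the
# residual holds the seasonal/short-term parts.

def moving_average(xs, m):
    """Centered m-point moving average with windows that shrink at the edges."""
    h = m // 2
    return [sum(xs[max(0, i - h):i + h + 1]) / len(xs[max(0, i - h):i + h + 1])
            for i in range(len(xs))]

def kz_filter(xs, m, k):
    """Apply the m-point moving average k times."""
    for _ in range(k):
        xs = moving_average(xs, m)
    return xs

series = [float(i % 4) for i in range(16)]   # a pure 'seasonal' cycle
long_term = kz_filter(series, m=5, k=3)      # smooths the cycle away
residual = [x - l for x, l in zip(series, long_term)]
```

Repeating the decomposition with progressively wider windows separates long-term, seasonal, and short-term components, which is how the water flow series are split before regression on the climatic variables.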
Computational Aeroelastic Analysis of Ares Crew Launch Vehicle Bi-Modal Loading
NASA Technical Reports Server (NTRS)
Massey, Steven J.; Chwalowski, Pawel
2010-01-01
A Reynolds averaged Navier-Stokes analysis, with and without dynamic aeroelastic effects, is presented for the Ares I-X launch vehicle at transonic Mach numbers and flight Reynolds numbers for two grid resolutions and two angles of attack. The purpose of the study is to quantify the force and moment increment imparted by the sudden transition from fully separated flow around the crew module - service module junction to that of the bi-modal flow state in which only part of the flow reattaches. The bi-modal flow phenomenon is of interest to the guidance, navigation and control community because it causes a discontinuous jump in forces and moments. Computations with a rigid structure at zero angle of attack indicate significant increases in normal force and pitching moment. Dynamic aeroelastic computations indicate the bi-modal flow state is insensitive to vehicle flexibility due to the resulting deflections imparting only very small changes in local angle of attack. At an angle of attack of 2.5 deg, the magnitude of the pitching moment increment resulting from the bi-modal state nearly triples, while occurring at a slightly lower Mach number. Significant grid induced variations between the solutions indicate that further grid refinement is warranted.
NASA Technical Reports Server (NTRS)
Kohl, F. J.
1982-01-01
The methodology to predict deposit evolution (deposition rate and subsequent flow of liquid deposits) as a function of fuel and air impurity content and relevant aerodynamic parameters for turbine airfoils is developed in this research. The spectrum of deposition conditions encountered in gas turbine operations includes the mechanisms of vapor deposition, small particle deposition with thermophoresis, and larger particle deposition with inertial effects. The focus is on using a simplified version of the comprehensive multicomponent vapor diffusion formalism to make deposition predictions for: (1) simple geometry collectors; and (2) gas turbine blade shapes, including both developing laminar and turbulent boundary layers. For the gas turbine blade the insights developed in previous programs are being combined with heat and mass transfer coefficient calculations using the STAN 5 boundary layer code to predict vapor deposition rates and corresponding liquid layer thicknesses on turbine blades. A computer program is being written which utilizes the local values of the calculated deposition rate and skin friction to calculate the increment in liquid condensate layer growth along a collector surface.
40 CFR 86.331-79 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... plot of the difference between the span and zero response versus fuel flow will be similar to the one... least one-half hour after the oven has reached temperature for the system to equilibrate. (c) Initial... difference between the span-gas response and the zero-gas response. Incrementally adjust the fuel flow above...
Studying Faculty Flows Using an Interactive Spreadsheet Model. AIR 1997 Annual Forum Paper.
ERIC Educational Resources Information Center
Kelly, Wayne
This paper describes a spreadsheet-based faculty flow model developed and implemented at the University of Calgary (Canada) to analyze faculty retirement, turnover, and salary issues. The study examined whether, given expected faculty turnover, the current salary increment system was sustainable in a stable or declining funding environment, and…
Statistical Analysis of CFD Solutions from the Third AIAA Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Hemsch, Michael J.
2007-01-01
The first AIAA Drag Prediction Workshop, held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third Drag Prediction Workshop focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This work evaluated the effect of grid refinement on the code-to-code scatter for the clean attached flow test cases and the separated flow test cases.
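The kind of code-to-code scatter statistic discussed above can be computed with a few lines; the drag coefficients below are made-up values for illustration, not DPW results, and 1 drag count = 0.0001 in drag coefficient.

```python
# Sketch: summary statistics for an N-version set of computed drag
# coefficients, expressed in drag counts (1 count = 1e-4 in CD).
import statistics

def scatter_stats(cd_values):
    counts = [cd / 1e-4 for cd in cd_values]      # convert CD to drag counts
    return {
        "median_counts": statistics.median(counts),
        "stdev_counts": statistics.stdev(counts),
        "range_counts": max(counts) - min(counts),
    }

# Hypothetical results from five codes on coarse and fine grids.
coarse = [0.0285, 0.0291, 0.0279, 0.0302, 0.0288]
fine   = [0.0286, 0.0289, 0.0284, 0.0295, 0.0287]
print(scatter_stats(coarse)["range_counts"], scatter_stats(fine)["range_counts"])
```

In this invented example grid refinement narrows the spread; the workshop finding was that, in practice, refinement did not reliably do so.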
Impact of Incremental Sampling Methodology (ISM) on Metals Bioavailability
2016-05-01
…vent that resembles gastrointestinal fluids. This technique mimics digestion in the human gut, resulting in a means to understand the human health… (ERDC TR-16-4)
Infantry Weapons Test Methodology Study. Volume 3. Light Machine Gun Test Methodology
1972-06-01
Trajectory and maximum ordinate. As the range to the target increases beyond 500 meters, the beaten zone will become shorter and wider. When fires are delivered into… the 6-mil increments on the traversing handwheel. The amount of elevation change is determined by the slope of the terrain, and to insure adequate target coverage, a burst is fired after…
Prajapati, Parna; Shah, Pankhil; King, Hollis H; Williams, Arthur G; Desai, Pratikkumar; Downey, H Fred
2010-09-01
Osteopathic lymphatic pump treatments (LPT) are used to treat edema, but their direct effects on lymph flow have not been studied. In the current study, we examined the effects of LPT on lymph flow in the thoracic duct of instrumented conscious dogs in the presence of edema produced by constriction of the inferior vena cava (IVC). Six dogs were surgically instrumented with an ultrasonic flow transducer on the thoracic lymph duct and catheters in the descending thoracic aorta and in the IVC. After postoperative recovery, lymph flow and hemodynamic variables were measured (1) pre-LPT, (2) during 4 min of LPT, and (3) post-LPT, in the absence and presence of edema produced by IVC constriction. This constriction increased abdominal girth from 60 +/- 2.6 to 75 +/- 2.9 cm. Before IVC constriction, LPT increased lymph flow (P < 0.05) from 1.9 +/- 0.2 ml/min to a maximum of 4.7 +/- 1.2 ml/min, whereas after IVC constriction, LPT increased lymph flow (P < 0.05) from 7.9 +/- 2.2 to a maximum of 11.7 +/- 2.2 ml/min. The incremental lymph flow mobilized by 4 min of LPT (i.e., the flow that exceeded 4 min of baseline flow) was 10.6 ml after IVC constriction. This incremental flow was not significantly greater than that measured before IVC constriction. Edema caused by IVC constriction markedly increased lymph flow in the thoracic duct. LPT increased thoracic duct lymph flow before and after IVC constriction. The lymph flow mobilized by 4 min of LPT in the presence of edema was not significantly greater than that mobilized prior to edema.
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be sufficient under an assumption of no between-study variation. However, despite the increase in expected net gain, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
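A minimal value-of-information sketch of the baseline setup the abstract refers to: for a normally distributed mean incremental net benefit with prior mean m and prior variance s0^2, a trial of size n (per-observation variance sigma^2) yields EVSI via the standard preposterior formula, and the expected net gain is population-level EVSI minus trial cost. All numbers are illustrative, and the paper's hierarchical extension with between-study variation is not reproduced here.

```python
# Sketch: expected value of sample information (EVSI) and optimal sample
# size for a go/no-go decision on a normal mean, no between-study variation.
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def evsi(m, s0, sigma, n):
    v_post = 1.0 / (1.0 / s0**2 + n / sigma**2)       # posterior variance
    s_d = math.sqrt(max(s0**2 - v_post, 0.0))         # sd of preposterior mean
    if s_d == 0.0:
        return 0.0
    # E[max(posterior mean, 0)] - max(prior mean, 0)
    return m * norm_cdf(m / s_d) + s_d * norm_pdf(m / s_d) - max(m, 0.0)

def optimal_n(m, s0, sigma, pop, cost_per_patient, n_max=5000):
    gains = {n: pop * evsi(m, s0, sigma, n) - cost_per_patient * n
             for n in range(0, n_max + 1, 50)}
    return max(gains, key=gains.get)

n_star = optimal_n(m=500.0, s0=800.0, sigma=20000.0, pop=100000,
                   cost_per_patient=2000.0)
```

EVSI rises with n while cost rises linearly, so the expected net gain is maximized at an interior sample size.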
Warwick, Peter D.; Verma, Mahendra K.; Attanasi, Emil; Olea, Ricardo A.; Blondes, Madalyn S.; Freeman, Philip; Brennan, Sean T.; Merrill, Matthew; Jahediesfanjani, Hossein; Roueche, Jacqueline; Lohr, Celeste D.
2017-01-01
The U.S. Geological Survey (USGS) has developed an assessment methodology for estimating the potential incremental technically recoverable oil resources resulting from carbon dioxide-enhanced oil recovery (CO2-EOR) in reservoirs with appropriate depth, pressure, and oil composition. The methodology also includes a procedure for estimating the CO2 that remains in the reservoir after the CO2-EOR process is complete. The methodology relies on a reservoir-level database that incorporates commercially available geologic and engineering data. The mathematical calculations of this assessment methodology were tested and produced realistic results for the Permian Basin Horseshoe Atoll, Upper Pennsylvanian-Wolfcampian Play (Texas, USA). The USGS plans to use the new methodology to conduct an assessment of technically recoverable hydrocarbons and associated CO2 sequestration resulting from CO2-EOR in the United States.
PIA and REWIND: Two New Methodologies for Cross Section Adjustment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmiotti, G.; Salvatores, M.
2017-02-01
This paper presents two new cross section adjustment methodologies intended to cope with the problem of compensations. The first, PIA (Progressive Incremental Adjustment), gives priority to the utilization of experiments of the elemental type (those sensitive to a specific cross section), following a definite hierarchy for which type of experiment to use. Once the adjustment is performed, both the new adjusted data and the new covariance matrix are kept. The second methodology is called REWIND (Ranking Experiments by Weighting for Improved Nuclear Data). This proposed approach tries to establish a methodology for ranking experiments by looking at the potential gain they can produce in an adjustment. Practical applications for different adjustments illustrate the results of the two methodologies against the current one and show the potential improvement for reducing uncertainties in target reactors.

NASA Technical Reports Server (NTRS)
Sandlin, Doral R.; Howard, Kipp E.
1991-01-01
A user friendly FORTRAN code that can be used for preliminary design of V/STOL aircraft is described. The program estimates lift increments, due to power induced effects, encountered by aircraft in V/STOL flight. These lift increments are calculated using empirical relations developed from wind tunnel tests and are due to suckdown, fountain, ground vortex, jet wake, and the reaction control system. The code can be used as a preliminary design tool along with NASA Ames' Aircraft Synthesis design code or as a stand-alone program for V/STOL aircraft designers. The Power Induced Effects (PIE) module was validated using experimental data and data computed from lift increment routines. Results are presented for many flat plate models along with the McDonnell Aircraft Company's MFVT (mixed flow vectored thrust) V/STOL preliminary design and a 15 percent scale model of the YAV-8B Harrier V/STOL aircraft. Trends and magnitudes of lift increments versus aircraft height above the ground were predicted well by the PIE module. The code also provided good predictions of the magnitudes of lift increments versus aircraft forward velocity. More experimental results are needed to determine how well the code predicts lift increments as they vary with jet deflection angle and angle of attack. The FORTRAN code is provided in the appendix.
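The structure of such a prediction, i.e., summing separate power-induced contributions into a total lift increment as a function of height above ground, can be sketched as below. The functional forms and coefficients are invented placeholders for illustration, not the wind-tunnel correlations used in the actual PIE module, and only three of the five effects are shown.

```python
# Sketch: composing power-induced lift increments (as fractions of thrust)
# for a jet V/STOL aircraft in ground effect, versus height/jet-diameter h/D.
import math

def lift_increments(h_over_d):
    """Return hypothetical individual and total lift increments at h/D."""
    suckdown = -0.08 * math.exp(-0.5 * h_over_d)    # negative, strongest low
    fountain = 0.05 * math.exp(-0.8 * h_over_d)     # positive, decays faster
    ground_vortex = -0.01 * math.exp(-1.0 * h_over_d)
    return {"suckdown": suckdown, "fountain": fountain,
            "ground_vortex": ground_vortex,
            "total": suckdown + fountain + ground_vortex}

profile = [(h, lift_increments(h)["total"]) for h in (0.5, 1.0, 2.0, 4.0, 8.0)]
```

As in the trends the abstract describes, the ground-effect increments die away as the aircraft climbs out of ground effect.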
Blood temperature and perfusion to exercising and non-exercising human limbs.
González-Alonso, José; Calbet, José A L; Boushel, Robert; Helge, Jørn W; Søndergaard, Hans; Munch-Andersen, Thor; van Hall, Gerrit; Mortensen, Stefan P; Secher, Niels H
2015-10-01
What is the central question of this study? Temperature-sensitive mechanisms are thought to contribute to blood-flow regulation, but the relationship between exercising and non-exercising limb perfusion and blood temperature is not established. What is the main finding and its importance? The close coupling among perfusion, blood temperature and aerobic metabolism in exercising and non-exercising extremities across different exercise modalities and activity levels and the tight association between limb vasodilatation and increases in plasma ATP suggest that both temperature- and metabolism-sensitive mechanisms are important for the control of human limb perfusion, possibly by activating ATP release from the erythrocytes. Temperature-sensitive mechanisms may contribute to blood-flow regulation, but the influence of temperature on perfusion to exercising and non-exercising human limbs is not established. Blood temperature (TB ), blood flow and oxygen uptake (V̇O2) in the legs and arms were measured in 16 healthy humans during 90 min of leg and arm exercise and during exhaustive incremental leg or arm exercise. During prolonged exercise, leg blood flow (LBF) was fourfold higher than arm blood flow (ABF) in association with higher TB and limb V̇O2. Leg and arm vascular conductance during exercise compared with rest was related closely to TB (r(2) = 0.91; P < 0.05), plasma ATP (r(2) = 0.94; P < 0.05) and limb V̇O2 (r(2) = 0.99; P < 0.05). During incremental leg exercise, LBF increased in association with elevations in TB and limb V̇O2, whereas ABF, arm TB and V̇O2 remained largely unchanged. During incremental arm exercise, both ABF and LBF increased in relationship to similar increases in V̇O2. In 12 trained males, increases in femoral TB and LBF during incremental leg exercise were mirrored by similar pulmonary artery TB and cardiac output dynamics, suggesting that processes in active limbs dominate central temperature and perfusion responses. 
The present data reveal a close coupling among perfusion, TB and aerobic metabolism in exercising and non-exercising extremities and a tight association between limb vasodilatation and increases in plasma ATP. These findings suggest that temperature and V̇O2 contribute to the regulation of limb perfusion through control of intravascular ATP. © 2015 The Authors Experimental Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
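A Monte Carlo baseline of the kind the FPE methodology is compared against can be sketched as follows: propagate an uncertain Manning roughness coefficient through a steady normal-flow relation and collect ensemble statistics of the discharge. The channel geometry and roughness distribution are illustrative assumptions; the actual study solves the full unsteady Saint-Venant equations.

```python
# Sketch: Monte Carlo propagation of an uncertain roughness coefficient
# through Manning's equation, Q = (1/n) * A * R^(2/3) * S^(1/2) (SI units).
import random
import statistics

def manning_discharge(n, width=10.0, depth=2.0, slope=0.001):
    area = width * depth
    radius = area / (width + 2.0 * depth)       # hydraulic radius A/P
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * slope ** 0.5

random.seed(1)
samples = [manning_discharge(random.gauss(0.035, 0.004)) for _ in range(20000)]
q_mean, q_var = statistics.fmean(samples), statistics.pvariance(samples)
```

The contrast with the FPE approach is that these ensemble statistics require thousands of model evaluations, whereas the FPE yields the evolving probability distribution in a single solve.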
Knowledge Sharing through Pair Programming in Learning Environments: An Empirical Study
ERIC Educational Resources Information Center
Kavitha, R. K.; Ahmed, M. S.
2015-01-01
Agile software development is an iterative and incremental methodology, where solutions evolve from self-organizing, cross-functional teams. Pair programming is a type of agile software development technique where two programmers work together with one computer for developing software. This paper reports the results of the pair programming…
Redefining the stormwater first flush phenomenon.
Bach, Peter M; McCarthy, David T; Deletic, Ana
2010-04-01
The first flush in urban runoff has been an important, yet disputed, phenomenon amongst many researchers. The vast differences in the evidence could be due solely to limitations of the current first flush definition and the approach used for its assessment. There is a need to revisit the first flush theory in the light of its practical applications to urban drainage management practices. We propose that a catchment's first flush behaviour be quantified by the runoff volume required to reduce the catchment's stormwater pollutant concentrations to background levels. The proposed method for assessing this runoff volume starts by finding the average catchment pollutant concentrations for a given increment of discharged volume using a number of event pollutographs. Non-parametric statistics are then used to establish the characteristic pollutograph by pooling statistically indistinguishable runoff increments (known as slices) together. This allows identification of the catchment's initial and background pollutant concentrations and quantification of the first flush volume and its strength. The novel technique was applied to seven catchments around Melbourne, Australia, with promising results. Sensitivity to the chosen increment of runoff (for which mean concentrations are calculated) indicated that when dealing with discrete flow-weighted water quality data, a suitable slice size should closely match the flow-weighting of samples. The overall sensitivity to runoff increment and level of significance was found to be negligible. Further research is needed to fully develop this method. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
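A simplified sketch of the proposed quantification: compute the mean concentration in successive runoff-volume increments ("slices") from a pollutograph, then report the cumulative volume at which the mean drops to the background level. A simple threshold stands in for the paper's non-parametric pooling of slices, and the pollutograph data are synthetic.

```python
# Sketch: first-flush volume as the runoff volume at which slice-mean
# pollutant concentration falls to a background level.
def first_flush_volume(volumes, concentrations, slice_size, background):
    """volumes: cumulative runoff (mm); concentrations: sampled values."""
    slices = {}
    for v, c in zip(volumes, concentrations):
        slices.setdefault(int(v // slice_size), []).append(c)
    for idx in sorted(slices):
        mean_c = sum(slices[idx]) / len(slices[idx])
        if mean_c <= background:
            return idx * slice_size        # first-flush volume reached
    return None                            # background never reached

vol = [0.5 * i for i in range(20)]                 # 0 to 9.5 mm of runoff
conc = [30.0 * (0.5 ** v) + 5.0 for v in vol]      # decays to background ~5
ffv = first_flush_volume(vol, conc, slice_size=1.0, background=6.0)
```

For this synthetic decaying pollutograph the slice means fall to the 6.0 background threshold after 5 mm of runoff.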
Phase retrieval via incremental truncated amplitude flow algorithm
NASA Astrophysics Data System (ADS)
Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao
2017-10-01
This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF and TAF algorithms, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, and improves performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vectors and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verify the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it obtains a higher success rate and faster convergence than other algorithms. In particular, for noiseless random Gaussian signals, ITAF can accurately recover any real-valued signal from a number of magnitude measurements about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length). It usually converges to the optimal solution within 20 iterations, far fewer than state-of-the-art algorithms require.
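A stripped-down incremental amplitude-flow update of the kind ITAF builds on can be sketched as follows: sweep the magnitude measurements one at a time and take a projection-type step on (|a_i . z| - y_i)^2. This is not the ITAF algorithm itself; the truncation rules and the spectral/orthogonality-promoting initialization are omitted, and the start point is simply perturbed from the true signal, so the sketch only illustrates the local incremental refinement stage.

```python
# Sketch: incremental (one-measurement-at-a-time) amplitude-flow refinement
# for real-valued phase retrieval from magnitude measurements y = |A x|.
import random

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def incremental_amplitude_flow(A, y, z, step=1.0, sweeps=100):
    for _ in range(sweeps):
        for i in range(len(A)):
            p = dot(A[i], z)
            sign = 1.0 if p >= 0 else -1.0
            scale = step * (y[i] - abs(p)) * sign / dot(A[i], A[i])
            z = [zj + scale * aij for zj, aij in zip(z, A[i])]
    return z

random.seed(0)
n, m = 8, 24                                    # ~3x oversampling
x = [random.gauss(0, 1) for _ in range(n)]      # true real-valued signal
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
y = [abs(dot(a, x)) for a in A]
z0 = [xi + 0.3 * random.gauss(0, 1) for xi in x]
z = incremental_amplitude_flow(A, y, z0)
err0 = sum((a - b) ** 2 for a, b in zip(z0, x))
err = min(sum((zi - s * xi) ** 2 for zi, xi in zip(z, x)) for s in (1.0, -1.0))
```

The recovery error is measured up to the unavoidable global sign ambiguity of magnitude-only measurements.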
The Capillary Flow Experiments Aboard the International Space Station: Increments 9-15
NASA Technical Reports Server (NTRS)
Jenson, Ryan M.; Weislogel, Mark M.; Tavan, Noel T.; Chen, Yongkang; Semerjian, Ben; Bunnell, Charles T.; Collicott, Steven H.; Klatte, Jorg; Dreyer, Michael E.
2009-01-01
This report provides a summary of the experimental, analytical, and numerical results of the Capillary Flow Experiment (CFE) performed aboard the International Space Station (ISS). The experiments were conducted in space from Increment 9 through Increment 16, beginning in August 2004 and ending in December 2007. Both primary and extra science experiments were conducted during 19 operations performed by 7 astronauts: M. Fincke, W. McArthur, J. Williams, S. Williams, M. Lopez-Alegria, C. Anderson, and P. Whitson. CFE consists of 6 approximately 1 to 2 kg handheld experiment units designed to investigate a selection of capillary phenomena of fundamental and applied importance, such as large length scale contact line dynamics (CFE-Contact Line), critical wetting in discontinuous structures (CFE-Vane Gap), and capillary flows and passive phase separations in complex containers (CFE-Interior Corner Flow). Highly quantitative video from the simply performed flight experiments provides data helpful in benchmarking numerical methods, confirming theoretical models, and guiding new model development. In an extensive executive summary, a brief history of the experiment is reviewed before introducing the science investigated. A selection of experimental results and comparisons with both analytic and numerical predictions is given. The subsequent chapters provide additional details of the experimental and analytical methods developed and employed, including current presentations of the state of the data reduction, which we anticipate will continue throughout the year and culminate in several more publications. An extensive appendix provides support material such as an experiment history, dissemination items to date (CFE publications, etc.), detailed design drawings, and crew procedures.
Despite the simple nature of the experiments and procedures, many of the experimental results may be practically employed to enhance the design of spacecraft engineering systems involving capillary interface dynamics.
Delineating Area of Review in a System with Pre-injection Relative Overpressure
Oldenburg, Curtis M.; Cihan, Abdullah; Zhou, Quanlin; ...
2014-12-31
The Class VI permit application for geologic carbon sequestration (GCS) requires delineation of an area of review (AoR), defined as the region surrounding the GCS project where underground sources of drinking water (USDWs) may be endangered. The methods for estimating AoR under the Class VI regulation were developed assuming that GCS reservoirs would be in hydrostatic equilibrium with overlying aquifers. Here we develop and apply an approach to estimating AoR for sites with pre-injection relative overpressure, for which standard AoR estimation methods produce an infinite AoR. The approach we take is to compare brine leakage through a hypothetical open flow path in the base-case (no-injection) scenario to the incrementally larger leakage that would occur in the CO2-injection case. To estimate AoR by this method, we used semi-analytical solutions to single-phase flow equations to model reservoir pressurization and flow up (single) leaky wells located at progressively greater distances from the injection well. We found that the incrementally larger flow rates for hypothetical leaky wells located 6 km and 4 km from the injection well are ~20% and 30% greater, respectively, than hypothetical baseline leakage rates. If total brine leakage is considered, the results depend strongly on how the incremental increase in total leakage is calculated, varying from a few percent up to 40% greater (at most at early time) than base-case total leakage.
Kindermann, Georg E; Schörghuber, Stefan; Linkosalo, Tapio; Sanchez, Anabel; Rammer, Werner; Seidl, Rupert; Lexer, Manfred J
2013-02-01
Forests play an important role in the global carbon flow. They can store carbon and can also provide wood, which can substitute for other materials. In the EU27 the standing biomass is steadily increasing. Increments and harvests seem to have reached a plateau between 2005 and 2010. One reason for this plateau is that the forests are getting older. High stand ages have the advantage that they typically show high carbon stocks, and the disadvantage that increment rates are decreasing. We investigate how biomass stock, harvests and increments will develop under different climate scenarios and two management scenarios, one forcing forests to store large amounts of biomass and the other aiming at high increment rates and large amounts of harvested wood. A management regime that maximises standing biomass will raise stem wood carbon stocks from 30 tC/ha to 50 tC/ha by 2100. A management regime that maximises increments will lower the stock to 20 tC/ha by 2100. The estimates for the climate scenarios A1b, B1 and E1 differ, but the management target has a much larger effect than the climate scenario. When maximising increments, harvests are 0.4 tC/ha/year higher than under the management regime that maximises standing biomass. The increments until 2040 are close together, but around 2100 the increments when maximising standing biomass are approximately 50% lower than those when maximising increments. Cold regions will benefit from the climate changes in these scenarios by showing higher increments. The results of this study suggest that forest management should maximise increments, not stocks, to be more efficient in the sense of climate change mitigation. This is especially true for regions that already have high carbon stocks in forests, as is the case in many regions of Europe.
During the time span 2010-2100 the forests of the EU27 will absorb an additional 1750 million tC if they are managed to maximise increments rather than standing biomass. Incentives that increase the standing biomass beyond the increment-optimal level should therefore be avoided. Mechanisms that maximise increments and sustainable harvests need to be developed to provide substantial amounts of wood that can substitute for non-sustainable materials.
Nasir, Hina; Javaid, Nadeem; Sher, Muhammad; Qasim, Umar; Khan, Zahoor Ali; Alrajeh, Nabil; Niaz, Iftikhar Azim
2016-01-01
This paper makes a two-fold contribution for Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on that analysis, the proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on a cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols like Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, than CARQ in a hard underwater environment. PMID:27420061
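The incremental-relaying idea analyzed above can be illustrated with a small Monte Carlo sketch: the source transmits, and only if the direct link fails to support the target rate do the available relays retransmit one by one until one succeeds. Independent Rayleigh-fading links and all parameter values are generic textbook assumptions, not the underwater acoustic channel model of the paper.

```python
# Sketch: empirical outage probability of incremental relaying versus
# direct transmission over independent Rayleigh-fading links.
import math
import random

def link_ok(snr_avg, rate):
    gain = random.expovariate(1.0)              # Rayleigh fading power gain
    return math.log2(1.0 + snr_avg * gain) >= rate

def outage_prob(snr_avg, rate, n_relays, trials=50000):
    outages = 0
    for _ in range(trials):
        if link_ok(snr_avg, rate):
            continue                            # direct transmission succeeds
        if not any(link_ok(snr_avg, rate) for _ in range(n_relays)):
            outages += 1                        # all incremental retries fail
    return outages / trials

random.seed(7)
p_direct = outage_prob(snr_avg=10.0, rate=2.0, n_relays=0)
p_coop = outage_prob(snr_avg=10.0, rate=2.0, n_relays=2)
```

With independent links a single-link outage probability p becomes roughly p^(1+N) with N incremental relays, which is the diversity gain the protocols exploit.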
Lucero, Adam A; Addae, Gifty; Lawrence, Wayne; Neway, Beemnet; Credeur, Daniel P; Faulkner, James; Rowlands, David; Stoner, Lee
2018-01-01
What is the central question of this study? Continuous-wave near-infrared spectroscopy, coupled with venous and arterial occlusions, offers an economical, non-invasive alternative to measuring skeletal muscle blood flow and oxygen consumption, but its reliability during exercise has not been established. What is the main finding and its importance? Continuous-wave near-infrared spectroscopy devices can reliably assess local skeletal muscle blood flow and oxygen consumption from the vastus lateralis in healthy, physically active adults. The patterns of response exhibited during exercise of varying intensity agree with other published results using similar methodologies, meriting potential applications in clinical diagnosis and therapeutic assessment. Near-infrared spectroscopy (NIRS), coupled with rapid venous and arterial occlusions, can be used for the non-invasive estimation of resting local skeletal muscle blood flow (mBF) and oxygen consumption (mV̇O2), respectively. However, the day-to-day reliability of mBF and mV̇O2 responses to stressors such as incremental dynamic exercise has not been established. The aim of this study was to determine the reliability of NIRS-derived mBF and mV̇O2 responses from incremental dynamic exercise. Measurements of mBF and mV̇O2 were collected in the vastus lateralis of 12 healthy, physically active adults [seven men and five women; 25 (SD 6) years old] during three non-consecutive visits within 10 days. After 10 min rest, participants performed 3 min of rhythmic isotonic knee extension (one extension every 4 s) at 5, 10, 15, 20, 25 and 30% of maximal voluntary contraction (MVC), before four venous occlusions and then two arterial occlusions. The mBF and mV̇O2 increased proportionally with intensity [from 0.55 to 7.68 ml min -1 (100 ml) -1 and from 0.05 to 1.86 ml O 2 min -1 (100 g) -1 , respectively] up to 25% MVC, where they began to plateau at 30% MVC. 
Moreover, an mBF-to-mV̇O2 ratio of ∼5 was consistent across all exercise stages. The intraclass correlation coefficient for mBF indicated high to very high reliability for 10-30% MVC (0.82-0.9). There was very high reliability for mV̇O2 across all exercise stages (intraclass correlation coefficient 0.91-0.96). In conclusion, NIRS can reliably assess muscle blood flow and oxygen consumption responses to low- to moderate-intensity exercise, meriting potential applications in clinical diagnosis and therapeutic assessment. © 2017 The Authors. Experimental Physiology © 2017 The Physiological Society.
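The reliability statistic reported above can be computed from an n-subjects x k-visits table; the sketch below implements a two-way random-effects, absolute-agreement, single-measures intraclass correlation coefficient, ICC(2,1), from its standard ANOVA mean squares. Whether the study used exactly this ICC form is an assumption, and the measurement values are synthetic, not the study's NIRS data.

```python
# Sketch: ICC(2,1) = (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n)
# computed from raw two-way ANOVA sums of squares.
def icc_2_1(data):
    """data: list of per-subject lists, one value per visit."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between visits
    sst = sum((x - grand) ** 2 for row in data for x in row)
    mse = (sst - ssr - ssc) / ((n - 1) * (k - 1))        # residual
    msr, msc = ssr / (n - 1), ssc / (k - 1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic mBF-like values for 6 subjects over 3 visits (stable test-retest).
visits = [[2.1, 2.2, 2.0], [4.0, 4.2, 4.1], [3.1, 3.0, 3.2],
          [5.5, 5.4, 5.6], [2.8, 2.9, 2.7], [4.6, 4.5, 4.7]]
```

Because between-subject spread dwarfs the visit-to-visit noise in this synthetic table, the ICC lands near 1, in the "very high reliability" band the abstract cites.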
Optimal Diameter Growth Equations for Major Tree Species of the Midsouth
Don C. Bragg
2003-01-01
Optimal diameter growth equations for 60 major tree species were fit using the potential relative increment (PRI) methodology. Almost 175,000 individuals from the Midsouth (Arkansas, Louisiana, Missouri, Oklahoma, and Texas) were selected from the USDA Forest Service's Eastwide Forest Inventory Database (EFIDB). These records were then reduced to the individuals...
Opinion: An Argument for Archival Research Methods--Thinking beyond Methodology
ERIC Educational Resources Information Center
L'Eplattenier, Barbara E.
2009-01-01
Historians of rhetoric and composition need to be more explicit and specific about their investigative methods when reporting their research, states this author. This should be done in a systematic and incremental way that both highlights the uniqueness of archival study and creates the depth and breadth of knowledge required to begin…
Data mining for signals in spontaneous reporting databases: proceed with caution.
Stephenson, Wendy P; Hauben, Manfred
2007-04-01
To provide commentary and points of caution to consider before incorporating data mining as a routine component of any pharmacovigilance program, and to stimulate further research aimed at better defining the predictive value of these new tools as well as their incremental value as an adjunct to traditional methods of post-marketing surveillance. The commentary includes a review of current data mining methodologies and their limitations, caveats on the use of spontaneous reporting databases, and caution against over-confidence in the results of data mining. Future research should focus on more clearly delineating the limitations of the various quantitative approaches as well as the incremental value they bring to traditional methods of pharmacovigilance.
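One of the quantitative data-mining methods alluded to above, shown concretely: the proportional reporting ratio (PRR) computed from a 2x2 contingency table of a spontaneous-reporting database. The counts are invented for illustration; in keeping with the commentary's caution, a PRR well above 1 only flags a drug-event pair for clinical review, it does not establish causality.

```python
# Sketch: proportional reporting ratio (PRR) for disproportionality analysis.
def prr(a, b, c, d):
    """a: reports with drug & event, b: drug without the event,
    c: other drugs with the event, d: other drugs without the event."""
    rate_drug = a / (a + b)        # event rate among reports for the drug
    rate_other = c / (c + d)       # event rate among all other reports
    return rate_drug / rate_other

signal = prr(a=30, b=970, c=200, d=98800)      # hypothetical counts
```

Here the drug's event reporting rate (3%) is about 15 times the background rate (~0.2%), the kind of statistical "signal" whose predictive value the commentary argues is still poorly characterized.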
Dark Flows in Newton Crater Extending During Summer Six-Image Sequence
2011-08-04
This image comes from observations of Newton crater by the HiRISE camera aboard NASA's Mars Reconnaissance Orbiter, where features appear and incrementally grow during warm seasons and fade in cold seasons.
Sinanovic, Edina; Ramma, Lebogang; Foster, Nicola; Berrie, Leigh; Stevens, Wendy; Molapo, Sebaka; Marokane, Puleng; McCarthy, Kerrigan; Churchyard, Gavin; Vassall, Anna
2016-01-01
Abstract Purpose Estimating the incremental costs of scaling‐up novel technologies in low‐income and middle‐income countries is a methodologically challenging and substantial empirical undertaking in the absence of routine cost data collection. We demonstrate a best-practice pragmatic approach to estimating the incremental costs of new technologies in low‐income and middle‐income countries, using the example of costing the scale‐up of Xpert Mycobacterium tuberculosis (MTB)/resistance to rifampicin (RIF) testing in South Africa. Materials and methods We estimate costs by applying two distinct approaches, bottom‐up and top‐down costing, together with an assessment of processes and capacity. Results The unit costs measured using bottom‐up and top‐down costing, respectively, are $US16.9 and $US33.5 for Xpert MTB/RIF, and $US6.3 and $US8.5 for microscopy. The incremental cost of Xpert MTB/RIF is estimated to be between $US14.7 and $US17.7. While the average cost of Xpert MTB/RIF was higher than in previous studies using standard methods, the incremental cost of Xpert MTB/RIF was found to be lower. Conclusion Cost estimates are highly dependent on the method used, so an approach that clearly identifies whether resource‐use data were collected from a bottom‐up or top‐down perspective, together with capacity measurement, is recommended as a pragmatic way to capture true incremental cost where routine cost data are scarce. PMID:26763594
Cunnama, Lucy; Sinanovic, Edina; Ramma, Lebogang; Foster, Nicola; Berrie, Leigh; Stevens, Wendy; Molapo, Sebaka; Marokane, Puleng; McCarthy, Kerrigan; Churchyard, Gavin; Vassall, Anna
2016-02-01
Estimating the incremental costs of scaling-up novel technologies in low-income and middle-income countries is a methodologically challenging and substantial empirical undertaking in the absence of routine cost data collection. We demonstrate a best-practice pragmatic approach to estimating the incremental costs of new technologies in low-income and middle-income countries, using the example of costing the scale-up of Xpert Mycobacterium tuberculosis (MTB)/resistance to rifampicin (RIF) testing in South Africa. We estimate costs by applying two distinct approaches, bottom-up and top-down costing, together with an assessment of processes and capacity. The unit costs measured using bottom-up and top-down costing, respectively, are $US16.9 and $US33.5 for Xpert MTB/RIF, and $US6.3 and $US8.5 for microscopy. The incremental cost of Xpert MTB/RIF is estimated to be between $US14.7 and $US17.7. While the average cost of Xpert MTB/RIF was higher than in previous studies using standard methods, the incremental cost of Xpert MTB/RIF was found to be lower. Cost estimates are highly dependent on the method used, so an approach that clearly identifies whether resource-use data were collected from a bottom-up or top-down perspective, together with capacity measurement, is recommended as a pragmatic way to capture true incremental cost where routine cost data are scarce. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
Konchak, Chad; Prasad, Kislaya
2012-01-01
Objectives To develop a methodology for integrating social networks into traditional cost-effectiveness analysis (CEA) studies. This will facilitate the economic evaluation of treatment policies in settings where health outcomes are subject to social influence. Design This is a simulation study based on a Markov model. The lifetime health histories of a cohort are simulated, and health outcomes compared, under alternative treatment policies. Transition probabilities depend on the health of others with whom there are shared social ties. Setting The methodology developed is shown to be applicable in any healthcare setting where social ties affect health outcomes. The example of obesity prevention is used for illustration under the assumption that weight changes are subject to social influence. Main outcome measures Incremental cost-effectiveness ratio (ICER). Results When social influence increases, treatment policies become more cost effective (have lower ICERs). The policy of only treating individuals who span multiple networks can be more cost effective than the policy of treating everyone. This occurs when the network is more fragmented. Conclusions (1) When network effects are accounted for, they result in very different values of incremental cost-effectiveness ratios (ICERs). (2) Treatment policies can be devised to take network structure into account. The integration makes it feasible to conduct a cost-benefit evaluation of such policies. PMID:23117559
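The network-dependent Markov structure described above can be sketched in a few lines. The following is a minimal illustration only, not the authors' model: the transition probabilities, costs, QALY weights, and the random social network are all invented for demonstration.

```python
import numpy as np

def simulate(treated, n=200, years=20, influence=0.3, seed=0):
    """Markov cohort in which the yearly risk of entering the 'obese'
    state rises with the fraction of obese social contacts."""
    rng = np.random.default_rng(seed)      # common random numbers across runs
    adj = rng.random((n, n)) < 5.0 / n     # random network, ~5 ties per person
    adj = (np.triu(adj, 1) | np.triu(adj, 1).T).astype(float)
    obese = rng.random(n) < 0.2            # initial prevalence (hypothetical)
    cost = qaly = 0.0
    for _ in range(years):
        deg = np.maximum(adj.sum(axis=1), 1.0)
        peer = (adj @ obese.astype(float)) / deg   # fraction of obese contacts
        p_on = 0.05 + influence * peer             # social influence on incidence
        p_on = np.where(treated, 0.5 * p_on, p_on) # treatment halves the risk
        u = rng.random(n)
        obese = np.where(obese, u > 0.10, u < p_on)  # 0.10 = remission prob.
        cost += 500.0 * treated.sum() + 1000.0 * obese.sum()
        qaly += n - 0.2 * obese.sum()
    return cost, qaly

n = 200
c0, q0 = simulate(np.zeros(n, dtype=bool))   # treat nobody
c1, q1 = simulate(np.ones(n, dtype=bool))    # treat everyone
icer = (c1 - c0) / (q1 - q0)                 # incremental cost per QALY gained
print(round(icer))
```

Reusing the same seed across policy runs (common random numbers) is a standard variance-reduction choice when comparing simulated policies; a policy that treats only network-spanning individuals would simply pass a different `treated` mask.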
A Numerical Process Control Method for Circular-Tube Hydroforming Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Kenneth I.; Nguyen, Ba Nghiep; Davies, Richard W.
2004-03-01
This paper describes the development of a solution control method that tracks the stresses, strains and mechanical behavior of a tube during hydroforming to estimate the proper axial feed (end-feed) and internal pressure loads through time. The analysis uses the deformation theory of plasticity and Hill's criterion to describe the plastic flow. Before yielding, the pressure and end-feed increments are estimated based on the initial tube geometry, elastic properties and yield stress. After yielding, the pressure increment is calculated based on the tube geometry at the previous solution increment and the current hoop stress increment. The end-feed increment is computed from the increment of the axial plastic strain. Limiting conditions such as column buckling (of long tubes), local axi-symmetric wrinkling of shorter tubes, and bursting due to localized wall thinning are considered. The process control method has been implemented in the Marc finite element code. Hydroforming simulations using this process control method were conducted to predict the load histories for controlled expansion of 6061-T4 aluminum tubes within a conical die shape and under free hydroforming conditions. The predicted loading paths were transferred to the hydroforming equipment to form the conical and free-formed tube shapes. The model predictions and experimental results are compared for deformed shape, strains and the extent of forming at rupture.
How Fast Do Europa's Ridges Grow?
NASA Astrophysics Data System (ADS)
Melosh, H. J.; Turtle, E. P.; Freed, A. M.
2017-11-01
We demonstrate with our incremental wedging model of ridge formation that ridges must grow in 5000 years or less to prevent their material flowing down an underlying warm ice channel. This conclusion holds for other models as well.
NASA Astrophysics Data System (ADS)
Liu, Fang; Luo, Qingming; Xu, Guodong; Li, Pengcheng
2003-12-01
Near infrared spectroscopy (NIRS) has been developed as a non-invasive method to assess O2 delivery, O2 consumption and blood flow in diverse local muscle groups at rest and during exercise. The aim of this study was to investigate local O2 consumption in exercising muscle by use of NIRS. Ten elite athletes from different sports were tested at rest and during step incremental load exercise. Local variations in the quadriceps muscles were investigated with our wireless NIRS blood oxygen monitor system. The results show that the changes in blood oxygen depend on the sport, the type of muscle, kinetic capacity, and other factors. These results indicate that NIRS is a potentially useful tool for detecting local muscle oxygenation and blood flow profiles; therefore it might be easily applied for evaluating the effect of athletes' training.
Transport of Internetwork Magnetic Flux Elements in the Solar Photosphere
NASA Astrophysics Data System (ADS)
Agrawal, Piyush; Rast, Mark P.; Gošić, Milan; Bellot Rubio, Luis R.; Rempel, Matthias
2018-02-01
The motions of small-scale magnetic flux elements in the solar photosphere can provide some measure of the Lagrangian properties of the convective flow. Measurements of these motions have been critical in estimating the turbulent diffusion coefficient in flux-transport dynamo models and in determining the Alfvén wave excitation spectrum for coronal heating models. We examine the motions of internetwork flux elements in Hinode/Narrowband Filter Imager magnetograms and study the scaling of their mean squared displacement and the shape of their displacement probability distribution as a function of time. We find that the mean squared displacement scales super-diffusively with a slope of about 1.48. Super-diffusive scaling has been observed in other studies for temporal increments as small as 5 s, increments over which ballistic scaling would be expected. Using high-cadence MURaM simulations, we show that the observed super-diffusive scaling at short increments is a consequence of random changes in barycenter positions due to flux evolution. We also find that for long temporal increments, beyond granular lifetimes, the observed displacement distribution deviates from that expected for a diffusive process, evolving from Rayleigh to Gaussian. This change in distribution can be modeled analytically by accounting for supergranular advection along with granular motions. These results complicate the interpretation of magnetic element motions as strictly advective or diffusive on short and long timescales and suggest that measurements of magnetic element motions must be used with caution in turbulent diffusion or wave excitation models. We propose that passive tracer motions in measured photospheric flows may yield more robust transport statistics.
Allocating Virtual and Physical Flows for Multiagent Teams in Mutable, Networked Environments
2012-08-01
dividing between z flow among the first l − 1 children and reserving the remaining y − z flow for the lth child , for some z ∈ [0..y]. In lines 13–21 we...use them when considering the parent of v, which must consider all possible ways to divide its flow between its children (i.e., v and v’s siblings ... studies the solution quality and runtime results for LP 2 for k = 1 and for k between 10 to 50 in increments of 10, as shown in Table 4.2. Note that
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Rui; Zhang, Yingchen
Designing market mechanisms for electricity distribution systems has been a hot topic due to the increased presence of smart loads and distributed energy resources (DERs) in distribution systems. The distribution locational marginal pricing (DLMP) methodology is one of the real-time pricing methods to enable such market mechanisms and provide economic incentives to active market participants. Determining the DLMP is challenging due to high power losses, the voltage volatility, and the phase imbalance in distribution systems. Existing DC Optimal Power Flow (OPF) approaches are unable to model power losses and the reactive power, while single-phase AC OPF methods cannot capture the phase imbalance. To address these challenges, in this paper, a three-phase AC OPF based approach is developed to define and calculate DLMP accurately. The DLMP is modeled as the marginal cost to serve an incremental unit of demand at a specific phase at a certain bus, and is calculated using the Lagrange multipliers in the three-phase AC OPF formulation. Extensive case studies have been conducted to understand the impact of system losses and the phase imbalance on DLMPs as well as the potential benefits of flexible resources.
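The idea of a locational marginal price as a Lagrange multiplier can be illustrated with a deliberately simplified single-bus, lossless dispatch problem (the paper's three-phase AC OPF is far richer); the generator costs, capacities, and demand below are invented for illustration, and the dual value is read from SciPy's HiGHS solution.

```python
from scipy.optimize import linprog

# Two hypothetical generators: marginal costs ($/MWh) and capacities (MW)
costs = [10.0, 30.0]
caps = [(0, 50), (0, 100)]
demand = 80.0

# minimize total cost subject to g1 + g2 = demand
res = linprog(c=costs, A_eq=[[1, 1]], b_eq=[demand],
              bounds=caps, method="highs")

# The Lagrange multiplier of the power-balance constraint is the marginal
# price: the change in total cost for one more MW of demand (sign follows
# the SciPy/HiGHS convention).
lmp = res.eqlin.marginals[0]
print(res.x, lmp)
```

Here the cheap generator is at its 50 MW limit, so the marginal MW is served by the $30/MWh unit, and the multiplier's magnitude is 30; in a networked, lossy, phase-imbalanced formulation the analogous multipliers differ by bus and phase.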
ERIC Educational Resources Information Center
Brotheridge, Celeste M.; Power, Jacqueline L.
2008-01-01
Purpose: This study seeks to examine the extent to which the use of career center services results in the significant incremental prediction of career outcomes beyond its established predictors. Design/methodology/approach: The authors survey the clients of a public agency's career center and use hierarchical multiple regressions in order to…
Successful Principalship in Norway: Sustainable Ethos and Incremental Changes?
ERIC Educational Resources Information Center
Moller, Jorunn; Vedoy, Gunn; Presthus, Anne Marie; Skedsmo, Guri
2009-01-01
Purpose: The purpose of this paper is to explore whether and how success has been sustained over time in schools which were identified as being successful five years ago. Design/methodology/approach: Three schools were selected for a revisit, and the sample included two combined schools (grade 1-10) and one upper secondary school (grade 11-13). In…
Code of Federal Regulations, 2011 CFR
2011-04-01
... section 212 of the FPA, access to the electric transmission system for the purposes of wholesale... on its transmission system. (5) The names of any other parties likely to provide transmission service... requirement by specifying a rate methodology (e.g., embedded or incremental cost) or by referencing an...
NASA Astrophysics Data System (ADS)
Chen, Zhi; Hu, Kun; Stanley, H. Eugene; Novak, Vera; Ivanov, Plamen Ch.
2006-03-01
We investigate the relationship between the blood flow velocities (BFV) in the middle cerebral arteries and beat-to-beat blood pressure (BP) recorded from a finger in healthy and post-stroke subjects during the quasisteady state after perturbation for four different physiologic conditions: supine rest, head-up tilt, hyperventilation, and CO2 rebreathing in upright position. To evaluate whether instantaneous BP changes in the steady state are coupled with instantaneous changes in the BFV, we compare dynamical patterns in the instantaneous phases of these signals, obtained from the Hilbert transform, as a function of time. We find that in post-stroke subjects the instantaneous phase increments of BP and BFV exhibit well-pronounced patterns that remain stable in time for all four physiologic conditions, while in healthy subjects these patterns are different, less pronounced, and more variable. We propose an approach based on the cross-correlation of the instantaneous phase increments to quantify the coupling between BP and BFV signals. We find that the maximum correlation strength is different for the two groups and for the different conditions. For healthy subjects the amplitude of the cross-correlation between the instantaneous phase increments of BP and BFV is small and attenuates within 3-5 heartbeats. In contrast, for post-stroke subjects, this amplitude is significantly larger and cross-correlations persist up to 20 heartbeats. Further, we show that the instantaneous phase increments of BP and BFV are cross-correlated even within a single heartbeat cycle. We compare the results of our approach with three complementary methods: direct BP-BFV cross-correlation, transfer function analysis, and phase synchronization analysis. 
Our findings provide insight into the mechanism of cerebral vascular control in healthy subjects, suggesting that this control mechanism may involve rapid adjustments (within a heartbeat) of the cerebral vessels, so that BFV remains steady in response to changes in peripheral BP.
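The core of the proposed approach, extracting instantaneous phases with the Hilbert transform and cross-correlating their increments, can be sketched on synthetic data. This is a toy illustration with invented signals sharing a noisy common rhythm, not the clinical BP/BFV recordings.

```python
import numpy as np
from scipy.signal import hilbert

def phase_increments(x):
    """Increments of the instantaneous phase of the analytic signal."""
    phase = np.unwrap(np.angle(hilbert(x - x.mean())))
    return np.diff(phase)

rng = np.random.default_rng(1)
dt, n = 0.01, 6000                       # 60 s at 100 Hz (hypothetical)
# shared noisy ~1 Hz rhythm driving both surrogate signals
dphi = 2 * np.pi * 1.0 * dt + 0.05 * rng.standard_normal(n)
phi = np.cumsum(dphi)
bp = np.sin(phi)                                         # surrogate "BP"
bfv = np.sin(phi + 0.5) + 0.1 * rng.standard_normal(n)   # surrogate "BFV"

a, b = phase_increments(bp), phase_increments(bfv)
a = (a - a.mean()) / a.std()
b = (b - b.mean()) / b.std()
r0 = float(np.dot(a, b) / a.size)        # zero-lag correlation of increments
print(round(r0, 2))
```

Because the two surrogates share the same frequency fluctuations, the phase-increment correlation is clearly positive despite the additive noise; for uncoupled signals it would hover near zero, which is the contrast the paper exploits between healthy and post-stroke subjects.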
Vallecilla, Carolina; Khiabani, Reza H; Sandoval, Néstor; Fogel, Mark; Briceño, Juan Carlos; Yoganathan, Ajit P
2014-06-03
The considerable blood mixing in the bidirectional Glenn (BDG) physiology further limits the capacity of the single working ventricle to pump enough oxygenated blood to the circulatory system. This condition is exacerbated under severe conditions such as physical activity or high altitude. In this study, the effect of high altitude exposure on hemodynamics and ventricular function of the BDG physiology is investigated. For this purpose, a mathematical approach based on a lumped parameter model was developed to model the BDG circulation. Catheterization data from 39 BDG patients at stabilized oxygen conditions was used to determine baseline flows and pressures for the model. The effect of high altitude exposure was modeled by increasing the pulmonary vascular resistance (PVR) and heart rate (HR) in increments up to 80% and 40%, respectively. The resulting differences in vascular flows, pressures and ventricular function parameters were analyzed. By simultaneously increasing PVR and HR, significant changes (p < 0.05) were observed in cardiac index (11% increase at an 80% PVR and 40% HR increase) and pulmonary flow (26% decrease at an 80% PVR and 40% HR increase). Significant increase in mean systemic pressure (9%) was observed at 80% PVR (40% HR) increase. The results show that the poor ventricular function fails to overcome the increased preload and implied low oxygenation in BDG patients at higher altitudes, especially for those with high baseline PVRs. The presented mathematical model provides a framework to estimate the hemodynamic performance of BDG patients at different PVR increments. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Erickson, Gary E.; Murri, Daniel G.
1993-01-01
Wind tunnel investigations have been conducted of forebody strakes for yaw control on 0.06-scale models of the F/A-18 aircraft at free-stream Mach numbers of 0.20 to 0.90. The testing was conducted in the 7- by 10-Foot Transonic Tunnel at the David Taylor Research Center and the Langley 7- by 10-Foot High-Speed Tunnel. The principal objectives of the testing were to determine the effects of the Mach number and the strake planform on the strake yaw control effectiveness and the corresponding strake vortex induced flow field. The wind tunnel model configurations simulated an actuated conformal strake deployed for maximum yaw control at high angles of attack. The test data included six-component forces and moments on the complete model, surface static pressure distributions on the forebody and wing leading-edge extensions, and on-surface and off-surface flow visualizations. The results from these studies show that the strake produces large yaw control increments at high angles of attack that exceed the effect of conventional rudders at low angles of attack. The strake yaw control increments diminish with increasing Mach number but continue to exceed the effect of rudder deflection at angles of attack greater than 30 degrees. The character of the strake vortex induced flow field is similar at subsonic and transonic speeds. Cropping the strake planform to account for geometric and structural constraints on the F-18 aircraft has a small effect on the yaw control increments at subsonic speeds and no effect at transonic speeds.
Xiong, L; Mazmanian, M; Chapelier, A R; Reignier, J; Weiss, M; Dartevelle, P G; Hervé, P
1994-09-01
Using isolated rat lungs, we compared prevention of ischemia-reperfusion injury provided by flushing the lungs with modified Euro-Collins solution (EC), University of Wisconsin solution (UW), low-potassium-dextran solution (LPD), or Wallwork solution (WA). After 4 hours' and 6 hours' cold ischemia, reperfusion injury was assessed on the basis of changes in filtration coefficients (Kfc) and pressure-flow curves, characterized by the slope of the curves (incremental resistance) and the extrapolation of this slope to zero flow (pulmonary pressure intercept [Ppi]). After 4 hours, Kfc and Ppi were higher with EC than with UW, LPD, and WA, and the incremental resistance was higher with EC and UW. After 6 hours, Kfc, incremental resistance, and Ppi were higher with LPD than with WA. Because ischemia-reperfusion injury is associated with decreased endothelial synthesis of prostacyclin and nitric oxide, we tested whether the addition of prostacyclin or the nitric oxide precursor L-arginine to WA would improve preservation. The Kfc and Ppi were lower with both treatments. In conclusion, ischemia-reperfusion injury was best prevented by using WA. The favorable effect of prostacyclin or L-arginine emphasizes the role played by endothelial dysfunction in ischemia-reperfusion injury.
An entropy-based method for determining the flow depth distribution in natural channels
NASA Astrophysics Data System (ADS)
Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.
2013-08-01
A methodology is developed for determining the bathymetry of river cross-sections during floods from sampled surface flow velocities and existing low-flow hydraulic data. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to velocity data sets from five river gage sites, the method modeled the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution with the entropic relation between mean velocity and maximum velocity. The methodology opens a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which it is based, surface flow velocity and flow depth, might be sensed by new sensors operating aboard aircraft or satellites.
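The entropic relation between mean and maximum velocity used in the final step is Chiu's ratio Φ(M) = e^M/(e^M − 1) − 1/M, so that discharge can be estimated as Q ≈ Φ(M)·u_max·A. A minimal sketch follows; the entropy parameter M, the surface velocity, and the flow area are hypothetical stand-ins, not values from the paper.

```python
import math

def phi(M):
    """Entropic mean-to-maximum velocity ratio, Phi(M) = e^M/(e^M - 1) - 1/M."""
    return math.exp(M) / (math.exp(M) - 1.0) - 1.0 / M

# Hypothetical discharge estimate from a sensed surface (maximum) velocity:
M = 2.2        # site-calibrated entropy parameter (assumed)
u_max = 1.8    # m/s, e.g. remotely sensed surface velocity (assumed)
A = 120.0      # m^2, flow area from the derived depth distribution (assumed)
Q = phi(M) * u_max * A   # m^3/s
print(round(Q, 1))
```

Φ(M) lies between 0 and 1 and increases with M, so calibrating M once from low-flow gaugings fixes the mean/maximum velocity ratio used at flood stage.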
Irwin, Elise R.; Goar, Taconya
2015-01-01
Effects of hydrology on growth and hatching success of age-0 black basses and Channel Catfish were examined in regulated and unregulated reaches of the Tallapoosa River, Alabama. Species of the family Centrarchidae, as well as Ictalurus punctatus (Channel Catfish) and Pylodictis olivaris (Flathead Catfish), were also collected from multiple tributaries in the basin. Fish were collected from 2010-2014 and were assigned daily ages using otoliths. Hatch dates of individuals of three species (Micropterus henshalli Alabama Bass, M. tallapoosae Tallapoosa Bass and Channel Catfish) were back-calculated, and growth histories were estimated every 5 d post hatch from otolith sections using incremental growth analysis. Hatch dates and incremental growth were related to hydrologic and temperature metrics from environmental data collected during the same periods. Hatch dates at the regulated sites typically occurred during, and were related to, periods of low and stable flow; however, no clear relations between hatch and thermal or flow metrics were evident for the unregulated sites. Some fish hatched during unsuitable thermal conditions at the regulated site, suggesting that some fish may recruit from unregulated tributaries. Ages and growth rates of age-0 black basses ranged from 105 to 131 d and 0.53 to 1.33 mm/d at the regulated sites and 44 to 128 d and 0.44 to 0.96 mm/d at the unregulated sites. In general, growth was highest among age-0 fish from the regulated sites, consistent with findings of other studies. Mortality of age-0 to age-1 fish was also variable among years and between sites and, with the exception of one year, was lower at regulated sites. Multiple and single regression models of incremental growth versus age, discharge, and temperature metrics were evaluated with Akaike's Information Criterion (AICc) to identify the models that best described growth.
Of the models evaluated, the best overall models predicted that daily incremental growth was positively related to low flow parameters and negatively related to the number of times the hydrograph changed direction (e.g., reversals). These results suggest that specific flow and temperature criteria provided from the dam could potentially enhance growth and hatch success of these important sport fish species.
Transonic Drag Prediction on a DLR-F6 Transport Configuration Using Unstructured Grid Solvers
NASA Technical Reports Server (NTRS)
Lee-Rausch, E. M.; Frink, N. T.; Mavriplis, D. J.; Rausch, R. D.; Milholen, W. E.
2004-01-01
A second international AIAA Drag Prediction Workshop (DPW-II) was organized and held in Orlando, Florida on June 21-22, 2003. The primary purpose was to investigate the code-to-code uncertainty, address the sensitivity of the drag prediction to grid size, and quantify the uncertainty in predicting nacelle/pylon drag increments at a transonic cruise condition. This paper presents an in-depth analysis of the DPW-II computational results from three state-of-the-art unstructured grid Navier-Stokes flow solvers exercised on similar families of tetrahedral grids. The flow solvers are USM3D, a tetrahedral cell-centered upwind solver; FUN3D, a tetrahedral node-centered upwind solver; and NSU3D, a general element node-centered central-differenced solver. For the wing/body, the total drag predicted for a constant-lift transonic cruise condition showed a decrease in code-to-code variation with grid refinement, as expected. For the same flight condition, the wing/body/nacelle/pylon total drag and the nacelle/pylon drag increment showed an increase in code-to-code variation with grid refinement. Although the range in total drag for the wing/body fine grids was only 5 counts, a code-to-code comparison of surface pressures and surface restricted streamlines indicated that the three solvers were not all converging to the same flow solutions: different shock locations and separation patterns were evident. Similarly, the wing/body/nacelle/pylon solutions did not appear to be converging to the same flow solutions. Overall, grid refinement did not consistently improve the correlation with experimental data for either the wing/body or the wing/body/nacelle/pylon configuration. Although the absolute values of total drag predicted by two of the solvers for the medium and fine grids did not compare well with the experiment, the incremental drag predictions were within plus or minus 3 counts of the experimental data. 
The correlation with experimental incremental drag was not significantly changed by specifying transition. Although the sources of code-to-code variation in force and moment predictions for the three unstructured grid codes have not yet been identified, the current study reinforces the necessity of applying multiple codes to the same application to assess uncertainty.
Experimental Investigation of Inlet Distortion in a Multistage Axial Compressor
NASA Astrophysics Data System (ADS)
Rusu, Razvan
The primary objective of this research is to present results and methodologies used to study total pressure inlet distortion in a multi-stage axial compressor environment. The study was performed at the Purdue 3-Stage Axial Compressor Facility (P3S), which models the final three stages of a production turbofan engine's high-pressure compressor (HPC). The goal of this study was twofold: first, to design, implement, and validate a circumferentially traversable total pressure inlet distortion generation system, and second, to demonstrate data acquisition methods to characterize the inter-stage total pressure flow fields to study the propagation and attenuation of a one-per-rev total pressure distortion. The datasets acquired for this study are intended to support the development and validation of novel computational tools and flow physics models for turbomachinery flow analysis. Total pressure inlet distortion was generated using a series of low-porosity wire gauze screens placed upstream of the compressor in the inlet duct. The screens are mounted to a rotatable duct section that can be precisely controlled. The P3S compressor features fixed instrumentation stations located at the aerodynamic interface plane (AIP) and downstream and upstream of each vane row. Furthermore, the compressor features individually indexable stator vanes, which can be traversed by up to two vane passages. Using a series of coordinated distortion and vane traverses, the total pressure flow field at the AIP and subsequent inter-stage stations was characterized with a high circumferential resolution. The uniformity of the honeycomb carrier was demonstrated by characterizing the flow field at the AIP while no distortion screens were installed. Next, the distortion screen used for this study was selected following three iterations of porosity reduction. The selected screen consisted of a series of layered screens with a 100% radial extent and a 120° circumferential extent. 
A detailed total pressure flow field characterization of the AIP was performed using the selected screen at nominal, low, and high compressor loading. Thermal anemometry was used to characterize the spatial variation in turbulence intensity at the AIP in an effort to further define inlet boundary conditions for future computational investigations. Two data acquisition methods for the study of distortion propagation and attenuation were utilized in this study. The first method approximated the bulk flow through each vane passage using a single rake measurement positioned near the center of the passage. All vane passages were measured virtually by rotating the distortion upstream by an increment equal to one vane passage. This method proved successful in tracking the distortion propagation and attenuation from the AIP up until the compressor exit. A second, more detailed, inter-stage flow field characterization method was used that generated a total pressure field with a circumferential resolution of 880 increments, or one every 0.41°. The resulting fields demonstrated the importance of secondary flows in the propagation of a total pressure distortion at the different loading conditions investigated. A second objective of this research was to document proposals and design efforts to outfit the existing P3S research compressor with a strain gage telemetry system. The purpose of this system is to validate and supplement existing blade tip timing data on the embedded rotor stage to support the development and validation of novel aeromechanical analysis tools. Integration strategies and telemetry considerations are discussed based on proposals and consultation provided by suppliers.
Magnetic field effects on peristaltic flow of blood in a non-uniform channel
NASA Astrophysics Data System (ADS)
Latha, R.; Rushi Kumar, B.
2017-11-01
The objective of this paper is to explore the effect of MHD on the peristaltic transport of blood in a non-uniform channel under the long wavelength approximation at low (zero) Reynolds number. Blood is modeled as an incompressible, viscous, electrically conducting fluid. Explicit expressions for the axial velocity and axial pressure gradient are derived using the long wavelength assumption together with slip and regularity conditions. It is found that the pressure gradient diminishes as the couple stress parameter increases and decreases as the magnetic parameter increases. We additionally examine the effects of the embedded parameters through graphs.
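The paper's closed-form expressions are not reproduced here. As a rough illustration of how a magnetic parameter damps an axial velocity profile in this limit, a classic Hartmann-type channel profile can be used (our simplification: it omits the paper's couple-stress and slip terms, and `M`, `G` are generic symbols for the Hartmann number and pressure gradient):

```python
import math

# Illustrative Hartmann-style profile for MHD channel flow on y in [-1, 1]:
# u(y) = (G/M^2) * (1 - cosh(M*y)/cosh(M)), with G = -dp/dx.
# This is a textbook sketch, not the paper's peristaltic couple-stress model.
def axial_velocity(y, M, G=1.0):
    return (G / M**2) * (1.0 - math.cosh(M * y) / math.cosh(M))

u_weak = axial_velocity(0.0, M=1.0)    # weak magnetic field: faster core flow
u_strong = axial_velocity(0.0, M=5.0)  # strong field: flattened, slower core
```

Increasing `M` flattens and slows the core flow, the same qualitative trend the abstract reports for the magnetic parameter.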
[Economic impact of nosocomial bacteraemia. A comparison of three calculation methods].
Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Castells, Xavier; Knobel, Hernando; Cots, Francesc
2016-12-01
The excess cost associated with nosocomial bacteraemia (NB) is used as a measurement of the impact of these infections. However, some authors have suggested that traditional methods overestimate the incremental cost due to the presence of various types of bias. The aim of this study was to compare three assessment methods of NB incremental cost to correct biases in previous analyses. Patients who experienced an episode of NB between 2005 and 2007 were compared with patients grouped within the same All Patient Refined-Diagnosis-Related Group (APR-DRG) without NB. The causative organisms were grouped according to the Gram stain, and whether bacteraemia was caused by a single or multiple microorganisms, or by a fungus. Three assessment methods were compared: stratification by disease; econometric multivariate adjustment using a generalised linear model (GLM); and propensity score matching (PSM), used to control for biases in the econometric model. The analysis included 640 admissions with NB and 28,459 without NB. The observed mean cost was €24,515 for admissions with NB and €4,851.6 for controls (without NB). Mean incremental cost was estimated at €14,735 in the stratified analysis. Gram-positive microorganisms had the lowest mean incremental cost, €10,051. In the GLM, mean incremental cost was estimated at €20,922, and adjusting with PSM, the mean incremental cost was €11,916. The three estimates showed important differences between groups of microorganisms. Using enhanced methodologies improves the adjustment in this type of study and increases the value of the results. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
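The contrast among the estimation methods can be reproduced qualitatively on synthetic data. A minimal sketch (all numbers invented; a single "severity" score stands in for the APR-DRG case mix, and plain nearest-neighbour matching stands in for full propensity-score matching):

```python
import numpy as np

# Toy demonstration: when severity drives both infection risk and cost, a
# naive case-vs-control difference overstates the incremental cost of
# infection (true effect below is 8000), while matching on the confounder
# recovers something close to it.
rng = np.random.default_rng(0)
n = 5000
severity = rng.uniform(0, 1, n)                      # confounder
infected = rng.random(n) < 0.05 + 0.4 * severity     # sicker patients infected more
cost = 5000 + 20000 * severity + 8000 * infected + rng.normal(0, 1000, n)

# Naive (unadjusted) incremental cost estimate
naive = float(cost[infected].mean() - cost[~infected].mean())

# Nearest-neighbour match on severity, a stand-in for PSM
controls = np.where(~infected)[0]
diffs = []
for i in np.where(infected)[0]:
    j = controls[np.argmin(np.abs(severity[controls] - severity[i]))]
    diffs.append(cost[i] - cost[j])
matched_diff = float(np.mean(diffs))
```

The naive estimate lands well above the built-in 8000 effect, while the matched estimate sits near it, mirroring the gap between the stratified/GLM figures and the PSM figure reported above.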
Methodologies for extracting kinetic constants for multiphase reacting flow simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, S.L.; Lottes, S.A.; Golchert, B.
1997-03-01
Flows in industrial reactors often involve complex reactions of many species. A computational fluid dynamics (CFD) computer code, ICRKFLO, was developed to simulate multiphase, multi-species reacting flows. ICRKFLO uses a hybrid technique to calculate species concentration and reaction for a large number of species in a reacting flow. This technique includes a hydrodynamic and reacting flow simulation with a small but sufficient number of lumped reactions to compute flow field properties, followed by a calculation of local reaction kinetics and transport of many subspecies (on the order of 10 to 100). Kinetic rate constants of the numerous subspecies chemical reactions are difficult to determine. A methodology has been developed to extract kinetic constants from experimental data efficiently. A flow simulation of a fluid catalytic cracking (FCC) riser was successfully used to demonstrate this methodology.
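The abstract does not detail the extraction procedure itself. As a generic illustration of recovering a kinetic constant from concentration measurements (synthetic data; a first-order lumped reaction is assumed, which is our simplification, not ICRKFLO's scheme):

```python
import numpy as np

# For a first-order lump A -> products, C(t) = C0 * exp(-k t), so
# ln C(t) is linear in t and k follows from a log-linear least-squares fit.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
C = 2.0 * np.exp(-0.7 * t)            # synthetic "experimental" concentrations

slope, intercept = np.polyfit(t, np.log(C), 1)
k_fit = -slope                         # recovered rate constant
C0_fit = np.exp(intercept)             # recovered initial concentration
```

With noisy data the same fit gives a least-squares estimate of `k`; real multi-species extraction couples many such fits through the reaction network.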
Testing of Liquid Metal Components for Nuclear Surface Power Systems
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Godfroy, Thomas J.; Pearson, J. Boise
2010-01-01
The Early Flight Fission Test Facility (EFF-TF) was established by the Marshall Space Flight Center (MSFC) to provide a capability for performing hardware-directed activities to support multiple in-space nuclear reactor concepts by using a non-nuclear test methodology. This includes fabrication and testing at both the module/component level and in near-prototypic reactor configurations. The EFF-TF is currently supporting an effort to develop an affordable fission surface power (AFSP) system that could be deployed on the Lunar surface. The AFSP system is presently based on a pumped liquid metal-cooled (sodium-potassium eutectic, NaK-78) reactor design. This design was derived from the only fission system that the United States has deployed for space operation, the Systems for Nuclear Auxiliary Power (SNAP) 10A reactor, which was launched in 1965. Two prototypical components recently tested at MSFC were a pair of Stirling power conversion units that would be used in a reactor system to convert heat to electricity, and an annular linear induction pump (ALIP) that uses travelling electromagnetic fields to pump the liquid metal coolant through the reactor loop. First-ever tests were conducted at MSFC to determine the baseline performance of a pair of 1 kW Stirling convertors using NaK as the hot-side working fluid. A special test rig was designed and constructed, and testing was conducted inside a vacuum chamber at MSFC. The rig delivered pumped NaK to the hot end of the Stirling convertors and water as the working fluid on the cold end. These tests covered a hot-end temperature range from 400 to 550 °C in 50 °C increments and a cold-end temperature range from 30 to 70 °C in 20 °C increments. Piston amplitudes were varied from 6 to 11 mm in 0.5 mm increments. A maximum of 2240 watts electric was produced at the design point of 550 °C hot end and 40 °C cold end with a piston amplitude of 10.5 mm.
This power level was reached at a gross thermal efficiency of 28%. A baseline performance map was established for the pair of 1 kW Stirling convertors. The performance data will then be used for design modifications to the Stirling convertors. The ALIP tested at MSFC has no moving parts and no direct electrical connections to the liquid metal containing components. Pressure is developed by the interaction of the magnetic field produced by the stator and the current which flows as a result of the voltage induced in the liquid metal contained in the pump duct. Flow is controlled by variation of the voltage supplied to the pump windings. Under steady-state conditions, pump performance is measured for flow rates from 0.5-4.3 kg/s. The pressure rise developed by the pump to support these flow rates is roughly 5-65 kPa. The RMS input voltage (phase-to-phase) ranges from 5-120 V, while the frequency can be varied arbitrarily up to 60 Hz. Performance is quantified at loop temperatures from 50 °C up to 650 °C, which is the peak operating temperature of the proposed AFSP reactor. The transient response of the pump is also evaluated to determine its behavior during startup and shut-down procedures.
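The reported Stirling design point can be cross-checked with one line of arithmetic (ours, not the report's): 2240 W electric at 28% gross thermal efficiency implies the NaK loop delivered roughly 8 kW of heat to the pair of convertors.

```python
# Back-of-envelope check on the quoted design point.
electric_w = 2240.0        # electrical output at the design point
efficiency = 0.28          # gross thermal efficiency
thermal_input_w = electric_w / efficiency   # implied heat input from the NaK loop
```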
Tooth brushing frequency and risk of new carious lesions.
Holmes, Richard D
2016-12-01
Data sources: Medline, Embase, CINAHL and the Cochrane databases. Study selection: Two reviewers selected studies; case-control, prospective cohort, retrospective cohort and experimental trials evaluating the effect of toothbrushing frequency on the incidence or increment of new carious lesions were considered. Data extraction and synthesis: Two reviewers undertook data abstraction independently using pre-piloted forms. Study quality was assessed using a quality assessment tool for quantitative studies developed by the Effective Public Health Practice Project (EPHPP). Meta-analysis of caries outcomes was carried out using RevMan, and meta-regressions were undertaken to assess the influence of sample size, follow-up period, caries diagnosis level and study methodological quality. Results: Thirty-three studies were included, of which 13 were considered to be methodologically strong, 14 moderate and six weak. Twenty-five studies contributed to the quantitative analysis. Compared with frequent brushers, self-reported infrequent brushers demonstrated a higher incidence of carious lesions, OR=1.50 (95% CI: 1.34-1.69). The odds of having carious lesions differed little when subgroup analysis was conducted to compare the incidence between ≥2 times/day vs <2 times/day brushers, OR=1.45 (95% CI: 1.21-1.74), and ≥1 time/day vs <1 time/day brushers, OR=1.56 (95% CI: 1.37-1.78). Brushing <2 times/day was associated with a significantly greater increment of carious lesions than brushing ≥2 times/day, standardised mean difference (SMD)=0.34 (95% CI: 0.18-0.49). Overall, infrequent brushing was associated with an increment of carious lesions, SMD=0.28 (95% CI: 0.13-0.44). Meta-analysis conducted with the type of dentition as subgroups found the effect of infrequent brushing on incidence and increment of carious lesions was higher in the deciduous, OR=1.75 (95% CI: 1.49-2.06), than the permanent dentition, OR=1.39 (95% CI: 1.29-1.49).
Meta-regression indicated that none of the included variables influenced the effect estimate. Conclusions: Individuals who state that they brush their teeth infrequently are at greater risk of the incidence or increment of new carious lesions than those brushing more frequently. The effect is more pronounced in the deciduous than in the permanent dentition. A few studies indicate that this effect is independent of the presence of fluoride in toothpaste.
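For readers unfamiliar with how subgroup odds ratios like those above are combined, a generic fixed-effect inverse-variance pooling sketch (the two ORs and CI bounds are borrowed from the subgroup results quoted above purely as an illustration; this is not the review's actual computation):

```python
import math

# Fixed-effect inverse-variance pooling of odds ratios. The SE of each
# log(OR) is back-calculated from the upper 95% CI bound.
def pool_odds_ratios(ors, ci_uppers):
    logs = [math.log(o) for o in ors]
    ses = [(math.log(u) - l) / 1.96 for l, u in zip(logs, ci_uppers)]
    weights = [1.0 / se**2 for se in ses]                  # inverse variance
    pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    return math.exp(pooled_log)

# Illustrative: pool the two brushing-frequency subgroup ORs
pooled = pool_odds_ratios([1.45, 1.56], [1.74, 1.78])
```

The pooled value lands between the two inputs, weighted toward the more precise (narrower-CI) estimate.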
Constructing Space-Time Views from Fixed Size Statistical Data: Getting the Best of both Worlds
NASA Technical Reports Server (NTRS)
Schmidt, Melisa; Yan, Jerry C.
1997-01-01
Many performance monitoring tools are currently available to the super-computing community. The performance data gathered and analyzed by these tools fall under two categories: statistics and event traces. Statistical data is much more compact but lacks the probative power event traces offer. Event traces, on the other hand, can easily fill up the entire file system during execution, such that the instrumented execution may have to be terminated half way through. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. The user can trade off tracing overhead and trace data size vs. data quality incrementally. In other words, the user will be able to limit the amount of trace collected and, at the same time, carry out some of the analysis event traces offer using space-time views for the entire execution. Two basic ideas are employed: the use of averages to replace recording data for each instance, and formulae to represent sequences associated with communication and control flow. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected vs. event traces. We found that the trace files thus obtained are, indeed, small, bounded and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture 100% of all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at run-time to learn longer sequences.
Constructing Space-Time Views from Fixed Size Statistical Data: Getting the Best of Both Worlds
NASA Technical Reports Server (NTRS)
Schmidt, Melisa; Yan, Jerry C.; Bailey, David (Technical Monitor)
1996-01-01
Many performance monitoring tools are currently available to the super-computing community. The performance data gathered and analyzed by these tools fall under two categories: statistics and event traces. Statistical data is much more compact but lacks the probative power event traces offer. Event traces, on the other hand, can easily fill up the entire file system during execution, such that the instrumented execution may have to be terminated half way through. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. The user can trade off tracing overhead and trace data size vs. data quality incrementally. In other words, the user will be able to limit the amount of trace collected and, at the same time, carry out some of the analysis event traces offer using space-time views for the entire execution. Two basic ideas are employed: the use of averages to replace recording data for each instance, and "formulae" to represent sequences associated with communication and control flow. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected vs. event traces. We found that the trace files thus obtained are, indeed, small, bounded and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture 100% of all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at run-time to learn longer sequences.
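The two ideas these papers share, replacing per-instance records with summary statistics and representing repetitive event sequences by a "formula", can be sketched in a few lines (our minimal reconstruction; the real tool's record formats and sequence-learning scheme are more elaborate):

```python
# Idea 1: collapse per-instance timings into (count, mean) statistics
# instead of logging every event.
def summarize(durations):
    return {"count": len(durations), "mean": sum(durations) / len(durations)}

# Idea 2: represent a repetitive event sequence as a repeated unit plus a
# repeat count: find the shortest prefix whose repetition reproduces the
# whole sequence.
def as_formula(events):
    for period in range(1, len(events) + 1):
        if len(events) % period == 0:
            unit = events[:period]
            if unit * (len(events) // period) == events:
                return unit, len(events) // period
    return events, 1

stats = summarize([1.2, 1.4, 1.0, 1.4])
unit, repeats = as_formula(["send", "recv", "send", "recv", "send", "recv"])
```

Storage then grows with the number of distinct patterns rather than the number of events, which is why the resulting trace files are small and bounded before execution.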
[Analysis on microdialysis probe recovery of baicalin in vitro and in vivo based on LC-MS/MS].
Chen, Teng-Fei; Liu, Jian-Xun; Zhang, Ying; Lin, Li; Song, Wen-Ting; Yao, Ming-Jiang
2017-06-01
To further study the brain disposition and pharmacokinetics of baicalin in brain interstitial fluid, the recovery rate and stability of brain and blood microdialysis probes for baicalin were studied in vitro and in vivo. The concentration of baicalin in brain and blood microdialysates was determined by LC-MS/MS, and the probe recovery for baicalin was calculated. The effects of different flow rates (0.50, 1.0, 1.5, 2.0, 3.0 μL•min⁻¹) on recovery in vitro were determined by the incremental method and the decrement method. The effects of different drug concentrations (50.00, 200.0, 500.0, 1 000 μg•L⁻¹) and number of prior uses (0, 1, 2) on recovery in vitro were determined by the incremental method. The probe recovery stability and the effect of flow rate on recovery in vivo were determined by the decrement method, and the results were compared with those of the in vitro trial. The in vitro recovery of the brain and blood probes for baicalin decreased with increasing flow rate at a given concentration; at the same flow rate, different concentrations of baicalin had little influence on the recovery. A probe that had been used twice, flushed by syringe with 2% heparin sodium and ultrapure water successively, showed no obvious change in recovery. In vitro recovery rates obtained by the incremental method and the decrement method were approximately equal under the same conditions, and the in vivo recovery determined by the decrement method was similar to the in vitro results; both showed good stability within 10 h. The results showed that the decrement method can be used for pharmacokinetic studies of baicalin, and can be used to study probe recovery in vivo at the same time. Copyright© by the Chinese Pharmaceutical Association.
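The two calibration methods compared above follow standard microdialysis definitions; a sketch with illustrative concentrations (all numbers invented, not the study's data):

```python
# Incremental (gain) method: the probe picks up analyte from the medium.
#   recovery = C_dialysate / C_medium
def recovery_incremental(c_dialysate, c_medium):
    return c_dialysate / c_medium

# Decrement (loss / retrodialysis) method: analyte is perfused through the
# probe and lost to the medium.
#   recovery = (C_perfusate - C_dialysate) / C_perfusate
def recovery_decrement(c_perfusate, c_dialysate):
    return (c_perfusate - c_dialysate) / c_perfusate

r_gain = recovery_incremental(c_dialysate=60.0, c_medium=200.0)
r_loss = recovery_decrement(c_perfusate=200.0, c_dialysate=140.0)
```

Under ideal conditions the two estimates agree, which is the equivalence the study verified in vitro before relying on the decrement method in vivo.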
Method and apparatus for continuous electrophoresis
Watson, Jack S.
1992-01-01
A method and apparatus for conducting continuous separation of substances by electrophoresis are disclosed. The process involves electrophoretic separation combined with couette flow in a thin volume defined by opposing surfaces. By alternating the polarity of the applied potential and producing reciprocating short rotations of at least one of the surfaces relative to the other, small increments of separation accumulate to cause substantial, useful segregation of electrophoretically separable components in a continuous flow system.
Code of Federal Regulations, 2011 CFR
2011-01-01
... time duration of the turn and must show increments not to exceed one second. The series of tumble turns... FAA will measure any proposed alternative analysis approach. This appendix also identifies the... approach provides an equivalent level of safety. If a Federal launch range performs the launch operator's...
Synthesis Guidebook. Volume 1. Methodology Definition
1992-10-16
OV I. Introduction. "Synthesis Transition Strategies" (Williams 1990b), which discusses a strategy for the incremental transitioning of Synthesis into... strategies for mitigating those risks. Use checkpoints, reviews, and metrics to reveal flaws and misconceptions. 4. Interaction With Other Activities... markets and customer requirements lead to evolving product and process needs of Application Engineering projects. OV2-5 OV.2. Fundamentals of Synthesis
Constraints and Opportunities in GCM Model Development
NASA Technical Reports Server (NTRS)
Schmidt, Gavin; Clune, Thomas
2010-01-01
Over the past 30 years climate models have evolved from relatively simple representations of a few atmospheric processes to complex multi-disciplinary system models which incorporate physics from the bottom of the ocean to the mesopause and are used for seasonal to multi-million year timescales. Computer infrastructure over that period has gone from punchcard mainframes to modern parallel clusters. Constraints of working within an ever-evolving research code mean that most software changes must be incremental so as not to disrupt scientific throughput. Unfortunately, programming methodologies have generally not kept pace with these challenges, and existing implementations now present a heavy and growing burden on further model development as well as limiting flexibility and reliability. Fortunately, advances in software engineering from other disciplines (e.g. the commercial software industry) as well as new generations of powerful development tools can be incorporated by the model developers to incrementally and systematically improve underlying implementations and reverse the long-term trend of increasing development overhead. However, these methodologies cannot be applied blindly, but rather must be carefully tailored to the unique characteristics of scientific software development. We will discuss the need for close integration of software engineers and climate scientists to find the optimal processes for climate modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, R.O.
1995-11-01
Trends of mean annual increment and periodic annual increment were examined in 17 long-term thinning studies in Douglas-fir (Pseudotsuga menziesii var. menziesii (Mirb.) Franco) in western Washington, western Oregon, and British Columbia. Problems in evaluating growth trends and culmination ages are discussed. None of the stands had clearly reached culmination of mean annual increment, although some seemed close. The observed trends seem generally consistent with some other recent comparisons. These comparisons indicate that rotations can be considerably extended without reducing long-term timber production; value production probably would increase. A major problem in such a strategy is the design of thinning regimes that can maintain a reasonable level of timber flow during the transition period while producing stand conditions compatible with other management objectives. The continuing value of long-term permanent plot studies is emphasized.
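The two growth measures compared in this study follow standard forest-mensuration definitions, and culmination has a simple numerical signature; a sketch with an invented volume series (not the study's data):

```python
# MAI (mean annual increment) = V(t)/t; PAI (periodic annual increment) =
# (V(t2)-V(t1))/(t2-t1). MAI culminates where PAI, falling with age,
# drops below MAI.
ages = [20, 30, 40, 50, 60]
volumes = [100, 260, 420, 540, 620]    # stand volume, arbitrary units

mai = [v / a for a, v in zip(ages, volumes)]
pairs = list(zip(ages, volumes))

culmination_age = None
for (a1, v1), (a2, v2) in zip(pairs, pairs[1:]):
    pai = (v2 - v1) / (a2 - a1)
    if pai < v2 / a2:                  # PAI below MAI: MAI has culminated
        culmination_age = a2
        break
```

In this invented series MAI peaks at age 50 and the PAI/MAI crossing is detected in the following period, illustrating why culmination age is hard to pin down from periodic measurements, as the abstract notes.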
Systems Engineering and Integration (SE and I)
NASA Technical Reports Server (NTRS)
Chevers, ED; Haley, Sam
1990-01-01
The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a second incremental upgrade that does. Future requirements are interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system tightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project data bases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low cost avionics; cost estimation and benefits; computer aided software engineering; computer systems and software safety; system testability; and advanced avionics laboratories and rapid prototyping. This presentation is represented by viewgraphs only.
Thermal elastoplastic structural analysis of non-metallic thermal protection systems
NASA Technical Reports Server (NTRS)
Chung, T. J.; Yagawa, G.
1972-01-01
An incremental theory and numerical procedure to analyze a three-dimensional thermoelastoplastic structure subjected to high temperature, surface heat flux, and volume heat supply as well as mechanical loadings are presented. Heat conduction equations and equilibrium equations are derived by assuming a specific form of incremental free energy, entropy, stresses and heat flux together with the first and second laws of thermodynamics, von Mises yield criteria and Prandtl-Reuss flow rule. The finite element discretization using the linear isotropic three-dimensional element for the space domain and a difference operator corresponding to a linear variation of temperature within a small time increment for the time domain lead to systematic solutions of temperature distribution and displacement and stress fields. Various boundary conditions such as insulated surfaces and convection through uninsulated surface can be easily treated. To demonstrate effectiveness of the present formulation a number of example problems are presented.
NASA Astrophysics Data System (ADS)
Molz, F. J.; Kozubowski, T. J.; Miller, R. S.; Podgorski, K.
2005-12-01
The theory of non-stationary stochastic processes with stationary increments gives rise to stochastic fractals. When such fractals are used to represent measurements of (assumed stationary) physical properties, such as ln(k) increments in sediments or velocity increments "delta(v)" in turbulent flows, the resulting measurements exhibit scaling, either spatial, temporal or both. (In the present context, such scaling refers to systematic changes in the statistical properties of the increment distributions, such as variance, with the lag size over which the increments are determined.) Depending on the class of probability density functions (PDFs) that describe the increment distributions, the resulting stochastic fractals will display different properties. Until recently, the stationary increment process was represented using mainly Gaussian, Gamma or Levy PDFs. However, measurements in both sediments and fluid turbulence indicate that these PDFs are not commonly observed. Based on recent data and previous studies referenced and discussed in Meerschaert et al. (2004) and Molz et al. (2005), the measured increment PDFs display an approximate double exponential (Laplace) shape at smaller lags, and this shape evolves towards Gaussian at larger lags. A model for this behavior based on the Generalized Laplace PDF family called fractional Laplace motion, in analogy with its Gaussian counterpart - fractional Brownian motion, has been suggested (Meerschaert et al., 2004) and the necessary mathematics elaborated (Kozubowski et al., 2005). The resulting stochastic fractal is not a typical self-affine monofractal, but it does exhibit monofractal-like scaling in certain lag size ranges. To date, it has been shown that the Generalized Laplace family fits ln(k) increment distributions and reproduces the original 1941 theory of Kolmogorov when applied to Eulerian turbulent velocity increments. 
However, to make a physically self-consistent application to turbulence, one must adopt a Lagrangian viewpoint, and the details of this approach are still being developed. The potential analogy between turbulent delta(v) and sediment delta[ln(k)] is intriguing, and perhaps offers insight into the underlying chaotic processes that constitute turbulence and may result also in the pervasive heterogeneity observed in most natural sediments. Properties of the new Laplace fractal are presented, and potential applications to both sediments and fluid turbulence are discussed.
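The lag-dependent shape evolution described above can be mimicked with the variance-mixture construction underlying the Generalized Laplace family (our illustrative sketch; parameter names are ours): an increment built as a Gaussian with Gamma-mixed variance has Laplace-like heavy tails for a small shape parameter and tends to Gaussian as the shape grows.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def increment_kurtosis(shape):
    # X = sqrt(G) * Z with Z ~ N(0,1), G ~ Gamma(shape, 1): a variance-gamma
    # (Generalized Laplace) variate. Kurtosis is 6 for shape=1 (Laplace) and
    # approaches the Gaussian value 3 as shape -> infinity.
    g = rng.gamma(shape, 1.0, n)
    z = rng.standard_normal(n)
    x = np.sqrt(g) * z
    x = x / x.std()                    # normalize to unit variance
    return float(np.mean(x**4))        # fourth moment = kurtosis estimate

kurt_small_lag = increment_kurtosis(1.0)    # heavy-tailed, Laplace-like
kurt_large_lag = increment_kurtosis(50.0)   # near-Gaussian
```

Identifying the shape parameter with lag size reproduces the observed transition from double-exponential increment PDFs at small lags toward Gaussian at large lags.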
Moody, Jonathan B; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L
2015-10-01
A number of exciting advances in PET/CT technology and improvements in methodology have recently converged to enhance the feasibility of routine clinical quantification of myocardial blood flow and flow reserve. Recent promising clinical results are pointing toward an important role for myocardial blood flow in the care of patients. Absolute blood flow quantification can be a powerful clinical tool, but its utility will depend on maintaining precision and accuracy in the face of numerous potential sources of methodological errors. Here we review recent data and highlight the impact of PET instrumentation, image reconstruction, and quantification methods, and we emphasize (82)Rb cardiac PET which currently has the widest clinical application. It will be apparent that more data are needed, particularly in relation to newer PET technologies, as well as clinical standardization of PET protocols and methods. We provide recommendations for the methodological factors considered here. At present, myocardial flow reserve appears to be remarkably robust to various methodological errors; however, with greater attention to and more detailed understanding of these sources of error, the clinical benefits of stress-only blood flow measurement may eventually be more fully realized.
Pīrāga, Dace; Tabors, Guntis; Nikodemus, Oļģerts; Žīgure, Zane; Brūmelis, Guntis
2017-05-01
The aim of this study was to evaluate the use of various indicators in the assessment of environmental pollution and to determine the response of pine to changes in pollution levels. Mezaparks is a part of Riga that has been subject to various long-term effects of atmospheric pollution, in particular, historically, from a large superphosphate factory. To determine the spatial distribution of pollution, moss, pine bark and the soil O and B horizons were used as sorbents in this study, as well as the additional annual increment of pine trees. The current spatial distribution of pollution is best shown by heavy metal accumulation in mosses and the long-term accumulation of P₂O₅ pollution by the soil O horizon. The methodological problems of using these sorbents were explored in the study. Environmental pollution and its changes could be associated with the additional annual tree-ring increment of Mezaparks pine forest stands. The additional increment increased after the closing of the Riga superphosphate factory.
Waugh, David A.; Suydam, Robert S.; Ortiz, Joseph D.; Thewissen, J. G. M.
2018-01-01
Counts of Growth Layer Groups (GLGs) in the dentin of marine mammal teeth are widely used as indicators of age. In most marine mammals, observations document that GLGs are deposited yearly, but in beluga whales, some studies have supported the view that two GLGs are deposited each year. Our understanding of beluga life-history differs substantially depending on assumptions regarding the timing of GLG deposition; therefore, resolving this issue has important considerations for population assessments. In this study, we used incremental lines that represent daily pulses of dentin mineralization to test the hypothesis that GLGs in beluga dentin are deposited on a yearly basis. Our estimate of the number of daily growth lines within one GLG is remarkably close to 365 days within error, supporting the hypothesis that GLGs are deposited annually in beluga. We show that measurement of daily growth increments can be used to validate the time represented by GLGs in beluga. Furthermore, we believe this methodology may have broader applications to age estimation in other taxa.
Verma, Mahendra K.
2015-01-01
The objective of this report is to provide basic technical information regarding the CO2-EOR process, which is at the core of the assessment methodology, to estimate the technically recoverable oil within the fields of the identified sedimentary basins of the United States. Emphasis is on CO2-EOR because this is currently one technology being considered as an ultimate long-term geologic storage solution for CO2 owing to its economic profitability from incremental oil production offsetting the cost of carbon sequestration.
Patel, N H; Sasadeusz, K J; Seshadri, R; Chalasani, N; Shah, H; Johnson, M S; Namyslowski, J; Moresco, K P; Trerotola, S O
2001-11-01
To determine (i) whether there is a significant increase in hepatic artery blood flow (HABF) after transjugular intrahepatic portosystemic shunt (TIPS) creation and (ii) whether the extent of incremental increase in HABF is predictive of clinical outcome after TIPS creation. Prospective, nonrandomized, nonblinded duplex Doppler ultrasound (US) examinations were performed on 24 consecutive patients (19 men; Child Class A/B/C: 4/12/8, respectively) with a mean age of 52.8 years who were referred for TIPS creation for variceal bleeding. Peak hepatic artery velocity and vessel dimensions were used to calculate the hepatic arterial blood flow (HABF) before and after TIPS creation. Patients were clinically followed in the gastrohepatology clinic and TIPS US surveillance was performed at 1 and 3 months to assess shunt function. The extent of incremental increase in HABF was analyzed as a predictor of post-TIPS encephalopathy and/or death. The technical success rate of TIPS creation was 100%. The shunt diameters were either 10 mm (n = 11) or 12 mm (n = 13). TIPS resulted in a significant reduction in the portosystemic gradient from 24.3 mm Hg +/- 5.7 to 9.3 mm Hg +/- 2.9 (P <.001). The hepatic artery peak systolic velocity and HABF increased significantly after TIPS creation, from 60.8 cm/sec +/- 26.7 to 121 cm/sec +/- 51.5 (P <.001) and from 254.2 mL/min +/- 142.2 to 507.8 mL/min +/- 261.3 (P <.001), respectively. The average incremental increase in HABF from pre-TIPS to post-TIPS was 253.6 mL/min +/- 174.2 and the average decremental decrease in portosystemic gradient was 15.0 mm Hg +/- 5.3, but there was no significant correlation (r = 0.04; P =.86) between the two. All shunts were patent at 30 and 90 days without sonographic evidence of shunt dysfunction. After TIPS creation, new or worsened encephalopathy developed in five patients at 30 days and in an additional three at 90 days. They were all successfully managed medically. 
Three patients (12.5%) died within 30 days of the TIPS procedure. The extent of incremental increase in HABF after TIPS was variable and did not correlate with the development of 30-day and 90-day encephalopathy (P =.41 and P =.83, respectively) or 30-day mortality (P =.2). HABF increases significantly after TIPS but is not predictive of clinical outcome. The significance of the incremental increase is yet to be determined.
Calibration of CORSIM models under saturated traffic flow conditions.
DOT National Transportation Integrated Search
2013-09-01
This study proposes a methodology to calibrate microscopic traffic flow simulation models. The proposed methodology can simultaneously calibrate all the calibration parameters as well as demand patterns for any network topology...
Laboratory administration--capital budgeting.
Butros, F
1997-01-01
The process of capital budgeting varies among different health-care institutions. Understanding the concept of present value of money, incremental cash flow statements, and the basic budgeting techniques will enable the laboratory manager to make the rational and logical decisions that are needed in today's competitive health-care environment.
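The budgeting concepts named above lend themselves to a short worked example. The sketch below is illustrative only — the instrument cost, cash flows, and 8% discount rate are invented, not from the abstract. It discounts an incremental cash flow stream to its net present value, the core calculation behind the capital budgeting techniques mentioned.

```python
# Illustrative sketch (hypothetical figures, not from the abstract):
# net present value of an incremental cash flow stream for a
# laboratory equipment purchase.

def npv(rate, cashflows):
    """Discount a list of cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical: a $50,000 instrument generating $15,000/yr of
# incremental net cash flow for 5 years, at an 8% discount rate.
incremental = [-50_000] + [15_000] * 5
value = npv(0.08, incremental)
print(round(value, 2))  # → 9890.65
```

A positive result suggests the purchase adds value at the assumed discount rate; the decision can flip if the rate or the cash flow estimates change, which is why the time value of money matters to the laboratory manager.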
NASA Astrophysics Data System (ADS)
Roa, A. M.; Aumelas, V.; Maître, T.; Pellone, C.
2010-08-01
The aim of this paper is to present the results of the analysis of a Darrieus-type cross flow water turbine in bare and shrouded configurations. Numerical results are compared to experimental data and differences found in values are also highlighted. The benefit of the introduction of a channelling device, which generates an efficiency increment factor varying from 2 to 5, depending on the configuration, is discussed.
NASA Technical Reports Server (NTRS)
Warmbrod, J. D.; Martindale, M. R.; Matthews, R. K.
1972-01-01
The results of a wind tunnel test program to determine the surface pressures and flow distribution on the McDonnell Douglas Orbiter configuration are presented. Tests were conducted in a hypersonic wind tunnel at Mach 8. The free-stream unit Reynolds number was 3.7 million per foot. Angle of attack was varied from 10 degrees to 60 degrees in 10-degree increments.
On the anomaly of velocity-pressure decoupling in collocated mesh solutions
NASA Technical Reports Server (NTRS)
Kim, Sang-Wook; Vanoverbeke, Thomas
1991-01-01
The use of various pressure correction algorithms originally developed for fully staggered meshes can yield a velocity-pressure decoupled solution for collocated meshes. The mechanism that causes velocity-pressure decoupling is identified. It is shown that the use of a partial differential equation for the incremental pressure eliminates such a mechanism and yields a velocity-pressure coupled solution. Example flows considered are a three dimensional lid-driven cavity flow and a laminar flow through a 90 deg bend square duct. Numerical results obtained using the collocated mesh are in good agreement with the measured data and other numerical results.
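The incremental-pressure idea the abstract refers to can be sketched compactly. The toy below is our own construction, not the paper's collocated-mesh discretization: a 1D periodic domain with a spectral Poisson solve, showing the pattern of solving for a pressure *increment* phi, updating the pressure, and projecting the provisional velocity to a divergence-free field.

```python
import numpy as np

# Hedged sketch (1D periodic toy, not the paper's 3D collocated scheme):
# one incremental pressure-correction step.
# Solve laplacian(phi) = div(u_star)/dt for the pressure increment phi,
# then update p <- p + phi and project u <- u_star - dt * grad(phi).

n, L, dt = 64, 2 * np.pi, 0.01
dx = L / n
x = np.arange(n) * dx
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # integer wavenumbers on this domain

def ddx(f):
    """Spectral derivative on the periodic grid."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

u_star = np.sin(x) + 0.1        # provisional (non-solenoidal) velocity
p = np.zeros(n)                 # pressure carried over from the previous step

# Solve the periodic Poisson equation for the increment phi via FFT.
rhs = ddx(u_star) / dt
rhs_hat = np.fft.fft(rhs)
phi_hat = np.zeros_like(rhs_hat)
phi_hat[1:] = rhs_hat[1:] / (-k[1:] ** 2)   # zero-mean solution
phi = np.real(np.fft.ifft(phi_hat))

p = p + phi                     # incremental update of the pressure
u = u_star - dt * ddx(phi)      # projected, divergence-free velocity
print(np.allclose(u, 0.1))      # → True (the sin component is removed)
```

In the incremental form only the correction phi, not the full pressure, carries the artificial boundary condition of the Poisson solve, which is the mechanism the abstract credits for restoring velocity-pressure coupling.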
Tularosa Basin Play Fairway Analysis: Methodology Flow Charts
Adam Brandt
2015-11-15
These images show the comprehensive methodology used for creation of a Play Fairway Analysis to explore the geothermal resource potential of the Tularosa Basin, New Mexico. The deterministic methodology was originated by the petroleum industry, but was custom-modified to function as a knowledge-based geothermal exploration tool. The stochastic PFA flow chart uses weights of evidence, and is data-driven.
Evaluation of the Quality of an Aquatic Habitat on the Drietomica River
NASA Astrophysics Data System (ADS)
Stankoci, Ivan; Jariabková, Jana; Macura, Viliam
2014-03-01
The ecological status of a river is influenced by many factors, of which the most important are fauna and flora; in this paper they are defined as a habitat. During the years 2004, 2005, 2006 and 2011, research on the hydroecological quality of a habitat was evaluated in the reference section of the Drietomica River. The Drietomica is a typical representative river of the Slovak flysch area and is located in the region of the White Carpathians in the northwestern part of Slovakia. In this article the results of modeling a microhabitat by means of the Instream Flow Incremental Methodology (IFIM) are presented. For the one-dimensional modeling, the River Habitat Simulation System (RHABSIM) was used to analyse the interaction between the water flow, the morphology of the riverbed, and the biological components of the environment. The habitat's hydroecological quality was evaluated after detailed ichthyological, topographical and hydro-morphological surveys. The main step was assessing the biotic characteristics of the habitat through the suitability curves for the brown trout (Salmo trutta m. fario). Suitability curves are a graphic representation of the main biotic and abiotic preferences of a microhabitat's components. The suitability curves were derived for depth, velocity, fish cover and the degree of shading. For evaluating the quality of the aquatic habitat, 19 fish covers were closely monitored and evaluated. The results of the Weighted Usable Area (WUA = f(Q)) were evaluated from a comprehensive assessment of the referenced reach of the Drietomica River.
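The WUA quantity used above follows a standard IFIM-style calculation: each surveyed cell receives a composite suitability index (here taken as the product of depth and velocity suitability), and WUA is the suitability-weighted sum of cell areas at a given discharge. The curves and cell values below are invented for illustration, not the Drietomica survey data.

```python
import numpy as np

# Hedged illustration (made-up suitability curves and cells, not the
# paper's survey data): Weighted Usable Area as in IFIM/RHABSIM-style
# habitat models.

depth_pts = [0.0, 0.2, 0.5, 1.0, 2.0]   # depth, m
depth_si  = [0.0, 0.4, 1.0, 0.8, 0.2]   # suitability index, 0..1
vel_pts   = [0.0, 0.2, 0.5, 1.0, 1.5]   # velocity, m/s
vel_si    = [0.2, 0.8, 1.0, 0.5, 0.0]

def composite_si(depth, velocity):
    """Composite suitability: product of the per-variable indices."""
    return np.interp(depth, depth_pts, depth_si) * np.interp(velocity, vel_pts, vel_si)

# Surveyed cells at one discharge Q: (area m^2, depth m, velocity m/s)
cells = [(4.0, 0.5, 0.5), (3.0, 0.3, 0.2), (5.0, 1.2, 0.9)]
wua = sum(a * composite_si(d, v) for a, d, v in cells)
print(round(wua, 3))  # → 7.48
```

Repeating the sum over cells measured at several discharges yields the WUA = f(Q) curve reported in the study.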
Incremental Upgrade of Legacy Systems (IULS)
2001-04-01
analysis task employed SEI's Feature-Oriented Domain Analysis methodology (see FODA reference) and included several phases: • Context Analysis • Establish... Legacy, new Host and upgrade system and software. The Feature-Oriented Domain Analysis approach (FODA; see SUM References) was used for this step... Feature-Oriented Domain Analysis (FODA) Feasibility Study (CMU/SEI-90-TR-21, ESD-90-TR-222); Software Engineering Institute, Carnegie Mellon University
Automated Methodologies for the Design of Flow Diagrams for Development and Maintenance Activities
NASA Astrophysics Data System (ADS)
Shivanand M., Handigund; Shweta, Bhat
The Software Requirements Specification (SRS) of an organization is a text document prepared by strategic management that incorporates the requirements of the organization. The requirements of an ongoing business or project development process involve software tools, hardware devices, manual procedures, application programs, and communication commands. These components are appropriately ordered to achieve the mission of the concerned process, in both project development and ongoing business processes, in different flow diagrams, viz. the activity chart, workflow diagram, activity diagram, component diagram, and deployment diagram. This paper proposes two generic, automatic methodologies for the design of the various flow diagrams of (i) project development activities and (ii) the ongoing business process. The methodologies also resolve the ensuing deadlocks in the flow diagrams and determine the critical paths for the activity chart. Though the two methodologies are independent, each complements the other in authenticating its correctness and completeness.
Kimura, Sumito; Streiff, Cole; Zhu, Meihua; Shimada, Eriko; Datta, Saurabh; Ashraf, Muhammad; Sahn, David J
2014-02-01
The aim of this study was to assess the accuracy, feasibility, and reproducibility of determining stroke volume from a novel 3-dimensional (3D) color Doppler flow quantification method for mitral valve (MV) inflow and left ventricular outflow tract (LVOT) outflow at different stroke volumes when compared with the actual flow rate in a pumped porcine cardiac model. Thirteen freshly harvested pig hearts were studied in a water tank. We inserted a latex balloon into each left ventricle from the MV annulus to the LVOT, which were passively pumped at different stroke volumes (30-80 mL) using a calibrated piston pump at increments of 10 mL. Four-dimensional flow volumes were obtained without electrocardiographic gating. The digital imaging data were analyzed offline using prototype software. Two hemispheric flow-sampling planes for color Doppler velocity measurements were placed at the MV annulus and LVOT. The software computed the flow volumes at the MV annulus and LVOT within the user-defined volume and cardiac cycle. This novel 3D Doppler flow quantification method detected incremental increases in MV inflow and LVOT outflow in close agreement with pumped stroke volumes (MV inflow, r = 0.96; LVOT outflow, r = 0.96; P < .01). Bland-Altman analysis demonstrated overestimation of both (MV inflow, 5.42 mL; LVOT outflow, 4.46 mL) with 95% of points within 95% limits of agreement. Interobserver variability values showed good agreement for all stroke volumes at both the MV annulus and LVOT. This study has shown that the 3D color Doppler flow quantification method we used is able to compute stroke volumes accurately at the MV annulus and LVOT in the same cardiac cycle without electrocardiographic gating. This method may be valuable for assessment of cardiac output in clinical studies.
NASA Astrophysics Data System (ADS)
Zhao, An; Jin, Ning-de; Ren, Ying-yu; Zhu, Lei; Yang, Xia
2016-01-01
In this article we apply an approach to identifying the oil-gas-water three-phase flow patterns in a vertical upward 20 mm inner-diameter pipe based on conductance fluctuation signals. We use the approach to analyse signals with long-range correlations by decomposing the signal increment series into magnitude and sign series and extracting their scaling properties. We find that the magnitude series relates to the nonlinear properties of the original time series, whereas the sign series relates to the linear properties. The research shows that the oil-gas-water three-phase flow patterns (slug flow, churn flow, bubble flow) can be classified by a combination of the scaling exponents of the magnitude and sign series. This study provides a new way of characterising the linear and nonlinear properties embedded in oil-gas-water three-phase flows.
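The magnitude/sign decomposition described above is straightforward to reproduce. The snippet below is a hedged sketch on synthetic data standing in for a conductance record; the scaling-exponent analysis itself (e.g. detrended fluctuation analysis) is beyond this fragment.

```python
import numpy as np

# Hedged sketch of the decomposition step: split the increment series
# of a signal into magnitude and sign series. Per the study, the
# magnitude series carries the nonlinear correlations and the sign
# series the linear ones.

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(1000))  # synthetic stand-in record

increments = np.diff(signal)
magnitude  = np.abs(increments)
sign       = np.sign(increments)

print(magnitude.min() >= 0)                        # → True
print(set(np.unique(sign)) <= {-1.0, 0.0, 1.0})    # → True
```

Scaling exponents would then be estimated separately on `magnitude` and `sign`, and flow patterns classified in the plane spanned by the two exponents.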
Bovee, Ken; Zuboy, J.R.
1988-01-01
The development of reliable habitat suitability criteria is critical to the successful implementation of the Instream Flow Incremental Methodology (IFIM), or any other habitat-based evaluation technology. It is also a fascinating topic of research, for several reasons. First, the “science” of habitat quantification is relatively young. Descriptions of habitat use and partitioning can be traced back to Darwin, if not further. Attempts to actually quantify habitat use can be found predominantly during the last two decades, with most of the activity occurring in about the last five years. Second, this work is challenging because we are usually working with fish or some other organism that lives out of sight in an environment that is foreign to humans. Most of the data collection techniques that have been developed for standard fisheries work are unsuited, without modification, for criteria development. These factors make anyone involved in this type of research a pioneer, of sorts. Pioneers often make new and wonderful discoveries, but they also sometimes get lost. In our opinion, however, there is an even more rewarding aspect to criteria development research. It seems that the field of biology has tended to become increasingly clinical over the years. Criteria development demands the unobtrusive observation of organisms in their natural environment, a fact that allows the biologist to be a naturalist and still get paid for it. The relative youth and importance of habitat quantification have resulted in rapid advancements in the state of the art. The expansion of methods is vividly demonstrated simply by comparing the two Instream Flow Information Papers written on the subject in 1978 and in 1986. One of the missions of the Aquatic Systems Branch (formerly the Instream Flow Group) is to serve as a clearinghouse for new techniques and methods.
In keeping with this role, a workshop was conducted during December 1986 to discuss current and newly evolving methods for developing and evaluating habitat suitability criteria. Participation in this workshop was largely by invitation only. The objective was to obtain insights into problems and possible solutions to criteria development, from the perspective of professionals closely involved with the subject. These proceedings of that workshop are intended to supplement the information contained in Instream Flow Information Paper 21, "Development and Evaluation of Habitat Suitability Criteria for Use in the Instream Flow Incremental Methodology." The workshop was organized into five sessions, roughly following the outline of Information Paper 21. The first session dealt with various aspects of study design and how they can influence the outcome of a study. Session two investigated techniques for developing criteria from professional judgment, and some of the problems encountered when personal or agency prejudice enters the picture. Session three concentrated on field data collection procedures, whereas session four examined methods of converting field data into curves. Field verification studies were discussed in session five. Each presentation in the workshop was followed by a question and answer period of 15 to 30 minutes. These discussions were recorded, transcribed, and appended to the end of each paper in these proceedings. We have attempted to capture the essence of these discussions as accurately as possible, but hope that the reader can appreciate the difficulty in translating a free-ranging discussion (from a barely audible tape) to something that makes sense in print. These question and answer sessions constitute the peer review for each of the papers. This provides the reader with the unique opportunity to review the interactions between authors and reviewers.
Reference manual for generation and analysis of Habitat Time Series: version II
Milhous, Robert T.; Bartholow, John M.; Updike, Marlys A.; Moos, Alan R.
1990-01-01
The selection of an instream flow requirement for water resource management often requires the review of how the physical habitat changes through time. This review is referred to as "Time Series Analysis." The Time Series Library (TSLIB) is a group of programs to enter, transform, analyze, and display time series data for use in stream habitat assessment. A time series may be defined as a sequence of data recorded or calculated over time. Examples might be historical monthly flow, predicted monthly weighted usable area, daily electrical power generation, annual irrigation diversion, and so forth. The time series can be analyzed, both descriptively and analytically, to understand the importance of the variation in the events over time. This is especially useful in the development of instream flow needs based on habitat availability. The TSLIB group of programs assumes that you have an adequate study plan to guide you in your analysis. You need to already have knowledge about such things as time period and time step, species and life stages to consider, and appropriate comparisons or statistics to be produced and displayed or tabulated. Knowing your destination, you must first evaluate whether TSLIB can get you there. Remember, data are not answers. This publication is a reference manual to TSLIB and is intended to be a guide to the process of using the various programs in TSLIB. This manual is essentially limited to the hands-on use of the various programs. A TSLIB user interface program (called RTSM) has been developed to provide an integrated working environment in which the user has a brief on-line description of each TSLIB program and the capability to run each program from within the interface. For information on the RTSM program, refer to Appendix F. Before applying the computer models described herein, it is recommended that the user enroll in the short course "Problem Solving with the Instream Flow Incremental Methodology (IFIM)."
This course is offered by the Aquatic Systems Branch of the National Ecology Research Center. For more information about the TSLIB software, refer to the Memorandum of Understanding. Chapter 1 provides a brief introduction to the Instream Flow Incremental Methodology and TSLIB. Other chapters in this manual provide information on the different aspects of using the models. The information contained in the other chapters includes (2) acquisition, entry, manipulation, and listing of streamflow data; (3) entry, manipulation, and listing of the habitat-versus-streamflow function; (4) transferring streamflow data; (5) water resources systems analysis; (6) generation and analysis of daily streamflow and habitat values; (7) generation of the time series of monthly habitats; (8) manipulation, analysis, and display of monthly time series data; and (9) generation, analysis, and display of annual time series data. Each section includes documentation for the programs therein with at least one page of information for each program, including a program description, instructions for running the program, and sample output. The Appendixes contain the following: (A) sample file formats; (B) descriptions of default filenames; (C) alphabetical summary of batch-procedure files; (D) installing and running TSLIB on a microcomputer; (E) running TSLIB on a CDC Cyber computer; (F) using the TSLIB user interface program (RTSM); and (G) running WATSTORE on the USGS Amdahl mainframe computer. The number for this version of TSLIB--Version II--is somewhat arbitrary, as the TSLIB programs were collected into a library some time ago, but operators tended to use and manage them as individual programs. Therefore, we will consider the group of programs from the past that were only on the CDC Cyber computer as Version 0; the programs from the past that were on both the Cyber and the IBM-compatible microcomputer as Version I; and the programs contained in this reference manual as Version II.
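The habitat time series idea at the heart of TSLIB can be illustrated in a few lines. The numbers below are invented, not TSLIB output: a monthly streamflow series is passed through a habitat-versus-streamflow function (WUA = f(Q)) by linear interpolation, yielding a monthly habitat time series of the kind the library analyzes and displays.

```python
import numpy as np

# Hedged sketch of the core TSLIB transformation (invented numbers):
# convert a streamflow time series into a habitat time series via the
# habitat-versus-streamflow function WUA = f(Q).

q_pts   = [5.0, 10.0, 20.0, 40.0]        # discharge Q, cfs
wua_pts = [200.0, 800.0, 1500.0, 900.0]  # weighted usable area at each Q

monthly_q = [8.0, 12.0, 30.0, 18.0]      # four months of historical flows
habitat = np.interp(monthly_q, q_pts, wua_pts)
print(habitat.tolist())  # → [560.0, 940.0, 1200.0, 1360.0]
```

Descriptive statistics on the resulting series (means, minima, durations below a threshold) are then what inform an instream flow requirement.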
Modeling CANDU-6 liquid zone controllers for effects of thorium-based fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
St-Aubin, E.; Marleau, G.
2012-07-01
We use the DRAGON code to model the CANDU-6 liquid zone controllers and evaluate the effects of thorium-based fuels on their incremental cross sections and reactivity worth. We optimize both the numerical quadrature and spatial discretization for 2D cell models in order to provide accurate fuel properties for 3D liquid zone controller supercell models. We propose a low-computer-cost parameterized pseudo-exact 3D cluster geometries modeling approach that avoids tracking issues on small external surfaces. This methodology provides consistent incremental cross sections and reactivity worths when the thickness of the buffer region is reduced. When compared with an approximate annular geometry representation of the fuel and coolant region, we observe that the cluster description of fuel bundles in the supercell models does not considerably increase the precision of the results while substantially increasing the CPU time. In addition, this comparison shows that it is imperative to finely describe the liquid zone controller geometry, since it has a strong impact on the incremental cross sections. This paper also shows that liquid zone controller reactivity worth is greatly decreased in the presence of thorium-based fuels compared to the reference natural uranium fuel, since the fission and the fast-to-thermal scattering incremental cross sections are higher for the new fuels. (authors)
Incremental Support Vector Machine Framework for Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Awad, Mariette; Jiang, Xianhua; Motai, Yuichi
2006-12-01
Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.
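The paper's LS-SVM aggregation is not reproduced here; as a hedged stand-in, the sketch below shows recursive least squares, a classic update that learns one observation at a time without retraining from scratch — the same incremental pattern the framework exploits to avoid the training time and storage costs of static batch learning.

```python
import numpy as np

# Hedged analogue (not the paper's LS-SVM): recursive least squares,
# which updates a linear model one observation at a time.

class RecursiveLeastSquares:
    def __init__(self, dim, lam=1e3):
        self.w = np.zeros(dim)          # current weight estimate
        self.P = np.eye(dim) * lam      # inverse-covariance estimate

    def update(self, x, y):
        """Fold one (x, y) observation into the model."""
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)         # gain vector
        self.w += k * (y - x @ self.w)  # correct with the new residual
        self.P -= np.outer(k, Px)       # shrink the uncertainty

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])
model = RecursiveLeastSquares(3)
for _ in range(500):
    x = rng.standard_normal(3)
    model.update(x, x @ true_w)         # noise-free stream for clarity
print(np.round(model.w, 3))
```

After a few hundred streamed samples the estimate converges to the generating weights, with per-sample cost independent of how many observations have been seen — the property that makes incremental formulations attractive for sensor networks.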
Minimal Residual Disease Evaluation in Childhood Acute Lymphoblastic Leukemia: An Economic Analysis
Gajic-Veljanoski, O.; Pham, B.; Pechlivanoglou, P.; Krahn, M.; Higgins, Caroline; Bielecki, Joanna
2016-01-01
Background Minimal residual disease (MRD) testing by higher performance techniques such as flow cytometry and polymerase chain reaction (PCR) can be used to detect the proportion of remaining leukemic cells in bone marrow or peripheral blood during and after the first phases of chemotherapy in children with acute lymphoblastic leukemia (ALL). The results of MRD testing are used to reclassify these patients and guide changes in treatment according to their future risk of relapse. We conducted a systematic review of the economic literature, cost-effectiveness analysis, and budget-impact analysis to ascertain the cost-effectiveness and economic impact of MRD testing by flow cytometry for management of childhood precursor B-cell ALL in Ontario. Methods A systematic literature search (1998–2014) identified studies that examined the incremental cost-effectiveness of MRD testing by either flow cytometry or PCR. We developed a lifetime state-transition (Markov) microsimulation model to quantify the cost-effectiveness of MRD testing followed by risk-directed therapy to no MRD testing and to estimate its marginal effect on health outcomes and on costs. Model input parameters were based on the literature, expert opinion, and data from the Pediatric Oncology Group of Ontario Networked Information System. Using predictions from our Markov model, we estimated the 1-year cost burden of MRD testing versus no testing and forecasted its economic impact over 3 and 5 years. Results In a base-case cost-effectiveness analysis, compared with no testing, MRD testing by flow cytometry at the end of induction and consolidation was associated with an increased discounted survival of 0.0958 quality-adjusted life-years (QALYs) and increased discounted costs of $4,180, yielding an incremental cost-effectiveness ratio (ICER) of $43,613/QALY gained. After accounting for parameter uncertainty, incremental cost-effectiveness of MRD testing was associated with an ICER of $50,249/QALY gained. 
In the budget-impact analysis, the 1-year cost expenditure for MRD testing by flow cytometry in newly diagnosed patients with precursor B-cell ALL was estimated at $340,760. We forecasted that the province would have to pay approximately $1.3 million over 3 years and $2.4 million over 5 years for MRD testing by flow cytometry in this population. Conclusions Compared with no testing, MRD testing by flow cytometry in newly diagnosed patients with precursor B-cell ALL represents good value for money at commonly used willingness-to-pay thresholds of $50,000/QALY and $100,000/QALY. PMID:27099644
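The ICER reported above is a simple ratio of incremental cost to incremental effectiveness, which is easy to check; the small discrepancy with the published $43,613/QALY presumably comes from the inputs being rounded for publication.

```python
# Arithmetic check of the ICER reported above, using the rounded
# figures from the abstract.

delta_cost = 4180.0     # discounted incremental cost, $
delta_qaly = 0.0958     # discounted incremental QALYs gained

icer = delta_cost / delta_qaly
print(round(icer))      # → 43633, close to the reported $43,613/QALY
```

Decision rules then compare this ratio against a willingness-to-pay threshold such as $50,000/QALY.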
Building Program Models Incrementally from Informal Descriptions.
1979-10-01
specified at each step. Since the user controls the interaction, the user may determine the order in which information flows into PMB. Information is received...until only ten years ago the term "automatic programming" referred to the development of the assemblers, macro expanders, and compilers for these
NASA Technical Reports Server (NTRS)
Matthews, R. K.; Martindale, W. R.; Warmbrod, J. D.
1972-01-01
The results are presented of a wind tunnel test program to determine surface pressures and flow field properties on the space shuttle orbiter configuration. The tests were conducted in September 1971. Data were obtained at a nominal Mach number of 8 and a free stream unit Reynolds number of 3.7 million per foot. Angle of attack was varied from 10 to 50 deg in 10-deg increments.
Self-Contained Automated Methodology for Optimal Flow Control
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Gunzburger, Max D.; Nicolaides, Roy A.; Erlebacher, Gordon; Hussaini, M. Yousuff
1997-01-01
This paper describes a self-contained, automated methodology for active flow control which couples the time-dependent Navier-Stokes system with an adjoint Navier-Stokes system and optimality conditions from which optimal states, i.e., unsteady flow fields and controls (e.g., actuators), may be determined. The problem of boundary layer instability suppression through wave cancellation is used as the initial validation case to test the methodology. Here, the objective of control is to match the stress vector along a portion of the boundary to a given vector; instability suppression is achieved by choosing the given vector to be that of a steady base flow. Control is effected through the injection or suction of fluid through a single orifice on the boundary. The results demonstrate that instability suppression can be achieved without any a priori knowledge of the disturbance, which is significant because other control techniques have required some knowledge of the flow unsteadiness such as frequencies, instability type, etc. The present methodology has been extended to three dimensions and may potentially be applied to separation control, re-laminarization, and turbulence control applications using one to many sensors and actuators.
Rodríguez, M T Torres; Andrade, L Cristóbal; Bugallo, P M Bello; Long, J J Casares
2011-09-15
Life cycle thinking (LCT) is one of the philosophies that has recently appeared in the context of sustainable development. Several existing tools and methods, as well as some recently emerged ones, which seek to understand, interpret and design the life of a product, can be included in the scope of the LCT philosophy. One of these is material and energy flow analysis (MEFA), a tool derived from the industrial metabolism concept. This paper proposes a methodology combining MEFA with another technique derived from sustainable development which also fits the LCT philosophy, the BAT (best available techniques) analysis. This methodology, applied to an industrial process, seeks to identify so-called improvable flows by MEFA, so that the appropriate candidate BAT can be selected by BAT analysis. Material and energy inputs, outputs and internal flows are quantified, and sustainable solutions are provided on the basis of industrial metabolism. The methodology was applied to an exemplary roof tile manufacturing plant for validation. Fourteen improvable flows were identified and 7 candidate BAT were proposed to reduce these flows. The proposed methodology provides a way to detect improvable material or energy flows in a process and selects the most sustainable options to enhance them. Solutions are proposed for the detected improvable flows, taking into account their effectiveness in improving such flows.
Impacts of flare emissions from an ethylene plant shutdown to regional air quality
NASA Astrophysics Data System (ADS)
Wang, Ziyuan; Wang, Sujing; Xu, Qiang; Ho, Thomas
2016-08-01
Critical operations of chemical process industry (CPI) plants, such as ethylene plant shutdowns, can emit large amounts of VOCs and NOx, which may result in localized and transient ozone pollution events. In this paper, a general methodology for studying the dynamic ozone impacts associated with flare emissions from ethylene plant shutdowns has been developed. This multi-scale simulation study integrates process knowledge of plant shutdown emissions, in terms of flow rate and speciation, with regional air-quality modeling to quantitatively investigate the sensitivity of the ground-level ozone change due to an ethylene plant shutdown. The study shows that the maximum hourly ozone increments can vary significantly with plant location and with temporal factors, including background ozone and solar radiation intensity. It helps provide a cost-effective air-quality control strategy for industry by choosing the optimal starting time of plant shutdown operations so as to minimize the induced ozone impact (reduced from 34.1 ppb to 1.2 ppb in the case studies performed). This study provides valuable technical support for both CPI and environmental policy makers on cost-effective air-quality controls in the future.
Metric integration architecture for product development
NASA Astrophysics Data System (ADS)
Sieger, David B.
1997-06-01
Present-day product development endeavors utilize the concurrent engineering philosophy as a logical means for incorporating a variety of viewpoints into the design of products. Since this approach provides no explicit procedural provisions, it is necessary to establish at least a mental coupling with a known design process model. The central feature of all such models is the management and transformation of information. While these models assist in structuring the design process, characterizing the basic flow of operations involved, they provide no guidance facilities. The significance of this feature, and the role it plays in the time required to develop products, is increasing in importance due to the inherent process dynamics, system/component complexities, and competitive forces. The methodology presented in this paper involves the use of a hierarchical system structure, discrete event system specification (DEVS), and multidimensional state-variable-based metrics. This approach is unique in its capability to quantify designers' actions throughout product development, provide recommendations about subsequent activity selection, and coordinate distributed activities of designers and/or design teams across all design stages. Conceptual design tool implementation results are used to demonstrate the utility of this technique in improving the incremental decision-making process.
40 CFR 86.331-79 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for New Gasoline-Fueled and Diesel-Fueled Heavy-Duty Engines; Gaseous Exhaust Test Procedures § 86... difference between the span-gas response and the zero-gas response. Incrementally adjust the fuel flow above...
40 CFR 86.331-79 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for New Gasoline-Fueled and Diesel-Fueled Heavy-Duty Engines; Gaseous Exhaust Test Procedures § 86... difference between the span-gas response and the zero-gas response. Incrementally adjust the fuel flow above...
Net Reclassification Indices for Evaluating Risk-Prediction Instruments: A Critical Review
Kerr, Kathleen F.; Wang, Zheyu; Janes, Holly; McClelland, Robyn L.; Psaty, Bruce M.; Pepe, Margaret S.
2014-01-01
Net reclassification indices have recently become popular statistics for measuring the prediction increment of new biomarkers. We review the various types of net reclassification indices and their correct interpretations. We evaluate the advantages and disadvantages of quantifying the prediction increment with these indices. For pre-defined risk categories, we relate net reclassification indices to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for net reclassification indices and evaluate the merits of hypothesis testing based on such indices. We recommend that investigators using net reclassification indices should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the components of net reclassification indices are the same as the changes in the true-positive and false-positive rates. We advocate use of true- and false-positive rates and suggest it is more useful for investigators to retain the existing, descriptive terms. When there are three or more risk categories, we recommend against net reclassification indices because they do not adequately account for clinically important differences in shifts among risk categories. The category-free net reclassification index is a new descriptive device designed to avoid pre-defined risk categories. However, it suffers from many of the same problems as other measures such as the area under the receiver operating characteristic curve. In addition, the category-free index can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the net reclassification index. 
If investigators want to use net reclassification indices, confidence intervals should be calculated using bootstrap methods rather than published variance formulas. The preferred single-number summary of the prediction increment is the improvement in net benefit. PMID:24240655
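For the two-risk-category case, the equivalence between the event/nonevent NRI components and the changes in true- and false-positive rates can be sketched directly; the data and the 10% threshold below are hypothetical, chosen only to illustrate the identity:

```python
import numpy as np

def two_category_nri(y, old_risk, new_risk, threshold=0.1):
    """NRI components with a single risk threshold.

    The event component equals the change in the true-positive rate;
    the nonevent component equals minus the change in the
    false-positive rate (the 10% threshold is illustrative).
    """
    y = np.asarray(y, dtype=bool)
    old_high = np.asarray(old_risk) >= threshold
    new_high = np.asarray(new_risk) >= threshold
    # Events: net proportion reclassified upward (toward high risk)
    nri_events = np.mean(new_high[y]) - np.mean(old_high[y])       # = delta TPR
    # Nonevents: net proportion reclassified downward (toward low risk)
    nri_nonevents = np.mean(old_high[~y]) - np.mean(new_high[~y])  # = -delta FPR
    return nri_events, nri_nonevents
```

Reporting the two components separately, as the review recommends, is then the same as reporting the changes in the true- and false-positive rates.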
NASA Technical Reports Server (NTRS)
Hunt, J. L.; Souders, S. W.
1975-01-01
Normal- and oblique-shock flow parameters for air in thermochemical equilibrium are tabulated as a function of shock angle for altitudes ranging from 15.24 km to 91.44 km in increments of 7.62 km at selected hypersonic speeds. Post-shock parameters tabulated include flow-deflection angle, velocity, Mach number, compressibility factor, isentropic exponent, viscosity, Reynolds number, entropy difference, and static pressure, temperature, density, and enthalpy ratios across the shock. A procedure is presented for obtaining oblique-shock flow properties in equilibrium air on surfaces at various angles of attack, sweep, and dihedral by use of the two-dimensional tabulations. Plots of the flow parameters against flow-deflection angle are presented at altitudes of 30.48, 60.96, and 91.44 km for various stream velocities.
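For context, the calorically perfect-gas theta-beta-M relation below shows the kind of oblique-shock calculation such tables replace; it is only a sketch, since the report's tabulations are for air in thermochemical equilibrium, for which no closed form exists:

```python
import math

def deflection_angle(M1, beta, gamma=1.4):
    """Flow-deflection angle (rad) behind an oblique shock, perfect gas.

    tan(theta) = 2*cot(beta)*(M1^2*sin^2(beta) - 1)
                 / (M1^2*(gamma + cos(2*beta)) + 2)

    Valid only for a calorically perfect gas, unlike the equilibrium-air
    tabulations described above.
    """
    num = M1**2 * math.sin(beta)**2 - 1.0
    den = M1**2 * (gamma + math.cos(2.0 * beta)) + 2.0
    return math.atan(2.0 / math.tan(beta) * num / den)
```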
Identification of high shears and compressive discontinuities in the inner heliosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greco, A.; Perri, S.
2014-04-01
Two techniques, the Partial Variance of Increments (PVI) and the Local Intermittency Measure (LIM), have been applied and compared using MESSENGER magnetic field data in the solar wind at a heliocentric distance of about 0.3 AU. The spatial properties of the turbulent field at different scales, spanning the whole inertial range of magnetic turbulence down toward the proton scales, have been studied. The LIM and PVI methodologies allow us to identify portions of an entire time series where magnetic energy is mostly accumulated, and regions of intermittent bursts in the magnetic field vector increments, respectively. A statistical analysis has revealed that at small time scales and for high levels of the threshold, the bursts present in the PVI and LIM series correspond to regions of high shear stress and high magnetic field compressibility.
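The PVI statistic used above is conventionally defined as the magnitude of the vector field increment normalized by its rms value; a minimal sketch (the lag and any burst threshold are illustrative choices, not those of the study):

```python
import numpy as np

def pvi(b, lag):
    """Partial Variance of Increments of a vector time series.

    b: array of shape (N, 3) holding magnetic field vectors;
    lag: increment separation in samples.
    PVI(t) = |db(t)| / sqrt(<|db|^2>), with db(t) = b(t + lag) - b(t).
    """
    db = b[lag:] - b[:-lag]
    mag = np.linalg.norm(db, axis=1)
    return mag / np.sqrt(np.mean(mag**2))
```

Samples with PVI above a threshold (values of a few are commonly used) flag intermittent structures such as strong shears and current sheets.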
Aakre, Kenneth T; Valley, Timothy B; O'Connor, Michael K
2010-03-01
Lean Six Sigma process improvement methodologies have been used in manufacturing for some time; they are also applicable to radiology as a way to identify opportunities for improvement in patient care delivery settings. A multidisciplinary team of physicians and staff conducted a 100-day quality improvement project with the guidance of a quality advisor. Using the framework of DMAIC (define, measure, analyze, improve, and control), time studies were performed for all aspects of patient and technologist involvement. From these studies, value stream maps for the current and future states were developed, and tests of change were implemented. Comprehensive value stream maps showed that before implementation of process changes, an average time of 20.95 minutes was required for completion of a bone densitometry study. Two process changes (ie, tests of change) were undertaken. First, the location for completion of a patient assessment form was moved from inside the imaging room to the waiting area, enabling patients to complete the form while waiting for the technologist. Second, the patient was instructed to sit in a waiting area immediately outside the imaging rooms, rather than in the main reception area, which is far removed from the imaging area. Realignment of these process steps, with reduced technologist travel distances, resulted in a 3-minute average decrease in patient cycle time. This represented a 15% reduction in the initial patient cycle time with no change in staff or costs. Radiology process improvement projects can yield positive results despite small incremental changes.
ERIC Educational Resources Information Center
Cuadra, Ernesto; Crouch, Luis
Student promotion, repetition, and dropout rates constitute the basic data needed to forecast future enrollment and new resources. Information on student flow is significantly related to policy formulation aimed at improving internal efficiency, because dropping out and grade repetition increase per pupil cost, block access to eligible school-age…
Bovee, Ken D.; Gore, James A.; Silverman, Arnold J.
1978-01-01
A comprehensive, multi-component in-stream flow methodology was developed and field tested in the Tongue River in southeastern Montana. The methodology incorporates a sensitivity for the flow requirements of a wide variety of in-stream uses, and the flexibility to adjust flows to accommodate seasonal and sub-seasonal changes in the flow requirements for different areas. In addition, the methodology provides the means to accurately determine the magnitude of the water requirement for each in-stream use. The methodology can be a powerful water management tool in that it provides the flexibility and accuracy necessary in water use negotiations and evaluation of trade-offs. In contrast to most traditional methodologies, in-stream flow requirements were determined by additive independent methodologies developed for: 1) fisheries, including spawning, rearing, and food production; 2) sediment transport; 3) the mitigation of adverse impacts of ice; and 4) evapotranspiration losses. Since each flow requirement varied in importance throughout the year, the consideration of a single in-stream use as a basis for a flow recommendation is inadequate. The study shows that the base flow requirement for spawning shovelnose sturgeon was 13.0 m3/sec. During the same period of the year, the flow required to initiate the scour of sediment from pools was 18.0 m3/sec, with increased scour efficiency occurring at flows between 20.0 and 25.0 m3/sec. An over-winter flow of 2.83 m3/sec would result in the loss of approximately 80% of the riffle areas to encroachment by surface ice. At the base flow for insect production, approximately 60% of the riffle area is lost to ice. Serious damage to the channel could be incurred from ice jams during the spring break-up period. A flow of 12.0 m3/sec is recommended to alleviate this problem. Extensive ice jams would be expected at the base rearing and food production levels.
The base rearing flow may be profoundly influenced by the loss of streamflow to transpiration. Transpiration losses to riparian vegetation ranged from 0.78 m3/sec in April to 1.54 m3/sec in July, under drought conditions. Requirements for irrigation were estimated to range from 5.56 m3/sec in May to 7.97 m3/sec in July, under drought conditions. It was concluded that flow requirements to satisfy monthly water losses to transpiration must be added to the base fishery flows to provide adequate protection to the resources in the lower reaches of the river. Integration of the in-stream requirements for the various use components shows that a base flow of at least 23.6 m3/sec must be reserved during the month of June to initiate scour of sediment from pools, provide spawning habitat for shovelnose sturgeon, and accommodate water losses from the system. In comparison, a base flow of 3.85 m3/sec would be required during early February to provide fish rearing habitat and insect productivity, and to prevent excessive loss of food production areas to surface ice formation. During mid to late February, a flow of 12 m3/sec would be needed to facilitate ice break-up and prevent ice jams from forming. Following break-up, the base flow would again be 3.85 m3/sec until the start of the spawning season.
The interplay of biology and technology
Fields, Stanley
2001-01-01
Technologies for biological research arise in multiple ways—through serendipity, through inspired insights, and through incremental advances—and they are tightly coupled to progress in engineering. Underlying the complex dynamics of technology and biology are the different motivations of those who work in the two realms. Consideration of how methodologies emerge has implications for the planning of interdisciplinary centers and the training of the next generation of scientists. PMID:11517346
NASA Astrophysics Data System (ADS)
Komovkin, S. V.; Lavrenov, S. M.; Tuchin, A. G.; Tuchin, D. A.; Yaroshevsky, V. S.
2016-12-01
The article describes a model of two-way radial-velocity measurements based on the Doppler effect. Relations are presented for the instantaneous value of the range increment at the measurement epoch and for the radial velocity averaged over the measurement interval. Compensation of methodological errors in the interpretation of two-way Doppler measurements is considered.
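To first order in v/c, the two-way Doppler relation implied above connects the received frequency averaged over the count interval to the mean radial velocity, and that velocity times the interval gives the range increment. The carrier frequency in the example is a hypothetical value:

```python
C = 299_792_458.0  # speed of light, m/s

def radial_velocity_two_way(f_transmit, f_receive_mean):
    """Mean radial velocity over the count interval (first order in v/c).

    f_receive_mean is the received frequency averaged over the interval;
    the factor 2 accounts for the round trip of the signal.
    """
    return C * (f_transmit - f_receive_mean) / (2.0 * f_transmit)

def range_increment(v_radial_mean, interval_s):
    """One-way range increment accumulated over the count interval."""
    return v_radial_mean * interval_s
```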
2010-03-01
service consumers, and infrastructure. Techniques from any iterative and incremental software development methodology followed by the organization... Service-Oriented Architecture Environment (CMU/SEI-2008-TN-008). Software Engineering Institute, Carnegie Mellon University, 2008. http://www.sei.cmu.edu... "Integrating Legacy Software into a Service Oriented Architecture." Proceedings of the 10th European Conference on Software Maintenance (CSMR 2006). Bari
NASA Technical Reports Server (NTRS)
Strash, D. J.; Summa, J. M.
1996-01-01
In the work reported herein, a simplified, uncoupled, zonal procedure is utilized to assess the capability of numerically simulating icing effects on a Boeing 727-200 aircraft. The computational approach combines potential flow plus boundary layer simulations by VSAERO for the un-iced aircraft forces and moments with Navier-Stokes simulations by NPARC for the incremental forces and moments due to iced components. These are compared with wind tunnel force and moment data, supplied by the Boeing Company, examining longitudinal flight characteristics. Grid refinement improved the local flow features over previously reported work with no appreciable difference in the incremental ice effect. The computed lift curve slope with and without empennage ice matches the experimental value to within 1%, and the zero lift angle agrees to within 0.2 of a degree. The computed slope of the un-iced and iced aircraft longitudinal stability curve is within about 2% of the test data. This work demonstrates the feasibility of a zonal method for the icing analysis of complete aircraft or isolated components within the linear angle of attack range. In fact, this zonal technique has allowed for the viscous analysis of a complete aircraft with ice which is currently not otherwise considered tractable.
Ginsberg, M D; Chang, J Y; Kelley, R E; Yoshii, F; Barker, W W; Ingenito, G; Boothe, T E
1988-02-01
To investigate local metabolic and hemodynamic interrelationships during functional activation of the brain, paired studies of local cerebral glucose utilization (lCMRGlc) and blood flow (lCBF) were carried out in 10 normal subjects (9 right-handed, 1 ambidextrous) at rest and during a unilateral discriminative somatosensory/motor task--palpation and sorting of mah-jongg tiles by engraved design. The extent of activation was assessed on the basis of percentage difference images following normalization to compensate for global shifts. The somatosensory stimulus elevated lCMRGlc by 16.9 +/- 3.5% (mean +/- standard deviation) and lCBF by 26.5 +/- 5.1% in the contralateral sensorimotor cortical focus; smaller increments were noted in the homologous ipsilateral site. The increments of lCMRGlc and lCBF correlated poorly with one another in individual subjects. Stimulation of the right hand resulted in significantly higher contralateral lCMRGlc activation (19.6%) than did stimulation of the left hand (14.1%) (p less than 0.005), whereas the lCBF response was independent of the hand stimulated. Our results indicate that both glycolytic metabolism and blood flow increase locally with the execution of an active sensorimotor task and suggest that both measures may serve as reliable markers of functional activation of the normal brain.
Numerical study of aero-excitation of steam-turbine rotor blade self-oscillations
NASA Astrophysics Data System (ADS)
Galaev, S. A.; Makhnov, V. Yu.; Ris, V. V.; Smirnov, E. M.
2018-05-01
The blade aero-excitation increment is evaluated by numerical solution of the full 3D unsteady Reynolds-averaged Navier-Stokes equations governing wet-steam flow in the last stage of a powerful steam turbine. The equilibrium wet-steam model was adopted. Blade surface oscillations are defined by the eigen-modes of a row of blades bounded by a shroud. A grid-dependency study was performed with a reduced model consisting of the subset of blades corresponding to an eigen-mode nodal diameter. All other computations were carried out for the entire blade row. Two cases are considered: a row of original blades and a row of modified (reinforced) blades. The influence of the eigen-mode nodal diameter and of blade reinforcement on the aero-excitation increment is analyzed. It has been established, in particular, that the maximum aero-excitation increment for the reinforced-blade row is half that of the original-blade row. Overall, the results point to a lower probability of blade self-oscillation for the reinforced blade row.
Refoios Camejo, Rodrigo; McGrath, Clare; Herings, Ron; Meerding, Willem-Jan; Rutten, Frans
2012-01-01
When comparators' prices decrease due to market competition and loss of exclusivity, the incremental clinical effectiveness required for a new technology to be cost-effective is expected to increase, and/or the minimum price at which it will be funded will tend to decrease. This, however, may be either physiologically unattainable or financially unviable for drug development. The objective of this study is to provide an empirical basis for this discussion by estimating the potential for price decreases to impact the cost-effectiveness of new therapies in hypertension. Cost-effectiveness at launch was estimated for all antihypertensive drugs launched between 1998 and 2008 in the United Kingdom using hypothetical degrees of incremental clinical effectiveness within the methodological framework applied by the UK National Institute for Health and Clinical Excellence. Incremental cost-effectiveness ratios were computed and compared with funding thresholds. In addition, the levels of incremental clinical effectiveness required to achieve specific cost-effectiveness thresholds at given prices were estimated. Significant price decreases were observed for existing drugs. This was shown to markedly affect the cost-effectiveness of technologies entering the market. The required incremental clinical effectiveness was in many cases greater than physiologically possible, so, as a consequence, a number of products might not be available today if current methods of economic appraisal had been applied. We conclude that the definition of cost-effectiveness thresholds is fundamental in promoting efficient innovation. Our findings demonstrate that comparator price attrition has the potential to put pressure on the pharmaceutical research model and presents a challenge for new therapies being accepted for funding. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
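The threshold logic in the study reduces to simple arithmetic: a new therapy is funded when its incremental cost-effectiveness ratio falls below a willingness-to-pay threshold, so at a fixed price the minimum required effectiveness gain is the incremental cost divided by the threshold. The figures below are hypothetical, and the threshold value is used only for illustration:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return delta_cost / delta_qaly

def required_qaly_gain(delta_cost, threshold):
    """Minimum incremental effectiveness needed to meet a
    willingness-to-pay threshold at a given incremental cost."""
    return delta_cost / threshold
```

As comparator prices fall, delta_cost rises for a fixed launch price, so the required QALY gain rises proportionally; this is the pressure on new entrants that the study quantifies.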
Older adults' engagement with a video game training program.
Belchior, Patrícia; Marsiske, Michael; Sisco, Shannon; Yam, Anna; Mann, William
2012-12-19
The current study investigated older adults' level of engagement with a video game training program. Engagement was measured using the concept of Flow (Csikszentmihalyi, 1975). Forty-five older adults were randomized to receive practice with an action game (Medal of Honor), a puzzle-like game (Tetris), or a gold-standard Useful Field of View (UFOV) training program. Both Medal of Honor and Tetris participants reported significantly higher Flow ratings at the conclusion, relative to the onset of training. Participants are more engaged in games that can be adjusted to their skill levels and that provide incremental levels of difficulty. This finding was consistent with the Flow theory (Csikszentmihalyi, 1975).
Bio mathematical venture for the metallic nanoparticles due to ciliary motion.
Akbar, Noreen Sher; Butt, Adil Wahid
2016-10-01
The present investigation concerns viscous flow in a vertical tube with ciliary motion. The main flow problem is modeled in cylindrical coordinates; the flow equations are simplified to ordinary differential equations using the long-wavelength and low-Reynolds-number approximations; and exact solutions are obtained for velocity, pressure gradient and temperature. The results are discussed graphically for better understanding. Streamlines of the velocity field are plotted to discuss the trapping phenomenon. It is seen that with an increase in the Grashof number, the velocity of the governing fluids starts to decrease significantly. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Veillette, Marc; Avalos Ramirez, Antonio; Heitz, Michèle
2012-01-01
An evaluation of the effect of ammonium on the performance of two up-flow inorganic packed bed biofilters treating methane was conducted. The air flow rate was set to 3.0 L min(-1) for an empty bed residence time of 6.0 min. The biofilter was fed with a methane concentration of 0.30% (v/v). The ammonium concentration in the nutrient solution was increased by small increments (from 0.01 to 0.025 gN-NH(4) (+) L(-1)) for one biofilter and by large increments of 0.05 gN-NH(4) (+) L(-1) in the other biofilter. The total concentration of nitrogen was kept constant at 0.5 gN-NH(4) (+) L(-1) throughout the experiment by balancing ammonium with nitrate. For both biofilters, the methane elimination capacity, carbon dioxide production, nitrogen bed retention and biomass content decreased with the ammonium concentration in the nutrient solution. The biofilter with smaller ammonium increments featured a higher elimination capacity and carbon dioxide production rate, which varied from 4.9 to 14.3 g m(-3) h(-1) and from 11.5 to 30 g m(-3) h(-1), respectively. Denitrification was observed as some values of the nitrate production rate were negative for ammonium concentrations below 0.2 gN-NH(4) (+) L(-1). A Michaelis-Menten-type model fitted the ammonium elimination rate and the nitrate production rate.
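A Michaelis-Menten-type dependence of rate on concentration can be fitted as sketched below, here via the simple Lineweaver-Burk linearization; the concentrations and rates in the example are synthetic, not the biofilter data from the study:

```python
import numpy as np

def fit_michaelis_menten(S, v):
    """Estimate Vmax and Km in v = Vmax*S/(Km + S).

    Uses the Lineweaver-Burk linearization 1/v = (Km/Vmax)*(1/S) + 1/Vmax,
    solved by linear least squares (a nonlinear fit would weight the
    data better; this is a minimal sketch).
    """
    S = np.asarray(S, dtype=float)
    v = np.asarray(v, dtype=float)
    slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
    Vmax = 1.0 / intercept
    Km = slope * Vmax
    return Vmax, Km
```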
2014-01-01
Background Australia’s health workforce is facing significant challenges now and into the future. Health Workforce Australia (HWA) was established by the Council of Australian Governments as the national agency to progress health workforce reform to address the challenges of providing a skilled, innovative and flexible health workforce in Australia. HWA developed Australia’s first major, long-term national workforce projections for doctors, nurses and midwives over a planning horizon to 2025 (called Health Workforce 2025; HW 2025), which provided a national platform for developing policies to help ensure Australia’s health workforce meets the community’s needs. Methods A review of existing workforce planning methodologies, in concert with the project brief and an examination of data availability, identified that the best fit-for-purpose workforce planning methodology was the stock and flow model for estimating workforce supply and the utilisation method for estimating workforce demand. Scenario modelling was conducted to explore the implications of possible alternative futures, and to demonstrate the sensitivity of the model to various input parameters. Extensive consultation was conducted to test the methodology, data and assumptions used, and also influenced the scenarios selected for modelling. Additionally, a number of other key principles were adopted in developing HW 2025 to ensure the workforce projections were robust and able to be applied nationally. Results The findings from HW 2025 highlighted that a ‘business as usual’ approach to Australia’s health workforce is not sustainable over the next 10 years, with a need for co-ordinated, long-term reforms by government, professions and the higher education and training sector for a sustainable and affordable health workforce. The main policy levers identified to achieve change were innovation and reform, immigration, training capacity and efficiency and workforce distribution. 
Conclusion While HW 2025 has provided a national platform for health workforce policy development, it is not a one-off project. It is an ongoing process where HWA will continue to develop and improve health workforce projections incorporating data and methodology improvements to support incremental health workforce changes. PMID:24490586
Crettenden, Ian F; McCarty, Maureen V; Fenech, Bethany J; Heywood, Troy; Taitz, Michelle C; Tudman, Sam
2014-02-03
Australia's health workforce is facing significant challenges now and into the future. Health Workforce Australia (HWA) was established by the Council of Australian Governments as the national agency to progress health workforce reform to address the challenges of providing a skilled, innovative and flexible health workforce in Australia. HWA developed Australia's first major, long-term national workforce projections for doctors, nurses and midwives over a planning horizon to 2025 (called Health Workforce 2025; HW 2025), which provided a national platform for developing policies to help ensure Australia's health workforce meets the community's needs. A review of existing workforce planning methodologies, in concert with the project brief and an examination of data availability, identified that the best fit-for-purpose workforce planning methodology was the stock and flow model for estimating workforce supply and the utilisation method for estimating workforce demand. Scenario modelling was conducted to explore the implications of possible alternative futures, and to demonstrate the sensitivity of the model to various input parameters. Extensive consultation was conducted to test the methodology, data and assumptions used, and also influenced the scenarios selected for modelling. Additionally, a number of other key principles were adopted in developing HW 2025 to ensure the workforce projections were robust and able to be applied nationally. The findings from HW 2025 highlighted that a 'business as usual' approach to Australia's health workforce is not sustainable over the next 10 years, with a need for co-ordinated, long-term reforms by government, professions and the higher education and training sector for a sustainable and affordable health workforce. The main policy levers identified to achieve change were innovation and reform, immigration, training capacity and efficiency and workforce distribution. 
While HW 2025 has provided a national platform for health workforce policy development, it is not a one-off project. It is an ongoing process where HWA will continue to develop and improve health workforce projections incorporating data and methodology improvements to support incremental health workforce changes.
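In its simplest form, the stock-and-flow supply model named above ages the workforce stock forward one year at a time, removing exits and adding new entrants. All parameter values in the example are hypothetical, not HW 2025 inputs:

```python
def project_workforce(stock, years, graduates, net_migration, exit_rate):
    """Project workforce headcount with a basic stock-and-flow model.

    stock(t+1) = stock(t) * (1 - exit_rate) + graduates + net_migration
    Returns the projected stock for each year, starting with year 0.
    """
    projection = [stock]
    for _ in range(years):
        stock = stock * (1.0 - exit_rate) + graduates + net_migration
        projection.append(stock)
    return projection
```

Scenario modelling of the kind described then amounts to re-running the projection under alternative inflow and exit-rate assumptions and comparing the trajectories against projected demand.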
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.
2006-01-01
Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.
Ohta, Haruhiko; Ohno, Toshiyuki; Hioki, Fumiaki; Shinmoto, Yasuhisa
2004-11-01
A two-phase flow loop is a promising method for application to thermal management systems for large-scale space platforms handling large amounts of energy. Boiling heat transfer reduces the size and weight of cold plates. The transportation of latent heat reduces the mass flow rate of working fluid and pump power. To develop compact heat exchangers for the removal of waste heat from electronic devices with high heat generation density, experiments on a method to increase the critical heat flux for a narrow heated channel between parallel heated and unheated plates were conducted. Fine grooves are machined on the heating surface in a transverse direction to the flow, and liquid is supplied underneath flattened bubbles by the capillary pressure difference from auxiliary liquid channels separated by porous metal plates from the main heated channel. The critical heat flux values for the present heated channel structure are more than twice those for a flat surface at gap sizes of 2 mm and 0.7 mm. The validity of the present structure with auxiliary liquid channels is confirmed by experiments in which the liquid supply to the grooves is interrupted. The increment in the critical heat flux compared to that for a flat surface takes a maximum value at a certain flow rate of liquid supply to the heated channel. The increment is expected to become larger when the length of the heated channel is increased and/or the gravity level is reduced.
Why and how Mastering an Incremental and Iterative Software Development Process
NASA Astrophysics Data System (ADS)
Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe
2004-06-01
One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages: - It permits systematic management and incorporation of requirements changes over the development cycle at minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed first, to validate the architecture concept very early without the details. - A software prototype is very quickly available. It improves the communication between system and software teams, as it enables early and efficient checking of the common understanding of the system requirements. - It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development.
In any case, it greatly improves the learning curve of the software team. These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and raises many difficulties, such as: - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable? - How to distinguish stable/unstable and dimensioning/standard requirements? - How to plan the development of each increment? - How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc. Several solutions envisaged or already deployed by EADS SPACE Transportation are presented, both from a methodological and a technological point of view: - How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way. - How the CMM approach can help by better formalizing Requirements Management and Planning processes. - How Automatic Code Generation with "certified" tools (SCADE) can further dramatically shorten the development cycle. The presentation concludes with an evaluation of the cost and schedule reduction based on a pilot application, comparing figures from two similar projects: one with the classical waterfall process, the other with an iterative and incremental approach.
Boosting flood warning schemes with fast emulator of detailed hydrodynamic models
NASA Astrophysics Data System (ADS)
Bellos, V.; Carbajal, J. P.; Leitao, J. P.
2017-12-01
Floods are among the most destructive catastrophic events, and their frequency has increased over the last decades. To reduce flood impact and risk, flood warning schemes are installed in flood-prone areas. Frequently, these schemes are based on numerical models which quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world, and the need for accurate predictions in urban environments or on floodplains, hinders the use of detailed simulators. This is the central difficulty: we need fast predictions that also meet the accuracy requirements. Most physics-based detailed simulators, although accurate, cannot meet the speed demand even when High Performance Computing techniques are used (the required simulation time is on the order of minutes to hours). As a consequence, most flood warning schemes are based on coarse ad hoc approximations that cannot take advantage of detailed hydrodynamic simulations. In this work, we present a methodology for developing a flood warning scheme using a Gaussian process emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) an offline stage to build the emulator; 2) an online stage that uses the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study, and specification of the observables to predict at those sites, e.g. water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator; c) calibration of the required parameters (if measurements are available). The online stage is carried out using the emulator to predict the relevant observables quickly, while the detailed simulator runs in parallel to verify key predictions of the emulator. The speed gain provided by the emulator also makes it possible to quantify prediction uncertainty using ensemble methods.
The above methodology is applied in a real-world scenario.
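As a minimal sketch of the offline/online split described above, the following trains a Gaussian process emulator on outputs of a toy "detailed simulator" and then queries it quickly online. The kernel, hyper-parameters, and the simulator function itself are illustrative assumptions, not the authors' model.

```python
import math

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return var * math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_new, length=1.0, var=1.0, noise=1e-6):
    """Posterior mean of a zero-mean GP at x_new, given training data."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], length, var) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)  # alpha = K^{-1} y
    return sum(rbf(x_new, xs[i], length, var) * alpha[i] for i in range(n))

def simulator(r):
    """Hypothetical 'detailed simulator': water depth (m) at one critical
    site as a function of rainfall intensity (mm/h). Illustration only."""
    return 0.02 * r + 0.001 * r ** 2

# Offline stage: run the expensive simulator on a training design.
train_r = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]
train_d = [simulator(r) for r in train_r]

# Online stage: fast prediction at an unseen rainfall intensity.
depth = gp_predict(train_r, train_d, 25.0, length=15.0, var=4.0)
```

In practice the emulator would be multi-output (one observable per critical site) and the hyper-parameters fitted to the training data; the sketch fixes them by hand for brevity.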
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction is given to plasma simulation on computers and to the difficulties encountered on currently available machines. Using an analysis and measurement methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow obtained. From this data flow, a matched computer architecture or organization can be configured to achieve the computation bound of an application problem. A sequential simulation model, an array/pipeline simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems of an implicitly parallel nature.
Kierkegaard, Axel; Boij, Susann; Efraimsson, Gunilla
2010-02-01
Acoustic wave propagation in flow ducts is commonly modeled with time-domain nonlinear Navier-Stokes equation methodologies. To reduce the computational effort, a linearized approach in the frequency domain is investigated. Calculations of sound wave propagation in a straight duct with an orifice plate and a mean flow present are reported. Transmission and reflection results at the orifice are presented in two-port scattering-matrix form and compared to measurements, with good agreement. The wave propagation is modeled with a frequency-domain linearized Navier-Stokes equation methodology. This methodology is found to be efficient for cases where the acoustic field does not alter the mean flow field, i.e., when whistling does not occur.
NASA Astrophysics Data System (ADS)
Tetrault, Philippe-Andre
2000-10-01
In transonic flow, the aerodynamic interference that occurs at the junctions of a strut-braced wing airplane, on pylons, and in similar applications is significant. The purpose of this work is to provide relationships to estimate the interference drag of wing-strut, wing-pylon, and wing-body arrangements. These equations are obtained by fitting curves to the results of numerous Computational Fluid Dynamics (CFD) calculations using state-of-the-art codes that employ the Spalart-Allmaras turbulence model. In order to estimate the effects of the strut thickness, the Reynolds number of the flow, and the angle made by the strut with an adjacent surface, inviscid and viscous calculations are performed on a symmetrical strut set at an angle between parallel walls. The computations are conducted at a Mach number of 0.85 and Reynolds numbers of 5.3 and 10.6 million based on the strut chord. The interference drag is calculated as the drag increment of the arrangement relative to an equivalent two-dimensional strut of the same cross-section. The results show a rapid increase in interference drag as the strut deviates from a position perpendicular to the wall. Separation regions appear at low intersection angles, but viscosity generally has a beneficial effect, alleviating the strength of the shock near the junction and thus the drag penalty. When the thickness-to-chord ratio of the strut is reduced, the flowfield is disturbed only locally at the intersection of the strut with the wall. This study provides an equation to estimate the interference drag of simple intersections in transonic flow. In the course of performing the calculations associated with this work, an unstructured flow solver was utilized. Accurate drag prediction requires a very fine grid, and this leads to problems associated with the grid generator.
Several challenges facing the unstructured grid methodology are discussed: slivers, grid refinement near the leading edge and at the trailing edge, grid convergence studies, volume grid generation, and other practical matters concerning such calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stapp, H.
There are deep similarities between Whitehead's idea of the process by which nature unfolds and the ideas of quantum theory. Whitehead says that the world is made of "actual occasions", each of which arises from potentialities created by prior actual occasions. These actual occasions are happenings modeled on experiential events, each of which comes into being and then perishes, only to be replaced by a successor. It is these experience-like happenings that are the basic realities of nature, according to Whitehead, not the persisting physical particles that Newtonian physics took to be the basic entities. Similarly, Heisenberg says that what is really happening in a quantum process is the emergence of an actuality from potentialities created by prior actualities. In the orthodox Copenhagen interpretation of quantum theory, the actual things to which the theory refers are increments in "our knowledge". These increments are experiential events. The particles of classical physics lose their fundamental status: they dissolve into diffuse clouds of possibilities. At each stage of the unfolding of nature, the complete cloud of possibilities acts like the potentiality for the occurrence of a next increment in knowledge, whose occurrence can radically change the cloud of possibilities/potentialities for still-later increments in knowledge. The fundamental difference between these ideas about nature and the classical ideas that reigned from the time of Newton until this century concerns the status of the experiential aspects of nature. These are things such as thoughts, ideas, feelings, and sensations. They are distinguished from the physical aspects of nature, which are described in terms of quantities explicitly located in tiny regions of space and time.
According to the ideas of classical physics, the physical world is made up exclusively of things of this latter type, and the unfolding of the physical world is determined by causal connections involving only these things. Thus experiential-type things could be considered to influence the flow of physical events only insofar as they themselves were completely determined by physical things. In other words, experiential-type qualities, insofar as they could affect the flow of physical events, could not be free within the framework of classical physics: they must be completely determined by the physical aspects of nature, which are, by themselves, sufficient to determine the flow of physical events.
Summary of Data from the Sixth AIAA CFD Drag Prediction Workshop: CRM Cases 2 to 5
NASA Technical Reports Server (NTRS)
Tinoco, Edward N.; Brodersen, Olaf P.; Keye, Stefan; Laflin, Kelly R.; Feltrop, Edward; Vassberg, John C.; Mani, Mori; Rider, Ben; Wahls, Richard A.; Morrison, Joseph H.;
2017-01-01
Results from the Sixth AIAA CFD Drag Prediction Workshop Common Research Model Cases 2 to 5 are presented. As with past workshops, numerical calculations are performed using industry-relevant geometry, methodology, and test cases. Cases 2 to 5 focused on force/moment and pressure predictions for the NASA Common Research Model wing-body and wing-body-nacelle-pylon configurations, including Case 2 - a grid refinement study and nacelle-pylon drag increment prediction study; Case 3 - an angle-of-attack buffet study; Case 4 - an optional wing-body grid adaptation study; and Case 5 - an optional wing-body coupled aero-structural simulation. The Common Research Model geometry differed from previous workshops in that it was deformed to the appropriate static aeroelastic twist and deflection at each specified angle of attack. The grid refinement study used a common set of overset and unstructured grids, as well as user-created multiblock structured, unstructured, and Cartesian-based grids. For the supplied common grids, six levels of refinement were created, resulting in grids ranging from 7x10(exp 6) to 208x10(exp 6) cells. This study (Case 2) showed further reduced scatter relative to previous workshops, and very good prediction of the nacelle-pylon drag increment. Case 3 studied buffet onset at M=0.85 using the Medium grid (20 to 40x10(exp 6) nodes) from the above-described sequence. The prescribed alpha sweep used finely spaced intervals through the zone where wing separation was expected to begin. Although the use of the prescribed aeroelastic twist and deflection at each angle of attack greatly improved the agreement of the wing pressure distributions with test data, many solutions still exhibited premature flow separation. The remaining solutions exhibited a significant spread of lift and pitching moment at each angle of attack, much of which can be attributed to excessive aft pressure loading and shock location variation. Four Case 4 grid adaptation solutions were submitted.
Starting with grids of fewer than 2x10(exp 6) grid points, two of the solutions showed rapid convergence to an acceptable solution. Four Case 5 coupled aero-structural solutions were submitted; these showed good agreement with experimental data. Results from this workshop highlight the continuing need for CFD improvement, particularly for conditions with significant flow separation. These comparisons also suggest the need for improved experimental diagnostics to guide future CFD development.
Hybrid intelligent optimization methods for engineering problems
NASA Astrophysics Data System (ADS)
Pehlivanoglu, Yasin Volkan
The purpose of optimization is to obtain the best solution under given conditions. There are numerous optimization methods because different problems need different solution methodologies, so it is difficult to construct general patterns. Moreover, mathematical models of natural phenomena are almost always based on differentials: differential equations are constructed from relative increments among the factors related to the yield, and the gradients of these increments are therefore essential for searching the yield space. However, the landscape of the yield is not simple and is mostly multi-modal. Another issue is differentiability. Engineering design problems are usually nonlinear, and they sometimes exhibit discontinuous derivatives of the objective and constraint functions. Because of these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) algorithms are popular non-gradient-based algorithms. Both are population-based search algorithms that start from multiple points. A significant difference from a gradient-based method is the nature of the search methodology: randomness is essential to the search in GA or PSO, so they are also called stochastic optimization methods. These algorithms are simple, robust, and have high fidelity. However, they suffer from similar defects, such as premature convergence, reduced accuracy, and long computation times. Premature convergence is sometimes inevitable due to a lack of diversity: as the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators that provide the necessary variety within a population.
After close scrutiny of the diversity concept, based on qualification and quantification studies, we developed new mutation strategies and operators to provide beneficial diversity within the population. We call this new approach multi-frequency vibrational GA (or PSO). It was applied to different aeronautical engineering problems in order to study the efficiency of the new approaches. These implementations were: applications to selected benchmark test functions, inverse design of a two-dimensional (2D) airfoil in subsonic flow, optimization of a 2D airfoil in transonic flow, path planning of an autonomous unmanned aerial vehicle (UAV) over a 3D terrain environment, a 3D radar cross section minimization problem for a 3D air vehicle, and active flow control over a 2D airfoil. As demonstrated by these test cases, the new algorithms outperform the current popular algorithms. The principal role of the multi-frequency approach was to determine which individuals or particles should be mutated, when they should be mutated, and which ones should be merged into the population. The new mutation operators, when combined with a mutation strategy and an artificial intelligence method such as neural networks or fuzzy logic, provided local and global diversity during the reproduction phases of the generations. Additionally, the new approach introduced both random and controlled diversity. Because they are still population-based techniques, these methods were as robust as the plain GA or PSO algorithms. Based on the results obtained, it was concluded that the variants of the present multi-frequency vibrational GA and PSO are efficient algorithms, since they successfully avoided all local optima within relatively short optimization cycles.
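The abstract does not specify the mutation operators, so the following is only a generic sketch of the underlying idea: a real-coded GA on a benchmark sphere function in which a global mutation "wave" with decaying amplitude is applied to the whole population (except the elite) at a fixed period, to re-inject diversity. The operators, rates, and wave period are illustrative assumptions.

```python
import random

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def vibrational_ga(dim=5, pop_size=20, gens=200, wave_period=10, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for gen in range(gens):
        pop.sort(key=sphere)
        # Keep the elite half; recombine elite pairs to refill the population.
        next_pop = pop[:pop_size // 2]
        while len(next_pop) < pop_size:
            a, b = rng.sample(next_pop[:pop_size // 2], 2)
            next_pop.append([(ai + bi) / 2 for ai, bi in zip(a, b)])
        # Ordinary per-gene mutation at a low rate (best individual untouched).
        for ind in next_pop[1:]:
            for i in range(dim):
                if rng.random() < 0.05:
                    ind[i] += rng.gauss(0, 0.5)
        # Periodic "vibrational" wave: mutate every individual except the
        # elite, with decaying amplitude, to counter premature convergence.
        if gen % wave_period == 0:
            amp = 1.0 * (1 - gen / gens)
            for ind in next_pop[1:]:
                for i in range(dim):
                    ind[i] += amp * rng.gauss(0, 1)
        pop = next_pop
    return min(sphere(ind) for ind in pop)

best = vibrational_ga()
```

In the paper's approach the decision of which individuals to mutate, and when, is made adaptively (e.g. by a neural network or fuzzy logic); the fixed period above stands in for that mechanism.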
Gómez-Restrepo, Carlos; Gómez-García, María Juliana; Naranjo, Salomé; Rondón, Martín Alonso; Acosta-Hernández, Andrés Leonardo
2014-12-01
To identify whether alcohol consumption represents an incremental factor in the healthcare costs of patients involved in traffic accidents, data on people admitted to three major health institutions in an intermediate-sized city in Colombia were collected. Socio-demographic characteristics, healthcare costs, and alcohol consumption levels measured by breath alcohol concentration (BrAC) were identified. Generalized linear models were applied to investigate whether alcohol consumption acts as an incremental factor in healthcare costs. The average cost of healthcare was 878 USD. In general, there are differences between the healthcare costs of patients with a positive blood alcohol level and those with negative levels. Univariate analysis shows that the average cost of care can be 2.26 times higher (95% CI: 1.20-4.23), and after controlling for patient characteristics, alcohol consumption represents an incremental factor of almost 1.66 times (95% CI: 1.05-2.62). Alcohol is identified as a possible factor associated with increased use of direct healthcare resources. The estimates show the need to implement and enhance prevention programs against alcohol consumption, in order to mitigate the impact that traffic accidents have on citizens' health status. Enforcing laws against driving under the influence of alcoholic beverages could help diminish the economic and social impacts of this problem. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Jacobs, P. F.
1982-01-01
The purpose of this study was to determine if advanced supercritical wings incur higher trim drag values at cruise conditions than current wide body technology wings. Relative trim drag increments were measured in an experimental wind tunnel investigation conducted in the Langley 8 Foot Transonic Pressure Tunnel. The tests utilized a high aspect ratio supercritical wing and a wide body aircraft wing, in conjunction with five different horizontal tail configurations, mounted on a representative wide body fuselage. The three low tail and two T-tail configurations were designed to measure the effects of horizontal tail size, location, and camber on the trim drag increments for the two wings. Longitudinal force and moment data were taken at a Mach number of 0.82 and design cruise lift coefficients for the wide body and supercritical wings of 0.45 and 0.55, respectively. The data indicate that the supercritical wing does not have significantly higher trim drag than the wide body wing. A reduction in tail size, combined with relaxed static stability, produced trim drag reductions for both wings. The cambered tails had higher trim drag increments than the symmetrical tails for both wings, and the T-tail configurations had lower trim drag increments than the low tail configurations.
Statistical Analysis of the AIAA Drag Prediction Workshop CFD Solutions
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Hemsch, Michael J.
2007-01-01
The first AIAA Drag Prediction Workshop (DPW), held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third AIAA Drag Prediction Workshop, held in June 2006, focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This report compares the transonic cruise prediction results of the second and third workshops using statistical analysis.
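The code-to-code scatter statistics discussed above can be computed as in the following sketch. The drag values (in counts, where 1 count = 0.0001 in CD) are invented for illustration and are not workshop data; the median absolute deviation is shown as one robust scatter measure, alongside the mean and the raw spread.

```python
import statistics

# Hypothetical drag-coefficient predictions (in drag counts) from an
# N-version collection of RANS codes at one fixed flow condition.
drag_counts = [271.0, 268.5, 275.2, 280.1, 266.9, 273.3, 269.8]

mean_cd = statistics.mean(drag_counts)
median_cd = statistics.median(drag_counts)
# Median absolute deviation: a robust measure of code-to-code scatter,
# insensitive to a few outlier solutions.
mad_cd = statistics.median(abs(x - median_cd) for x in drag_counts)
spread = max(drag_counts) - min(drag_counts)
```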
NASA Technical Reports Server (NTRS)
Dittmar, J. H.; Woodward, R. P.; Mackinnon, M. J.
1984-01-01
The noise source caused by the interaction of rotor tip flow irregularities (vortices and velocity defects) with the downstream stator vanes was studied. Fan flow was removed behind a 0.508 meter (20 in.) diameter model turbofan through an outer wall slot between the rotor and stator. Noise measurements were made with far-field microphones positioned in an arc about the fan inlet and with a pressure transducer in the duct behind the stator. Little tone noise reduction was observed in the forward arc during flow removal, possibly because the rotor-stator interaction noise did not propagate upstream through the rotor. Noise reductions were measured in the duct behind the stator, and the largest decrease occurred with the first increment of flow removal. This result indicates that the rotor tip flow irregularity-stator interaction is as important a noise-producing mechanism as the normally considered rotor wake-stator interaction.
Effects of Humidity On the Flow Characteristics of PS304 Plasma Spray Feedstock Powder Blend
NASA Technical Reports Server (NTRS)
Stanford, Malcolm K.; DellaCorte, Christopher
2002-01-01
The effects of environmental humidity on the flow characteristics of PS304 feedstock have been investigated. Angular and spherical BaF2-CaF2 powders were fabricated by comminution and by atomization, respectively. The fluorides were added incrementally to the nichrome, chromia, and silver powders to produce PS304 feedstock. The powders were dried in a vacuum oven and cooled to room temperature under dry nitrogen. The flow of the powder was studied from 2 to 100 percent relative humidity (RH). The results suggest that the feedstock flow is slightly degraded with increasing humidity below 66 percent RH and is more strongly affected above 66 percent RH. There was no flow above 88 percent RH. Narrower particle size distributions of the angular fluorides allowed flow up to 95 percent RH. These results offer guidance that enhances the commercial potential of this material system.
2011-09-20
optimal portfolio point on the efficient frontier, for example, Portfolio B on the chart in Figure A1. Then, by subsequently changing some of the ... optimized portfolio controlling for risk using the IRM methodology and tool suite. Results indicate that both rapid and incremental implementation...Results of the KVA and SD scenario analysis provided the financial information required to forecast an optimized
Incremental Sampling Methodology (ISM) for Metallic Residues
2013-08-01
[Extraction residue: fragments of the report's acronym list (%RSD relative standard deviation; Sb antimony; Sn tin; Sr strontium; STD standard deviation; SU sampling unit; Ti titanium; UCL upper confidence limit), of the list of elements analyzed (Ce, Cr, Cu, Fe, Pb, Mg, Mn, K, Na, Sr, Ti, W, Zr, Zn), and of reference entries including a citation to Environmental Science and Technology 23.]
Toward a Formal Evaluation of Refactorings
NASA Technical Reports Server (NTRS)
Paul, John; Kuzmina, Nadya; Gamboa, Ruben; Caldwell, James
2008-01-01
Refactoring is a software development strategy that characteristically alters the syntactic structure of a program without changing its external behavior [2]. In this talk we present a methodology for extracting formal models from programs in order to evaluate how incremental refactorings affect the verifiability of their structural specifications. We envision that this same technique may be applicable to other types of properties such as those that concern the design and maintenance of safety-critical systems.
Tables for Supersonic Flow of Helium Around Right Circular Cones at Zero Angle of Attack
NASA Technical Reports Server (NTRS)
Sims, J. L.
1973-01-01
The results of the calculation of supersonic flow of helium about right circular cones at zero angle of attack are presented in tabular form. The calculations were performed using the Taylor-Maccoll theory. Numerical integrations were performed using a Runge-Kutta method for second-order differential equations. Results were obtained for cone angles from 2.5 to 30 degrees in regular increments of 2.5 degrees. In all calculations the desired free-stream Mach number was obtained to five or more significant figures.
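The numerical scheme behind these tables, a Runge-Kutta method for second-order differential equations, can be illustrated on a simpler problem than the Taylor-Maccoll equation itself. The sketch below reduces a second-order ODE to a first-order system and applies classical RK4; the test equation y'' = -y is chosen only because its exact solution (y = cos x) is known.

```python
import math

def rk4_second_order(f, x0, y0, yp0, x_end, n_steps):
    """Integrate y'' = f(x, y, y') with classical RK4 by reducing the
    second-order ODE to the first-order system for (y, y')."""
    h = (x_end - x0) / n_steps
    x, y, yp = x0, y0, yp0
    for _ in range(n_steps):
        k1y, k1p = yp, f(x, y, yp)
        k2y, k2p = yp + h/2 * k1p, f(x + h/2, y + h/2 * k1y, yp + h/2 * k1p)
        k3y, k3p = yp + h/2 * k2p, f(x + h/2, y + h/2 * k2y, yp + h/2 * k2p)
        k4y, k4p = yp + h * k3p, f(x + h, y + h * k3y, yp + h * k3p)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        yp += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        x += h
    return y, yp

# Test problem with a known solution: y'' = -y, y(0) = 1, y'(0) = 0,
# whose exact solution is y = cos(x), y' = -sin(x).
y, yp = rk4_second_order(lambda x, y, yp: -y, 0.0, 1.0, 0.0, 1.0, 100)
```

For the cone-flow tables, the same scheme would be applied to the Taylor-Maccoll equation with the shock conditions as initial data, iterating on the shock angle until the desired free-stream Mach number is matched.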
Future Climate Change Impact Assessment of River Flows at Two Watersheds of Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Ercan, A.; Ishida, K.; Kavvas, M. L.; Chen, Z. R.; Jang, S.; Amin, M. Z. M.; Shaaban, A. J.
2016-12-01
Impacts of climate change on the river flows under future climate change conditions were assessed over Muda and Dungun watersheds of Peninsular Malaysia by means of a coupled regional climate model and a physically-based hydrology model utilizing an ensemble of 15 different future climate realizations. Coarse resolution GCMs' future projections covering a wide range of emission scenarios were dynamically downscaled to 6 km resolution over the study area. Hydrologic simulations of the two selected watersheds were carried out at hillslope-scale and at hourly increments.
Cost-Effectiveness Analysis: a proposal of new reporting standards in statistical analysis
Bang, Heejung; Zhao, Hongwei
2014-01-01
Cost-effectiveness analysis (CEA) is a method for evaluating the outcomes and costs of competing strategies designed to improve health, and has been applied to a variety of different scientific fields. Yet, there are inherent complexities in cost estimation and CEA from statistical perspectives (e.g., skewness, bi-dimensionality, and censoring). The incremental cost-effectiveness ratio that represents the additional cost per one unit of outcome gained by a new strategy has served as the most widely accepted methodology in the CEA. In this article, we call for expanded perspectives and reporting standards reflecting a more comprehensive analysis that can elucidate different aspects of available data. Specifically, we propose that mean and median-based incremental cost-effectiveness ratios and average cost-effectiveness ratios be reported together, along with relevant summary and inferential statistics as complementary measures for informed decision making. PMID:24605979
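The mean- and median-based ratios the authors propose reporting together can be sketched as follows. The per-patient cost and effectiveness samples are invented for illustration (a skewed cost outlier is included deliberately, since skewness is one of the statistical complexities the article names).

```python
import statistics

# Hypothetical per-patient costs (USD) and effectiveness (QALYs) for a
# standard strategy and a new strategy; values are illustrative only.
cost_std = [900, 1100, 1000, 4000, 950]
cost_new = [1500, 1700, 1600, 5200, 1550]
eff_std = [0.70, 0.72, 0.68, 0.65, 0.71]
eff_new = [0.78, 0.80, 0.76, 0.74, 0.79]

def icer(cost_summary, eff_summary):
    """Incremental cost-effectiveness ratio under a chosen summary
    statistic (mean or median), as the article proposes reporting both."""
    d_cost = cost_summary(cost_new) - cost_summary(cost_std)
    d_eff = eff_summary(eff_new) - eff_summary(eff_std)
    return d_cost / d_eff

mean_icer = icer(statistics.mean, statistics.mean)
median_icer = icer(statistics.median, statistics.median)

# Average cost-effectiveness ratio (ACER) of the new strategy alone.
acer_new = statistics.mean(cost_new) / statistics.mean(eff_new)
```

With the skewed samples above the mean-based ICER exceeds the median-based one, which is exactly the kind of discrepancy the proposed dual reporting would surface for decision makers.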
NASA Technical Reports Server (NTRS)
Cotton, William B.; Hilb, Robert; Koczo, Stefan, Jr.; Wing, David J.
2016-01-01
A set of five developmental steps building from the NASA TASAR (Traffic Aware Strategic Aircrew Requests) concept are described, each providing incrementally more efficiency and capacity benefits to airspace system users and service providers, culminating in a Full Airborne Trajectory Management capability. For each of these steps, the incremental Operational Hazards and Safety Requirements are identified for later use in future formal safety assessments intended to lead to certification and operational approval of the equipment and the associated procedures. Two established safety assessment methodologies that are compliant with the FAA's Safety Management System were used leading to Failure Effects Classifications (FEC) for each of the steps. The most likely FEC for the first three steps, Basic TASAR, Digital TASAR, and 4D TASAR, is "No effect". For step four, Strategic Airborne Trajectory Management, the likely FEC is "Minor". For Full Airborne Trajectory Management (Step 5), the most likely FEC is "Major".
Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors
NASA Astrophysics Data System (ADS)
Tun, Min Thaw; Sakaguchi, Daisaku
2016-06-01
A high pressure ratio and a wide operating range are required of turbochargers for diesel engines. A recirculation flow type casing treatment is effective for extending the flow range of centrifugal compressors. Two ring grooves, one on the suction pipe and one on the shroud casing wall, are connected by an annular passage, and a stable recirculation flow forms at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation flow type casing is modified and optimized using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization yields an optimized casing design with improved adiabatic efficiency over a wide range of operating flow rates. A sensitivity analysis of the design parameters with respect to efficiency has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, for which the increment of entropy rise is minimized in the grooves and passages of the rotating impeller.
Higher mortgages, lower energy bills: The real economics of buying an energy-efficient home
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, E.
1987-02-01
To measure the actual costs and benefits of buying an energy-efficient home, it is necessary to employ a cash-flow model that accounts for mortgage interest and other charges associated with the incremental costs of conservation measures. The ability to make payments gradually over the term of a mortgage, energy savings, and tax benefits contribute to increased cost effectiveness. Conversely, financial benefits are reduced by interest payments, insurance, taxes, and various fees linked to the (higher) sale price of an energy-efficient home. Accounting for these factors can yield a strikingly different picture from those given by commonly used "engineering" indicators, such as simple payback time, internal rate of return, or net present value (NPV), which are based solely on incremental costs and energy savings. This analysis uses actual energy savings data and incremental construction costs to evaluate the mortgage cash flow for 79 of the 144 energy-efficient homes constructed in Minnesota under the Energy-Efficient Housing Demonstration Program (EEHDP) initiated in 1980 by the Minnesota Housing Finance Agency. Using typical lending terms and fees, we find that the mean mortgage-NPV derived from the homeowners' real cash flow (including construction and financing costs) is 20% lower than the standard engineering-NPV of the conservation investment: $7981 versus $9810. For eight homes, the mortgage-NPV becomes negative once we account for the various mortgage-related effects. Sensitivities to interest rates, down payment, loan term, and marginal tax rate are included to illustrate the often large impact of alternative assumptions about these parameters. The most dramatic effect occurs when the loan term is reduced from 30 to 15 years and the mortgage-NPV falls to -$925. We also evaluate the favorable Federal Home Administration (FHA) terms actually applied to the EEHDP homes. 8 refs., 4 figs., 3 tabs.
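The contrast between a mortgage cash-flow NPV and a pure "engineering" NPV can be sketched as below. All inputs (extra cost, savings, rates, term) are illustrative assumptions, not the EEHDP data, and tax, insurance, and fee effects are omitted for brevity; only the financing effect is shown.

```python
def annuity_payment(principal, annual_rate, years):
    """Level monthly payment on a fully amortizing loan."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def npv(cashflows, annual_discount):
    """Present value of end-of-month cash flows at an annual discount rate."""
    d = annual_discount / 12
    return sum(cf / (1 + d) ** (t + 1) for t, cf in enumerate(cashflows))

# Illustrative inputs (not the EEHDP data): the extra construction cost
# of the efficiency measures is rolled into a 30-year mortgage.
extra_cost = 5000.0       # incremental construction cost, USD
annual_savings = 600.0    # energy bill savings, USD/year
loan_rate = 0.10          # mortgage interest rate
discount_rate = 0.07      # homeowner's discount rate
term_years = 30

# Mortgage view: the homeowner's actual monthly cash flow is savings
# minus the loan payment attributable to the conservation measures.
payment = annuity_payment(extra_cost, loan_rate, term_years)
monthly_net = annual_savings / 12 - payment
mortgage_npv = npv([monthly_net] * (term_years * 12), discount_rate)

# "Engineering" view: discounted savings minus first cost, ignoring financing.
engineering_npv = npv([annual_savings / 12] * (term_years * 12),
                      discount_rate) - extra_cost
```

Because the assumed loan rate exceeds the discount rate, the mortgage-NPV comes out below the engineering-NPV, the qualitative effect the abstract reports.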
Scaling of wet granular flows in a rotating drum
NASA Astrophysics Data System (ADS)
Jarray, Ahmed; Magnanimo, Vanessa; Ramaioli, Marco; Luding, Stefan
2017-06-01
In this work, we investigate the effect of capillary forces and particle size on wet granular flows, and we propose a scaling methodology that ensures conservation of the bed flow. We validate the scaling law experimentally by using glass beads of different sizes with tunable capillary forces. The latter are obtained by using ethanol-water mixtures as the interstitial liquid and by increasing the hydrophobicity of the glass beads with an ad hoc silanization procedure. The scaling methodology, in the flow regimes considered (slipping, slumping, and rolling), yields similar bed flow for different particle sizes, including the angle of repose, which normally increases with decreasing particle size.
Trangmar, Steven J; Chiesa, Scott T; Stock, Christopher G; Kalsi, Kameljit K; Secher, Niels H; González-Alonso, José
2014-07-15
Intense exercise is associated with a reduction in cerebral blood flow (CBF), but regulation of CBF during strenuous exercise in the heat with dehydration is unclear. We assessed internal carotid artery (ICA) and common carotid artery (CCA) haemodynamics (indicative of CBF and extra-cranial blood flow), middle cerebral artery velocity (MCA Vmean), arterial-venous differences and blood temperature in 10 trained males during incremental cycling to exhaustion in the heat (35°C) in control, dehydrated and rehydrated states. Dehydration reduced body mass (75.8 ± 3 vs. 78.2 ± 3 kg), increased internal temperature (38.3 ± 0.1 vs. 36.8 ± 0.1°C), impaired exercise capacity (269 ± 11 vs. 336 ± 14 W), and lowered ICA and MCA Vmean by 12-23% without compromising CCA blood flow. During euhydrated incremental exercise on a separate day, however, exercise capacity and ICA, MCA Vmean and CCA dynamics were preserved. The fast decline in cerebral perfusion with dehydration was accompanied by increased O2 extraction (P < 0.05), resulting in a maintained cerebral metabolic rate for oxygen (CMRO2). In all conditions, reductions in ICA and MCA Vmean were associated with declining cerebral vascular conductance, increasing jugular venous noradrenaline, and falling arterial carbon dioxide tension (PaCO2) (R2 ≥ 0.41, P ≤ 0.01), whereas CCA flow and conductance were related to elevated blood temperature. In conclusion, dehydration accelerated the decline in CBF by decreasing PaCO2 and enhancing vasoconstrictor activity. However, the circulatory strain on the human brain during maximal exercise does not compromise CMRO2, because of compensatory increases in O2 extraction. © 2014 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
Smit, Jeff M; Koning, Gerhard; van Rosendael, Alexander R; Dibbets-Schneider, Petra; Mertens, Bart J; Jukema, J Wouter; Delgado, Victoria; Reiber, Johan H C; Bax, Jeroen J; Scholte, Arthur J
2017-10-01
A new method has been developed to calculate fractional flow reserve (FFR) from invasive coronary angiography, the so-called "contrast-flow quantitative flow ratio (cQFR)". Recently, cQFR was compared to invasive FFR in intermediate coronary lesions showing an overall diagnostic accuracy of 85%. The purpose of this study was to investigate the relationship between cQFR and myocardial ischemia assessed by single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI). Patients who underwent SPECT MPI and coronary angiography within 3 months were included. The cQFR computation was performed offline, using dedicated software. The cQFR computation was based on 3-dimensional quantitative coronary angiography (QCA) and computational fluid dynamics. The standard 17-segment model was used to determine the vascular territories. Myocardial ischemia was defined as a summed difference score ≥2 in a vascular territory. A cQFR of ≤0.80 was considered abnormal. Two hundred and twenty-four coronary arteries were analysed in 85 patients. Overall accuracy of cQFR to detect ischemia on SPECT MPI was 90%. In multivariable analysis, cQFR was independently associated with ischemia on SPECT MPI (OR per 0.01 decrease of cQFR: 1.10; 95% CI 1.04-1.18, p = 0.002), whereas clinical and QCA parameters were not. Furthermore, cQFR showed incremental value for the detection of ischemia compared to clinical and QCA parameters (global chi square 48.7 to 62.6; p <0.001). A good relationship between cQFR and SPECT MPI was found. cQFR was independently associated with ischemia on SPECT MPI and showed incremental value to detect ischemia compared to clinical and QCA parameters.
Improvement of emergency department patient flow using lean thinking.
Sánchez, Miquel; Suárez, Montse; Asenjo, María; Bragulat, Ernest
2018-05-01
To apply lean thinking to triage acuity level-3 patients in order to improve emergency department (ED) throughput and waiting time. A prospective interventional study. An ED of a tertiary care hospital. Triage acuity level-3 patients. Application of lean techniques such as value stream mapping, workplace organization, waste reduction and standardization by the frontline staff. Two periods were compared: (i) pre-lean: April-September, 2015; and (ii) post-lean: April-September, 2016. Variables included: median process time (time from beginning of nurse preparation to the end of nurse finalization after doctor disposition) of both discharged and transferred-to-observation patients; median length of stay; median waiting time; left-without-being-seen, 72-h revisit and mortality rates; and daily number of visits. There was no additional staff or bed after lean implementation. Despite an increase in the daily number of visits (+8.3%, P < 0.001), significant reductions in process time of discharged (182 vs 160 min, P < 0.001) and transferred-to-observation (186 vs 176 min, P < 0.001) patients, in length of stay (389 vs 329 min, P < 0.001), and in waiting time (71 vs 48 min, P < 0.001) were achieved after lean implementation. No significant differences were registered in the left-without-being-seen rate (5.23% vs 4.95%), 72-h revisit rate (3.41% vs 3.93%), or mortality rate (0.23% vs 0.15%). Lean thinking is a methodology that can improve triage acuity level-3 patient flow in the ED, resulting in better throughput along with reduced waiting time.
Effects of nonuniform Mach-number entrance on scramjet nozzle flowfield and performance
NASA Astrophysics Data System (ADS)
Zhang, Pu; Xu, Jinglei; Quan, Zhibin; Mo, Jianwei
2016-12-01
Considering the non-uniformities at the nozzle entrance caused by the upstream flowpath, the effects of a nonuniform Mach number coupled with shocks and expansion waves on the flowfield and performance of a single expansion ramp nozzle (SERN) are studied numerically using the Reynolds-averaged Navier-Stokes equations. The adopted Reynolds-averaged Navier-Stokes methodology is validated by comparing the numerical results with cold-flow experimental data, and the averaging method used in this paper is discussed. Uniform and nonuniform facility nozzles are designed to generate different Mach-number profiles at the inlet of the SERN, which is direct-connected to each facility nozzle, and the whole flowfield is simulated. Because of the coupling of shock and expansion waves, the flow direction at the nonuniform SERN entrance is distorted. Compared with the Mach contours of the uniform case, the contour lines are more curved for the coupled shock-wave entrance (SWE) case and flatter for the coupled expansion-wave entrance (EWE) case. The wall pressure distribution exhibits a rising region in the SWE case, whereas it decreases in a stair-like pattern in the EWE case. The numerical results reveal that the coupled shock and expansion waves play a significant role in nozzle performance. Compared with the SERN performance of the uniform-entrance case at the same operating conditions, the thrust of the nonuniform-entrance cases is reduced by 3-6% and the pitch moment by 2.5-7%. The negative lift shows an incremental trend with the EWE, while the opposite holds for the SWE. These results confirm that it is necessary to account for the entrance flow nonuniformities of a scramjet nozzle coupled with shocks or expansion waves from the upstream.
NASA Astrophysics Data System (ADS)
Llorens, Pilar; Gallart, Francesc; Latron, Jérôme; Cid, Núria; Rieradevall, Maria; Prat, Narcís
2016-04-01
Aquatic life in temporary streams is strongly conditioned by the temporal variability of the hydrological conditions that control the occurrence and connectivity of diverse mesohabitats. In this context, the software TREHS (Temporary Rivers' Ecological and Hydrological Status) has been developed, in the framework of the LIFE Trivers project, to help managers adequately implement the Water Framework Directive in this type of water body. TREHS, using the methodology described in Gallart et al. (2012), defines six temporal 'aquatic states', based on the hydrological conditions representing different mesohabitats, for a given reach at a particular moment. Nevertheless, hydrological data for assessing the regime of temporary streams are often non-existent or scarce. The scarcity of flow data frequently makes it impossible to characterize the hydrological regimes of temporary streams and, as a consequence, to select the correct periods and methods to determine their ecological status. Because of its qualitative nature, the TREHS approach allows alternative methodologies to be used to assess the regime of temporary streams in the absence of observed flow data. However, to adapt TREHS to these qualitative data, both the temporal scheme (from monthly to seasonal) and the number of aquatic states (from 6 to 3) have been modified. Two alternative, complementary methodologies were tested within the TREHS framework to assess the regime of temporary streams: interviews and aerial photographs. All the gauging stations (13) belonging to the Catalan Internal Catchments (NE Spain) with recurrent zero-flow periods were selected to validate both methodologies. On one hand, non-structured interviews were carried out with inhabitants of villages and small towns near the gauging stations. Flow permanence metrics for input into TREHS were drawn from the notes taken during the interviews.
On the other hand, the historical series of available aerial photographs (typically 10) were examined. In this case, flow permanence metrics were estimated as the proportion of photographs showing stream flow. Results indicate that for streams that are dry more than 25% of the time, interviews systematically underestimated flow, but the qualitative information given by inhabitants was of great interest for understanding river dynamics. The use of aerial photographs, by contrast, gave a good estimate of flow permanence, although the seasonality captured was conditioned by the acquisition dates of the photographs. For these reasons, we recommend using both methodologies together.
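The photograph-based metric described above reduces to a simple proportion, and the reduced 3-state scheme to a threshold rule. The sketch below illustrates this under stated assumptions: the function names, the state labels, and the thresholds are invented for illustration and are not taken from TREHS itself.

```python
# Hypothetical sketch of the aerial-photograph flow-permanence metric:
# permanence is estimated as the proportion of available photographs
# in which the stream carries flow. Names and thresholds are assumptions.

def flow_permanence(observations):
    """observations: list of booleans, True if a photograph shows flow."""
    if not observations:
        raise ValueError("need at least one photograph")
    return sum(observations) / len(observations)

def aquatic_state(permanence, dry_fraction_threshold=0.25):
    """Map the permanence metric to one of three coarse aquatic states,
    mirroring the reduced 3-state scheme mentioned above (the cutoffs
    are illustrative, not the ones used by TREHS)."""
    if permanence >= 1.0 - dry_fraction_threshold:
        return "flowing"
    elif permanence > 0.0:
        return "intermittent"
    return "dry"

# Ten photographs of one reach, eight of which show flow.
photos = [True, True, False, True, True, False, True, True, True, True]
p = flow_permanence(photos)
print(p)                 # 0.8
print(aquatic_state(p))  # flowing
```

With only about 10 photographs per station, the metric is coarse, which is consistent with the authors' caution that the photographs constrain seasonality less well than interviews do.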
Relations between habitat variability and population dynamics of bass in the Huron River, Michigan
Bovee, Ken D.; Newcomb, Tammy J.; Coon, Thomas G.
1994-01-01
One of the assumptions of the Instream Flow Incremental Methodology (IFIM) is that the dynamics of fish populations are directly or indirectly related to habitat availability. Because this assumption has not been successfully tested in coolwater streams, questions arise regarding the validity of the methodology in such streams. The purpose of our study was to determine whether relations existed between habitat availability and population dynamics of smallmouth bass (Micropterus dolomieu) and rock bass (Ambloplites rupestris) in a 16-km reach of the Huron River in southeastern Michigan. Both species exhibited strong to moderate carryover of year classes from age 0 through age 2, indicating that adult populations were related to factors affecting recruitment. Year-class strength and subsequent numbers of yearling bass were related to the availability of young-of-year habitat during the first growing season for a cohort. Numbers of age-0, age-1, and adult smallmouth bass were related to the average length at age 0 for the cohort. Length at age 0 was associated with young-of-year habitat and thermal regime during the first growing season. Rock bass populations exhibited similar associations among age classes and habitat variables. Compared to smallmouth bass, the number of age-2 rock bass was associated more closely with their length at age 0 than with year-class strength. Length at age 0 and year-class strength of rock bass were associated with the same habitat variables as those related to age-0 smallmouth bass. We hypothesize that an energetic mechanism linked thermal regime to length at age 0 and that increased growth resulted in higher survival rates from age 0 to age 1. We also postulate that young-of-year habitat provided protection from predators, higher production of food resources, and increased foraging efficiency. We conclude that the IFIM is a valid methodology for instream flow investigations of coolwater streams. 
The results of our study support the contention that the dynamics of bass populations are directly or indirectly related to habitat availability in coolwater streams. Our study also revealed several implications related to the operational application of the IFIM in coolwater streams: 1. Greater emphasis should be placed on the alleviation of habitat impacts to early life history phases of bass. 2. Effects of the thermal regime are important in some coolwater streams even if temperatures remain within nonlethal limits. Degree-day analyses should be routinely included in study plans for applications of the IFIM in coolwater streams. 3. The smallest amount of habitat occurring within or across years is not necessarily the most significant event affecting population dynamics. The timing of extreme events can be as important as their magnitude. 4. Population-related habitat limitations were associated with high flows more often than with low flows (although both occurred). Negotiations that focus only on minimum flows may preclude viable water management options and ignore significant biological events. This finding is particularly relevant to negotiations involving hydropeaking operations. 5. IFIM users are advised to consider the use of binary criteria in place of conventional suitability index curves in microhabitat simulations. Criteria defining the optimal ranges of variables are preferable to broader ranges, and criteria that simply define suitable conditions should be avoided entirely.
An Expert System toward Building an Earth Science Knowledge Graph
NASA Astrophysics Data System (ADS)
Zhang, J.; Duan, X.; Ramachandran, R.; Lee, T. J.; Bao, Q.; Gatlin, P. N.; Maskey, M.
2017-12-01
In this ongoing work, we aim to build foundations of Cognitive Computing for Earth Science research. The goal of our project is to develop an end-to-end automated methodology for incrementally constructing Knowledge Graphs for Earth Science (KG4ES). These knowledge graphs can then serve as the foundational components for building cognitive systems in Earth science, enabling researchers to uncover new patterns and hypotheses that are virtually impossible to identify today. In addition, this research focuses on developing mining algorithms needed to exploit these constructed knowledge graphs. As such, these graphs will free knowledge from publications that are generated in a very linear, deterministic manner, and structure knowledge in a way that users can both interact and connect with relevant pieces of information. Our major contributions are two-fold. First, we have developed an end-to-end methodology for constructing Knowledge Graphs for Earth Science (KG4ES) using existing corpus of journal papers and reports. One of the key challenges in any machine learning, especially deep learning applications, is the need for robust and large training datasets. We have developed techniques capable of automatically retraining models and incrementally building and updating KG4ES, based on ever evolving training data. We also adopt the evaluation instrument based on common research methodologies used in Earth science research, especially in Atmospheric Science. Second, we have developed an algorithm to infer new knowledge that can exploit the constructed KG4ES. In more detail, we have developed a network prediction algorithm aiming to explore and predict possible new connections in the KG4ES and aid in new knowledge discovery.
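The abstract above does not specify the paper's network prediction algorithm, but one classic baseline for predicting new connections in a graph is the common-neighbors heuristic: a candidate edge is ranked by how many neighbors its endpoints share. The sketch below illustrates that baseline only; the tiny graph and its Earth-science-flavored node labels are invented for illustration.

```python
# A minimal common-neighbors link-prediction sketch, consistent in spirit
# with the "network prediction" goal described above. The graph is a toy
# adjacency map; it is not taken from KG4ES.
from itertools import combinations

graph = {
    "aerosol":       {"cloud", "radiation"},
    "cloud":         {"aerosol", "precipitation", "radiation"},
    "precipitation": {"cloud", "soil moisture"},
    "radiation":     {"aerosol", "cloud"},
    "soil moisture": {"precipitation"},
}

def common_neighbors(u, v):
    """Score a candidate edge (u, v) by the number of shared neighbors."""
    return len(graph[u] & graph[v])

# Score every non-edge; higher scores suggest more plausible missing links.
candidates = [
    (u, v, common_neighbors(u, v))
    for u, v in combinations(graph, 2)
    if v not in graph[u]
]
candidates.sort(key=lambda t: -t[2])
print(candidates[0][:2])  # the top-ranked candidate edge
```

Real knowledge-graph completion systems typically replace this count with learned embeddings, but the scoring-and-ranking structure is the same.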
Visceral Blood Flow Modulation: Potential Therapy for Morbid Obesity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Tyler J., E-mail: tjharris@gmail.com; Murphy, Timothy P.; Jay, Bryan S.
We present this preliminary investigation into the safety and feasibility of endovascular therapy for morbid obesity in a swine model. A flow-limiting, balloon-expandable covered stent was placed in the superior mesenteric artery of three Yorkshire swine after femoral arterial cutdown. The pigs were monitored for between 15 and 51 days after the procedure and then killed, with weights obtained at 2-week increments. In the two pigs in which the stent was flow limiting, a reduced rate of weight gain (0.42 and 0.53 kg/day) was observed relative to the third pig (0.69 kg/day), associated with temporary food aversion and signs of mesenteric ischemia in one pig.
A self-contained, automated methodology for optimal flow control validated for transition delay
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Gunzburger, Max D.; Nicolaides, R. A.; Erlebacher, Gordon; Hussaini, M. Yousuff
1995-01-01
This paper describes a self-contained, automated methodology for flow control along with a validation of the methodology for the problem of boundary layer instability suppression. The objective of control is to match the stress vector along a portion of the boundary to a given vector; instability suppression is achieved by choosing the given vector to be that of a steady base flow, e.g., Blasius boundary layer. Control is effected through the injection or suction of fluid through a single orifice on the boundary. The present approach couples the time-dependent Navier-Stokes system with an adjoint Navier-Stokes system and optimality conditions from which optimal states, i.e., unsteady flow fields, and control, e.g., actuators, may be determined. The results demonstrate that instability suppression can be achieved without any a priori knowledge of the disturbance, which is significant because other control techniques have required some knowledge of the flow unsteadiness such as frequencies, instability type, etc.
Current Trends in Modeling Research for Turbulent Aerodynamic Flows
NASA Technical Reports Server (NTRS)
Gatski, Thomas B.; Rumsey, Christopher L.; Manceau, Remi
2007-01-01
The engineering tools of choice for the computation of practical engineering flows have begun to migrate from those based on the traditional Reynolds-averaged Navier-Stokes approach to methodologies capable, in theory if not in practice, of accurately predicting some instantaneous scales of motion in the flow. The migration has largely been driven by both the success of Reynolds-averaged methods over a wide variety of flows as well as the inherent limitations of the method itself. Practitioners, emboldened by their ability to predict a wide-variety of statistically steady, equilibrium turbulent flows, have now turned their attention to flow control and non-equilibrium flows, that is, separation control. This review gives some current priorities in traditional Reynolds-averaged modeling research as well as some methodologies being applied to a new class of turbulent flow control problems.
Financial options methodology for analyzing investments in new technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenning, B.D.
1994-12-31
The evaluation of investments in longer term research and development in emerging technologies, because of the nature of such subjects, must address inherent uncertainties. Most notably, future cash flow forecasts include substantial uncertainties. Conventional present value methodology, when applied to emerging technologies, severely penalizes cash flow forecasts, and strategic investment opportunities are at risk of being neglected. Use of options valuation methodology adapted from the financial arena has been introduced as having applicability in such technology evaluations. Indeed, characteristics of superconducting magnetic energy storage technology suggest that it is a candidate for the use of options methodology when investment decisions are being contemplated.
Axial Flow Conditioning Device for Mitigating Instabilities
NASA Technical Reports Server (NTRS)
Ahuja, Vineet (Inventor); Birkbeck, Roger M. (Inventor); Hosangadi, Ashvin (Inventor)
2017-01-01
A flow conditioning device for incrementally stepping down pressure within a piping system is presented. The invention includes an outer annular housing, a center element, and at least one intermediate annular element. The outer annular housing includes an inlet end attachable to an inlet pipe and an outlet end attachable to an outlet pipe. The outer annular housing and the intermediate annular element(s) are concentrically disposed about the center element. The intermediate annular element(s) separates an axial flow within the outer annular housing into at least two axial flow paths. Each axial flow path includes at least two annular extensions that alternately and locally direct the axial flow radially outward and inward or radially inward and outward thereby inducing a pressure loss or a pressure gradient within the axial flow. The pressure within the axial flow paths is lower than the pressure at the inlet end and greater than the vapor pressure for the axial flow. The invention minimizes fluidic instabilities, pressure pulses, vortex formation and shedding, and/or cavitation during pressure step down to yield a stabilized flow within a piping system.
Evaluating endothelial function of the common carotid artery: an in vivo human model.
Mazzucco, S; Bifari, F; Trombetta, M; Guidi, G C; Mazzi, M; Anzola, G P; Rizzuto, N; Bonadonna, R
2009-03-01
Flow mediated dilation (FMD) of peripheral conduit arteries is a well-established tool to evaluate endothelial function. The aims of this study are to apply the FMD model to cerebral circulation by using acetazolamide (ACZ)-induced intracranial vasodilation as a stimulus to increase common carotid artery (CCA) diameter in response to a local increase of blood flow velocity (BFV). In 15 healthy subjects, CCA end-diastolic diameter and BFV, middle cerebral artery (MCA) BFV and mean arterial blood pressure (MBP) were measured at basal conditions, after an intravenous bolus of 1g ACZ, and after placebo (saline) sublingual administration at the 15th and 20th minute. In a separate session, the same parameters were evaluated after placebo (saline) infusion instead of ACZ and after 10 microg/m(2) bs and 300 microg of glyceryl trinitrate (GTN), administered sublingually, at the 15th and 20th minute, respectively. After ACZ bolus, there was a 35% maximal MCA mean BFV increment (14th minute), together with a 22% increase of mean CCA end-diastolic BFV and a CCA diameter increment of 3.9% at the 3rd minute (p=0.024). There were no MBP significant variations up to the 15th minute (p=0.35). After GTN administration, there was a significant increment in CCA diameter (p<0.00001). ACZ causes a detectable CCA dilation in healthy individuals concomitantly with an increase in BFV. Upon demonstration that this phenomenon is endothelium dependent, this experimental model might become a valuable tool to assess endothelial function in the carotid artery.
USDA-ARS?s Scientific Manuscript database
A 4-unit dual-flow continuous culture fermentor system was used to assess the effect of increasing flax supplementation of an herbage-based diet on nutrient digestibility, bacterial N synthesis and methane output. Treatments were randomly assigned to fermentors in a 4 x 4 Latin square design with 7 ...
Characterization of air profiles impeded by plant canopies for a variable-rate air-assisted sprayer
USDA-ARS?s Scientific Manuscript database
The preferential design for variable-rate orchard and nursery sprayers relies on tree structure to control liquid and air flow rates. Demand for this advanced feature has grown incrementally with public demand for reduced pesticide use. A variable-rate, air-assisted, five-port sprayer had been in...
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1987-01-01
A combined stochastic feedforward and feedback control design methodology was developed. The objective of the feedforward control law is to track the commanded trajectory, whereas the feedback control law tries to maintain the plant state near the desired trajectory in the presence of disturbances and uncertainties about the plant. The feedforward control law design is formulated as a stochastic optimization problem and is embedded into the stochastic output feedback problem where the plant contains unstable and uncontrollable modes. An algorithm to compute the optimal feedforward is developed. In this approach, the use of error integral feedback, dynamic compensation, and control rate command structures is an integral part of the methodology. An incremental implementation is recommended. Results on the eigenvalues of the implemented versus designed control laws are presented. The stochastic feedforward/feedback control methodology is used to design a digital automatic landing system for the ATOPS Research Vehicle, a Boeing 737-100 aircraft. The system control modes include localizer and glideslope capture and track, and flare to touchdown. Results of a detailed nonlinear simulation of the digital control laws, actuator systems, and aircraft aerodynamics are presented.
Demonstration of Incremental Sampling Methodology for Soil Containing Metallic Residues
2013-09-01
A hybrid incremental projection method for thermal-hydraulics applications
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong
2016-07-01
A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.
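The incremental projection idea underlying the method above can be shown in a small, self-contained sketch: solve a Poisson equation for a pressure increment dp that renders the intermediate velocity divergence-free, then advance the pressure as p + dp. This is a spectral toy on a doubly periodic grid (so the Poisson solve is exact), not the paper's hybrid finite-element/finite-volume discretization; the grid size, fields, and time step are invented for illustration.

```python
import numpy as np

# One incremental projection step on a doubly periodic grid.
n = 32
L = 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(n, d=L / n) * 2.0 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0  # avoid division by zero; the zero mode is handled below

def project_incremental(u, v, p, dt):
    """Solve lap(dp) = div(u*)/dt for the pressure increment dp,
    subtract dt*grad(dp) from (u, v), and return p incremented by dp."""
    div_hat = 1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)
    dp_hat = -div_hat / (dt * K2)  # lap -> -K2 in Fourier space
    dp_hat[0, 0] = 0.0             # fix the pressure mean
    u = u - dt * np.real(np.fft.ifft2(1j * KX * dp_hat))
    v = v - dt * np.real(np.fft.ifft2(1j * KY * dp_hat))
    return u, v, p + np.real(np.fft.ifft2(dp_hat))

# Start from a deliberately non-solenoidal "intermediate" velocity u*.
u = np.sin(X) * np.cos(Y) + 0.1 * np.sin(X)
v = -np.cos(X) * np.sin(Y)
p = np.zeros((n, n))
u, v, p = project_incremental(u, v, p, dt=0.01)

div = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)))
print(np.max(np.abs(div)))  # near machine zero: projected field is divergence-free
```

Solving for the increment dp rather than for p itself is what makes the scheme incremental; as the CGP abstract at the top of this collection notes, the Poisson variable is then an intermediate quantity, not the pressure.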
Calculations of separated 3-D flows with a pressure-staggered Navier-Stokes equations solver
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1991-01-01
A Navier-Stokes equations solver based on a pressure correction method with a pressure-staggered mesh and calculations of separated three-dimensional flows are presented. It is shown that the velocity-pressure decoupling, which occurs when various pressure correction algorithms are used for pressure-staggered meshes, is caused by the ill-conditioned discrete pressure correction equation. The use of a partial differential equation for the incremental pressure eliminates the velocity-pressure decoupling mechanism by itself and yields accurate numerical results. Example flows considered are a three-dimensional lid-driven cavity flow and a laminar flow through a 90-degree-bend square duct. For the lid-driven cavity flow, the present numerical results compare more favorably with the measured data than those obtained using a formally third-order accurate quadratic upwind interpolation scheme. For the curved duct flow, the present numerical method yields a grid-independent solution with a very small number of grid points. The calculated velocity profiles are in good agreement with the measured data.
Compressible flow about symmetrical Joukowski profiles
NASA Technical Reports Server (NTRS)
Kaplan, Carl
1938-01-01
The method of Poggi is employed for the determination of the effects of compressibility upon the flow past an obstacle. A general expression for the velocity increment due to compressibility is obtained. The general result holds whatever the shape of the obstacle; but, in order to obtain the complete solution, it is necessary to know a certain Fourier expansion of the square of the velocity of flow past the obstacle. An application is made to the flow past a symmetrical Joukowski profile with a sharp trailing edge, fixed in a stream at an arbitrary angle of attack and with the circulation determined by the Kutta condition. The results are obtained in closed form and are exact insofar as the second approximation to the compressible flow is concerned, the first approximation being the result for the corresponding incompressible flow. Formulas for lift and moment analogous to the Blasius formulas in incompressible flow are developed and are applied to thin symmetrical Joukowski profiles at small angles of attack.
Fowler, Robert A; Mittmann, Nicole; Geerts, William H; Heels-Ansdell, Diane; Gould, Michael K; Guyatt, Gordon; Krahn, Murray; Finfer, Simon; Pinto, Ruxandra; Chan, Brian; Ormanidhi, Orges; Arabi, Yaseen; Qushmaq, Ismael; Rocha, Marcelo G; Dodek, Peter; McIntyre, Lauralyn; Hall, Richard; Ferguson, Niall D; Mehta, Sangeeta; Marshall, John C; Doig, Christopher James; Muscedere, John; Jacka, Michael J; Klinger, James R; Vlahakis, Nicholas; Orford, Neil; Seppelt, Ian; Skrobik, Yoanna K; Sud, Sachin; Cade, John F; Cooper, Jamie; Cook, Deborah
2014-12-20
Venous thromboembolism (VTE) is a common complication of critical illness with important clinical consequences. The Prophylaxis for ThromboEmbolism in Critical Care Trial (PROTECT) is a multicenter, blinded, randomized controlled trial comparing the effectiveness of the two most common pharmacoprevention strategies, unfractionated heparin (UFH) and the low molecular weight heparin (LMWH) dalteparin, in medical-surgical patients in the intensive care unit (ICU). E-PROTECT is a prospective and concurrent economic evaluation of the PROTECT trial. The primary objective of E-PROTECT is to identify and quantify the total (direct and indirect, variable and fixed) costs associated with the management of critically ill patients participating in the PROTECT trial, and to combine cost and outcome results to determine the incremental cost-effectiveness of LMWH versus UFH, from the acute healthcare system perspective, over a data-rich time horizon of ICU admission and hospital admission. We derive baseline characteristics and probabilities of in-ICU and in-hospital events from all enrolled patients. Total costs are derived from centers, proportional to the numbers of patients enrolled in each country. Direct costs include medication, physician and other personnel costs, diagnostic radiology and laboratory testing, operative and non-operative procedures, and costs associated with bleeding, transfusions and treatment-related complications. Indirect costs include ICU and hospital ward overhead costs. Outcomes are the ratio of incremental costs per incremental effects of LMWH versus UFH during hospitalization; the incremental cost to prevent a thrombosis at any site (primary outcome); the incremental cost to prevent a pulmonary embolism, deep vein thrombosis, major bleeding event or episode of heparin-induced thrombocytopenia (secondary outcomes); and the incremental cost per life-year gained (tertiary outcome).
Pre-specified subgroups and sensitivity analyses will be performed and confidence intervals for the estimates of incremental cost-effectiveness will be obtained using bootstrapping. This economic evaluation employs a prospective costing methodology concurrent with a randomized controlled blinded clinical trial, with a pre-specified analytic plan, outcome measures, subgroup and sensitivity analyses. This economic evaluation has received only peer-reviewed funding and funders will not play a role in the generation, analysis or decision to submit the manuscripts for publication. Clinicaltrials.gov Identifier: NCT00182143 . Date of registration: 10 September 2005.
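The incremental cost-effectiveness ratio (ICER) with a bootstrapped confidence interval, as planned for E-PROTECT, can be sketched as below. The cost and effect distributions are synthetic placeholders, not trial results, and the arm labels are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
cost_lmwh = rng.gamma(shape=4.0, scale=5000.0, size=n)   # per-patient cost ($)
cost_ufh = rng.gamma(shape=4.0, scale=4500.0, size=n)
eff_lmwh = rng.binomial(1, 0.95, size=n)                 # 1 = thrombosis avoided
eff_ufh = rng.binomial(1, 0.90, size=n)

def icer(c1, c0, e1, e0):
    """Incremental cost per incremental effect."""
    return (c1.mean() - c0.mean()) / (e1.mean() - e0.mean())

point = icer(cost_lmwh, cost_ufh, eff_lmwh, eff_ufh)

# Nonparametric bootstrap: resample patients within each arm.
boots = []
for _ in range(2000):
    i1 = rng.integers(0, n, n)
    i0 = rng.integers(0, n, n)
    boots.append(icer(cost_lmwh[i1], cost_ufh[i0], eff_lmwh[i1], eff_ufh[i0]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"ICER = {point:,.0f} $/event avoided (95% CI {lo:,.0f} to {hi:,.0f})")
```

Note that bootstrapped ICERs can be unstable when the incremental effect is near zero; cost-effectiveness planes or net-benefit statistics are common remedies, but the percentile interval shown here matches the simple description above.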
Juncture flow improvement for wing/pylon configurations by using CFD methodology
NASA Technical Reports Server (NTRS)
Gea, Lie-Mine; Chyu, Wei J.; Stortz, Michael W.; Chow, Chuen-Yen
1993-01-01
Transonic flow field around a fighter wing/pylon configuration was simulated by using an implicit upwinding Navier-Stokes flow solver (F3D) and overset grid technology (Chimera). Flow separation and local shocks near the wing/pylon junction were observed in flight and predicted by numerical calculations. A new pylon/fairing shape was proposed to improve the flow quality. Based on numerical results, the size of separation area is significantly reduced and the onset of separation is delayed farther downstream. A smoother pressure gradient is also obtained near the junction area. This paper demonstrates that computational fluid dynamics (CFD) methodology can be used as a practical tool for aircraft design.
Fluid-flow of a row of jets in crossflow - A numerical study
NASA Technical Reports Server (NTRS)
Kim, S.-W.; Benson, T. J.
1992-01-01
A detailed computer-visualized flow field of a row of jets in a confined crossflow is presented. The Reynolds averaged Navier-Stokes equations are solved using a finite volume method that incorporates a partial differential equation for incremental pressure to obtain a divergence-free flow field. The turbulence is described by a multiple-time-scale turbulence model. The computational domain includes the upstream region of the circular jet so that the interaction between the jet and the crossflow is simulated accurately. It is shown that the row of jets in the crossflow is characterized by a highly complex flow field that includes a horse-shoe vortex and two helical vortices whose secondary velocity components are co-rotating in space. It is also shown that the horse-shoe vortex is a ring of reversed flows located along the circumference of the jet exit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald J. MacDonald; Charles M. Boyer; Joseph H. Frantz Jr
Stripper gas and oil well operators frequently face a dilemma regarding maximizing production from low-productivity wells. With thousands of stripper wells in the United States covering extensive acreage, it is difficult to identify easily and efficiently marginal or underperforming wells. In addition, the magnitude of reviewing vast amounts of data places a strain on an operator's work force and financial resources. Schlumberger DCS, in cooperation with the National Energy Technology Laboratory (NETL) and the U.S. Department of Energy (DOE), has created software and developed in-house analysis methods to identify remediation potential in stripper wells relatively easily. This software is referred to as Stripper Well Analysis Remediation Methodology (SWARM). SWARM was beta-tested with data pertaining to two gas fields located in northwestern Pennsylvania and had notable results. Great Lakes Energy Partners, LLC (Great Lakes) and Belden & Blake Corporation (B&B) both operate wells in the first field studied. They provided data for 729 wells, and we estimated that 41 wells were candidates for remediation. However, for reasons unknown to Schlumberger, these wells were not budgeted for rework by the operators. The second field (Cooperstown) is located in Crawford, Venango, and Warren counties, Pa., and has more than 2,200 wells operated by Great Lakes. This paper discusses in depth the successful results of a candidate recognition study of this area. We compared each well's historical production with that of its offsets and identified 339 underperformers before considering remediation costs, and 168 economically viable candidates based on restimulation costs of $50,000 per well. From this data, we prioritized a list based on the expected incremental recoverable gas and 10% discounted net present value (NPV).
For this study, we calculated the incremental gas by subtracting the production projected at each well's current configuration from the volumes forecasted after remediation. Assuming that remediation efforts increased production from the 168 marginal wells to the average of their respective offsets, approximately 6.4 Bscf of gross incremental gas, with a NPV approximating $4.9 million after investment, would be made available to the domestic market. Seventeen wells have successfully been restimulated to date and have already obtained significant production increases. At the time of this report, eight of these wells had enough post-rework production data available to forecast the incremental gas and verify the project's success. This incremental gas is estimated at 615 MMscf. The outcome of the other ten wells will be determined after more post-refrac production data become available. Plans are currently underway for future restimulations. The success of this project has shown the value of this methodology to recognize underperforming wells quickly and efficiently in fields containing hundreds or thousands of wells. This contributes considerably to corporate net income and domestic natural gas and/or oil reserves.
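The screening arithmetic described above (incremental gas as the difference of two forecasts, discounted at 10%) can be sketched as follows. The decline parameters, gas price, and rates are illustrative assumptions, not SWARM's actual forecasting model.

```python
# Incremental-gas and NPV screening sketch with invented well parameters.

def npv(cash_flows, rate=0.10):
    """Discounted net present value of yearly cash flows (year 1 onward)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def forecast(initial_rate_mscf_yr, decline=0.08, years=10):
    """Simple exponential-decline production forecast (Mscf per year)."""
    return [initial_rate_mscf_yr * (1.0 - decline) ** t for t in range(years)]

base = forecast(20_000)        # well at its current configuration
reworked = forecast(35_000)    # well brought up to the offset average
incremental_gas = [post - cur for post, cur in zip(reworked, base)]

price_per_mscf = 3.0           # assumed flat gas price ($/Mscf)
restim_cost = 50_000.0         # restimulation cost per well, as in the study
cash = [g * price_per_mscf for g in incremental_gas]
project_npv = npv(cash) - restim_cost
print(f"incremental gas = {sum(incremental_gas):,.0f} Mscf, "
      f"NPV after investment = ${project_npv:,.0f}")
```

A candidate list is then just the wells sorted by `project_npv`, keeping those above zero, which mirrors the 168-of-339 economic screen in the text.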
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal, and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast.
This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
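The two-step recipe above (estimate the bias from time-averaged analysis increments, then add it as a tendency forcing) can be illustrated on a deliberately trivial scalar "model" that omits a forcing term. The numbers and the scalar dynamics are invented; the GFS analogue operates on full 3-D fields.

```python
import numpy as np

rng = np.random.default_rng(1)
a, missing_forcing, dt = 0.9, 0.5, 1.0

def model_step(x, correction=0.0):
    """One forecast step; the model omits the true forcing term."""
    return a * x + correction * dt

# 1. Analysis cycle: compare 1-step forecasts with (noisy) analyses.
truth, x = 0.0, 0.0
increments = []
for _ in range(500):
    truth = a * truth + missing_forcing      # nature includes the forcing
    fcst = model_step(x)
    analysis = truth + rng.normal(0, 0.01)   # assimilation pulls toward truth
    increments.append(analysis - fcst)       # analysis increment
    x = analysis

bias_tendency = np.mean(increments) / dt     # mean increment per unit time

# 2. Rerun, re-initializing from the analysis each cycle, with and
#    without the online correction added to the model tendency.
truth, x_raw, x_corr = 0.0, 0.0, 0.0
err_raw, err_corr = [], []
for _ in range(200):
    truth = a * truth + missing_forcing
    err_raw.append(abs(truth - model_step(x_raw)))
    err_corr.append(abs(truth - model_step(x_corr, correction=bias_tendency)))
    x_raw, x_corr = truth, truth             # re-initialize from the analysis

print(np.mean(err_raw), np.mean(err_corr))   # corrected run has smaller bias
```

Because the omitted forcing is constant here, the time-averaged increment recovers it almost exactly; the seasonal/diurnal decomposition in the abstract corresponds to averaging the increments over matching phases instead of over all times.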
Supersonic/Hypersonic Correlations for In-Cavity Transition and Heating Augmentation
NASA Technical Reports Server (NTRS)
Everhart, Joel L.
2011-01-01
Laminar-entry cavity heating data with a non-laminar boundary layer exit flow have been retrieved from the database developed at Mach 6 and 10 in air on large flat plate models for the Space Shuttle Return-To-Flight Program. Building on previously published fully laminar and fully turbulent analysis methods, new descriptive correlations of the in-cavity floor-averaged heating and endwall maximum heating have been developed for transitional-to-turbulent exit flow. These new local-cavity correlations provide the expected flow and geometry conditions for transition onset; they provide the incremental heating augmentation induced by transitional flow; and they provide the transitional-to-turbulent exit cavity length. Furthermore, they provide an upper application limit for the previously developed fully laminar heating correlations. An example is provided that demonstrates the simplicity of application. Heating augmentation factors of 12 and 3 above the fully laminar values are shown to exist on the cavity floor and endwall, respectively, if the flow exits in a fully tripped-to-turbulent boundary-layer state. Cavity floor heating data in geometries installed on the windward surface of 0.075-scale Shuttle wind tunnel models have also been retrieved from the boundary layer transition database developed for the Return-To-Flight Program. These data were independently acquired at Mach 6 and Mach 10 in air, and at Mach 6 in CF4. The correlation parameters for the floor-averaged heating have been developed, and they offer an exceptionally positive comparison to previously developed laminar-cavity heating correlations. Non-laminar increments extracted from the Shuttle data fall on the newly developed transitional in-cavity correlations and are bounded by the 95% correlation prediction limits.
Because the ratio of specific heats changes along the re-entry trajectory, turning angle into a cavity and boundary layer flow properties may be affected, raising concerns regarding the application validity of the heating augmentation predictions.
Land, K C; Guralnik, J M; Blazer, D G
1994-05-01
A fundamental limitation of current multistate life table methodology, evident in recent estimates of active life expectancy for the elderly, is the inability to estimate tables from data on small longitudinal panels in the presence of multiple covariates (such as sex, race, and socioeconomic status). This paper presents an approach to such estimation based on an isomorphism between the structure of the stochastic model underlying a conventional specification of the increment-decrement life table and that of Markov panel regression models for simple state spaces. We argue that Markov panel regression procedures can be used to provide smoothed or graduated group-specific estimates of transition probabilities that are more stable across short age intervals than those computed directly from sample data. We then join these estimates with increment-decrement life table methods to compute group-specific total, active, and dependent life expectancy estimates. To illustrate the methods, we describe an empirical application to the estimation of such life expectancies specific to sex, race, and education (years of school completed) for a longitudinal panel of elderly persons. We find that education extends both total life expectancy and active life expectancy. Education thus may serve as a powerful social protective mechanism delaying the onset of health problems at older ages.
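Once smoothed age-specific transition probabilities are in hand, the life-table step reduces to accumulating expected person-years in each living state. The sketch below uses invented transition probabilities for a three-state (active/dependent/dead) space; a real application would substitute the Markov panel regression estimates for each covariate group.

```python
import numpy as np

states = ["active", "dependent", "dead"]  # dead is absorbing

def transition_matrix(age):
    """Annual transition probabilities at a given age (hypothetical)."""
    q = min(0.01 * 1.09 ** (age - 65), 0.4)       # mortality, rising with age
    onset = min(0.02 * 1.07 ** (age - 65), 0.5)   # active -> dependent
    recover = 0.05                                # dependent -> active
    return np.array([
        [1 - onset - q, onset,               q],
        [recover,       1 - recover - 2 * q, 2 * q],  # higher mortality if dependent
        [0.0,           0.0,                 1.0],
    ])

# Expected person-years in each living state from age 65 onward,
# starting active, accumulated one year at a time.
dist = np.array([1.0, 0.0, 0.0])
active_e, dependent_e = 0.0, 0.0
for age in range(65, 110):
    active_e += dist[0]
    dependent_e += dist[1]
    dist = dist @ transition_matrix(age)

total_e = active_e + dependent_e
print(f"total LE = {total_e:.1f}, active LE = {active_e:.1f}, "
      f"dependent LE = {dependent_e:.1f}")
```

Group differences (e.g., by education) would enter through covariate-dependent versions of `transition_matrix`, which is exactly where the paper's smoothed panel-regression estimates plug in.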
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lara-Castells, María Pilar de, E-mail: Pilar.deLara.Castells@csic.es; Mitrushchenkov, Alexander O.; Stoll, Hermann
2015-09-14
A combined density functional (DFT) and incremental post-Hartree-Fock (post-HF) approach, proven earlier to calculate He-surface potential energy surfaces [de Lara-Castells et al., J. Chem. Phys. 141, 151102 (2014)], is applied to describe the van der Waals dominated Ag₂/graphene interaction. It extends the dispersionless density functional theory developed by Pernal et al. [Phys. Rev. Lett. 103, 263201 (2009)] by including periodic boundary conditions, while the dispersion is parametrized via the method of increments [H. Stoll, J. Chem. Phys. 97, 8449 (1992)]. Starting with the elementary cluster unit of the target surface (benzene), continuing through the realistic cluster model (coronene), and ending with the periodic model of the extended system, modern ab initio methodologies for intermolecular interactions as well as state-of-the-art van der Waals-corrected density functional-based approaches are put together both to assess the accuracy of the composite scheme and to better characterize the Ag₂/graphene interaction. The present work illustrates how the combination of DFT and post-HF perspectives may be efficient to design simple and reliable ab initio-based schemes in extended systems for surface science applications.
MacKillop, James; Acker, John D; Bollinger, Jared; Clifton, Allan; Miller, Joshua D; Campbell, W Keith; Goodie, Adam S
2013-09-01
Alcohol misuse is substantially influenced by social factors, but systematic assessments of social network drinking are typically lengthy. The goal of the present study was to provide further validation of a brief measure of social network alcohol use, the Brief Alcohol Social Density Assessment (BASDA), in a sample of emerging adults. Specifically, the study sought to examine the BASDA's convergent, criterion, and incremental validity in relation to well-established measures of drinking motives and problematic drinking. Participants were 354 undergraduates who were assessed using the BASDA, the Alcohol Use Disorders Identification Test (AUDIT), and the Drinking Motives Questionnaire. Significant associations were observed between the BASDA index of alcohol-related social density and alcohol misuse, social motives, and conformity motives, supporting convergent validity. Criterion-related validity was supported by evidence that significantly greater alcohol involvement was present in the social networks of individuals scoring at or above an AUDIT score of 8, a validated criterion for hazardous drinking. Finally, the BASDA index was significantly associated with alcohol misuse above and beyond drinking motives in relation to AUDIT scores, supporting incremental validity. Taken together, these findings provide further support for the BASDA as an efficient measure of drinking in an individual's social network. Methodological considerations as well as recommendations for future investigations in this area are discussed.
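The incremental-validity claim above is, statistically, a hierarchical regression question: does adding the BASDA score increase explained variance in AUDIT scores beyond the motives block? The sketch below shows that computation on simulated data; the variable names, effect sizes, and data are invented for illustration, not study values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 354                                            # matches the sample size
motives = rng.normal(size=(n, 2))                  # social, conformity motives
basda = 0.5 * motives[:, 0] + rng.normal(size=n)   # correlated with motives
audit = motives @ np.array([1.0, 0.5]) + 0.8 * basda + rng.normal(size=n)

def r_squared(X, y):
    """R² of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_motives = r_squared(motives, audit)                          # step 1
r2_full = r_squared(np.column_stack([motives, basda]), audit)   # step 2
delta_r2 = r2_full - r2_motives                                 # incremental validity
print(f"R² motives = {r2_motives:.3f}, + BASDA = {r2_full:.3f}, "
      f"ΔR² = {delta_r2:.3f}")
```

A nonzero ΔR² (tested with an F-change statistic in practice) is what "associated with alcohol misuse above and beyond drinking motives" means operationally.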
NASA Technical Reports Server (NTRS)
Stanford, Malcolm K.; DellaCorte, Christopher; Eylon, Daniel
2002-01-01
The effects of BaF2-CaF2 particle morphology on PS304 feedstock powder flowability have been investigated. BaF2-CaF2 eutectic powders were fabricated by comminution (angular) and by gas atomization (spherical). The fluoride powders were added incrementally to the other powder constituents of the PS304 feedstock: nichrome, chromia, and silver powders. A linear relationship between flow time and concentration of BaF2-CaF2 powder was found. Flow of the powder blend with spherical BaF2-CaF2 was better than that with angular BaF2-CaF2. Flowability of the powder blend with angular fluorides decreased linearly with increasing fluoride concentration. Flow of the powder blend with spherical fluorides was independent of fluoride concentration. Results suggest that for this material blend, particle morphology plays a significant role in powder blend flow behavior, offering potential methods to improve powder flowability and enhance the commercial potential. These findings may have applicability to other difficult-to-flow powders such as cohesive ceramics.
NASA Technical Reports Server (NTRS)
Logston, R. G.; Budris, G. D.
1977-01-01
A methodology was developed to optimize the utilization of Spacelab racks and pallets and was applied to the early STS Spacelab missions. A review was made of Spacelab Program requirements and flow plans, generic flow plans for racks and pallets were examined, and the principal optimization criteria and methodology were established. Interactions between schedule, inventory, and key optimization factors; schedule and cost sensitivity to optional approaches; and the development of tradeoff methodology were addressed. This methodology was then applied to the early Spacelab missions (1980-1982). Rack and pallet requirements and duty cycles were defined, a utilization assessment was made, and several trade studies were performed involving varying degrees of Level IV integration, inventory level, and shared versus dedicated Spacelab racks and pallets.
Guidelines for using the Delphi Technique to develop habitat suitability index curves
Crance, Johnie H.
1987-01-01
Habitat Suitability Index (SI) curves are one method of presenting species habitat suitability criteria. The curves are often used with the Habitat Evaluation Procedures (HEP) and are necessary components of the Instream Flow Incremental Methodology (IFIM) (Armour et al. 1984). Bovee (1986) described three categories of SI curves, or habitat suitability criteria, based on the procedures and data used to develop the criteria. Category I curves are based on professional judgment, with little or no empirical data. Both Category II (utilization criteria) and Category III (preference criteria) curves are based on data collected at locations where target species are observed or collected. Having Category II and Category III curves for all species of concern would be ideal. In reality, no SI curves are available for many species, and SI curves that require intensive field sampling often cannot be developed under prevailing constraints on time and costs. One alternative under these circumstances is the development and interim use of SI curves based on expert opinion. The Delphi technique (Pill 1971; Delbecq et al. 1975; Linstone and Turoff 1975) is one method used for combining the knowledge and opinions of a group of experts. The purpose of this report is to describe how the Delphi technique may be used to develop expert-opinion-based SI curves.
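The mechanics of a Delphi round, aggregating expert ratings and feeding the group statistic back for revision, can be sketched for a single SI-curve point (say, suitability at one water depth). The ratings and the fixed "pull toward the median" revision rule are invented simplifications; real Delphi panels revise freely after seeing the group response.

```python
import statistics

def delphi_rounds(ratings, rounds=3, pull=0.5):
    """Each round, experts move a fraction `pull` toward the group median."""
    history = [list(ratings)]
    for _ in range(rounds):
        med = statistics.median(ratings)
        ratings = [r + pull * (med - r) for r in ratings]
        history.append(list(ratings))
    return history

# Five hypothetical expert suitability ratings (0 = unsuitable, 1 = optimal).
history = delphi_rounds([0.2, 0.5, 0.6, 0.9, 0.4])
spread_first = max(history[0]) - min(history[0])
spread_last = max(history[-1]) - min(history[-1])
print(spread_first, spread_last)  # opinions converge across rounds
```

Repeating this for each depth (or velocity, substrate, etc.) value yields the expert-opinion SI curve; the final group median at each point is the curve ordinate.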
NASA Technical Reports Server (NTRS)
Hess, Robert V; Gardner, Clifford S
1947-01-01
By using the Prandtl-Glauert method that is valid for three-dimensional flow problems, the value of the maximum incremental velocity for compressible flow about thin ellipsoids at zero angle of attack is calculated as a function of the Mach number for various aspect ratios and thickness ratios. The critical Mach numbers of the various ellipsoids are also determined. The results indicate an increase in critical Mach number with decrease in aspect ratio which is large enough to explain experimental results on low-aspect-ratio wings at zero lift.
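The paper's result is for three-dimensional ellipsoids, where the compressibility correction depends on aspect ratio; as a simpler point of reference, the classical two-dimensional Prandtl-Glauert rule scales an incompressible velocity (or pressure-coefficient) increment by 1/sqrt(1 - M²). The sketch below shows only that 2-D rule with illustrative numbers, not the ellipsoid formulas.

```python
import math

def prandtl_glauert(increment_incompressible, mach):
    """Compressible increment from the incompressible one (2-D, subsonic)."""
    if not 0.0 <= mach < 1.0:
        raise ValueError("valid only for subsonic Mach numbers")
    return increment_incompressible / math.sqrt(1.0 - mach ** 2)

# A 10% incompressible velocity increment, amplified with Mach number.
for m in (0.0, 0.3, 0.6, 0.8):
    print(m, round(prandtl_glauert(0.10, m), 4))
```

The paper's finding, that lower aspect ratios raise the critical Mach number, reflects the 3-D relief effect: the effective amplification is weaker than this 2-D factor, so the local velocity reaches sonic conditions later.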
Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.
2003-09-01
This study focuses on the atmospheric temperature response to dust deduced from a new source of data: the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the lack of dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (>0.5), low correlation, and high negative correlation (<-0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of some areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with the peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static instability of the atmosphere above the dust layer. The reanalysis data of the European Centre for Medium-Range Weather Forecasts (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow.
These findings are associated with the interaction between dust-forced heating/cooling and atmospheric circulation. This paper contributes to a better understanding of dust radiative processes missed in the model.
ChargeOut!: discounted cash flow compared with traditional machine-rate analysis
Ted Bilek
2008-01-01
ChargeOut!, a discounted cash-flow methodology in spreadsheet format for analyzing machine costs, is compared with traditional machine-rate methodologies. Four machine-rate models are compared, and a common data set representative of logging skidders' costs is used to illustrate the differences between ChargeOut! and the machine-rate methods. The study found that the...
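The contrast between the two families of methods can be sketched numerically: a traditional machine rate averages ownership and operating costs over the machine's life, while a discounted cash-flow (DCF) rate finds the hourly charge whose discounted revenues cover discounted costs. The purchase price, life, salvage value, and hours below are placeholder figures, not the report's skidder data.

```python
# Placeholder machine economics (not from the report).
RATE = 0.08           # discount rate
LIFE = 5              # years of service
HOURS = 1500          # scheduled hours per year
PRICE = 150_000.0     # purchase price
SALVAGE = 30_000.0    # resale value at end of life
OPERATING = 40_000.0  # annual operating cost (fuel, repairs, labor)

# Traditional machine rate: straight-line ownership plus operating cost.
machine_rate = ((PRICE - SALVAGE) / LIFE + OPERATING) / HOURS

# DCF view: hourly charge that makes discounted revenue equal
# discounted cost (NPV = 0), i.e. present-value cost per present-value hour.
disc = [(1 + RATE) ** -t for t in range(1, LIFE + 1)]
pv_costs = PRICE + OPERATING * sum(disc) - SALVAGE * disc[-1]
pv_hours = HOURS * sum(disc)
dcf_rate = pv_costs / pv_hours

print(f"machine rate = {machine_rate:.2f} $/hr, DCF rate = {dcf_rate:.2f} $/hr")
```

With a positive discount rate and the purchase paid up front, the DCF rate exceeds the simple machine rate because it charges for the capital tied up in the machine, one of the systematic differences the comparison above is getting at.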
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The Saint-Venant equations are commonly used as the governing equations for modeling spatially varied unsteady flow in open channels. The presence of uncertainties in the channel or flow parameters renders these equations stochastic, thus requiring their solution in a stochastic framework in order to quantify the ensemble behavior and the variability of the process. While the Monte Carlo approach can be used for such a solution, its computational expense and its large number of simulations act to its disadvantage. This study proposes, explains, and derives a new methodology for solving the stochastic Saint-Venant equations in only one shot, without the need for a large number of simulations. The proposed methodology is derived by developing the nonlocal Lagrangian-Eulerian Fokker-Planck equation of the characteristic form of the stochastic Saint-Venant equations for an open-channel flow process with an uncertain roughness coefficient. A numerical method for its solution is subsequently devised. The application and validation of this methodology are provided in a companion paper, in which the statistical results computed by the proposed methodology are compared against the results obtained by the Monte Carlo approach.
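For contrast with the one-shot Fokker-Planck approach, the Monte Carlo baseline it replaces can be sketched on a deliberately simple surrogate: normal-depth velocity from Manning's equation with an uncertain roughness coefficient n. (The actual study propagates the uncertainty through the full unsteady Saint-Venant equations; the lognormal spread below is an illustrative assumption.)

```python
import numpy as np

rng = np.random.default_rng(7)

def manning_velocity(n, slope=0.001, hydraulic_radius=1.2):
    """Manning's equation (SI units): V = R^(2/3) * S^(1/2) / n."""
    return hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / n

# Uncertain roughness: lognormal scatter around a Manning n of 0.03.
n_samples = np.exp(rng.normal(np.log(0.03), 0.15, size=20_000))
v = manning_velocity(n_samples)

print(f"mean velocity = {v.mean():.3f} m/s, std = {v.std():.3f} m/s")
```

Each of the 20,000 samples here stands in for one full deterministic solve in a real Saint-Venant Monte Carlo run, which is precisely the expense the paper's Fokker-Planck formulation avoids by evolving the probability density directly.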
Computing the Envelope for Stepwise Constant Resource Allocations
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Clancy, Daniel (Technical Monitor)
2001-01-01
Estimating tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and resource-producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. The incremental solution of a staged maximum-flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum-flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
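The computational primitive underlying each stage of the envelope algorithm is a maximum-flow solve. The following is a minimal Edmonds-Karp max-flow sketch on a toy four-node network; the graph and capacities are illustrative, not a real temporal network of resource events.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:      # BFS for an augmenting path
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return total                         # no augmenting path remains
        # Find the bottleneck residual capacity along the path found.
        path_flow, v = float("inf"), sink
        while v != source:
            u = parent[v]
            path_flow = min(path_flow, capacity[u][v] - flow[u][v])
            v = u
        # Push the bottleneck flow along the path (and record residuals).
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += path_flow
            flow[v][u] -= path_flow
            v = u
        total += path_flow

#        s  a  b  t
cap = [[0, 3, 2, 0],   # s -> a (3), s -> b (2)
       [0, 0, 1, 2],   # a -> b (1), a -> t (2)
       [0, 0, 0, 3],   # b -> t (3)
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # → 5
```

In the paper's construction, the staged variant reuses the flow found at one stage as the starting point for the next, which is why the whole envelope costs no more than one max-flow solve over the full network.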
1991-09-01
…reflect the official policy or position of the Department of Defense or the U.S. Government. … capability 3. A flexible, well-planned overall architecture 4. A plan for incremental achievement of full capability 5. Early definition, funding … 2. a system architecture and design that will satisfy the requirements. 3. a development team that communicates effectively and has previous …
2014-08-01
be evaluated. Orbits are determined with the OCEAN Weighted Least Squares Orbit Determination (WLS-OD) methodology using successive five-day increments … of SLR data. The orbit solution from the first five-day data arc is propagated forward in time to thirty days. The WLS-OD process is repeated for … successive five-day data arcs. These orbit solutions are then compared to the predicted orbit from the first data arc solution. Thirty days was chosen as
Incremental Sampling Methodology (ISM). Part 1, Section 2: Principles
2012-03-01
Many contaminants adhere to the surfaces of certain minerals. … Organic carbon is composed of complex molecules that can act as molecular sponges … hydroxide particles: "the iron in a cubic yard of soil [1-1.5 tons] is capable of adsorbing 0.5 to 5 lbs of soluble metals … or organics" (Vance …) … determine decision outcome. ISM addresses the problems of both micro- and short-scale heterogeneity.
Implementing a conceptual model of physical and chemical soil profile evolution
NASA Astrophysics Data System (ADS)
Kirkby, Mike
2017-04-01
When soil profile composition is generalised in terms of the proportion, p, of bedrock remaining (= 1 - depletion ratio), then other soil processes can also be expressed in terms of p, and 'soil depth' described by the integral of (1-p) down to bedrock. Soil profile evolution is expressed as the advance of a sigmoidal weathering front into the critical zone under the action of upward ionic diffusion of weathering products; downward advection of solutes in percolating waters, with loss of (cleanish) water as evapotranspiration and (solute-laden) water as a lateral sub-surface flow increment; and mechanical denudation increment at the surface. Each component responds to the degree of weathering. Percolation is limited by precipitation, evapotranspiration demand and the degree of weathering at each level in the profile which diverts subsurface flow. Mechanical removal rates are considered to broadly increase as weathering proceeds, as grain size and dilation angle decreases. The implication of these assumptions can be examined for steady state profiles, for which observed relationships between mechanical and chemical denudation rates; and between chemical denudation and critical zone depth are reproduced. For non-steady state evolution, these relationships break down, but provide a basis for linking critical zone with hillslope/ landform evolution.
NASA Astrophysics Data System (ADS)
Anjum, Aisha; Mir, N. A.; Farooq, M.; Javed, M.; Ahmad, S.; Malik, M. Y.; Alshomrani, A. S.
2018-06-01
The present article concentrates on thermal stratification in the flow of a second grade fluid past a Riga plate with linear stretching towards a stagnation region. The heat transfer phenomenon is examined with heat generation/absorption. A Riga plate is an electromagnetic actuator which comprises permanent magnets and alternating electrodes placed on a plane surface. The Cattaneo-Christov heat flux model is implemented to analyze the features of heat transfer. This heat flux model is a generalization of the classical Fourier law that includes the contribution of thermal relaxation time. For the first time, the heat generation/absorption effect is computed with a non-Fourier law of heat conduction (i.e., the Cattaneo-Christov heat flux model). Transformations are used to obtain the governing non-linear ordinary differential equations. Approximate convergent solutions are developed for the non-dimensionalized governing problems. Physical features of the velocity and temperature distributions are graphically analyzed for various parameters in 2D and 3D. It is noted that the velocity field is enhanced with an increment of the modified Hartmann number, while it is reduced with an increasing variable thickness parameter. An increment in the modified heat generation parameter results in a reduction of the temperature field.
Wind-Tunnel Investigations of Blunt-Body Drag Reduction Using Forebody Surface Roughness
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Sprague, Stephanie; Naughton, Jonathan W.; Curry, Robert E. (Technical Monitor)
2001-01-01
This paper presents results of wind-tunnel tests that demonstrate a novel drag reduction technique for blunt-based vehicles. For these tests, the forebody roughness of a blunt-based model was modified using micromachined surface overlays. As forebody roughness increases, the boundary layer at the model aft thickens and reduces the shearing effect of the external flow on the separated flow behind the base region, resulting in reduced base drag. For vehicle configurations with large base drag, existing data predict that a small increment in forebody friction drag will result in a relatively large decrease in base drag. If the added increment in forebody skin drag is optimized with respect to base drag, reducing the total drag of the configuration is possible. The wind-tunnel test results conclusively demonstrate the existence of a forebody drag-base drag optimum point. The data demonstrate that the base drag coefficient corresponding to the drag minimum lies between 0.225 and 0.275, referenced to the base area. Most importantly, the data show a drag reduction of approximately 15% when the drag optimum is reached. When this drag reduction is scaled to the X-33 base area, drag savings approaching 45,000 N (10,000 lbf) can be realized.
Methods for determination of optic nerve blood flow.
Glazer, L. C.
1988-01-01
A variety of studies have been conducted over the past two decades to determine if decreased optic nerve blood flow has a role in the etiology of glaucomatous nerve damage. Five basic methods have been employed in examining blood flow. Invasive studies, utilizing electrodes placed in the optic nerve head, represent one of the first attempts to measure blood flow. More recently, the methodologies have included axoplasmic flow analysis, microspheres, radioactive tracers such as iodoantipyrine, and laser Doppler measurements. The results of these studies are inconclusive and frequently contradictory. When the studies are grouped by methodology, only the iodoantipyrine data are consistent. While each of the experimental techniques has limitations, iodoantipyrine appears to have better resolution than either invasive studies or microspheres. PMID:3284212
Optimal Micro-Jet Flow Control for Compact Air Vehicle Inlets
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Miller, Daniel N.; Addington, Gregory A.; Agrell, Johan
2004-01-01
The purpose of this study of micro-jet secondary flow control is to demonstrate the viability and economy of Response Surface Methodology (RSM) for optimally designing micro-jet secondary flow control arrays, and to establish that the aeromechanical effects of engine face distortion can also be included in the design and optimization process. These statistical design concepts were used to investigate the design characteristics of "low mass" micro-jet array designs. The term "low mass" micro-jet refers to fluidic jets with total (integrated) mass flow ratios between 0.10 and 1.0 percent of the engine face mass flow. This report therefore examines optimal micro-jet array designs for compact inlets through a Response Surface Methodology.
Synchronization patterns in cerebral blood flow and peripheral blood pressure under minor stroke
NASA Astrophysics Data System (ADS)
Chen, Zhi; Ivanov, Plamen C.; Hu, Kun; Stanley, H. Eugene; Novak, Vera
2003-05-01
Stroke is a leading cause of death and disability in the United States. The autoregulation of cerebral blood flow that adapts to changes in systemic blood pressure is impaired after stroke. We investigate blood flow velocities (BFV) from the right and left middle cerebral arteries (MCA) and beat-to-beat blood pressure (BP) simultaneously measured from the finger, in 13 stroke and 11 healthy subjects, using mean value statistics and the phase synchronization method. For the subjects with stroke, compared to healthy subjects, we find an increase in vascular resistance and a much stronger cross-correlation, with a time lag of up to 20 seconds, between the instantaneous phase increments of the BFV and BP signals.
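The lagged cross-correlation at the heart of such a comparison can be sketched in a few lines. Everything below is illustrative (synthetic signals, an invented lag of 10 samples); it only shows the mechanics of finding the lag that maximizes the normalized cross-correlation:

```python
import numpy as np

def lagged_xcorr(x, y, max_lag):
    """Normalized cross-correlation of x and y for lags 0..max_lag (y delayed)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return np.array([np.mean(x[: n - k] * y[k:]) for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
bp = rng.standard_normal(2000)                            # stand-in "BP" signal
bfv = np.roll(bp, 10) + 0.3 * rng.standard_normal(2000)   # "BFV" lagging BP by 10 samples
corr = lagged_xcorr(bp, bfv, max_lag=20)
best_lag = int(np.argmax(corr))                            # recovers the imposed lag
```

The same scan over lags, applied to phase increments rather than raw signals, is one ingredient of a phase-synchronization analysis.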
NASA Technical Reports Server (NTRS)
Wood, Richard M.; Byrd, James E.; Wesselmann, Gary F.
1992-01-01
An assessment of the influence of airfoil geometry on delta wing leading edge vortex flow and vortex induced aerodynamics at supersonic speeds is discussed. A series of delta wing wind tunnel models were tested over a Mach number range from 1.7 to 2.0. The model geometric variables included leading edge sweep and airfoil shape. Surface pressure data, vapor screen, and oil flow photograph data were taken to evaluate the complex structure of the vortices and shocks on the family of wings tested. The data show that airfoil shape has a significant impact on the wing upper surface flow structure and pressure distribution, but has a minimal impact on the integrated upper surface pressure increments.
NASA Astrophysics Data System (ADS)
Evans, John; Coley, Christopher; Aronson, Ryan; Nelson, Corey
2017-11-01
In this talk, a large eddy simulation methodology for turbulent incompressible flow will be presented which combines the best features of divergence-conforming discretizations and the residual-based variational multiscale approach to large eddy simulation. In this method, the resolved motion is represented using a divergence-conforming discretization, that is, a discretization that preserves the incompressibility constraint in a pointwise manner, and the unresolved fluid motion is explicitly modeled by subgrid vortices that lie within individual grid cells. The evolution of the subgrid vortices is governed by dynamical model equations driven by the residual of the resolved motion. Consequently, the subgrid vortices appropriately vanish for laminar flow and fully resolved turbulent flow. As the resolved velocity field and subgrid vortices are both divergence-free, the methodology conserves mass in a pointwise sense and admits discrete balance laws for energy, enstrophy, and helicity. Numerical results demonstrate the methodology yields improved results versus state-of-the-art eddy viscosity models in the context of transitional, wall-bounded, and rotational flow when a divergence-conforming B-spline discretization is utilized to represent the resolved motion.
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Canabal, Francisco; Chen, Yen-Sen; Cheng, Gary; Ito, Yasushi
2013-01-01
Nuclear thermal propulsion is a leading candidate for in-space propulsion for human Mars missions. This chapter describes a thermal hydraulics design and analysis methodology developed at the NASA Marshall Space Flight Center in support of the nuclear thermal propulsion development effort. The objective of this campaign is to bridge the design methods of the Rover/NERVA era with a modern computational fluid dynamics and heat transfer methodology, to predict the thermal, fluid, and hydrogen environments of a hypothetical solid-core nuclear thermal engine, the Small Engine, designed in the 1960s. The computational methodology is based on an unstructured-grid, pressure-based, all-speeds, chemically reacting, computational fluid dynamics and heat transfer platform, while formulations of flow and heat transfer through porous and solid media were implemented to describe those of the hydrogen flow channels inside the solid core. Design analyses of a single flow element and of the entire solid-core thrust chamber of the Small Engine were performed, and the results are presented herein.
Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David
1995-01-01
Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.
Lee, C H; Sapuan, S M; Lee, J H; Hassan, M R
2016-01-01
A study of the melt volume flow rate (MVR) and the melt flow rate (MFR) of kenaf fibre (KF) reinforced Floreon (FLO) and magnesium hydroxide (MH) biocomposites under different temperatures (160-180 °C) and weight loadings (2.16, 5, 10 kg) is presented in this paper. FLO has the lowest values of MFR and MVR. An increase in the melt flow properties (MVR and MFR) was found on KF or MH insertion, due to hydrolytic degradation of the polylactic acid in FLO. Deterioration of the entanglement density at high temperature, shear thinning and wall slip velocity were the possible causes of the higher melt flow properties. Increasing the KF loading produced higher melt flow properties, while higher MH contents created stronger bonding and hence greater resistance to macromolecular chain flow, so lower melt flow properties were recorded. However, complicated melt flow behaviour of the KF reinforced FLO/MH biocomposites was found in this study: a high probability of KF-KF and KF-MH collisions was expected, with more collisions at higher fibre and filler loadings causing lower melt flow properties.
Control of the permeability of fractures in geothermal rocks
NASA Astrophysics Data System (ADS)
Faoro, Igor
This thesis comprises three journal articles that will be submitted for publication (Journal of Geophysical Research-Solid Earth). Their respective titles are: "Undrained through Drained Evolution of Permeability in Dual Permeability Media" by Igor Faoro, Derek Elsworth and Chris Marone; "Evolution of Stiffness and Permeability in Fractures Subject to Thermally- and Mechanically-Activated Dissolution" by Igor Faoro, Derek Elsworth and Chris Marone; and "Linking permeability and mechanical damage for basalt from Mt. Etna volcano (Italy)" by Igor Faoro, Sergio Vinciguerra, Chris Marone and Derek Elsworth. Undrained through Drained Evolution of Permeability in Dual Permeability Media: temporary permeability changes of fractured aquifers subject to earthquakes have been observed and recorded worldwide, but their comprehension remains a complex issue. In this study we report on flow-through fracture experiments on cracked Westerly granite cores that reproduce, at the laboratory scale, the step-like permeability changes that have been recorded when earthquakes occur. In particular, our experiments show that under specific test boundary conditions, rapid increments of pore pressure induce transient variations in the flow rate of the fracture, whose peak magnitudes decrease as the variations in effective stress increase. We identify two principal mechanisms behind the observed hydraulic behavior of the fracture: one mechanical (shortening of the core) and one poro-elastic (radial diffusion of the pore fluid into the matrix of the sample), whose interaction causes, respectively, an instantaneous opening and then a progressive closure of the fracture. Evolution of Stiffness and Permeability in Fractures Subject to Thermally- and Mechanically-Activated Dissolution: we report the results of radial flow-through experiments conducted on heated samples of Westerly granite.
These experiments are performed to examine the influence of thermally and mechanically activated dissolution on the mechanical (stiffness) and transport (stress-permeability) characteristics of fractures. The sample is thermally stressed to 80 °C and measurements of the constrained axial stress acting on the sample and of the flow rate of the fracture are recorded with time. Net efflux of dissolved mineral mass is also measured periodically to provide a record of rates of net mass removal. During the experiment the fracture permeability shows high sensitivity to the changing conditions of stress and temperature, but no significant permanent variation of permeability has been recorded once the thermal cycle ends. Linking permeability and mechanical damage for basalt from Mt. Etna volcano (Italy): volcanic edifices, such as Mt. Etna volcano (Italy), are affected by repeated episodes of pressurization due to magma emplacement from deep reservoirs to shallow depths. This mechanism pressurizes the large aquifers within the edifice and increases the level of crack damage within the rocks of the edifice over extended periods of time. In order to improve our understanding of the complex coupling between circulating fluids and the development of crack damage, we performed flow-through tests using cylindrical cores of Etna basalt (Etna, Italy) cyclically loaded either by constant increments of the principal stress sigma1 (deviatoric conditions) or by increments of the effective confining pressure, sigma1 = sigma2 = sigma3 (isostatic conditions). Under hydrostatic stresses, the permeability values of the intact sample decrease linearly with the increments of pressure and range between 5.2 × 10^-17 m^2 and 1.5 × 10^-17 m^2. At deviatoric stresses (up to 60 MPa) the permeability slightly decays from the initial value of 5 × 10^-17 m^2 to the minimum value of 2 × 10^-17 m^2, observed when the axial deviatoric stresses range between 40 MPa and 60 MPa.
For higher deviatoric stresses, increases to 10^-16 m^2 are then observed up to the peak stress at 92 MPa. After failure the permeability remained steady at a value of 8 × 10^-16 m^2 for the whole duration of the test, independently of the applied stress. We interpret the observed decrease as due to the progressive closure of the void space as the axial load is incremented.
Parallel adaptive discontinuous Galerkin approximation for thin layer avalanche modeling
NASA Astrophysics Data System (ADS)
Patra, A. K.; Nichita, C. C.; Bauer, A. C.; Pitman, E. B.; Bursik, M.; Sheridan, M. F.
2006-08-01
This paper describes the development of highly accurate adaptive discontinuous Galerkin schemes for the solution of the equations arising from a thin layer type model of debris flows. Such flows have wide applicability in the analysis of avalanches induced by many natural calamities, e.g. volcanoes, earthquakes, etc. These schemes are coupled with special parallel solution methodologies to produce a simulation tool capable of very high-order numerical accuracy. The methodology successfully replicates cold rock avalanches at Mount Rainier, Washington and hot volcanic particulate flows at Colima Volcano, Mexico.
Gallart, F; Llorens, P; Latron, J; Cid, N; Rieradevall, M; Prat, N
2016-09-15
Hydrological data for assessing the regime of temporary rivers are often non-existent or scarce. The scarcity of flow data makes it impossible to characterize the hydrological regime of temporary streams and, in consequence, to select the correct periods and methods to determine their ecological status. This is why the TREHS software is being developed, in the framework of the LIFE Trivers project. It will help managers to implement the European Water Framework Directive adequately in this kind of water body. TREHS, using the methodology described in Gallart et al. (2012), defines six transient 'aquatic states', based on hydrological conditions representing different mesohabitats, for a given reach at a particular moment. Because of its qualitative nature, this approach allows using alternative methodologies to assess the regime of temporary rivers when there are no observed flow data. These methods, based on interviews and high-resolution aerial photographs, were tested for estimating the aquatic regime of temporary rivers. All the gauging stations (13) belonging to the Catalan Internal Catchments (NE Spain) with recurrent zero-flow periods were selected to validate this methodology. On the one hand, non-structured interviews were conducted with inhabitants of villages near the gauging stations. On the other hand, the historical series of available orthophotographs were examined. Flow records measured at the gauging stations were used to validate the alternative methods. Flow permanence in the reaches was estimated reasonably by the interviews and adequately by aerial photographs, when compared with the values estimated using daily flows. The degree of seasonality was assessed only roughly by the interviews. The recurrence of disconnected pools was not detected by flow records but was estimated, with some divergences, by the two methods.
The combination of the two alternative methods allows substituting or complementing flow records, to be updated in the future through monitoring by professionals and citizens. Copyright © 2016 Elsevier B.V. All rights reserved.
Audebert, M; Oxarango, L; Duquennoi, C; Touze-Foltz, N; Forquet, N; Clément, R
2016-09-01
Leachate recirculation is a key process in the operation of municipal solid waste landfills as bioreactors. To ensure optimal water content distribution, bioreactor operators need tools to design leachate injection systems. Prediction of leachate flow by subsurface flow modelling could provide useful information for the design of such systems. However, hydrodynamic models require additional data to constrain them and to assess hydrodynamic parameters. Electrical resistivity tomography (ERT) is a suitable method to study leachate infiltration at the landfill scale. It can provide spatially distributed information which is useful for constraining hydrodynamic models. However, this geophysical method does not allow ERT users to measure water content in waste directly. The MICS (multiple inversions and clustering strategy) methodology was proposed to delineate the infiltration area precisely during time-lapse ERT surveys, in order to avoid the use of empirical petrophysical relationships, which are not adapted to a medium as heterogeneous as waste. The infiltration shapes and hydrodynamic information extracted with MICS were used to constrain hydrodynamic models in assessing parameters. The constraint methodology developed in this paper was tested on two hydrodynamic models: an equilibrium model, where flow within the waste medium is estimated using a single-continuum approach, and a non-equilibrium model, where flow is estimated using a dual-continuum approach; the latter represents leachate flow into fractures. Finally, this methodology provides insight to identify the advantages and limitations of hydrodynamic models. Furthermore, we suggest an explanation for the large volume detected by MICS when a small volume of leachate is injected. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sim, Jennifer A; Horowitz, M; Summers, M J; Trahair, L G; Goud, R S; Zaknic, A V; Hausken, T; Fraser, J D; Chapman, M J; Jones, K L; Deane, A M
2013-02-01
To compare nutrient-stimulated changes in superior mesenteric artery (SMA) blood flow, glucose absorption and glycaemia in individuals older than 65 years with, and without, critical illness. Following a 1-h 'observation' period (t0-t60), 0.9 % saline and glucose (1 kcal/ml) were infused directly into the small intestine at 2 ml/min between t60-t120 and t120-t180, respectively. SMA blood flow was measured using Doppler ultrasonography at t60 (fasting), t90 and t150 and is presented as raw values and nutrient-stimulated increment from baseline (Δ). Glucose absorption was evaluated using serum 3-O-methylglucose (3-OMG) concentrations during, and for 1 h after, the glucose infusion (i.e. t120-t180 and t120-t240). Mean arterial pressure was recorded between t60-t240. Data are presented as median (25th, 75th percentile). Eleven mechanically ventilated critically ill patients [age 75 (69, 79) years] and nine healthy volunteers [70 (68, 77) years] were studied. The magnitude of the nutrient-stimulated increase in SMA flow was markedly less in the critically ill when compared with healthy subjects [Δt150: patients 115 (-138, 367) versus health 836 (618, 1,054) ml/min; P = 0.001]. In patients, glucose absorption was reduced during, and for 1 h after, the glucose infusion when compared with health [AUC120-180: 4.571 (2.591, 6.551) versus 11.307 (8.447, 14.167) mmol/l min; P < 0.001 and AUC120-240: 26.5 (17.7, 35.3) versus 40.6 (31.7, 49.4) mmol/l min; P = 0.031]. A close relationship between the nutrient-stimulated increment in SMA flow and glucose absorption was evident (3-OMG AUC120-180 and ΔSMA flow at t150: r^2 = 0.29; P < 0.05). In critically ill patients aged >65 years, stimulation of SMA flow by small intestinal glucose infusion may be attenuated, which could account for the reduction in glucose absorption.
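An AUC measure like the one used here to quantify absorption is typically a trapezoid-rule integral of the concentration-time curve. A minimal sketch, with made-up sample times and concentrations (not the study's data):

```python
def auc_trapezoid(times, conc):
    """Trapezoid-rule area under a concentration-time curve (e.g. mmol/l min)."""
    return sum(
        0.5 * (conc[i] + conc[i + 1]) * (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    )

t = [120, 140, 160, 180]      # sampling times, minutes (illustrative)
c = [0.00, 0.15, 0.22, 0.25]  # serum concentrations, mmol/l (illustrative)
area = auc_trapezoid(t, c)    # AUC over t120-t180
```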
Theoretical analysis of non-Gaussian heterogeneity effects on subsurface flow and transport
NASA Astrophysics Data System (ADS)
Riva, Monica; Guadagnini, Alberto; Neuman, Shlomo P.
2017-04-01
Much of the stochastic groundwater literature is devoted to the analysis of flow and transport in Gaussian or multi-Gaussian log hydraulic conductivity (or transmissivity) fields, Y(x) = ln K(x) (x being a position vector), characterized by one or (less frequently) a multiplicity of spatial correlation scales. Yet Y and many other variables and their (spatial or temporal) increments, ΔY, are known to be generally non-Gaussian. One common manifestation of non-Gaussianity is that whereas frequency distributions of Y often exhibit mild peaks and light tails, those of increments ΔY are generally symmetric with peaks that grow sharper, and tails that become heavier, as the separation scale or lag between pairs of Y values decreases. A statistical model that captures these disparate, scale-dependent distributions of Y and ΔY in a unified and consistent manner has recently been proposed by us. This new "generalized sub-Gaussian (GSG)" model has the form Y(x) = U(x)G(x), where G(x) is (generally, but not necessarily) a multiscale Gaussian random field and U(x) is a nonnegative subordinator independent of G. The purpose of this paper is to explore analytically, in an elementary manner, the lead-order effects that the non-Gaussian heterogeneity described by the GSG model has on the stochastic description of flow and transport. Recognizing that the perturbation expansion of hydraulic conductivity K = e^Y diverges when Y is sub-Gaussian, we render the expansion convergent by truncating Y's domain of definition. We then demonstrate theoretically and illustrate by way of numerical examples that, as the domain of truncation expands, (a) the variance of truncated Y (denoted by Yt) approaches that of Y and (b) the pdf (and thereby moments) of Yt increments approach those of Y increments and, as a consequence, the variogram of Yt approaches that of Y.
This in turn guarantees that perturbing Kt = e^Yt to second order in σYt (the standard deviation of Yt) yields results which approach those we obtain upon perturbing K = e^Y to second order in σY, even as the corresponding series diverges. Our analysis is rendered mathematically tractable by considering mean-uniform steady state flow in an unbounded, two-dimensional domain of mildly heterogeneous Y with a single-scale function G having an isotropic exponential covariance. Results consist of expressions for (a) lead-order autocovariance and cross-covariance functions of hydraulic head, velocity, and advective particle displacement and (b) analogues of preasymptotic as well as asymptotic Fickian dispersion coefficients. We compare these theoretically and graphically with corresponding expressions developed in the literature for Gaussian Y. We find the former to differ from the latter by a factor k =
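As a toy illustration of the GSG form Y(x) = U(x)G(x), one can subordinate a correlated Gaussian field with an independent nonnegative multiplier and check that increments of Y pick up heavier-than-Gaussian tails. The choices below (an AR(1) recursion for an exponential-covariance G on a 1-D grid, a lognormal U) are assumptions made for this sketch, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(42)
n, corr_len = 5000, 10.0

# G: stationary Gaussian field with exponential covariance via an AR(1) recursion.
rho = np.exp(-1.0 / corr_len)
g = np.empty(n)
g[0] = rng.standard_normal()
for i in range(1, n):
    g[i] = rho * g[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

# U: nonnegative subordinator, independent of G (lognormal here, illustrative).
u = rng.lognormal(mean=0.0, sigma=0.3, size=n)

y = u * g  # one GSG sample

# Unit-lag increments of Y should show heavier tails than a Gaussian field's:
dy = np.diff(y)
excess_kurtosis = np.mean(dy**4) / np.mean(dy**2) ** 2 - 3.0  # > 0 expected
```

Sharper increment peaks and heavier increment tails at small lag are exactly the qualitative behaviour the GSG model is built to capture.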
NASA Astrophysics Data System (ADS)
Most, Sebastian; Nowak, Wolfgang; Bijeljic, Branko
2015-04-01
Fickian transport in groundwater flow is the exception rather than the rule. Transport in porous media is frequently simulated via particle methods (i.e. particle tracking random walk (PTRW) or continuous time random walk (CTRW)). These methods formulate transport as a stochastic process of particle position increments. At the pore scale, geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Hence, it is important to get a better understanding of the processes at the pore scale. For our analysis we track the positions of 10,000 particles migrating through the pore space over time. The data we use come from micro-CT scans of a homogeneous sandstone and encompass about 10 grain sizes. Based on those images we discretize the pore structure and simulate flow at the pore scale based on the Navier-Stokes equations. This flow field realistically describes flow inside the pore space, and we do not need to add artificial dispersion during the transport simulation. Next, we use particle tracking random walk to simulate pore-scale transport. Finally, we use the obtained particle trajectories to perform a multivariate statistical analysis of the particle motion at the pore scale. Our analysis is based on copulas. Every multivariate joint distribution is a combination of its univariate marginal distributions. The copula represents the dependence structure of those univariate marginals and is therefore useful for observing correlation and non-Gaussian interactions (i.e. non-Fickian transport). The first goal of this analysis is to better understand the validity regions of commonly made assumptions. We are investigating three different transport distances: 1) the distance where the statistical dependence between particle increments can be modelled as an order-one Markov process.
This would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks start. 2) The distance where bivariate statistical dependence simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW/CTRW). 3) The distance of complete statistical independence (validity of classical PTRW/CTRW). The second objective is to reveal characteristic dependencies influencing transport the most. Those dependencies can be very complex. Copulas are highly capable of representing linear dependence as well as non-linear dependence. With that tool we are able to detect persistent characteristics dominating transport even across different scales. The results derived from our experimental data set suggest that there are many more non-Fickian aspects of pore-scale transport than the univariate statistics of longitudinal displacements. Non-Fickianity can also be found in transverse displacements, and in the relations between increments at different time steps. Also, the found dependence is non-linear (i.e. beyond simple correlation) and persists over long distances. Thus, our results strongly support the further refinement of techniques like correlated PTRW or correlated CTRW towards non-linear statistical relations.
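The copula-based view of increment dependence can be sketched by rank-transforming successive increments to uniform margins. The synthetic AR(1) increments below stand in for the CT-derived trajectories and are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic longitudinal increments with order-one Markov dependence (AR(1)).
n, phi = 10_000, 0.6
eps = rng.standard_normal(n)
inc = np.empty(n)
inc[0] = eps[0]
for t in range(1, n):
    inc[t] = phi * inc[t - 1] + eps[t]

def to_uniform_ranks(v):
    """Map each value to its normalized rank in (0, 1) -- the copula margin."""
    order = np.argsort(np.argsort(v))
    return (order + 1) / (len(v) + 1)

# Pairs (u_t, u_{t+1}) sample the empirical copula of successive increments.
u0 = to_uniform_ranks(inc[:-1])
u1 = to_uniform_ranks(inc[1:])
spearman = np.corrcoef(u0, u1)[0, 1]  # rank (Spearman) correlation of the pair
```

Plotting (u0, u1) as a scatter visualizes the copula density directly; departures from the Gaussian-copula shape are the non-linear, non-Fickian dependencies discussed above.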
A Step-by-Step Design Methodology for a Base Case Vanadium Redox-Flow Battery
ERIC Educational Resources Information Center
Moore, Mark; Counce, Robert M.; Watson, Jack S.; Zawodzinski, Thomas A.; Kamath, Haresh
2012-01-01
The purpose of this work is to develop an evolutionary procedure to be used by Chemical Engineering students for the base-case design of a Vanadium Redox-Flow Battery. The design methodology is based on the work of Douglas (1985) and provides a profitability analysis at each decision level so that more profitable alternatives and directions can be…
E.M. (Ted) Bilek
2007-01-01
The model ChargeOut! was developed to determine charge-out rates or rates of return for machines and capital equipment. This paper introduces a costing methodology and applies it to a piece of capital equipment. Although designed for the forest industry, the methodology is readily transferable to other sectors. Based on discounted cash-flow analysis, ChargeOut!...
Using Six Sigma and Lean methodologies to improve OR throughput.
Fairbanks, Catharine B
2007-07-01
Improving patient flow in the perioperative environment is challenging, but it has positive implications for both staff members and the facility. One facility in Vermont improved patient throughput by incorporating Six Sigma and Lean methodologies for patients undergoing elective procedures. The results of the project were significantly improved patient flow and increased teamwork and pride among perioperative staff members. (c) AORN, Inc, 2007.
Eiken, Ola; Mekjavic, Igor B; Kölegård, Roger
2014-03-01
Recent studies are reviewed, concerning the in vivo wall stiffness of arteries and arterioles in healthy humans, and how these properties adapt to iterative increments or sustained reductions in local intravascular pressure. A novel technique was used, by which arterial and arteriolar stiffness was determined as changes in arterial diameter and flow, respectively, during graded increments in distending pressure in the blood vessels of an arm or a leg. Pressure-induced increases in diameter and flow were smaller in the lower leg than in the arm, indicating greater stiffness in the arteries/arterioles of the leg. A 5-week period of intermittent intravascular pressure elevations in one arm reduced pressure distension and pressure-induced flow in the brachial artery by about 50%. Conversely, prolonged reduction of arterial/arteriolar pressure in the lower body by 5 weeks of sustained horizontal bedrest induced threefold increases of the pressure-distension and pressure-flow responses in a tibial artery. Thus, the wall stiffness of arteries and arterioles is a plastic property that readily adapts to changes in the prevailing local intravascular pressure. The discussion concerns mechanisms underlying changes in local arterial/arteriolar stiffness, as well as whether stiffness is altered by changes in myogenic tone and/or wall structure. As regards implications, regulation of local arterial/arteriolar stiffness may facilitate control of arterial pressure in erect posture and in conditions of exaggerated intravascular pressure gradients. That increased intravascular pressure leads to increased arteriolar wall stiffness also supports the notion that local pressure loading may constitute a prime mover in the development of vascular changes in hypertension.
NASA Technical Reports Server (NTRS)
Keye, Stefan; Togiti, Vamish; Eisfeld, Bernhard; Brodersen, Olaf P.; Rivers, Melissa B.
2013-01-01
The accurate calculation of aerodynamic forces and moments is of significant importance during the design phase of an aircraft. Reynolds-averaged Navier-Stokes (RANS) based Computational Fluid Dynamics (CFD) has developed strongly over the last two decades regarding robustness, efficiency, and capabilities for aerodynamically complex configurations. Incremental aerodynamic coefficients of different designs can be calculated with acceptable reliability at the cruise design point of transonic aircraft for non-separated flows. But for absolute values, as well as for increments at off-design conditions, significant challenges still exist in computing aerodynamic data and the underlying flow physics with the required accuracy. In addition to drag, pitching moments are difficult to predict because small deviations of the pressure distributions, e.g., due to neglecting wing bending and twisting caused by the aerodynamic loads, can result in large discrepancies compared to experimental data. Flow separations that start to develop at off-design conditions, e.g., in corner flows, at trailing edges, or shock-induced, can also have a strong impact on the predictions of aerodynamic coefficients. Based on these challenges faced by the CFD community, a working group of the AIAA Applied Aerodynamics Technical Committee initiated the CFD Drag Prediction Workshop (DPW) series in 2001, resulting in five international workshops. The results of the participants and the committee are summarized in more than 120 papers. The latest, fifth workshop took place in June 2012 in conjunction with the 30th AIAA Applied Aerodynamics Conference. This paper evaluates the influence of static aeroelastic wing deformations on pressure distributions and overall aerodynamic coefficients based on the NASA finite element structural model and the common grids.
Quantifying the effect of complications on patient flow, costs and surgical throughputs.
Almashrafi, Ahmed; Vanderbloemen, Laura
2016-10-21
Postoperative adverse events are known to increase length of stay and cost. However, research on how adverse events affect patient flow and operational performance has been relatively limited to date. Moreover, there is a paucity of studies on the use of simulation in understanding the effect of complications on care processes and resources. In hospitals with scarce resources, postoperative complications can exert a substantial influence on hospital throughputs. This paper describes an evaluation method for assessing the effect of complications on patient flow within a cardiac surgical department. The method is illustrated by a case study in which actual patient-level data are incorporated into a discrete event simulation (DES) model. The DES model uses patient data obtained from a large hospital in Oman to quantify the effect of complications on patient flow, costs and surgical throughputs. We evaluated the incremental increase in resources due to treatment of complications using Poisson regression. Several types of complications were examined, such as cardiac, pulmonary, infection and neurological complications. 48% of the patients in our dataset experienced one or more complications. The most common complications were ventricular arrhythmia (16%), followed by new atrial arrhythmia (15.5%) and prolonged ventilation longer than 24 h (12.5%). The total number of additional days associated with infections was the highest, while cardiac complications resulted in the lowest number of incremental days of hospital stay. Complications had a significant effect on perioperative operational performance, such as surgery cancellations and waiting time. The effect was most profound when complications occurred in the Cardiac Intensive Care Unit (CICU), where capacity was limited. The study provides evidence supporting the need to incorporate adverse events data in resource planning to improve hospital performance.
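The Poisson-regression step described above can be illustrated with a toy calculation (the lengths of stay below are hypothetical, not from the Oman dataset): for a Poisson log-linear model with a single binary covariate such as complication yes/no, the maximum-likelihood rate ratio reduces to the ratio of the two group means.

```python
import math

# Toy version of the Poisson-regression idea (hypothetical lengths of stay, not
# the study data): with a single 0/1 covariate, the MLE of the Poisson rate
# ratio exp(beta1) is simply mean(y | x=1) / mean(y | x=0).

def poisson_rate_ratio(y_no, y_yes):
    """Rate ratio exp(beta1) and coefficient beta1 for a binary covariate."""
    mu0 = sum(y_no) / len(y_no)
    mu1 = sum(y_yes) / len(y_yes)
    return mu1 / mu0, math.log(mu1 / mu0)

los_no_complication = [5, 6, 4, 7, 5, 6]      # postoperative days, no complication
los_complication = [9, 12, 10, 14, 11, 10]    # postoperative days, complication
ratio, beta = poisson_rate_ratio(los_no_complication, los_complication)
print(round(ratio, 2), round(beta, 2))  # → 2.0 0.69
```

A rate ratio of 2 would read as complications doubling the expected length of stay; in the full study, multiple complication types enter the model as separate covariates.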
Why do modelled and observed surface wind stress climatologies differ in the trade wind regions?
NASA Astrophysics Data System (ADS)
Simpson, I.; Bacmeister, J. T.; Sandu, I.; Rodwell, M. J.
2017-12-01
Global climate models (GCMs) exhibit stronger easterly zonal surface wind stress and near surface winds in the Northern Hemisphere (NH) trade winds than observationally constrained reanalyses or other observational products. A comparison, between models and reanalyses, of the processes that contribute to the zonal mean, vertically integrated balance of momentum reveals that this wind stress discrepancy cannot be explained by either the resolved dynamics or parameterized tendencies that are common to each. Rather, a substantial residual exists in the momentum balance of the reanalyses, pointing toward a role for the analysis increments. Indeed, they are found to systematically weaken the NH near surface easterlies in winter, thereby reducing the surface wind stress. Similar effects are found in the Southern Hemisphere, and further analysis of the spatial structure and seasonality of these increments demonstrates that they act to weaken the near surface flow over much of the low latitude oceans in both summer and winter. This suggests an erroneous or missing process in GCMs that constitutes a missing drag on the low level zonal flow over oceans. This either indicates a misrepresentation of the drag between the surface and the atmosphere, or a missing internal atmospheric process that amounts to an additional drag on the low level zonal flow. If the former is true, then observation based surface stress products, which rely on similar drag formulations to GCMs, may be underestimating the strength of the easterly surface wind stress.
Cardiovascular Responses of Snakes to Gravitational Gradients
NASA Technical Reports Server (NTRS)
Hsieh, Shi-Tong T.; Lillywhite, H. B.; Ballard, R. E.; Hargens, A. R.; Holton, Emily M. (Technical Monitor)
1998-01-01
Snakes are useful vertebrates for studies of gravitational adaptation, owing to their elongate bodies and behavioral diversification. Scansorial species have evolved specializations for regulating hemodynamics during exposure to gravitational stress, whereas such adaptations are less well developed in aquatic and non-climbing species. We examined responses of the amphibious snake Nerodia rhombifera to increments of Gz (head-to-tail) acceleration force on both a short- and a long-arm centrifuge (1.5 vs. 3.7 m radius, from the hub to the tail end of the snake). We recorded heart rate, dorsal aortic pressure, and carotid arterial blood flow during stepwise 0.25 G increments of Gz force (referenced at the tail) in conscious animals. The Gz tolerance of a snake was determined as the Gz level at which carotid blood flow ceased and was found to be significantly greater at the short- than the long-arm centrifuge radius (2.0 Gz vs. 1.57 Gz, respectively; P=0.016). A similar pattern of response was demonstrated in semi-arboreal rat snakes, Elaphe obsoleta, which are generally more tolerant of Gz force (2.6 Gz at 1.5 m radius) than are water snakes. The tolerance differences of the two species reflected cardiovascular responses, which differed quantitatively but not qualitatively: heart rates increased while arterial pressure and blood flow decreased in response to increasing levels of Gz. Thus, in both species of snakes, a reduced gradient of Gz force (associated with greater centrifuge radius) significantly decreases the Gz level that can be tolerated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malcolm Pitts; Jie Qi; Dan Wilson
2005-04-01
Gelation technologies have been developed to provide more efficient vertical sweep efficiencies for flooding naturally fractured oil reservoirs or more efficient areal sweep efficiency for those with high permeability contrast ''thief zones''. The field proven alkaline-surfactant-polymer technology economically recovers 15% to 25% OOIP more oil than waterflooding from the swept pore space of an oil reservoir. However, alkaline-surfactant-polymer technology is not amenable to naturally fractured reservoirs or those with thief zones because much of the injected solution bypasses the target pore space containing oil. This work investigates whether combining these two technologies could broaden the applicability of alkaline-surfactant-polymer flooding into these reservoirs. A prior fluid-fluid report discussed the interaction of different gel chemical compositions and alkaline-surfactant-polymer solutions. Gel solutions under the dynamic conditions of linear corefloods showed similar stability to alkaline-surfactant-polymer solutions as in the fluid-fluid analyses. Aluminum-polyacrylamide flowing gels are not stable to alkaline-surfactant-polymer solutions of either pH 10.5 or 12.9. Chromium acetate-polyacrylamide flowing and rigid flowing gels are stable to subsequent alkaline-surfactant-polymer solution injection. Rigid flowing chromium acetate-polyacrylamide gels maintained permeability reduction better than flowing chromium acetate-polyacrylamide gels. Silicate-polyacrylamide gels are not stable with subsequent injection of either a pH 10.5 or a pH 12.9 alkaline-surfactant-polymer solution. Chromium acetate-xanthan gum rigid gels are not stable to subsequent alkaline-surfactant-polymer solution injection. Resorcinol-formaldehyde gels were stable to subsequent alkaline-surfactant-polymer solution injection. When evaluated in a dual core configuration, injected fluid flows into the core with the greatest effective permeability to the injected fluid.
The same gel stability trends with respect to subsequently injected alkaline-surfactant-polymer solution were observed. Aluminum citrate-polyacrylamide, resorcinol-formaldehyde, and the silicate-polyacrylamide gel systems did not produce significant incremental oil in linear corefloods. Both flowing and rigid flowing chromium acetate-polyacrylamide gels and the xanthan gum-chromium acetate gel system produced incremental oil, with the rigid flowing gel producing the greatest amount. The higher oil recovery could have been due to higher differential pressures across the cores. None of the gels tested appeared to alter alkaline-surfactant-polymer solution oil recovery. Total oil recoveries from the waterflood plus chemical flood sequences were all similar.
Using scan statistics for congenital anomalies surveillance: the EUROCAT methodology.
Teljeur, Conor; Kelly, Alan; Loane, Maria; Densem, James; Dolk, Helen
2015-11-01
Scan statistics have been used extensively to identify temporal clusters of health events. We describe the temporal cluster detection methodology adopted by the EUROCAT (European Surveillance of Congenital Anomalies) monitoring system. Since 2001, EUROCAT has implemented a variable window width scan statistic for detecting unusual temporal aggregations of congenital anomaly cases. The scan windows are based on numbers of cases rather than being defined by time. The methodology is embedded in the EUROCAT Central Database for annual application to centrally held registry data. The methodology was incrementally adapted to improve its utility and to address statistical issues. Simulation exercises were used to determine the power of the methodology to identify periods of raised risk (of 1-18 months). In order to operationalize the scan methodology, a number of adaptations were needed, including: estimating date of conception as the unit of time; deciding the maximum length (in time) and recency of clusters of interest; reporting of multiple and overlapping significant clusters; replacing the Monte Carlo simulation with a lookup table to reduce computation time; and placing a threshold on underlying population change and estimating the false positive rate by simulation. Exploration of power found that raised risk periods lasting 1 month are unlikely to be detected except when the relative risk and case counts are high. The variable window width scan statistic is a useful tool for the surveillance of congenital anomalies. Numerous adaptations have improved the utility of the original methodology in the context of temporal cluster detection in congenital anomalies.
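The idea of a case-count-based scan window, as opposed to a window defined by calendar time, can be sketched as follows (illustrative case dates, not EUROCAT registry data): for each window of k consecutive cases, the statistic is the time span those cases cover, and unusually short spans suggest temporal clustering.

```python
# Sketch of a case-count-based scan window (hypothetical case dates): scan over
# all windows of k consecutive cases and record the shortest time span covered.

def shortest_span(case_times, k):
    """Minimal time span covering any k consecutive cases (after sorting)."""
    ts = sorted(case_times)
    return min(ts[i + k - 1] - ts[i] for i in range(len(ts) - k + 1))

# Estimated months of conception for 10 hypothetical cases.
cases = [1, 4, 9, 10, 10, 11, 18, 25, 31, 40]
print(shortest_span(cases, 4))  # → 2: four cases fall within a 2-month span
```

In the EUROCAT setting, the observed span would then be compared against its null distribution (originally via Monte Carlo simulation, later via a lookup table) to assign significance.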
Radioactive waste disposal fees-Methodology for calculation
NASA Astrophysics Data System (ADS)
Bemš, Július; Králík, Tomáš; Kubančák, Ján; Vašíček, Jiří; Starý, Oldřich
2014-11-01
This paper summarizes the methodological approach used for calculating the fees for low- and intermediate-level radioactive waste disposal and for spent fuel disposal. The methodology itself is based on simulating the cash flows related to the operation of the waste disposal system. The paper includes a demonstration of the methodology applied to the conditions of the Czech Republic.
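A minimal sketch of the cash-flow principle behind such a fee calculation (the figures and the break-even formulation are illustrative assumptions, not the Czech methodology itself): choose the per-unit fee so that the present value of fee revenue matches the present value of disposal-system costs.

```python
# Break-even sketch of the cash-flow idea (illustrative numbers; the actual
# methodology simulates the disposal system's cash flows in far more detail):
# set the per-unit fee so that discounted fee revenue covers discounted costs.

def breakeven_fee(costs, volumes, rate):
    """Fee per unit of waste equating the present values of revenue and cost."""
    pv_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    pv_volume = sum(v / (1 + rate) ** t for t, v in enumerate(volumes))
    return pv_cost / pv_volume

annual_costs = [100.0, 100.0, 100.0]   # disposal-system costs per year
annual_volumes = [10.0, 10.0, 10.0]    # waste accepted per year
print(round(breakeven_fee(annual_costs, annual_volumes, 0.03), 6))  # → 10.0
```

With proportional costs and volumes the fee equals the cost per unit; in practice the cost and volume profiles differ over decades, which is where discounting matters.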
Dervaux, Benoît; Baseilhac, Eric; Fagon, Jean-Yves; Biot, Claire; Blachier, Corinne; Braun, Eric; Debroucker, Frédérique; Detournay, Bruno; Ferretti, Carine; Granger, Muriel; Jouan-Flahault, Chrystel; Lussier, Marie-Dominique; Meyer, Arlette; Muller, Sophie; Pigeon, Martine; De Sahb, Rima; Sannié, Thomas; Sapède, Claudine; Vray, Muriel
2014-01-01
Decree No. 2012-1116 of 2 October 2012 on the medico-economic assignments of the French National Authority for Health (Haute autorité de santé, HAS) significantly alters the conditions for accessing the health products market in France. This paper presents a theoretical framework for interpreting the results of the economic evaluation of health technologies and summarises the facts available in France for developing benchmarks that will be used to interpret incremental cost-effectiveness ratios. This literature review shows that it is difficult to determine a threshold value, but it is also difficult to interpret the incremental cost-effectiveness ratio (ICER) results without one. In this context, round table participants favour a pragmatic approach built on "benchmarks" rather than on a single threshold value, adopted from an interpretative and normative perspective, i.e. benchmarks that can change over time in light of feedback. © 2014 Société Française de Pharmacologie et de Thérapeutique.
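For reference, the ICER that such benchmarks are meant to help interpret is a simple incremental ratio; a minimal sketch with hypothetical costs and QALYs (not figures from the round table):

```python
# Minimal ICER sketch with hypothetical figures: incremental cost of a new
# therapy over its comparator, divided by the incremental health effect.

def icer(cost_new, eff_new, cost_old, eff_old):
    """Incremental cost-effectiveness ratio: (delta cost) / (delta effect)."""
    return (cost_new - cost_old) / (eff_new - eff_old)

# New therapy: 30,000 EUR and 8.5 QALYs; comparator: 18,000 EUR and 8.0 QALYs.
value = icer(30000.0, 8.5, 18000.0, 8.0)
print(value)  # → 24000.0 (EUR per QALY gained)
```

A benchmark-based reading would then compare this figure against reference values that may evolve over time, rather than against a single fixed threshold.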
[Parameter of evidence-based medicine in health care economics].
Wasem, J; Siebert, U
1999-08-01
In view of the scarcity of resources, economic evaluations in health care, in which not only the effects but also the costs of a medical intervention are examined and an incremental cost-outcome ratio is constructed, are an important supplement to the program of evidence-based medicine. Outcomes of a medical intervention can be measured by clinical effectiveness, quality-adjusted life years, and monetary valuation of benefits. As far as costs are concerned, direct medical costs, direct non-medical costs and indirect costs have to be considered in an economic evaluation. Data can be drawn from primary studies or secondary analyses; meta-analysis may be appropriate for synthesizing data. For the calculation of incremental cost-benefit ratios, decision-analytic models (decision tree models, Markov models) are often necessary. Methodological and ethical limits to applying the results of economic evaluations in resource allocation decisions in health care have to be kept in mind: economic evaluations and the calculation of cost-outcome ratios should only support decision making, not replace it.
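A minimal sketch of the Markov-model idea mentioned above (a hypothetical three-state cohort with assumed transition probabilities and per-cycle costs, purely for illustration):

```python
# Minimal Markov cohort sketch (all transition probabilities and per-cycle
# costs are illustrative assumptions): a cohort moves through Well / Sick /
# Dead states, and expected costs are accumulated cycle by cycle.

def run_markov(dist, transitions, cost_per_state, cycles):
    """Propagate a state distribution and sum the expected per-cycle costs."""
    total_cost = 0.0
    for _ in range(cycles):
        total_cost += sum(p * c for p, c in zip(dist, cost_per_state))
        dist = [sum(dist[i] * transitions[i][j] for i in range(len(dist)))
                for j in range(len(dist))]
    return dist, total_cost

P = [[0.90, 0.08, 0.02],   # from Well: stay well, fall ill, die
     [0.00, 0.85, 0.15],   # from Sick: stay sick or die
     [0.00, 0.00, 1.00]]   # Dead is absorbing
final, cost = run_markov([1.0, 0.0, 0.0], P, [100.0, 1000.0, 0.0], 2)
print([round(p, 4) for p in final], round(cost, 2))  # → [0.81, 0.14, 0.05] 270.0
```

Running the same model for two interventions with different transition probabilities or costs yields the incremental cost and effect terms that feed the cost-outcome ratio.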
Comparison of a 3-D CFD-DSMC Solution Methodology With a Wind Tunnel Experiment
NASA Technical Reports Server (NTRS)
Glass, Christopher E.; Horvath, Thomas J.
2002-01-01
A solution method for problems that contain both continuum and rarefied flow regions is presented. The methodology is applied to flow about the 3-D Mars Sample Return Orbiter (MSRO) that has a highly compressed forebody flow, a shear layer where the flow separates from a forebody lip, and a low density wake. Because blunt body flow fields contain such disparate regions, employing a single numerical technique to solve the entire 3-D flow field is often impractical, or the technique does not apply. Direct simulation Monte Carlo (DSMC) could be employed to solve the entire flow field; however, the technique requires inordinate computational resources for continuum and near-continuum regions, and is best suited for the wake region. Computational fluid dynamics (CFD) will solve the high-density forebody flow, but continuum assumptions do not apply in the rarefied wake region. The CFD-DSMC approach presented herein may be a suitable way to obtain a higher fidelity solution.
A methodology to reduce uncertainties in the high-flow portion of a rating curve
USDA-ARS?s Scientific Manuscript database
Flow monitoring at watershed scale relies on the establishment of a rating curve that describes the relationship between stage and flow and is developed from actual flow measurements at various stages. Measurement errors increase with out-of-bank flow conditions because of safety concerns and diffic...
Towards an entropy-based detached-eddy simulation
NASA Astrophysics Data System (ADS)
Zhao, Rui; Yan, Chao; Li, XinLiang; Kong, WeiXuan
2013-10-01
A concept of entropy increment ratio (s̄) is introduced for compressible turbulence simulation through a series of direct numerical simulations (DNS). s̄ represents the dissipation rate per unit mechanical energy, with the benefit of independence from the freestream Mach number. Based on this feature, we construct the shielding function f_s to describe the boundary layer region and propose an entropy-based detached-eddy simulation method (SDES). This approach follows the spirit of the delayed detached-eddy simulation (DDES) proposed by Spalart et al. in 2005, but it exhibits much better behavior when their performances are compared in the following flows: pure attached flow with a thick boundary layer (a supersonic flat-plate flow at high Reynolds number), fully separated flow (the supersonic base flow), and separated-reattached flow (the supersonic cavity-ramp flow). The Reynolds-averaged Navier-Stokes (RANS) resolved region is reliably preserved, and the modeled stress depletion (MSD) phenomenon inherent in DES and DDES is partly alleviated. Moreover, this new hybrid strategy is simple and general, making it applicable to other models related to boundary layer predictions.
Mechanical Limits to Size in Wave-Swept Organisms.
1983-11-10
complanata, the probability of destruction and the size-specific increase in the risk of destruction are both substantial. It is conjectured that the...barnacle, Semibalanus cariosus) the size-specific increment in the risk of destruction is small and the size limits imposed on these organisms are...constructed here provides an experimental approach to examining many potential effects of environmental stress caused by flowing water. For example, these
An evaluation of 2 new devices for nasal high-flow gas therapy.
Waugh, Jonathan B; Granger, Wesley M
2004-08-01
The traditional nasal cannula with bubble humidifier is limited to a maximum flow of 6 L/min to minimize the risk of complications. We conducted a bench study of 2 new Food and Drug Administration-approved nasal cannula/humidifier products designed to deliver flows > 6 L/min. Using a digital psychrometer, we measured the relative humidity and temperature of the gas delivered by each device, at 5 L/min increments over the specified functional high-flow range. The Salter Labs unit achieved 72.5-78.7% relative humidity (5-15 L/min range) at ambient temperature (21-23 degrees C). The Vapotherm device achieved 99.9% relative humidity at a temperature setting of 37 degrees C (5-40 L/min). Both devices meet minimum humidification standards and offer practical new treatment options. The patient-selection criteria are primarily the severity of the patient's condition and cost.
Energy Efficient Engine exhaust mixer model technology report addendum; phase 3 test program
NASA Technical Reports Server (NTRS)
Larkin, M. J.; Blatt, J. R.
1984-01-01
The Phase 3 exhaust mixer test program was conducted to explore the trends established during previous Phases 1 and 2. Combinations of mixer design parameters were tested. Phase 3 testing showed that the best performance achievable within tailpipe length and diameter constraints is 2.55 percent better than an optimized separate flow base line. A reduced penetration design achieved about the same overall performance level at a substantially lower level of excess pressure loss but with a small reduction in mixing. To improve reliability of the data, the hot and cold flow thrust coefficient analysis used in Phases 1 and 2 was augmented by calculating percent mixing from traverse data. Relative change in percent mixing between configurations was determined from thrust and flow coefficient increments. The calculation procedure developed was found to be a useful tool in assessing mixer performance. Detailed flow field data were obtained to facilitate calibration of computer codes.
SAMS Acceleration Measurements on Mir (NASA Increment 4)
NASA Technical Reports Server (NTRS)
DeLombard, Richard
1998-01-01
During NASA Increment 4 (January to May 1997), about 5 gigabytes of acceleration data were collected by the Space Acceleration Measurements System (SAMS) onboard the Russian Space Station, Mir. The data were recorded on 28 optical disks which were returned to Earth on STS-84. During this increment, SAMS data were collected in the Priroda module to support the Mir Structural Dynamics Experiment (MiSDE), the Binary Colloidal Alloy Tests (BCAT), Angular Liquid Bridge (ALB), Candle Flames in Microgravity (CFM), Diffusion Controlled Apparatus Module (DCAM), Enhanced Dynamic Load Sensors (EDLS), Forced Flow Flame Spreading Test (FFFT), Liquid Metal Diffusion (LMD), Protein Crystal Growth in Dewar (PCG/Dewar), Queen's University Experiments in Liquid Diffusion (QUELD), and Technical Evaluation of MIM (TEM). This report points out some of the salient features of the microgravity environment to which these experiments were exposed. Also documented are mission events of interest such as the docked phase of STS-84 operations, a Progress engine burn, Soyuz vehicle docking and undocking, and Progress vehicle docking. This report presents an overview of the SAMS acceleration measurements recorded by 10 Hz and 100 Hz sensor heads. The analyses included herein complement those presented in previous summary reports prepared by the Principal Investigator Microgravity Services (PIMS) group.
Amponsah, Isaac G; Lieffers, Victor J; Comeau, Philip G; Brockley, Robert P
2004-10-01
We examined how tree growth and hydraulic properties of branches and boles are influenced by periodic (about 6 years) and annual fertilization in two juvenile lodgepole pine (Pinus contorta Dougl. var. latifolia Engelm.) stands in the interior of British Columbia, Canada. Mean basal area (BA), diameter at breast height (DBH) and height increments and percent earlywood and sapwood hydraulic parameters of branches and boles were measured 7 or 8 years after the initial treatments at Sheridan Creek and Kenneth Creek. At Sheridan Creek, fertilization significantly increased BA and DBH increments, but had no effect on height increment. At Kenneth Creek, fertilization increased BA, but fertilized trees had significantly lower height increments than control trees. Sapwood permeability was greater in lower branches of repeatedly fertilized trees than in those of control trees. Sapwood permeabilities of the lower branches of trees in the control, periodic and annual treatments were 0.24 x 10^-12, 0.35 x 10^-12 and 0.45 x 10^-12 m^2 at Kenneth Creek; and 0.41 x 10^-12, 0.54 x 10^-12 and 0.65 x 10^-12 m^2 at Sheridan Creek, respectively. Annual fertilization tended to increase leaf specific conductivities and Huber values of the lower branches of trees at both study sites. We conclude that, in trees fertilized annually, the higher flow capacity of lower branches may reduce the availability of water to support annual growth of the leader and upper branches.
Global Artificial Boundary Conditions for Computation of External Flow Problems with Propulsive Jets
NASA Technical Reports Server (NTRS)
Tsynkov, Semyon; Abarbanel, Saul; Nordstrom, Jan; Ryabenkii, Viktor; Vatsa, Veer
1998-01-01
We propose new global artificial boundary conditions (ABC's) for computation of flows with propulsive jets. The algorithm is based on application of the difference potentials method (DPM). Previously, similar boundary conditions have been implemented for calculation of external compressible viscous flows around finite bodies. The proposed modification substantially extends the applicability range of the DPM-based algorithm. In the paper, we present the general formulation of the problem, describe our numerical methodology, and discuss the corresponding computational results. The particular configuration that we analyze is a slender three-dimensional body with boat-tail geometry and supersonic jet exhaust in a subsonic external flow under zero angle of attack. Similarly to the results obtained earlier for the flows around airfoils and wings, current results for the jet flow case corroborate the superiority of the DPM-based ABC's over standard local methodologies from the standpoints of accuracy, overall numerical performance, and robustness.
Capillary Flows Along Open Channel Conduits: The Open-Star Section
NASA Technical Reports Server (NTRS)
Weislogel, Mark; Geile, John; Chen, Yongkang; Nguyen, Thanh Tung; Callahan, Michael
2014-01-01
Capillary rise in tubes, channels, and grooves has received significant attention in the literature for over 100 years. In yet another incremental extension of such work, a transient capillary rise problem is solved for spontaneous flow along an interconnected array of open channels forming what is referred to as an 'open-star' section. This geometry possesses several attractive characteristics including passive phase separations and high diffusive gas transport. Despite the complex geometry, novel and convenient approximations for capillary pressure and viscous resistance enable closed form predictions of the flow. As part of the solution, a combined scaling approach is applied that identifies unsteady-inertial-capillary, convective-inertial-capillary, and visco-capillary transient regimes in a single parameter. Drop tower experiments are performed employing 3-D printed conduits to corroborate all findings.
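For orientation, the classical visco-capillary limit for a simple cylindrical tube, an assumption for illustration rather than the open-star conduit analyzed in the paper, follows the Washburn relation L(t) = sqrt(sigma R cos(theta) t / (2 mu)); a sketch with water-like properties:

```python
import math

# Washburn-type visco-capillary sketch for a plain cylindrical tube (the paper's
# open-star section has its own capillary-pressure and viscous-resistance
# approximations; this is only the textbook limiting case).

def washburn_length(sigma, radius, theta, mu, t):
    """Imbibition length L(t) = sqrt(sigma*R*cos(theta)*t / (2*mu))."""
    return math.sqrt(sigma * radius * math.cos(theta) * t / (2.0 * mu))

# Water-like properties: sigma = 0.072 N/m, R = 0.5 mm, theta = 0, mu = 1 mPa*s.
L = washburn_length(0.072, 0.5e-3, 0.0, 1.0e-3, 1.0)
print(round(L, 4))  # → 0.1342 m of travel after 1 s
```

The t^(1/2) scaling is the visco-capillary regime; the unsteady- and convective-inertial regimes mentioned above dominate at earlier times before viscosity takes over.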
Gravity flow of powder in a lunar environment. Part 2: Analysis of flow initiation
NASA Technical Reports Server (NTRS)
Pariseau, W. G.
1971-01-01
A small displacement-small strain finite element technique, utilizing the constant strain triangle and incremental constitutive equations for elastic-plastic media (nonhardening and obeying a Coulomb yield condition), was applied to the analysis of gravity flow initiation. This was done in a V-shaped hopper containing a powder under lunar environmental conditions. Three methods of loading were examined. Of the three, the method of computing the initial state of stress in a filled hopper prior to drawdown, by adding material to the hopper layer by layer, was the best. Results of the analysis of a typical hopper problem show that the initial state of stress, the elastic moduli, and the strength parameters have an important influence on material response subsequent to the opening of the hopper outlet.
Elasto-plastic flow in cracked bodies using a new finite element model. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Karabin, M. E., Jr.
1977-01-01
Cracked geometries were studied by finite element techniques with the aid of a new special element embedded at the crack tip. This model sought to accurately represent the singular stresses and strains associated with the elasto-plastic flow process. The present model was not restricted to a particular material type and did not predetermine a singularity. Rather, the singularity was treated as an unknown. For each step of the incremental process, the nodal degrees of freedom and the unknown singularity were found through minimization of an energy-like functional. The singularity and nodal degrees of freedom were determined by means of an iterative process.
An incremental block-line-Gauss-Seidel method for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Napolitano, M.; Walters, R. W.
1985-01-01
A block-line-Gauss-Seidel (LGS) method is developed for solving the incompressible and compressible Navier-Stokes equations in two dimensions. The method requires only one block-tridiagonal solution process per iteration and is consequently faster per step than the linearized block-ADI methods. Results are presented for both incompressible and compressible separated flows: in all cases the proposed block-LGS method is more efficient than the block-ADI methods. Furthermore, for high Reynolds number weakly separated incompressible flow in a channel, which proved to be an impossible task for a block-ADI method, solutions have been obtained very efficiently by the new scheme.
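The line-relaxation idea, one implicit (tridiagonal) solve per grid line with neighboring lines treated at their latest values, can be sketched on a scalar model problem; the following solves the 2-D Laplace equation rather than the block-coupled Navier-Stokes system of the paper:

```python
# Scalar stand-in for the block-line solves described above: line Gauss-Seidel
# for the 2-D Laplace equation. Each sweep solves one tridiagonal system per
# grid row (Thomas algorithm), using the latest values of neighboring rows.

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def line_gauss_seidel(u, sweeps):
    """Row-implicit relaxation of the 5-point Laplace stencil."""
    ny, nx = len(u), len(u[0])
    for _ in range(sweeps):
        for j in range(1, ny - 1):
            n = nx - 2
            a, b, c = [-1.0] * n, [4.0] * n, [-1.0] * n
            d = [u[j - 1][i + 1] + u[j + 1][i + 1] for i in range(n)]
            d[0] += u[j][0]
            d[-1] += u[j][nx - 1]
            row = thomas(a, b, c, d)
            for i in range(n):
                u[j][i + 1] = row[i]
    return u

# 5x5 grid: top boundary held at 1, the other boundaries at 0.
u = [[0.0] * 5 for _ in range(5)]
u[0] = [1.0] * 5
u = line_gauss_seidel(u, 50)
print(round(u[2][2], 6))  # → 0.25, the exact center value by symmetry
```

Each sweep here costs one tridiagonal solve per row, mirroring the paper's single block-tridiagonal solution process per iteration.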
NASA Technical Reports Server (NTRS)
Baumeister, K. J.
1979-01-01
A time dependent numerical solution of the linearized continuity and momentum equations was developed for sound propagation in a two dimensional straight hard- or soft-wall duct with a sheared mean flow. The time dependent governing acoustic difference equations and boundary conditions were developed, along with a numerical determination of the maximum stable time increments. A harmonic noise source radiating into a quiescent duct was analyzed. The explicit iteration method then calculated stepwise in real time to obtain the transient as well as the steady state solution of the acoustic field. Example calculations are presented for sound propagation in hard and soft wall ducts, with no flow and with plug flow. Although the problem with sheared flow was formulated and programmed, sample calculations were not examined. The time dependent finite difference analysis was found to be superior to the steady state finite difference and finite element techniques because of shorter solution times and the elimination of large matrix storage requirements.
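The explicit time-marching idea, and the stability limit on the time increment that comes with it, can be sketched in one dimension (a staggered-grid model of the linearized acoustic equations; the actual solver is 2-D with mean flow and impedance walls):

```python
# 1-D staggered-grid sketch of explicit time marching for the linearized
# acoustic equations (p_t = -rho*c^2*u_x, u_t = -p_x/rho); a stand-in for the
# 2-D duct solver. Stability requires dt <= dx/c (the "maximum stable time
# increment" referred to above).

def step(p, u, dt, dx, rho, c):
    """Advance pressure (cell centers) and velocity (faces) one explicit step."""
    n = len(p)
    for i in range(1, n):          # interior faces between cells i-1 and i
        u[i] -= dt / (rho * dx) * (p[i] - p[i - 1])
    for i in range(n):             # cell-centered pressure update
        p[i] -= dt * rho * c * c / dx * (u[i + 1] - u[i])
    return p, u

rho, c, dx = 1.0, 1.0, 0.1
dt = 0.5 * dx / c                  # safely below the stability limit
p = [1.0 if i == 10 else 0.0 for i in range(21)]   # pressure pulse mid-duct
u = [0.0] * 22                     # hard-wall ends: boundary velocity stays 0
for _ in range(40):
    p, u = step(p, u, dt, dx, rho, c)
print(round(sum(p) * dx, 6))       # → 0.1: integrated pressure is conserved
```

As in the paper's method, the transient field is marched in real time; with a hard-wall (zero-velocity) boundary the scheme conserves the integrated pressure exactly.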
A RLS-SVM Aided Fusion Methodology for INS during GPS Outages
Yao, Yiqing; Xu, Xiaosu
2017-01-01
In order to maintain relatively high navigation accuracy during global positioning system (GPS) outages, a novel robust least squares support vector machine (LS-SVM)-aided fusion methodology is explored to provide pseudo-GPS position information for the inertial navigation system (INS). The relationship between the yaw, specific force, velocity, and the position increment is modeled. Rather than assigning the same weight to all data, as in the traditional LS-SVM, the proposed algorithm allocates different weights to different data points, which makes the system immune to outliers. Historical information is also involved to better represent the vehicle dynamics. Field test data were collected to evaluate the proposed algorithm. The comparison results indicate that the proposed algorithm can effectively provide position corrections for a standalone INS during a 300 s GPS outage, outperforming the traditional LS-SVM method. PMID:28245549
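The per-sample weighting idea can be illustrated outside the kernel-machine setting with a plain weighted least-squares line fit (hypothetical data, and a linear model rather than an LS-SVM): a sample suspected to be an outlier receives a small weight and barely influences the fit.

```python
# Weighted least-squares sketch of per-sample weighting (a plain linear fit,
# not a kernel LS-SVM; data are hypothetical): downweighting a suspected
# outlier keeps it from distorting the fitted slope.

def weighted_line_fit(xs, ys, ws):
    """Closed-form weighted least squares for y = slope*x + intercept."""
    sw = sum(ws)
    xm = sum(w * x for w, x in zip(ws, xs)) / sw
    ym = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = (sum(w * (x - xm) * (y - ym) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xm) ** 2 for w, x in zip(ws, xs)))
    return slope, ym - slope * xm

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.0, 2.1, 9.0, 4.0]          # the sample at x = 3 is an outlier
uniform = weighted_line_fit(xs, ys, [1.0] * 5)
robust = weighted_line_fit(xs, ys, [1.0, 1.0, 1.0, 0.01, 1.0])
print(round(uniform[0], 3), round(robust[0], 3))  # → 1.58 0.991
```

In robust LS-SVM variants the weights are typically derived from the residuals of an initial fit, so that outlying samples are downweighted automatically rather than by hand as here.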
A methodology for post-EIS (environmental impact statement) monitoring
Marcus, Linda Graves
1979-01-01
A methodology for monitoring the impacts predicted in environmental impact statements (EIS's) was developed using the EIS on phosphate development in southeastern Idaho as a case study. A monitoring system based on this methodology: (1) coordinates a comprehensive, intergovernmental monitoring effort; (2) documents the major impacts that result, thereby improving the accuracy of impact predictions in future EIS's; (3) helps agencies control impacts by warning them when critical impact levels are reached and by providing feedback on the success of mitigating measures; and (4) limits monitoring data to the essential information that agencies need to carry out their regulatory and environmental protection responsibilities. The methodology is presented as flow charts accompanied by tables that describe the objectives, tasks, and products for each work element in the flow chart.
ADAPTIVE-GRID SIMULATION OF GROUNDWATER FLOW IN HETEROGENEOUS AQUIFERS. (R825689C068)
The prediction of contaminant transport in porous media requires the computation of the flow velocity. This work presents a methodology for high-accuracy computation of flow in a heterogeneous isotropic formation, employing a dual-flow formulation and adaptive...
Thompson, Ronald E.; Hoffman, Scott A.
2006-01-01
A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in northeastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations to concurrent daily mean flows at continuous-record index stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The prediction methodology used to develop the regression equations was originally devised for estimating low-flow frequencies. This study and a companion study found that the methodology also has potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R2) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R2) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and of the equations used to predict them. Caution is indicated in using the predicted statistics for small drainage areas.
Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation of water-resources availability.
Solid rocket booster internal flow analysis by highly accurate adaptive computational methods
NASA Technical Reports Server (NTRS)
Huang, C. Y.; Tworzydlo, W.; Oden, J. T.; Bass, J. M.; Cullen, C.; Vadaketh, S.
1991-01-01
The primary objective of this project was to develop an adaptive finite element flow solver for simulating internal flows in the solid rocket booster. Described here is a unique flow simulator code for analyzing highly complex flow phenomena in the solid rocket booster. New methodologies and features incorporated into this analysis tool are described.
USDA-ARS?s Scientific Manuscript database
Flow monitoring at watershed scale relies on the establishment of a rating curve that describes the relationship between stage and flow and is developed from actual flow measurements at various stages. Measurement errors increase with out-of-bank flow conditions because of safety concerns and diffic...
Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Diskin, Boris
2012-01-01
A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.
Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.
2003-04-01
This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between the April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation (<-0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static stability of the atmosphere above the dust layer. The reanalysis data of the European Centre for Medium-Range Weather Forecasts (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation. 
The April correlation results are supported by the analysis of the vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated because of the substantial increase in humidity accompanying the northward progress of the ITCZ and its significant impact on the increments.
Experimental Investigation of two-phase nitrogen Cryo transfer line
NASA Astrophysics Data System (ADS)
Singh, G. K.; Nimavat, H.; Panchal, R.; Garg, A.; Srikanth, GLN; Patel, K.; Shah, P.; Tanna, V. L.; Pradhan, S.
2017-02-01
A 6-m long liquid nitrogen based cryo transfer line has been designed, developed and tested at IPR. The test objectives include the thermo-hydraulic characterization of the cryo transfer line under single-phase as well as two-phase flow conditions. It is straightforward to investigate the thermo-hydraulic parameters experimentally for single-phase flow of a cryogen, but it is a real challenge for two-phase flow because direct mass flow measurements are not available under two-phase flow conditions. Established models have been reported in the literature, of which the well-known Lockhart-Martinelli relationship has been used to determine the vapor quality at the outlet of the cryo transfer line. Under homogeneous flow conditions, we estimated the outlet quality by taking the ratio of the single-phase pressure drop to the two-phase pressure drop. Based on these equations, the vapor quality at the outlet of the transfer line was predicted at different heat loads. Experimental results show that there is a considerable increase in the pressure drop from inlet to outlet, and in the vapour quality at the outlet, depending upon the heat load and the mass flow rate of nitrogen flowing through the line.
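The quality estimate described above can be sketched with the homogeneous two-phase flow model, in which the ratio of two-phase to liquid-only pressure drop reduces to 1 + x(ρ_l/ρ_g - 1) when the friction factors are taken as equal. A minimal sketch (the nitrogen property values and the neglect of a viscosity correction are assumptions, not taken from the paper):

```python
def quality_from_pressure_drops(dp_two_phase, dp_liquid_only, rho_liquid, rho_vapor):
    """Estimate vapor quality x from the homogeneous two-phase model.

    Assumes dp_tp / dp_lo = 1 + x * (rho_l / rho_g - 1), i.e. equal
    friction factors and no viscosity correction (a simplification of
    the full Lockhart-Martinelli treatment).
    """
    ratio = dp_two_phase / dp_liquid_only
    x = (ratio - 1.0) / (rho_liquid / rho_vapor - 1.0)
    return min(max(x, 0.0), 1.0)  # clamp to the physical range [0, 1]

# Saturated nitrogen near 1 bar (approximate property values)
RHO_L = 807.0   # kg/m^3, liquid
RHO_G = 4.6     # kg/m^3, vapor

x_out = quality_from_pressure_drops(dp_two_phase=10.0, dp_liquid_only=1.0,
                                    rho_liquid=RHO_L, rho_vapor=RHO_G)
```

With a measured two-phase pressure drop ten times the single-phase value, this gives an outlet quality of roughly 5%; a higher heat load raises the ratio and hence the estimated quality.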
Numerical Simulation of High-Speed Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Givi, P.; Taulbee, D. B.; Madnia, C. K.; Jaberi, F. A.; Colucci, P. J.; Gicquel, L. Y. M.; Adumitroaie, V.; James, S.
1999-01-01
The objectives of this research are: (1) to develop and implement a new methodology for large eddy simulation (LES) of high-speed reacting turbulent flows; and (2) to develop algebraic turbulence closures for the statistical description of chemically reacting turbulent flows.
2009-04-29
ISS019-E-012391 (29 April 2009) --- Astronaut Michael Barratt, Expedition 19/20 flight engineer, activates the Microgravity Science Glovebox (MSG) from its A31p laptop, initiates and conducts a session, the first of Increment 19, with the experiment Smoke Point In Co-flow Experiment (SPICE), performed in the MSG and controlled by its A31p with SPICE micro-drives in the Kibo laboratory of the International Space Station.
Pressure-based high-order TVD methodology for dynamic stall control
NASA Astrophysics Data System (ADS)
Yang, H. Q.; Przekwas, A. J.
1992-01-01
The quantitative prediction of the dynamics of separating unsteady flows, such as dynamic stall, is of crucial importance. This six-month SBIR Phase 1 study has developed several new pressure-based methodologies for solving the 3D Navier-Stokes equations in both stationary and moving (body-conforming) coordinates. The present pressure-based algorithm is equally efficient for low-speed incompressible flows and high-speed compressible flows. The discretization of convective terms by the newly developed high-order TVD schemes requires no artificial dissipation and can properly resolve the concentrated vortices around the wing-body with minimal numerical diffusion. It is demonstrated that the proposed Newton iteration technique not only increases the convergence rate but also strongly couples the iteration between pressure and velocities. The proposed hyperbolization of the pressure correction equation is shown to increase the solver's efficiency. The proposed methodologies were implemented in an existing CFD code, REFLEQS. The modified code was used to simulate both static and dynamic stall on two- and three-dimensional wing-body configurations. Three-dimensional effects and flow physics are discussed.
Heat-exchanger concepts for neutral-beam calorimeters
NASA Astrophysics Data System (ADS)
Thompson, C. C.; Polk, D. H.; McFarlin, D. J.; Stone, R.
1981-10-01
Advanced cooling concepts that permit the design of water-cooled heat exchangers for use as calorimeters and beam dumps for advanced neutral beam injection systems were evaluated. Water cooling techniques ranging from pool boiling to high-pressure, high-velocity swirl flow were considered. Preliminary performance tests were carried out with copper, Inconel, and molybdenum tubes ranging in size from 0.19 to 0.50 in. in diameter. Coolant flow configurations included: (1) smooth tube/straight flow; (2) smooth tube with swirl flow created by tangential injection of the coolant; and (3) axial flow in internally finned tubes. Additionally, the effect of tube L/D was evaluated. A CO2 laser was employed to irradiate a sector of the tube exterior wall; the laser power was incrementally increased until burnout occurred. Absorbed heat fluxes were calculated by dividing the measured coolant heat load by the area of the burn spot on the tube surface. Two six-element thermopiles were used to accurately determine the coolant temperature rise. A maximum burnout heat flux near 14 kW/sq cm was obtained for the molybdenum tube swirl flow configuration.
Effects of irregular cerebrospinal fluid production rate in human brain ventricular system
NASA Astrophysics Data System (ADS)
Hadzri, Edi Azali; Shamsudin, Amir Hamzah; Osman, Kahar; Abdul Kadir, Mohammed Rafiq; Aziz, Azian Abd
2012-06-01
Hydrocephalus is an abnormal accumulation of fluid in the ventricles and cavities of the brain. It occurs when cerebrospinal fluid (CSF) flow or absorption is blocked or when excessive CSF is secreted. The excessive accumulation of CSF results in an abnormal widening of the ventricles, which creates potentially harmful pressure on the tissues of the brain. In this study, a flow analysis of CSF was conducted on a three-dimensional model of the third ventricle and the aqueduct of Sylvius, derived from MRI scans. CSF was modeled as a Newtonian fluid, and its flow through the region of interest (ROI) was simulated using EFD.Lab software. Different steady flow rates through the foramen of Monro, classified into normal and hydrocephalus cases, were modeled to investigate their effects. The results show that, for both normal and hydrocephalus cases, the pressure drop of CSF flow across the third ventricle was linearly proportional to the production rate increment. In conclusion, a flow rate causing a pressure drop of 5 Pa was found to be the threshold for the initial sign of hydrocephalus.
Moving and adaptive grid methods for compressible flows
NASA Technical Reports Server (NTRS)
Trepanier, Jean-Yves; Camarero, Ricardo
1995-01-01
This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.
Simoens, Steven
2013-01-01
Objectives: This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. Materials and Methods: For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee, and examined the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Results: Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting, and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Conclusions: Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation.
An integrated study to evaluate debris flow hazard in alpine environment
NASA Astrophysics Data System (ADS)
Tiranti, Davide; Crema, Stefano; Cavalli, Marco; Deangeli, Chiara
2018-05-01
Debris flows are among the most dangerous natural processes affecting the alpine environment due to their magnitude (volume of transported material) and long runout. The presence of structures and infrastructure on alluvial fans can lead to severe problems in terms of interactions between debris flows and human activities. Risk mitigation in these areas requires identifying the magnitude, triggers, and propagation of debris flows. Here, we propose an integrated methodology to characterize these phenomena, consisting of three complementary procedures. First, we adopt a classification method based on the propensity of the catchment bedrock to produce clayey-grained material; the classification allows us to identify the most likely rheology of the process. Second, we calculate a sediment connectivity index to estimate the topographic control on the possible coupling between the sediment source areas and the catchment channel network; this step allows for the assessment of the debris supply most likely available for the channelized processes. Finally, with the data obtained in the previous steps, we model the propagation and depositional pattern of debris flows with a 3D code based on cellular automata. The results of the numerical runs allow us to identify the depositional patterns and the areas potentially involved in the flow processes. This integrated methodology is applied to a test-bed catchment located in the Northwestern Alps. The results indicate that this approach can be regarded as a useful tool for expeditiously estimating debris-flow-related hazard scenarios in an alpine environment without requiring exhaustive knowledge of the investigated catchment, including data on historical debris flow events.
Aeroelastic optimization methodology for viscous and turbulent flows
NASA Astrophysics Data System (ADS)
Barcelos Junior, Manuel Nascimento Dias
2007-12-01
In recent years, the development of faster computers and parallel processing has allowed the application of high-fidelity analysis methods to the aeroelastic design of aircraft. However, these methods are restricted to final design verification, mainly due to the computational cost involved in iterative design processes. Therefore, this work is concerned with the creation of a robust and efficient aeroelastic optimization methodology for inviscid, viscous, and turbulent flows using high-fidelity analysis and sensitivity analysis techniques. Most of the research in aeroelastic optimization, for practical reasons, treats the aeroelastic system as a quasi-static inviscid problem. In this work, as a first step toward the creation of a more complete aeroelastic optimization methodology for realistic problems, an analytical sensitivity computation technique was developed and tested for quasi-static aeroelastic viscous and turbulent flow configurations. Viscous and turbulent effects are included by using an averaged discretization of the Navier-Stokes equations coupled with an eddy viscosity turbulence model. For quasi-static aeroelastic problems, the traditional staggered solution strategy has unsatisfactory performance when applied to cases with strong fluid-structure coupling. Consequently, this work also proposes a solution methodology for aeroelastic and sensitivity analyses of quasi-static problems, based on the fixed point of an iterative nonlinear block Gauss-Seidel scheme. The methodology can also be interpreted as the solution of the Schur complement of the linearized systems of equations for the aeroelastic and sensitivity analyses. The methodologies developed in this work are tested and verified using realistic aeroelastic systems.
Numerical Viscous Flow Analysis of an Advanced Semispan Diamond-Wing Model at High-Lift Conditions
NASA Technical Reports Server (NTRS)
Ghaffari, F.; Biedron, R. T.; Luckring, J. M.
2002-01-01
Turbulent Navier-Stokes computational results are presented for an advanced diamond wing semispan model at low-speed, high-lift conditions. The numerical results are obtained in support of a wind-tunnel test that was conducted in the National Transonic Facility (NTF) at the NASA Langley Research Center. The model incorporated a generic fuselage and was mounted on the tunnel sidewall using a constant-width standoff. The analyses include: (1) the numerical simulation of the NTF empty-tunnel flow characteristics; (2) the semispan high-lift model with the standoff in the tunnel environment; (3) the semispan high-lift model with the standoff and viscous sidewall in free air; and (4) the semispan high-lift model without the standoff in free air. The computations were performed at conditions that correspond to a nominal approach and landing configuration. The wing surface pressure distributions computed for the model in both the tunnel and in free air agreed well with the corresponding experimental data, and both indicated small increments due to the wall-interference effects. However, the wall-interference effects were found to be more pronounced in the total measured and computed lift, drag, and pitching moment due to standoff-induced up-flow effects. Although the magnitudes of the computed forces and moment were slightly off compared to the measured data, the increments due to the wall-interference effects were predicted well. Numerical predictions are also presented on the combined effects of the tunnel sidewall boundary layer and the standoff geometry on the fuselage forebody pressure distributions and the resulting impact on the overall configuration longitudinal aerodynamic characteristics.
Rolland-Debord, Camille; Morelot-Panzini, Capucine; Similowski, Thomas; Duranti, Roberto; Laveneziana, Pierantonio
2017-12-01
Exercise induces the release of cytokines and an increase in circulating natural killer (NK) lymphocytes during strong activation of the respiratory muscles. We hypothesised that non-fatiguing respiratory muscle loading during exercise causes an increase in NK cells and in metabolic stress indices. Heart rate (HR), ventilation (VE), oesophageal pressure (Pes), oxygen consumption (VO2), dyspnoea, and leg effort were measured in eight healthy humans (five men and three women, average age 31 ± 4 years, body weight 68 ± 10 kg) performing incremental exercise testing on a cycle ergometer under a control condition and under expiratory flow limitation (FL) achieved with a Starling resistor. Blood samples were obtained at baseline, at the peak of exercise, and, during control exercise, at the iso-workload corresponding to that reached at the peak of FL exercise. Diaphragmatic fatigue was evaluated by measuring the tension-time index of the diaphragm. Respiratory muscle overloading caused an earlier interruption of exercise. Diaphragmatic fatigue did not occur in either condition. At the peak of flow-limited exercise compared to iso-workload, HR, peak inspiratory and expiratory Pes, NK cells, and norepinephrine were significantly higher. The number of NK cells was significantly related to ΔPes (i.e. the difference between the most and the least negative Pes) and to plasma catecholamines. Loading of the respiratory muscles can thus cause an increase in NK cells, provided that the activation of the respiratory muscles is intense enough to induce significant metabolic stress.
Li, Run-Kui; Zhao, Tong; Li, Zhi-Peng; Ding, Wen-Jun; Cui, Xiao-Yong; Xu, Qun; Song, Xian-Feng
2014-04-01
On-road vehicle emissions have become the main source of urban air pollution and have attracted broad attention. The vehicle emission factor is a basic parameter reflecting the status of vehicle emissions, but measured emission factors are difficult to obtain, and simulated emission factors are not localized for China. Based on the synchronized increments of traffic flow and air pollutant concentration during the morning rush hour, while meteorological conditions and the background air pollution concentration remain relatively stable, the relationship between the increase in traffic and the increase in air pollution concentration close to a road is established. An infinite line source Gaussian dispersion model was inverted to obtain average vehicle emission factors. A case study was conducted on a main road in Beijing. Traffic flow, meteorological data, and carbon monoxide (CO) concentration were collected to estimate average vehicle emission factors for CO. The results were compared with simulated emission factors from the COPERT4 model. Results showed that the average emission factors estimated by the proposed approach and by COPERT4 in August were 2.0 g x km(-1) and 1.2 g x km(-1), respectively, and in December were 5.5 g x km(-1) and 5.2 g x km(-1), respectively. The emission factors from the proposed approach and from COPERT4 showed close values and similar seasonal trends. The proposed method for average emission factor estimation eliminates the disturbance of background concentrations and potentially provides real-time access to vehicle fleet emission factors.
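The inversion described above can be sketched with the ground-level, crosswind form of the infinite line source Gaussian model, C = 2q/(sqrt(2π)·u·σ_z), solved for the line-source strength q and divided by the traffic increment. A minimal sketch (the function name, the perpendicular-wind simplification, and all numerical values are illustrative assumptions, not the paper's implementation):

```python
import math

def average_emission_factor(delta_c, delta_traffic, wind_speed, sigma_z):
    """Invert the infinite line source Gaussian model for a fleet-average
    emission factor.

    delta_c       : roadside concentration increment above background (g/m^3)
    delta_traffic : traffic flow increment (vehicles/s)
    wind_speed    : wind component perpendicular to the road (m/s)
    sigma_z       : vertical dispersion coefficient at the receptor (m)

    Ground level, wind perpendicular to the road:
        C = 2 q / (sqrt(2 pi) * u * sigma_z)
    so  q = C * sqrt(2 pi) * u * sigma_z / 2, with q in g/(m s).
    """
    q = delta_c * math.sqrt(2.0 * math.pi) * wind_speed * sigma_z / 2.0
    return q / delta_traffic * 1000.0  # g per vehicle per km

# Illustrative morning rush-hour increments (assumed values)
ef = average_emission_factor(
    delta_c=5e-5,              # 50 ug/m^3 CO increment
    delta_traffic=1000 / 3600, # +1000 vehicles per hour
    wind_speed=2.0,            # m/s
    sigma_z=5.0,               # m
)
```

With these assumed inputs the estimate comes out near 2 g/km, the same order as the August CO value reported in the abstract.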
NASA Astrophysics Data System (ADS)
Ebrahim, Girma Y.; Villholth, Karen G.
2016-10-01
Groundwater is an important resource for multiple uses in South Africa. Hence, setting limits to its sustainable abstraction while assuring basic human needs is required. Due to the prevalent scarcity of data on groundwater replenishment, which is the traditional basis for estimating groundwater availability, the present article presents a novel method for determining allocatable groundwater in quaternary (fourth-order) catchments through information on streamflow. Using established methodologies for assessing baseflow, recession flow, and the instream ecological flow requirement, the approach combines them into a stepwise methodology to determine the annually available groundwater storage volume using linear reservoir theory, essentially linking low flows proportionally to upstream groundwater storage. The approach was trialled for twenty-one perennial and relatively undisturbed catchments with long-term and reliable streamflow records. Using the Desktop Reserve Model, instream flow requirements necessary to meet the present ecological state of the streams were determined, and baseflows in excess of these flows were converted into conservative estimates of allocatable groundwater storage on an annual basis. Results show that groundwater development potential exists in fourteen of the catchments, with upper limits to allocatable groundwater volumes (including present uses) ranging from 0.02 to 3.54 × 106 m3 a-1 (0.10-11.83 mm a-1) per catchment. With these volumes available in 75% of years, the variability between years is assumed to be manageable. A significant (R2 = 0.88) correlation between the baseflow index and the drainage time scale for the catchments underscores the physical basis of the methodology and also enables the procedure to be shortened by one step, omitting the recession flow analysis. 
The method serves as an important complementary tool for the assessment of the groundwater part of the Reserve and the Groundwater Resource Directed Measures in South Africa and could be adapted and applied elsewhere.
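The storage link used in the study above rests on linear reservoir theory, where outflow recedes as Q(t) = Q0·exp(-t/k) and storage is proportional to discharge, S = kQ. A minimal sketch of converting baseflow in excess of the ecological requirement into an annual allocatable storage volume (function and variable names, and the sample numbers, are illustrative, not from the study):

```python
SECONDS_PER_DAY = 86400.0

def allocatable_storage(baseflow_m3s, eco_requirement_m3s, k_days):
    """Annual allocatable groundwater storage from linear reservoir theory.

    baseflow_m3s        : characteristic annual baseflow (m^3/s)
    eco_requirement_m3s : instream ecological flow requirement (m^3/s)
    k_days              : catchment drainage time scale from recession analysis

    Linear reservoir: S = k * Q, so only the storage sustaining baseflow
    in excess of the ecological requirement is treated as allocatable.
    """
    excess = max(baseflow_m3s - eco_requirement_m3s, 0.0)
    return k_days * SECONDS_PER_DAY * excess  # m^3

# Illustrative catchment: 0.25 m^3/s baseflow, 0.20 m^3/s reserved, k = 45 days
volume = allocatable_storage(0.25, 0.20, 45.0)
```

For these assumed inputs the result is about 0.19 × 10^6 m^3 a^-1, within the 0.02-3.54 × 10^6 m^3 a^-1 range reported for the fourteen catchments with development potential.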
Armstrong, Patrick Ian; Vogel, David L
2010-04-01
The current article replies to comments made by Lent, Sheu, and Brown (2010) and Lubinski (2010) regarding the study "Interpreting the Interest-Efficacy Association From a RIASEC Perspective" (Armstrong & Vogel, 2009). The comments made by Lent et al. and Lubinski highlight a number of important theoretical and methodological issues, including the process of defining and differentiating between constructs, the assumptions underlying Holland's (1959, 1997) RIASEC (Realistic, Investigative, Artistic, Social, Enterprising, and Conventional types) model and interrelations among constructs specified in social cognitive career theory (SCCT), the importance of incremental validity for evaluating constructs, and methodological considerations when quantifying interest-efficacy correlations and for comparing models using multivariate statistical methods. On the basis of these comments and previous research on the SCCT and Holland models, we highlight the importance of considering multiple theoretical perspectives in vocational research and practice. Alternative structural models are outlined for examining the role of interests, self-efficacy, learning experiences, outcome expectations, personality, and cognitive abilities in the career choice and development process.
Increasing the UAV data value by an OBIA methodology
NASA Astrophysics Data System (ADS)
García-Pedrero, Angel; Lillo-Saavedra, Mario; Rodriguez-Esparragon, Dionisio; Rodriguez-Gonzalez, Alejandro; Gonzalo-Martin, Consuelo
2017-10-01
Recently, there has been a noteworthy increase in the use of images registered by unmanned aerial vehicles (UAVs) in different remote sensing applications. Sensors on board UAVs have lower operational costs and complexity than other remote sensing platforms, quicker turnaround times, as well as higher spatial resolution. Concerning this last aspect, particular attention has to be paid to the limitations of classical pixel-based algorithms when they are applied to high resolution images. The objective of this study is to investigate the capability of an OBIA methodology developed for the automatic generation of a digital terrain model of an agricultural area from a Digital Elevation Model (DEM) and multispectral images registered by a Parrot Sequoia multispectral sensor on board an eBee SQ agricultural drone. The proposed methodology uses a superpixel approach for obtaining context and elevation information, which is used for merging superpixels while eliminating objects such as trees, in order to generate a Digital Terrain Model (DTM) of the analyzed area. The obtained results show the potential of the approach, in terms of accuracy, when it is compared with a DTM generated by manually eliminating objects.
Rolling scheduling of electric power system with wind power based on improved NNIA algorithm
NASA Astrophysics Data System (ADS)
Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.
2017-11-01
This paper puts forth a rolling modification strategy for day-ahead scheduling of an electric power system with wind power, which takes the operation cost increment of units and the curtailed wind power of the grid as dual modification objectives. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for its solution. The proposed rolling scheduling model further reduces the system operation cost in the intra-day generation process, enhances the system's capacity to accommodate wind power, and modifies the power flow through key transmission sections in a rolling manner to satisfy the security constraints of the grid. The improved NNIA defines an antibody preference relation model based on the equal incremental rate, regulation deviation constraints, and the maximum and minimum technical outputs of units. The model noticeably guides the direction of antibody evolution, significantly speeds up convergence to the final solution, and enhances the local search capability.
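The equal-incremental-rate criterion underlying the antibody preference relation comes from classic economic dispatch: at the optimum, all unconstrained units run at the same marginal cost. The sketch below is a minimal illustration of that criterion only, not the paper's algorithm; the quadratic cost coefficients and unit limits are hypothetical.

```python
# Hedged sketch: equal-incremental-rate economic dispatch by bisection on
# the incremental rate (lambda).  Unit data (a, b, Pmin, Pmax) for cost
# C(P) = a*P^2 + b*P + c are illustrative, not from the paper.

def dispatch(units, demand, tol=1e-9):
    """Allocate `demand` so marginal costs (2a*P + b) equalize, respecting
    output limits, via bisection on the incremental rate lambda."""
    lo = min(2*a*pmin + b for a, b, pmin, pmax in units)
    hi = max(2*a*pmax + b for a, b, pmin, pmax in units)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        # Output of each unit at incremental rate lam, clipped to its limits
        total = sum(min(max((lam - b) / (2*a), pmin), pmax)
                    for a, b, pmin, pmax in units)
        if total < demand:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return [min(max((lam - b) / (2*a), pmin), pmax) for a, b, pmin, pmax in units]

units = [(0.010, 8.0, 50.0, 300.0),   # (a, b, Pmin, Pmax), hypothetical
         (0.015, 7.0, 50.0, 250.0)]
outputs = dispatch(units, 400.0)
print([round(p, 1) for p in outputs], round(sum(outputs), 1))  # -> [220.0, 180.0] 400.0
```

At the solution both units see the same marginal cost (here 12.4), which is exactly the "equal incremental rate" the improved NNIA uses to rank antibodies.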
Computational Assessment of Aft-Body Closure for the HSR Reference H Configuration
NASA Technical Reports Server (NTRS)
Londenberg, W. Kelly
1999-01-01
A study has been conducted to determine how well the USM3D unstructured Euler solver can predict the flow over the High Speed Research (HSR) Reference H configuration, with the ultimate goal of predicting sting interference so that aft-body closure effects may be evaluated. This study has shown that the code can be used to predict the interference effects of a lower-mounted blade sting with a high degree of confidence. It has been shown that wing and fuselage pressures, both levels and trends, can be predicted well. Force and moment levels are not predicted well, but experimental trends are captured. On this basis, predicted force and moment increments are assumed to be accurate. Deflection of the horizontal tail was found to cause a nonlinear increment relative to the undeflected sting interference effects.
[Demographic processes in the Amur region under conditions of economics reform in Russia].
D'iachenko, V G
2000-01-01
Demographic and migration processes are analyzed for the Amur region, a typical Far-Eastern Russian territory with a low population density, in historical and modern perspective. A low birth rate, high mortality, and negative natural increase of the population reflect the social and economic reforms of the 1990s in the Russian Far East. The demographic situation and migration processes in the Asian and Pacific countries neighboring the Russian Far East are characterized by positive tendencies (a high birth rate, natural increase in population, and migration of the able-bodied population from densely populated central and southern provinces to sparsely populated eastern provinces). According to the demographic prognosis, the nearest decades will bring an acute shortage of labor resources in the Amur region, which will stimulate the already high inflow of foreign manpower from the neighboring countries (China and North Korea). This necessitates urgent stabilizing measures at the local and federal levels.
Recent advances in the modelling of crack growth under fatigue loading conditions
NASA Technical Reports Server (NTRS)
Dekoning, A. U.; Tenhoeve, H. J.; Henriksen, T. K.
1994-01-01
Fatigue crack growth associated with cyclic (secondary) plastic flow near a crack front is modelled using an incremental formulation. A new description of threshold behaviour under small load cycles is included. Quasi-static crack extension under high load excursions is described using an incremental formulation of the R-curve (crack growth resistance) concept. The integration of the equations is discussed. For constant-amplitude load cycles the results are compared with existing crack growth laws. It is shown that the model also properly describes the interaction effects of fatigue crack growth and quasi-static crack extension. To evaluate its more general applicability, the model is included in the NASGRO computer code for damage tolerance analysis. For this purpose the NASGRO program was provided with the CORPUS and STRIP-YIELD models for computation of the crack opening load levels. The implementation is discussed and recent results of the verification are presented.
Countercurrent direct contact heat exchange process and system
Wahl, III, Edward F.; Boucher, Frederic B.
1979-01-01
Recovery of energy from geothermal brines and other hot water sources by direct contact heat exchange with a working fluid, such as a hydrocarbon working fluid, e.g., isobutane. The process and system consist of a plurality of stages, each stage including mixing and settling units. In the first stage, hot brine and warm working fluid are intimately mixed and passed into a settler, wherein the brine settles to the bottom of the settler and the hot working fluid rises to the top. The hot working fluid is passed to a heat engine or turbine to produce work, and the working fluid is then recycled back into the system. The system comprises a series of stages, each containing a settler and mixer, wherein the working fluid and the brine flow in a countercurrent manner through the stages to recover the heat from the brine in increments and raise the temperature of the working fluid in increments.
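The incremental, stage-by-stage heat recovery described above can be sketched with a fixed-point calculation. This is a deliberately simplified model, not the patent's design: each mixer/settler stage is treated as an ideal contactor that brings brine and working fluid to a common outlet temperature, and the two streams are assumed to have equal heat-capacity rates.

```python
# Hedged sketch: countercurrent cascade of ideal mixer/settler stages.
# Brine enters stage 0, working fluid enters stage n-1; each stage mixes
# its two inlets to a common outlet temperature (equal heat-capacity
# rates assumed).  Temperatures are illustrative.

def countercurrent_stages(t_brine_in, t_work_in, n_stages, iters=500):
    """Return (brine_out, work_out) after fixed-point iteration."""
    t = [0.5 * (t_brine_in + t_work_in)] * n_stages  # per-stage outlet temp
    for _ in range(iters):
        for i in range(n_stages):
            b_in = t_brine_in if i == 0 else t[i - 1]
            w_in = t_work_in if i == n_stages - 1 else t[i + 1]
            t[i] = 0.5 * (b_in + w_in)
    # Brine leaves the last stage; working fluid leaves the first stage.
    return t[-1], t[0]

b_out, w_out = countercurrent_stages(150.0, 30.0, 3)
print(round(b_out, 1), round(w_out, 1))  # -> 60.0 120.0
```

Adding stages raises the working-fluid outlet temperature in increments toward the brine inlet temperature (90 °C with one stage, 110 °C with two, 120 °C with three here), which is the advantage of the countercurrent arrangement the patent exploits.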
Brothers, R. Matthew; Wingo, Jonathan E.; Hubing, Kimberly A.; Crandall, Craig G.
2010-01-01
Skin blood flow responses in the human forearm, assessed by three commonly used technologies—single-point laser-Doppler flowmetry, integrated laser-Doppler flowmetry, and laser-Doppler imaging—were compared in eight subjects during normothermic baseline, acute skin-surface cooling, and whole body heat stress (Δ internal temperature = 1.0 ± 0.2°C; P < 0.001). In addition, while normothermic and heat stressed, subjects were exposed to 30-mmHg lower-body negative pressure (LBNP). Skin blood flow was normalized to the maximum value obtained at each site during local heating to 42°C for at least 30 min. Furthermore, comparisons of forearm blood flow (FBF) measures obtained using venous occlusion plethysmography and Doppler ultrasound were made during the aforementioned perturbations. Relative to normothermic baseline, skin blood flow decreased during normothermia + LBNP (P < 0.05) and skin-surface cooling (P < 0.01) and increased during whole body heating (P < 0.001). Subsequent LBNP during whole body heating significantly decreased skin blood flow relative to control heat stress (P < 0.05). Importantly, for each of the aforementioned conditions, skin blood flow was similar between the three measurement devices (main effect of device: P > 0.05 for all conditions). Similarly, no differences were identified across all perturbations between FBF measures using plethysmography and Doppler ultrasound (P > 0.05 for all perturbations). These data indicate that when normalized to maximum, assessment of skin blood flow in response to vasoconstrictor and dilator perturbations are similar regardless of methodology. Likewise, FBF responses to these perturbations are similar between two commonly used methodologies of limb blood flow assessment. PMID:20634360
Optimal Micro-Vane Flow Control for Compact Air Vehicle Inlets
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Miller, Daniel N.; Addington, Gregory A.; Agrell, Johan
2004-01-01
The purpose of this study on micro-vane secondary flow control is to demonstrate the viability and economy of Response Surface Methodology (RSM) to optimally design micro-vane secondary flow control arrays, and to establish that the aeromechanical effects of engine face distortion can also be included in the design and optimization process. These statistical design concepts were used to investigate the design characteristics of "low unit strength" micro-effector arrays. "Low unit strength" micro-effectors are micro-vanes set at very low angles-of-incidence with very long chord lengths. They were designed to influence the near wall inlet flow over an extended streamwise distance, and their advantage lies in low total pressure loss and high effectiveness in managing engine face distortion. Therefore, this report examines optimal micro-vane secondary flow control array designs for compact inlets through a Response Surface Methodology.
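The core move in any response-surface step is to fit a low-order polynomial to a handful of design observations and take its stationary point as the next candidate optimum. The sketch below shows that step in one design variable only; it is a generic RSM illustration with synthetic data, not the multi-variable micro-vane optimization of the report.

```python
# Hedged sketch: fit a quadratic response surface y = c0 + c1*x + c2*x^2
# by least squares (normal equations) and locate its stationary point.
# The design variable and observations are synthetic.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def quad_response(xs, ys):
    """Return (coefficients, stationary point) of the fitted quadratic."""
    S = lambda p: sum(x**p for x in xs)
    Sy = lambda p: sum(y * x**p for x, y in zip(xs, ys))
    A = [[S(0), S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]]
    c = solve(A, [Sy(0), Sy(1), Sy(2)])
    return c, -c[1] / (2 * c[2])  # dy/dx = 0 at x = -c1/(2*c2)

c, x_opt = quad_response([0, 1, 2, 3], [5, 2, 1, 2])  # data lie on y = 5 - 4x + x^2
print(round(x_opt, 6))  # -> 2.0
```

In the actual study the response (e.g. engine-face distortion) is a function of several array-design variables, so the fitted surface is multi-dimensional, but the fit-then-optimize logic is the same.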
NASA Astrophysics Data System (ADS)
Klepikova, Maria V.; Le Borgne, Tanguy; Bour, Olivier; Davy, Philippe
2011-09-01
Temperature profiles in the subsurface are known to be sensitive to groundwater flow. Here we show that they are also strongly related to vertical flow in the boreholes themselves. Based on a numerical model of flow and heat transfer at the borehole scale, we propose a method to invert temperature measurements to derive borehole flow velocities. This method is applied to an experimental site in fractured crystalline rocks. Vertical flow velocities deduced from the inversion of temperature measurements are compared with direct heat-pulse flowmeter measurements, showing good agreement over two orders of magnitude. Applying this methodology under ambient, single- and cross-borehole pumping conditions allows us to estimate fracture hydraulic head and local transmissivity, as well as inter-borehole fracture connectivity. Thus, these results provide new insights on how to include temperature profiles in inverse problems for estimating hydraulic fracture properties.
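The idea of inverting a temperature profile for a vertical flow velocity can be illustrated with the classical steady advection-conduction solution (in the spirit of Bredehoeft and Papadopulos) rather than the paper's full numerical borehole model: the curvature of T(z) between two known end temperatures encodes a Péclet number, which a simple misfit search can recover. All values below are synthetic.

```python
# Hedged sketch: recover a Peclet number (dimensionless vertical flow)
# from a temperature profile using the 1-D steady advection-conduction
# solution.  This is a classical stand-in, not the paper's model.
import math

def bp_profile(z, beta, t_top, t_bot):
    """Steady advection-conduction temperature at depth fraction z in [0,1]."""
    if abs(beta) < 1e-12:
        return t_top + (t_bot - t_top) * z          # pure conduction: linear
    return t_top + (t_bot - t_top) * (math.expm1(beta * z) / math.expm1(beta))

def invert_beta(zs, temps, t_top, t_bot, lo=-20.0, hi=20.0, steps=4000):
    """Grid-search the Peclet number minimizing the squared misfit."""
    best, best_err = 0.0, float("inf")
    for k in range(steps + 1):
        beta = lo + (hi - lo) * k / steps
        err = sum((bp_profile(z, beta, t_top, t_bot) - t) ** 2
                  for z, t in zip(zs, temps))
        if err < best_err:
            best, best_err = beta, err
    return best

zs = [i / 10 for i in range(1, 10)]
temps = [bp_profile(z, 2.0, 10.0, 20.0) for z in zs]   # synthetic "measurements"
print(round(invert_beta(zs, temps, 10.0, 20.0), 2))    # -> 2.0
```

With thermal properties and borehole geometry fixed, the recovered Péclet number converts directly to a vertical flow velocity, which is the quantity the paper compares against heat-pulse flowmeter data.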
Application and testing of a procedure to evaluate transferability of habitat suitability criteria
Thomas, Jeff A.; Bovee, Ken D.
1993-01-01
A procedure designed to test the transferability of habitat suitability criteria was evaluated in the Cache la Poudre River, Colorado. Habitat suitability criteria were developed for active adult and juvenile rainbow trout in the South Platte River, Colorado. These criteria were tested by comparing microhabitat use predicted from the criteria with observed microhabitat use by adult rainbow trout in the Cache la Poudre River. A one-sided χ² test, using counts of occupied and unoccupied cells in each suitability classification, was used to test for non-random selection of optimum habitat over usable habitat, and of suitable over unsuitable habitat. Criteria for adult rainbow trout were judged to be transferable to the Cache la Poudre River, but juvenile criteria (applied to adults) were not. Random subsampling of occupied and unoccupied cells was conducted to determine the effect of sample size on the reliability of the test procedure. The incidence of type I and type II errors increased rapidly as the sample size was reduced below 55 occupied and 200 unoccupied cells. Recommended modifications to the procedure included the adoption of a systematic or randomized sampling design and direct measurement of microhabitat variables. With these modifications, the procedure is economical, simple and reliable. Use of the procedure as a quality assurance device in routine applications of the instream flow incremental methodology was encouraged.
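The one-sided χ² test on occupied/unoccupied cell counts can be sketched as a 2×2 contingency calculation. This is a minimal illustration of the test's mechanics, not the published procedure: the counts are hypothetical, and 2.706 is used as the df = 1 critical value giving a one-sided α of 0.05.

```python
# Hedged sketch: one-sided 2x2 chi-square test for non-random selection
# of "optimal" habitat cells.  Counts and the critical value are
# illustrative assumptions.

def chi2_transferability(occ_opt, unocc_opt, occ_other, unocc_other, crit=2.706):
    """Return (chi-square statistic, transferable?).  The one-sided decision
    requires both a significant statistic and selection *for* optimal
    habitat (occupied cells over-represented there)."""
    a, b, c, d = occ_opt, unocc_opt, occ_other, unocc_other
    n = a + b + c + d
    # Standard 2x2 chi-square statistic
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    transferable = chi2 > crit and a * d > b * c
    return chi2, transferable

chi2, ok = chi2_transferability(occ_opt=40, unocc_opt=60,
                                occ_other=15, unocc_other=185)
print(round(chi2, 2), ok)  # -> 47.03 True
```

The direction check (a·d > b·c) is what makes the test one-sided: a large statistic produced by fish avoiding the "optimal" cells would correctly fail the transferability criterion.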
PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabau, Adrian S; Gorti, Sarma B; Peter, William H
A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS™. Since powder deformation and consolidation are governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms compute the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires the use of a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli, consistent with the return-mapping algorithm, was also developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The numerical results showed that for the disk samples, the minimum von Mises stress was approximately half of its maximum value. The hydrostatic stress distribution exhibits a smaller variation than that of the von Mises stress. It was found that for the disk and cylinder samples the minimum hydrostatic stresses were approximately 23% and 50% less than the maximum value, respectively. It was also found that the minimum density was noticeably affected by the sample height.
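The ingredients a UMAT must supply, an implicit (return-mapping) stress update plus a consistent tangent modulus, can be shown in their simplest form. The sketch below uses one-dimensional plasticity with linear isotropic hardening, where the plastic corrector has a closed form; the paper's pressure-dependent powder model is far more elaborate, and the material constants here are illustrative.

```python
# Hedged sketch: 1-D return mapping with linear isotropic hardening and
# the consistent (algorithmic) tangent -- the structure of a UMAT update,
# not the paper's constitutive model.  E, H, sigma_y are illustrative.

def return_map(stress_n, alpha_n, dstrain, E=200e3, H=10e3, sigma_y=250.0):
    """Return (stress, hardening variable, tangent) after one strain increment."""
    trial = stress_n + E * dstrain                 # elastic predictor
    f = abs(trial) - (sigma_y + H * alpha_n)       # trial yield function
    if f <= 0.0:
        return trial, alpha_n, E                   # elastic step: tangent is E
    dgamma = f / (E + H)                           # plastic corrector (closed form)
    sign = 1.0 if trial > 0 else -1.0
    stress = trial - E * dgamma * sign             # return to the yield surface
    alpha = alpha_n + dgamma
    tangent = E * H / (E + H)                      # consistent tangent modulus
    return stress, alpha, tangent

s, a, t = return_map(0.0, 0.0, 0.002)
print(round(s, 2), round(t, 2))  # -> 257.14 9523.81
```

Returning the consistent tangent (E·H/(E+H) here, the DDSDDE array in a real UMAT) rather than the elastic modulus is what preserves the quadratic convergence of ABAQUS's global Newton-Raphson iteration.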
Cabrita, Marisa; Bekman, Evguenia; Braga, José; Rino, José; Santus, Renè; Filipe, Paulo L.; Sousa, Ana E.; Ferreira, João A.
2017-01-01
We propose a novel single-deoxynucleoside-based assay that is easy to perform and provides accurate values for the absolute length (in units of time) of each of the cell cycle stages (G1, S and G2/M). This flow-cytometric assay takes advantage of the excellent stoichiometric properties of azide-fluorochrome detection of DNA substituted with 5-ethynyl-2′-deoxyuridine (EdU). We show that by pulsing cells with EdU for incremental periods of time, maximal EdU-coupled fluorescence is reached when pulsing times match the length of S phase. These pulsing times, allowing labelling for a full S phase of a fraction of cells in asynchronous populations, provide accurate values for the absolute length of S phase. We characterized additional, lower intensity signals that allowed quantification of the absolute durations of G1 and G2 phases. Importantly, using this novel assay, data on the lengths of the G1, S and G2/M phases are obtained in parallel. Therefore, these parameters can be estimated within a time frame that is shorter than a full cell cycle. This method, which we designate as EdU-Coupled Fluorescence Intensity (E-CFI) analysis, was successfully applied to cell types with distinctive cell cycle features and shows excellent agreement with established methodologies for analysis of cell cycle kinetics. PMID:28465489
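The central readout of the E-CFI idea, that maximal EdU fluorescence is reached once the pulse time covers a full S phase, amounts to finding the breakpoint of an intensity-versus-pulse-time curve. The sketch below illustrates that readout on synthetic data; the pulse times, intensities and tolerance are assumptions for illustration only.

```python
# Hedged sketch: estimate S-phase length as the shortest EdU pulse whose
# mean fluorescence has reached the plateau.  Data are synthetic.

def s_phase_length(pulse_hours, intensities, tol=0.02):
    """Return the shortest pulse time whose intensity is within `tol`
    (fractional) of the plateau value, or None if none reaches it."""
    plateau = max(intensities)
    for t, i in zip(pulse_hours, intensities):
        if i >= (1.0 - tol) * plateau:
            return t
    return None

# Synthetic population: signal grows linearly until the pulse spans S phase (8 h)
pulses = [2, 4, 6, 8, 10, 12]
signal = [0.25, 0.50, 0.75, 1.00, 1.00, 1.00]
print(s_phase_length(pulses, signal))  # -> 8
```

In practice the plateau would be identified from replicate measurements with noise, so a fitted piecewise-linear (breakpoint) model would be more robust than this simple threshold rule.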
Yonco, R.M.; Nagy, Z.
1987-07-30
An external, reference electrode is provided for long term use with a high temperature, high pressure system. The electrode is arranged in a vertical, electrically insulative tube with an upper portion serving as an electrolyte reservoir and a lower portion in electrolytic communication with the system to be monitored. The lower end portion includes a flow restriction such as a porous plug to limit the electrolyte release into the system. A piston equalized to the system pressure is fitted into the upper portion of the tube to impart a small incremental pressure to the electrolyte. The piston is selected of suitable size and weight to cause only a slight flow of electrolyte through the porous plug into the high pressure system. This prevents contamination of the electrolyte but is of such small flow rate that operating intervals of a month or more can be achieved. 2 figs.
NASA Technical Reports Server (NTRS)
Hathaway, Michael D.; Chriss, Randall M.; Strazisar, Anthony J.; Wood, Jerry R.
1995-01-01
A laser anemometer system was used to provide detailed surveys of the three-dimensional velocity field within the NASA low-speed centrifugal impeller operating with a vaneless diffuser. Both laser anemometer and aerodynamic performance data were acquired at the design flow rate and at a lower flow rate. Flow path coordinates, detailed blade geometry, and pneumatic probe survey results are presented in tabular form. The laser anemometer data are presented in the form of pitchwise distributions of axial, radial, and relative tangential velocity on blade-to-blade stream surfaces at 5-percent-of-span increments, starting at 95-percent-of-span from the hub. The laser anemometer data are also presented as contour and wire-frame plots of throughflow velocity and vector plots of secondary velocities at all measurement stations through the impeller.
NASA Astrophysics Data System (ADS)
Aman, Sidra; Zuki Salleh, Mohd; Ismail, Zulkhibri; Khan, Ilyas
2017-09-01
This article focuses on the flow of Maxwell nanofluids with graphene nanoparticles over a static vertical plate with constant wall temperature. Possessing high thermal conductivity, engine oil is chosen as the base fluid, with free convection. The problem is modelled in terms of PDEs with boundary conditions. Suitable non-dimensional variables are introduced to transform the governing equations into dimensionless form. The resulting equations are solved via the Laplace transform technique. Exact solutions are evaluated for velocity and temperature. These solutions are significantly controlled by the parameters involved. Temperature rises with an increase in volume fraction, while velocity decreases with an increase in volume fraction. A comparison with previously published results is established and discussed. Moreover, a detailed discussion is made of the influence of volume fraction on the flow and heat profiles.
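The volume-fraction dependence described above enters such models through effective nanofluid properties. A common choice (not confirmed as this paper's exact closure) is the Maxwell-Garnett mixture rule for thermal conductivity, sketched below; the graphene and engine-oil property values are illustrative.

```python
# Hedged sketch: Maxwell-Garnett effective thermal conductivity of a
# dilute nanoparticle suspension.  k_f (base fluid), k_p (particle) and
# phi (volume fraction) values are illustrative assumptions.

def maxwell_conductivity(k_f, k_p, phi):
    """Effective conductivity k_eff for particle volume fraction phi."""
    return k_f * (k_p + 2*k_f - 2*phi*(k_f - k_p)) \
               / (k_p + 2*k_f + phi*(k_f - k_p))

k_oil = 0.145        # W/(m K), engine oil (illustrative)
k_graphene = 5000.0  # W/(m K), in-plane graphene (illustrative)
for phi in (0.0, 0.02, 0.04):
    print(phi, round(maxwell_conductivity(k_oil, k_graphene, phi), 4))
```

Even a few percent of highly conductive particles raises k_eff noticeably, which is consistent with the abstract's observation that temperature rises with volume fraction while the added particle loading slows the flow.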
Transportation Energy Conservation Data Book: A Selected Bibliography. Edition 3,
1978-11-01
[OCR-garbled index entries; recoverable titles include: Computer-Based Resource Accounting Model for Automobile Technology Impact Evaluation; Methodology for the Design of Urban Transportation; Energy Flows in the U.S., 1973 and 1974, Volume 1.]
A design methodology of magnetorheological fluid damper using Herschel-Bulkley model
NASA Astrophysics Data System (ADS)
Liao, Linqing; Liao, Changrong; Cao, Jianguo; Fu, L. J.
2003-09-01
Magnetorheological (MR) fluid is a highly concentrated suspension of very small magnetic particles in an inorganic oil. The essential behavior of MR fluid is its ability to reversibly change from a free-flowing, linear viscous liquid to a semi-solid with controllable yield strength in milliseconds when exposed to a magnetic field. This feature provides simple, quiet, rapid-response interfaces between electronic controls and mechanical systems. In this paper, a mini-bus MR fluid damper based on the plate Poiseuille flow mode is analyzed using the Herschel-Bulkley model, which can account for post-yield shear thinning or thickening under quasi-steady flow conditions. For various values of the flow behavior index, the influences of post-yield shear thinning or thickening on the flow velocity profiles of MR fluid in the annular damping orifice are examined numerically. Analytical damping coefficient predictions are also compared between the nonlinear Bingham plastic model and the Herschel-Bulkley constitutive model. An MR fluid damper, designed and fabricated according to the method presented in this paper, has been tested with an electro-hydraulic servo vibrator and its control system at the National Center for Test and Supervision of Coach Quality. The experimental results reveal that the analysis methodology and design theory are reasonable and that MR fluid dampers can be designed according to this methodology.
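The Herschel-Bulkley velocity profile in plate Poiseuille flow (treating the annular orifice as a parallel-plate channel, a common approximation) can be computed by integrating the shear rate inward from the wall: where the local stress G·y is below the yield stress the fluid moves as a rigid plug. The sketch below is illustrative only; τ_y, K, n, the pressure gradient G and the half-gap h are assumed values, not the damper's.

```python
# Hedged sketch: Herschel-Bulkley velocity profile between parallel plates.
# Shear stress is tau = G*y (y measured from the centerline); post-yield
# shear rate is ((tau - tau_y)/K)**(1/n).  Parameter values are assumed.

def hb_profile(tau_y, K, n, G, h, m=2000):
    """Velocity at m+1 points from centerline (y=0) to wall (y=h)."""
    dy = h / m
    ys = [i * dy for i in range(m + 1)]
    u = [0.0] * (m + 1)                       # no-slip at the wall
    for i in range(m - 1, -1, -1):            # integrate from wall inward
        y_mid = 0.5 * (ys[i] + ys[i + 1])
        tau = G * y_mid
        rate = ((tau - tau_y) / K) ** (1.0 / n) if tau > tau_y else 0.0
        u[i] = u[i + 1] + rate * dy
    return ys, u

# Yield stress 1 kPa, n = 0.8 (shear thinning), gradient 4 MPa/m, gap 2 mm
ys, u = hb_profile(tau_y=1000.0, K=10.0, n=0.8, G=4.0e6, h=0.001)
# Plug region: velocity is flat wherever G*y < tau_y, i.e. y < 0.25 mm here.
```

Lowering the flow behavior index n below 1 (shear thinning) steepens the near-wall velocity gradient for the same pressure drop, which is exactly the effect the paper examines numerically in the damping orifice.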
NASA Astrophysics Data System (ADS)
Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.
2015-11-01
Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. This approach addresses issues to better fit riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was applied to river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a high satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies, producing downstream flows that meet both human and ecosystem needs. The wide spread of Pareto-front (optimal) solutions makes this methodology attractive to water resources managers, allowing decision makers to easily determine the best compromise between reservoir operational strategies for human and ecosystem needs.
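The Pareto fronts that NSGA-II delivers rest on non-dominated sorting: partitioning candidate operating strategies into successive fronts such that no strategy in a front is beaten on every objective by another. The sketch below shows only that sorting step (a plain O(n²) version, not NSGA-II's fast variant or its crowding distance); the two minimized objectives and their values are hypothetical.

```python
# Hedged sketch: non-dominated sorting, the core ranking step of NSGA-II.
# Objectives are minimized; the alternatives below are synthetic
# (water-supply shortage index, 1 - fish diversity index).

def dominates(p, q):
    """p dominates q: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def nondominated_sort(points):
    """Partition point indices into successive Pareto fronts."""
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

pts = [(0.1, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5), (0.8, 0.8)]
print(nondominated_sort(pts))  # -> [[0, 1, 2], [3], [4]]
```

The first front (here the three trade-off strategies) is what a water manager would inspect: moving along it trades water-supply satisfaction against fish diversity without wasting either.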
NASA Astrophysics Data System (ADS)
Malley, C. S.; Braban, C. F.; Dumitrean, P.; Cape, J. N.; Heal, M. R.
2015-03-01
The impact of 27 volatile organic compounds (VOC) on the regional O3 increment was investigated using measurements made at the UK EMEP supersites Harwell (1999-2001 and 2010-2012) and Auchencorth (2012). Ozone at these sites is representative of rural O3 in south-east England and northern UK, respectively. The monthly-diurnal regional O3 increment was defined as the difference between the regional and hemispheric background O3 concentrations, respectively derived from oxidant vs. NOx correlation plots, and cluster analysis of back trajectories arriving at Mace Head, Ireland. At Harwell, which had substantially greater regional O3 increments than at Auchencorth, variation in the regional O3 increment mirrored afternoon depletion of VOCs due to photochemistry (after accounting for diurnal changes in boundary layer mixing depth, and weighting VOC concentrations according to their photochemical ozone creation potential). A positive regional O3 increment occurred consistently during the summer, during which time afternoon photochemical depletion was calculated for the majority of measured VOCs, and to the greatest extent for ethene and m + p-xylene. This indicates that, of the measured VOCs, ethene and m + p-xylene emissions reduction would be most effective in reducing the regional O3 increment, but that reductions in a larger number of VOCs would be required for further improvement. The VOC diurnal photochemical depletion was linked to the sources of the VOC emissions through the integration of gridded VOC emissions estimates over 96 h air-mass back trajectories. This demonstrated that the effectiveness of VOC gridded emissions for use in measurement and modelling studies is limited by the highly aggregated nature of the 11 SNAP source sectors in which they are reported, as monthly variation in speciated VOC trajectory emissions did not reflect monthly changes in individual VOC diurnal photochemical depletion.
Additionally, the major VOC emission source sectors during elevated regional O3 increment at Harwell were more narrowly defined through disaggregation of the SNAP emissions to 91 NFR codes (i.e. sectors 3D2 (domestic solvent use), 3D3 (other product use) and 2D2 (food and drink)). However, spatial variation in the contribution of NFR sectors to parent SNAP emissions could only be accounted for at the country level. Hence, the future reporting of gridded VOC emissions in source sectors more highly disaggregated than currently (e.g. to NFR codes) would facilitate a more precise identification of those VOC sources most important for mitigation of the impact of VOCs on O3 formation. In summary, this work presents a clear methodology for achieving a coherent VOC regional-O3-impact chemical climate using measurement data and explores the effect of limited emission and measurement species on the understanding of the regional VOC contribution to O3 concentrations.
NASA Astrophysics Data System (ADS)
Malley, C. S.; Braban, C. F.; Dumitrean, P.; Cape, J. N.; Heal, M. R.
2015-07-01
The impact of 27 volatile organic compounds (VOCs) on the regional O3 increment was investigated using measurements made at the UK EMEP supersites Harwell (1999-2001 and 2010-2012) and Auchencorth (2012). Ozone at these sites is representative of rural O3 in south-east England and northern UK, respectively. The monthly-diurnal regional O3 increment was defined as the difference between the regional and hemispheric background O3 concentrations, respectively, derived from oxidant vs. NOx correlation plots, and cluster analysis of back trajectories arriving at Mace Head, Ireland. At Harwell, which had substantially greater regional O3 increments than Auchencorth, variation in the regional O3 increment mirrored afternoon depletion of anthropogenic VOCs due to photochemistry (after accounting for diurnal changes in boundary layer mixing depth, and weighting VOC concentrations according to their photochemical ozone creation potential). A positive regional O3 increment occurred consistently during the summer, during which time afternoon photochemical depletion was calculated for the majority of measured VOCs, and to the greatest extent for ethene and m+p-xylene. This indicates that, of the measured VOCs, ethene and m+p-xylene emissions reduction would be most effective in reducing the regional O3 increment but that reductions in a larger number of VOCs would be required for further improvement. The VOC diurnal photochemical depletion was linked to anthropogenic sources of the VOC emissions through the integration of gridded anthropogenic VOC emission estimates over 96 h air-mass back trajectories. 
This demonstrated that one factor limiting the effectiveness of VOC gridded emissions for use in measurement and modelling studies is the highly aggregated nature of the 11 SNAP (Selected Nomenclature for Air Pollution) source sectors in which they are reported, as monthly variation in speciated VOC trajectory emissions did not reflect monthly changes in individual VOC diurnal photochemical depletion. Additionally, the major VOC emission source sectors during elevated regional O3 increment at Harwell were more narrowly defined through disaggregation of the SNAP emissions to 91 NFR (Nomenclature for Reporting) codes (i.e. sectors 3D2 (domestic solvent use), 3D3 (other product use) and 2D2 (food and drink)). However, spatial variation in the contribution of NFR sectors to parent SNAP emissions could only be accounted for at the country level. Hence, the future reporting of gridded VOC emissions in source sectors more highly disaggregated than currently (e.g. to NFR codes) would facilitate a more precise identification of those VOC sources most important for mitigation of the impact of VOCs on O3 formation. In summary, this work presents a clear methodology for achieving a coherent VOC, regional-O3-impact chemical climate using measurement data and explores the effect of limited emission and measurement species on the understanding of the regional VOC contribution to O3 concentrations.
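The oxidant-vs-NOx construction used to separate the regional background from local contributions can be sketched with a least-squares line: the intercept of Ox (= O3 + NO2) against NOx approximates the NOx-independent regional oxidant level, and subtracting a hemispheric baseline (here a placeholder value standing in for the Mace Head cluster analysis) gives the regional increment. All numbers below are synthetic.

```python
# Hedged sketch: regional O3 increment from an oxidant-vs-NOx regression.
# The intercept estimates the regional background; the hemispheric
# baseline and all concentrations are synthetic/illustrative.

def fit_line(xs, ys):
    """Ordinary least squares: return (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
          / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

nox = [5.0, 10.0, 20.0, 30.0, 40.0]   # ppb, synthetic afternoon values
ox  = [36.0, 38.0, 42.0, 46.0, 50.0]  # lie exactly on Ox = 34 + 0.4*NOx
regional_background, local_slope = fit_line(nox, ox)
hemispheric_background = 30.0          # ppb, placeholder baseline
regional_increment = regional_background - hemispheric_background
print(round(regional_background, 2), round(regional_increment, 2))  # -> 34.0 4.0
```

The slope (the NOx-correlated part of the oxidant) is attributed to local production, while the monthly-diurnal behaviour of the intercept minus the baseline is the "regional O3 increment" the paper relates to VOC photochemical depletion.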
On the self-preservation of turbulent jet flows with variable viscosity
NASA Astrophysics Data System (ADS)
Danaila, Luminita; Gauding, Michael; Varea, Emilien; Turbulence; mixing Team
2017-11-01
The concept of self-preservation has played an important role in shaping the understanding of turbulent flows. The assumption of complete self-preservation imposes certain constraints on the dynamics of the flow, allowing one to express one-point or two-point statistics by choosing an appropriate unique length scale. Determining this length scale and its scaling is of high relevance for modeling. In this work, we study turbulent jet flows with variable viscosity from the self-preservation perspective. Turbulent flows encountered in engineering and environmental applications are often characterized by fluctuations of viscosity resulting, for instance, from variations of temperature or species composition. Starting from the transport equation for the moments of the mixture-fraction increment, constraints for self-preservation are derived. The analysis is based on direct numerical simulations of turbulent jet flows in which the viscosity of the host and jet fluids differs. It is shown that fluctuations of viscosity do not affect the decay exponents of the turbulent energy or the dissipation but modify the scaling of two-point statistics in the dissipative range. Moreover, the analysis reveals that complete self-preservation in turbulent flows with variable viscosity cannot be achieved. Financial support from Labex EMC3 and FEDER is gratefully acknowledged.
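The two-point statistic at the heart of the analysis, a moment of the mixture-fraction increment, is a structure function: the mean of |f(x+r) − f(x)|ᵖ as a function of separation r. The sketch below computes it for a synthetic one-dimensional signal; it illustrates the definition only, not the DNS post-processing of the paper.

```python
# Hedged sketch: p-th order structure function (increment moment) of a
# 1-D scalar signal.  The sinusoidal test signal is synthetic.
import math

def structure_function(signal, r, order=2):
    """Mean of |f(x + r) - f(x)|**order over all valid x."""
    n = len(signal)
    return sum(abs(signal[i + r] - signal[i]) ** order
               for i in range(n - r)) / (n - r)

sig = [math.sin(2 * math.pi * i / 200) for i in range(1000)]
s2_small = structure_function(sig, 2)    # grows with separation r
s2_large = structure_function(sig, 10)   # (for r well below the period)
```

For a smooth signal the second-order structure function grows like r² at small r; self-preservation arguments of the kind used in the paper constrain how the prefactor, and hence the dissipative-range length scale, evolves downstream.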
A first-generation software product line for data acquisition systems in astronomy
NASA Astrophysics Data System (ADS)
López-Ruiz, J. C.; Heradio, Rubén; Cerrada Somolinos, José Antonio; Coz Fernandez, José Ramón; López Ramos, Pablo
2008-07-01
This article presents a case study on developing a software product line for data acquisition systems in astronomy based on the Exemplar Driven Development methodology and the Exemplar Flexibilization Language tool. The main strategies for building the software product line are based on domain commonality and variability, an incremental scope, and the use of existing artifacts. The approach is a lean methodology with little impact on the organization, suitable for small projects, that reduces product line start-up time. Software product lines focus on creating a family of products instead of individual products. This approach has substantial benefits: it reduces time to market, retains know-how, cuts development costs, and increases the quality of new products. Maintenance of the products is also enhanced, since all the data acquisition systems share the same product line architecture.
Shaw, Bronwen E; Hahn, Theresa; Martin, Paul J; Mitchell, Sandra A; Petersdorf, Effie W; Armstrong, Gregory T; Shelburne, Nonniekaye; Storer, Barry E; Bhatia, Smita
2017-01-01
The increasing numbers of hematopoietic cell transplantations (HCTs) performed each year, the changing demographics of HCT recipients, the introduction of new transplantation strategies, incremental improvement in survival, and the growing population of HCT survivors demand a comprehensive approach to examining the health and well-being of patients throughout life after HCT. This report summarizes strategies for the conduct of research on late effects after transplantation, including consideration of the study design and analytic approaches; methodologic challenges in handling complex phenotype data; an appreciation of the changing trends in the practice of transplantation; and the availability of biospecimens to support laboratory-based research. It is hoped that these concepts will promote continued research and facilitate the development of new approaches to address fundamental questions in transplantation outcomes. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
Variance change point detection for fractional Brownian motion based on the likelihood ratio test
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz
2018-01-01
Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. For many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for detecting the critical change point in cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and extends a similar methodology used for Brownian motion, a process with independent increments. We also propose a statistical test for the significance of the estimated critical point. In addition, an extensive simulation study is provided to assess the performance of the proposed method.
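The likelihood-ratio scan that this methodology extends can be illustrated for the simplest case of independent Gaussian increments (ordinary Brownian motion); the function name and all parameter values below are illustrative sketches, not the authors' implementation for fractional Brownian motion:

```python
import numpy as np

def variance_change_point(x):
    """Locate the most likely variance change point in a sequence of
    increments via a Gaussian likelihood-ratio scan.

    For each candidate split k, the log-likelihood of a two-variance
    model is compared against a single-variance model; the k maximizing
    the likelihood-ratio statistic is returned with the statistic.
    """
    n = len(x)
    best_k, best_stat = None, -np.inf
    s_full = np.var(x)
    for k in range(10, n - 10):          # keep both segments non-trivial
        s1, s2 = np.var(x[:k]), np.var(x[k:])
        # 2*(logL_split - logL_full), constant terms dropped
        stat = n * np.log(s_full) - k * np.log(s1) - (n - k) * np.log(s2)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

rng = np.random.default_rng(0)
# synthetic increments: the standard deviation jumps from 1.0 to 3.0 at index 500
x = np.concatenate([rng.normal(0, 1.0, 500), rng.normal(0, 3.0, 500)])
k_hat, stat = variance_change_point(x)
```

For dependent increments, as in the fractional Brownian motion case the paper treats, the likelihood must account for the covariance structure; this independent-increment scan only conveys the shape of the approach.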
Influence of exercise induced hyperlactatemia on retinal blood flow during normo- and hyperglycemia.
Garhöfer, Gerhard; Kopf, Andreas; Polska, Elzbieta; Malec, Magdalena; Dorner, Guido T; Wolzt, Michael; Schmetterer, Leopold
2004-05-01
Short-term hyperglycemia has previously been shown to induce a blood flow increase in the retina. The mechanism behind this effect is poorly understood. We set out to investigate whether exercise-induced hyperlactatemia alters the response of retinal blood flow to hyperglycemia. We performed a randomized, controlled, two-way crossover study in which 12 healthy subjects performed a 6-minute period of dynamic exercise during a euglycemic or hyperglycemic insulin clamp. Retinal blood flow was assessed by combining vessel size measurement with the Zeiss retinal vessel analyzer and measurement of red blood cell velocities using bi-directional laser Doppler velocimetry. Retinal and systemic hemodynamic parameters were measured before, immediately after, and 10 and 20 minutes after exercise. On the euglycemic study day retinal blood flow increased after dynamic exercise. The maximum increase in retinal blood flow was observed 10 minutes after the end of exercise, when the lactate plasma concentration peaked. Hyperglycemia increased retinal blood flow under basal conditions but had no incremental effect during exercise-induced hyperlactatemia. Our results indicate that both lactate and glucose induce an increase in retinal blood flow in healthy humans. This may indicate a common pathway between glucose- and lactate-induced blood flow changes in the human retina.
Subsonic Reynolds Number Effects on a Diamond Wing Configuration
NASA Technical Reports Server (NTRS)
Luckring, J. M.; Ghee, T. A.
2001-01-01
An advanced diamond-wing configuration was tested at low speeds in the National Transonic Facility (NTF) in air at chord Reynolds numbers from 4.4 million (typical wind-tunnel conditions) to 24 million (nominal flight value). Extensive variations on high-lift rigging were explored as part of a broad multinational program. The analysis for this study is focused on the cruise and landing settings of the wing high-lift systems. Three flow domains were identified from the data and provide a context for the ensuing data analysis. Reynolds number effects were examined in incremental form based upon attached-flow theory. A similar approach showed very little effect of low-speed compressibility.
NASA Astrophysics Data System (ADS)
Pakela, Julia M.; Lee, Seung Yup; Hedrick, Taylor L.; Vishwanath, Karthik; Helton, Michael C.; Chung, Yooree G.; Kolodziejski, Noah J.; Staples, Christopher J.; McAdams, Daniel R.; Fernandez, Daniel E.; Christian, James F.; O'Reilly, Jameson; Farkas, Dana; Ward, Brent B.; Feinberg, Stephen E.; Mycek, Mary-Ann
2017-02-01
In reconstructive surgery, impeded blood flow in microvascular free flaps due to a compromise in arterial or venous patency secondary to blood clots or vessel spasms can rapidly result in flap failures. Thus, the ability to detect changes in microvascular free flaps is critical. In this paper, we report progress on in vivo pre-clinical testing of a compact, multimodal, fiber-based diffuse correlation and reflectance spectroscopy system designed to quantitatively monitor tissue perfusion in a porcine model's surgically-grafted free flap. We also describe the device's sensitivity to incremental blood flow changes and discuss the prospects for continuous perfusion monitoring in future clinical translational studies.
Analysis of information flows among individual companies in the KOSDAQ market
NASA Astrophysics Data System (ADS)
Kim, Ho-Yong; Oh, Gabjin
2016-08-01
In this paper, we employ the variance decomposition method to measure the strength and direction of interconnections among companies in the KOSDAQ (Korean Securities Dealers Automated Quotation) stock market. We analyze the 200 companies listed on the KOSDAQ market from January 2001 to December 2015. We find that systemic risk, measured using these interconnections, increases substantially during periods of financial crisis such as the bankruptcy of Lehman Brothers and the European financial crisis. In particular, we find that increases in the aggregated information flows can be used to predict the rise in market volatility that may occur during a financial crisis such as the sub-prime crisis.
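A minimal sketch of a variance-decomposition connectedness measure of the kind described, here for a two-variable VAR(1) with an orthogonalized forecast-error variance decomposition on synthetic data; the model order, horizon, and all names are assumptions, not the authors' exact procedure:

```python
import numpy as np

def fevd_connectedness(X, horizon=10):
    """Orthogonalized forecast-error variance decomposition for a VAR(1).
    X: (T, N) array of returns. Entry (i, j) of the returned (N, N)
    matrix is the share of i's forecast-error variance attributable
    to shocks in j; each row sums to one."""
    Y, Z = X[1:], X[:-1]
    A = np.linalg.lstsq(Z, Y, rcond=None)[0].T   # VAR(1) coefficient matrix
    E = Y - Z @ A.T                              # residuals
    Sigma = np.cov(E.T)
    P = np.linalg.cholesky(Sigma)                # orthogonalizing factor
    N = X.shape[1]
    num, den, Psi = np.zeros((N, N)), np.zeros(N), np.eye(N)
    for _ in range(horizon):                     # accumulate MA terms Psi_h = A^h
        num += (Psi @ P) ** 2
        den += np.diag(Psi @ Sigma @ Psi.T)
        Psi = A @ Psi
    return num / den[:, None]

# synthetic data: variable 0 receives spillovers from variable 1, not vice versa
rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.2], [0.0, 0.4]])
X = np.zeros((2000, 2))
for t in range(1, 2000):
    X[t] = A_true @ X[t - 1] + rng.normal(size=2)
D = fevd_connectedness(X)
```

Off-diagonal entries of `D` play the role of directed information flows; aggregating them over a rolling window gives a crisis indicator of the sort the paper studies.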
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) a multiresolution presentation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoids classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means the algorithm uses smaller time steps only in lines where changes in the solution are intensive.
Application of Fup basis functions enables continuous approximation in time, simple interpolation across different temporal lines, and local time stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil for accurate calculation of spatial derivatives. Since the common approach for wavelets and splines uses a finite-difference operator, we developed a collocation operator that includes both solution values and the differential operator. In this way, the new, improved algorithm is adaptive in space and time, enabling accurate solutions for groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
NASA Technical Reports Server (NTRS)
Holland, S. Douglas (Inventor); Steele, Glen F. (Inventor); Romero, Denise M. (Inventor); Koudelka, Robert David (Inventor)
2008-01-01
A data multiplexer that accommodates both industry-standard CCSDS data packets and bit streams and standard IEEE 1394 data is described. The multiplexer provides a statistical allotment of bandwidth to the channels in turn, preferably four, but expandable in increments of four up to sixteen. A microcontroller determines the bandwidth requested by the plurality of channels, as well as the bandwidth available, and meters out the available bandwidth on a statistical basis, employing flow control on the input channels.
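The statistical allotment idea can be sketched as a request-proportional allocation; this hypothetical software model ignores the hardware flow-control and framing details of the patented multiplexer:

```python
def allot_bandwidth(requests, available):
    """Meter out available bandwidth in proportion to per-channel
    requests, never granting a channel more than it asked for.

    requests: dict of channel -> requested bandwidth
    available: total bandwidth to distribute
    """
    total = sum(requests.values())
    if total <= available:
        return dict(requests)        # demand fits: grant every request
    # demand exceeds supply: scale each request proportionally
    return {ch: available * req / total for ch, req in requests.items()}

# four channels, as in the multiplexer's base configuration
grants = allot_bandwidth({'ch1': 40, 'ch2': 20, 'ch3': 20, 'ch4': 0},
                         available=40)
```

In the hardware, flow control back-pressures each input channel so that it only sends what was granted; here the grant dictionary stands in for that mechanism.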
Modeling the Temperature Rise at the Tip of a Fast Crack
1989-08-01
plastic deformation in the plastic zone, the strain rate and the temperature dependence of the flow stress have been incorporated in the determination ...of dislocation generation in the plastic zone. The stress field associated with a moving elastic crack tip is used to determine the increment of...yield stress and the crack tip stress field for a given mode of the applied stress. The fracture toughness of several materials, determined
Permeability hysteresis of limestone during isotropic compression.
Selvadurai, A P S; Głowacki, A
2008-01-01
The evolution of permeability hysteresis in Indiana Limestone during the application of isotropic confining pressures up to 60 MPa was measured by conducting one-dimensional constant-flow-rate tests. These tests were carried out either during monotonic application of the confining pressure or during loading-partial unloading cycles. Irreversible permeability changes occurred during both monotonic and repeated incremental compression of the limestone. Mathematical relationships are developed for describing the evolution of the path-dependent permeability during isotropic compression.
NASA Technical Reports Server (NTRS)
Sawyer, Richard H.; Trant, James P., Jr.
1950-01-01
An investigation was made by the NACA wing-flow method to determine the drag, pitching-moment, lift, and angle-of-attack characteristics at transonic speeds of various configurations of a semispan model of an early configuration of the XF7U-1 tailless airplane. The results of the tests indicated that for the basic configuration with undeflected ailavator, the zero-lift drag rise occurred at a Mach number of about 0.85 and that about a five-fold increase in drag occurred through the transonic speed range. The results also indicated that the drag increment produced by -8.0 degrees of ailavator deflection increased with increasing normal-force coefficient and was smaller at speeds above the drag rise than at speeds below it. The drag increment produced by 35-degree deflection of the speed brakes varied from 0.040 to 0.074 depending on the normal-force coefficient and Mach number. These values correspond to drag coefficients of about 0.40 and 0.75 based on speed-brake frontal area. Removal of the fin produced a small positive drag increment at a given normal-force coefficient at speeds during the drag rise. A large forward shift of the neutral-point location occurred at Mach numbers above about 0.90 upon removal of the fin, and a considerable forward shift throughout the Mach number range occurred upon deflection of the speed brakes. Ailavator ineffectiveness or reversal at low deflections, similar to that determined in previous tests of the basic configuration of the model in the Mach number range from about 0.93 to 1.0, was found for the fin-off configuration and for the model equipped with skewed (more highly sweptback) hinge-line ailavators. With the speed brakes deflected, little or no loss in the incremental pitching moment produced by deflection of the ailavator from 0 degrees to -8.0 degrees occurred in the Mach number range from 0.85 to 1.0, in contrast to the considerable loss found in previous tests with the speed brakes off.
Optimal Output of Distributed Generation Based On Complex Power Increment
NASA Astrophysics Data System (ADS)
Wu, D.; Bao, H.
2017-12-01
In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy sources, represented by wind and photovoltaic generation, have been widely adopted. This new generation capacity is connected to the distribution network as distributed generation and is consumed by local load. However, as the scale of distributed generation connected to the network increases, optimizing its power output becomes more and more important and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but because they ignore the coupling parameters between nodes, the results are not accurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called the complex power increment, the essence of which is an analysis of the power grid under steady power flow. From this analysis we obtain the complex scaling function equation relating the power supplies; the coefficients of the equation are based on the impedance parameters of the network, so the relation between the variables and the coefficients is described more precisely. Thus, the method can accurately describe the power increment relationship and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method or heuristic methods.
Incremental Aerodynamic Coefficient Database for the USA2
NASA Technical Reports Server (NTRS)
Richardson, Annie Catherine
2016-01-01
In March through May of 2016, a wind tunnel test was conducted by the Aerosciences Branch (EV33) to visually study the unsteady aerodynamic behavior over multiple transition geometries for the Universal Stage Adapter 2 (USA2) in the MSFC Aerodynamic Research Facility's Trisonic Wind Tunnel (TWT). The purpose of the test was to make a qualitative comparison of the transonic flow field in order to provide a recommended minimum transition radius for manufacturing. Additionally, 6-degree-of-freedom force and moment data for each configuration tested were acquired in order to determine the geometric effects on the longitudinal aerodynamic coefficients (normal force, axial force, and pitching moment). In order to make a quantitative comparison of the aerodynamic effects of the USA2 transition geometry, the aerodynamic coefficient data collected during the test were parsed and incorporated into a database for each USA2 configuration tested. An incremental aerodynamic coefficient database was then developed from the generated databases for each USA2 geometry as a function of Mach number and angle of attack. The final USA2 coefficient increments will be applied to the aerodynamic coefficients of the baseline geometry to adjust the Space Launch System (SLS) integrated launch vehicle force and moment database according to the transition geometry of the USA2.
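The increment-database construction described above reduces, in sketch form, to differencing coefficients on a shared (Mach, angle-of-attack) grid and adding the increments back to a baseline; the coefficient values and dictionary layout below are invented for illustration, not data from the test:

```python
def build_increments(baseline, config):
    """Increment database: configuration coefficient minus baseline
    coefficient at each (Mach, alpha) breakpoint."""
    return {key: {c: config[key][c] - base[c] for c in base}
            for key, base in baseline.items()}

def apply_increments(baseline, increments):
    """Adjust a baseline database with the increments, recovering the
    configuration's coefficients."""
    return {key: {c: base[c] + increments[key][c] for c in base}
            for key, base in baseline.items()}

# hypothetical longitudinal coefficients at two (Mach, alpha) breakpoints
baseline = {(0.9, 0.0): {'CN': 0.10, 'CA': 0.30, 'Cm': -0.02},
            (1.1, 4.0): {'CN': 0.45, 'CA': 0.33, 'Cm': -0.05}}
config   = {(0.9, 0.0): {'CN': 0.12, 'CA': 0.31, 'Cm': -0.03},
            (1.1, 4.0): {'CN': 0.47, 'CA': 0.35, 'Cm': -0.06}}
inc = build_increments(baseline, config)
adjusted = apply_increments(baseline, inc)
```

The real database would interpolate between breakpoints; this sketch only shows the difference-then-apply bookkeeping.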
SAMS Acceleration Measurements on Mir From January to May 1997 (NASA Increment 4)
NASA Technical Reports Server (NTRS)
DeLombard, Richard
1998-01-01
During NASA Increment 4 (January to May 1997), about 5 gigabytes of acceleration data were collected by the Space Acceleration Measurements System (SAMS) onboard the Russian Space Station, Mir. The data were recorded on 28 optical disks which were returned to Earth on STS-84. During this increment, SAMS data were collected in the Priroda module to support the Mir Structural Dynamics Experiment (MiSDE), the Binary Colloidal Alloy Tests (BCAT), Angular Liquid Bridge (ALB), Candle Flames in Microgravity (CFM), Diffusion Controlled Apparatus Module (DCAM), Enhanced Dynamic Load Sensors (EDLS), Forced Flow Flame Spreading Test (FFFT), Liquid Metal Diffusion (LMD), Protein Crystal Growth in Dewar (PCG/Dewar), Queen's University Experiments in Liquid Diffusion (QUELD), and Technical Evaluation of MIM (TEM). This report points out some of the salient features of the microgravity environment to which these experiments were exposed. Also documented are mission events of interest such as the docked phase of STS-84 operations, a Progress engine burn, Soyuz vehicle docking and undocking, and Progress vehicle docking. This report presents an overview of the SAMS acceleration measurements recorded by 10 Hz and 100 Hz sensor heads. The analyses included herein complement those presented in previous summary reports prepared by the Principal Investigator Microgravity Services (PIMS) group.
Eusebi, Anna Laura; Bellezze, Tiziano; Chiappini, Gianluca; Sasso, Marco; Battistoni, Paolo
2017-06-15
The paper deals with the evaluation of the effect of on/off switching of diffuser membranes in the intermittent aeration process of urban wastewater treatment. Accelerated tests were done using two types of commercial EPDM diffusers, which were subjected to several consecutive cycles, simulating more than 8 years of real working conditions. The effect of this switching on the mechanical characteristics of the membranes was evaluated in terms of the increment in air operating pressure at different flow rates (2, 3.5 and 6 m3/h per diffuser): during the accelerated tests, this increment ranged from 2% to 18%. The intermittent phases emphasized the loss both of the original mechanical properties of the diffusers and of the initial pore shapes. The main cause of the pressure increment was attributed to fouling of the internal channels of the pores. Further analyses performed by scanning electron microscopy and by mechanical tests on the EPDM membrane, using a traditional tensile test and a nondestructive optical method from which the Young's modulus was obtained, supported these conclusions. No changes in the oxygen transfer parameters (KLa and SOTE%) were found to be specifically caused by the repeated on/off switching. Copyright © 2017. Published by Elsevier Ltd.
Variational Bayes method for estimating transit route OD flows using APC data.
DOT National Transportation Integrated Search
2017-01-31
The focus of this study is on the use of large quantities of APC data to estimate OD flows for transit bus routes. Since most OD flow estimation methodologies based on boarding and alighting counts were developed before the prevalence of APC tech...
Improving the cold flow properties of biodiesel with synthetic branched diester additives
USDA-ARS?s Scientific Manuscript database
A technical disadvantage of biodiesel relative to petroleum diesel fuel is inferior cold flow properties. One of many methodologies to address this deficiency is employment of cold flow improver (CFI) additives. Generally composed of low-molecular weight copolymers, CFIs originally developed for pet...
An Experimental Study of Vortex Flow Formation and Dynamics in Confined Microcavities
NASA Astrophysics Data System (ADS)
Khojah, Reem; di Carlo, Dino
2017-11-01
New engineering solutions for bioparticle separation invite revisiting classic fluid dynamics problems. Previous studies investigated cavity vortical flow that occurs in 2D with the formation of a material flux boundary, or separatrix, between the main flow and the cavity flow. We demonstrate that separatrix breakdown, in which the cavity flow becomes connected to the main flow, occurs as the cavity is confined in 3D and is implicated in particle capture and rapid mass exchange in cavities. Understanding the convective flux between the channel and a side cavity provides insight into size-dependent particle capture and release from the cavity flow. The process of vortex formation and separatrix breakdown between the main channel and the side cavity is Reynolds number dependent and can be described by dissecting the flow streamlines from the main channel that enter and spiral out of the cavity. Laminar streamlines from incremented initial locations in the main flow are observed inside the cavity under different flow conditions. Experimentally, we provide the Reynolds number thresholds that generate particular flow geometries. We found the optimal flow conditions that enable rapid convective transfer through the cavity flow and exposure of captured cells to, and interaction with, soluble factors. By tuning which fraction of the main flow carries solute, we can create a dynamic gate between the cavity and channel flow that potentially serves as a time-dependent fluid exchange approach for objects within the cavity.
Rodríguez, Iván; Zambrano, Lysien; Manterola, Carlos
2016-04-01
Physiological parameters used to measure exercise intensity are oxygen uptake and heart rate. However, perceived exertion (PE) is a scale that has also been frequently applied. The objective of this study is to establish the criterion-related validity of PE scales in children during an incremental exercise test. Seven electronic databases were used. Studies aimed at assessing the criterion-related validity of PE scales in healthy children during an incremental exercise test were included. Correlation coefficients were transformed into z-values and assessed in a meta-analysis by means of a fixed effects model if I2 was below 50%, or a random effects model if it was above 50%. Twenty-five articles that studied 1418 children (boys: 49.2%) met the inclusion criteria. The children's average age was 10.5 years. Exercise modalities included bike, running and stepping exercises. The weighted correlation coefficient was 0.835 (95% confidence interval: 0.762-0.887) and 0.874 (95% confidence interval: 0.794-0.924) for heart rate and oxygen uptake as reference criteria, respectively. The production paradigm and scales that had not been adapted to children showed the lowest measurement performance (p < 0.05). Measuring PE could be valid in healthy children during an incremental exercise test. Child-specific rating scales showed better performance than those that had not been adapted to this population. Further studies with better methodological quality should be conducted in order to confirm these results. Sociedad Argentina de Pediatría.
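The Fisher z-transform pooling used in such a meta-analysis can be sketched as follows for the fixed-effect case; the correlations and sample sizes are hypothetical, and the random-effects branch applied when I2 exceeds 50% is omitted:

```python
import math

def pooled_correlation(rs, ns):
    """Fixed-effect pooling of correlation coefficients via Fisher's
    z-transform: z = atanh(r) with Var(z) = 1/(n - 3); the inverse-
    variance-weighted mean z is back-transformed with tanh.

    Returns (pooled r, (95% CI lower, 95% CI upper)).
    """
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]                      # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = 1 / math.sqrt(sum(ws))                   # standard error of z_bar
    lo, hi = z_bar - 1.96 * se, z_bar + 1.96 * se
    return math.tanh(z_bar), (math.tanh(lo), math.tanh(hi))

# hypothetical per-study correlations between PE and heart rate
r_pool, ci = pooled_correlation([0.80, 0.85, 0.88], [50, 80, 120])
```

Because atanh and tanh are monotone, the pooled correlation always lies between the smallest and largest study correlations, mirroring how the reported 0.835 and 0.874 summarize the individual studies.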
Turbofan Engine Core Compartment Vent Aerodynamic Configuration Development Methodology
NASA Technical Reports Server (NTRS)
Hebert, Leonard J.
2006-01-01
This paper presents an overview of the design methodology used in the development of the aerodynamic configuration of the nacelle core compartment vent for a typical Boeing commercial airplane together with design challenges for future design efforts. Core compartment vents exhaust engine subsystem flows from the space contained between the engine case and the nacelle of an airplane propulsion system. These subsystem flows typically consist of precooler, oil cooler, turbine case cooling, compartment cooling and nacelle leakage air. The design of core compartment vents is challenging due to stringent design requirements, mass flow sensitivity of the system to small changes in vent exit pressure ratio, and the need to maximize overall exhaust system performance at cruise conditions.
Locational Marginal Pricing in the Campus Power System at the Power Distribution Level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jun; Gu, Yi; Zhang, Yingchen
2016-11-14
In the development of the smart grid at the distribution level, the realization of real-time nodal pricing is one of the key challenges. The research work in this paper implements and studies a methodology for locational marginal pricing at the distribution level based on a real-world distribution power system. The pricing mechanism utilizes optimal power flow to calculate the corresponding distribution nodal prices. Both Direct Current Optimal Power Flow and Alternating Current Optimal Power Flow are utilized to calculate and analyze the nodal prices. The University of Denver campus power grid is used as the power distribution system test bed to demonstrate the pricing methodology.
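The way congestion separates nodal prices can be illustrated on a toy lossless two-bus system; this is a hand-solved special case, not the DCOPF/ACOPF solver used in the study, and all costs and limits are invented:

```python
def two_bus_lmp(load, line_limit, cost_cheap, cost_dear):
    """Nodal prices for a toy 2-bus system: a cheap generator at bus 1,
    an expensive generator at bus 2, and all load at bus 2 behind a
    line of limited capacity (lossless DC view).

    While the line is uncongested, a marginal MW anywhere is served by
    the cheap unit, so both LMPs equal its cost; once the line hits its
    limit, an extra MW at bus 2 must come from the local expensive
    unit, and the nodal prices separate.
    """
    flow = min(load, line_limit)            # cheap power shipped to bus 2
    dispatch = {'g1': flow, 'g2': load - flow}
    if flow < line_limit:                   # uncongested: uniform price
        lmp = {'bus1': cost_cheap, 'bus2': cost_cheap}
    else:                                   # congested: prices separate
        lmp = {'bus1': cost_cheap, 'bus2': cost_dear}
    return dispatch, lmp

dispatch, lmp = two_bus_lmp(load=100, line_limit=60,
                            cost_cheap=20, cost_dear=50)
```

In a full DCOPF, each LMP emerges as the dual variable of the corresponding nodal power balance constraint; the if/else above just reproduces that dual by inspection for this two-bus case.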
NASA Technical Reports Server (NTRS)
Manhardt, P. D.
1982-01-01
The CMC fluid mechanics program system was developed to translate finite element numerical solution methodology, applied to nonlinear field problems, into a versatile computer code for comprehensive flow field analysis. Data procedures for the CMC three-dimensional Parabolic Navier-Stokes (PNS) algorithm are presented. In addition to the general data procedures, a juncture corner flow standard test case data deck is described. A listing of the data deck and an explanation of the grid generation methodology are presented. Tabulations of all commands and variables available to the user are given in alphabetical order, with cross-reference numbers that refer to storage addresses.
LES, DNS, and RANS for the Analysis of High-Speed Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Colucci, P. J.; Jaberi, F. A.; Givi, P.
1996-01-01
A filtered density function (FDF) method suitable for chemically reactive flows is developed in the context of large eddy simulation. The advantage of the FDF methodology is its inherent ability to resolve subgrid-scale (SGS) scalar correlations that would otherwise have to be modeled. Because of the lack of robust models for accurately predicting these correlations in turbulent reactive flows, simulations involving turbulent combustion are often met with a degree of skepticism. The FDF methodology avoids the closure problem associated with these terms and treats the reaction in an exact manner. The scalar FDF approach is particularly attractive since it can be coupled with existing hydrodynamic computational fluid dynamics (CFD) codes.
Statistical assessment of optical phase fluctuations through turbulent mixing layers
NASA Astrophysics Data System (ADS)
Gardner, Patrick J.; Roggemann, Michael C.; Welsh, Byron M.; Bowersox, Rodney D.
1995-09-01
A lateral shearing interferometer is used to measure the slope of perturbed wavefronts after propagation through turbulent shear flows. This provides a nonintrusive, two-dimensional flow visualization technique. The slope measurements are used to reconstruct the phase of the turbulence-corrupted wavefront. Experiments were performed on a plane shear mixing layer of helium and nitrogen gas at fixed velocities, at five locations in the flow development. The two gases, having a density ratio of approximately seven, provide an effective means of simulating compressible shear layers. Statistical autocorrelation functions and structure functions are computed on the reconstructed phase maps. The autocorrelation function results indicate that the turbulence-induced phase fluctuations are not wide-sense stationary. The structure functions exhibit statistical homogeneity, indicating the phase fluctuations are stationary in their first increments. However, the turbulence-corrupted phase is not isotropic. A five-thirds power law is shown to fit one-dimensional, orthogonal slices of the structure function, with scaling coefficients related to the location in the flow.
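The structure-function estimate behind these statistics can be sketched on a synthetic one-dimensional signal; an ordinary Brownian path is used here as an assumed stand-in, for which the structure function grows linearly in separation (slope 1 on a log-log plot) rather than with the five-thirds law observed in the experiment:

```python
import numpy as np

def structure_function(phi, seps):
    """One-dimensional second-order structure function
    D(r) = <(phi(x + r) - phi(x))^2>, the statistic used to test
    whether the first increments of phi are stationary."""
    return np.array([np.mean((phi[r:] - phi[:-r]) ** 2) for r in seps])

rng = np.random.default_rng(2)
phi = np.cumsum(rng.normal(size=200_000))   # Brownian path: D(r) ~ r
seps = np.array([1, 2, 4, 8, 16, 32, 64])
D = structure_function(phi, seps)
# power-law exponent from a log-log least-squares fit
slope = np.polyfit(np.log(seps), np.log(D), 1)[0]
```

Applied to a reconstructed phase slice with five-thirds scaling, the same fit would return a slope near 5/3, with the intercept carrying the location-dependent scaling coefficient.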
Methodology for CFD Design Analysis of National Launch System Nozzle Manifold
NASA Technical Reports Server (NTRS)
Haire, Scot L.
1993-01-01
The current design environment dictates that high technology CFD (Computational Fluid Dynamics) analysis produce quality results in a timely manner if it is to be integrated into the design process. The design methodology outlined describes the CFD analysis of an NLS (National Launch System) nozzle film cooling manifold. The objective of the analysis was to obtain a qualitative estimate for the flow distribution within the manifold. A complex, 3D, multiple zone, structured grid was generated from a 3D CAD file of the geometry. A Euler solution was computed with a fully implicit compressible flow solver. Post processing consisted of full 3D color graphics and mass averaged performance. The result was a qualitative CFD solution that provided the design team with relevant information concerning the flow distribution in and performance characteristics of the film cooling manifold within an effective time frame. Also, this design methodology was the foundation for a quick turnaround CFD analysis of the next iteration in the manifold design.
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Molin, S.
2012-02-01
We present a methodology that combines numerical simulations of groundwater flow and advective transport in heterogeneous porous media with analytical retention models for computing the probability of infection risk from pathogens in aquifers. The methodology is based on the analytical results presented in [1,2] for utilising colloid filtration theory in a time-domain random walk (TDRW) framework. It is shown that in uniform flow, the numerical simulations of advection yield results comparable to those of the analytical TDRW model for generating advection segments. It is also shown that spatial variability of the attachment rate may be significant; however, it appears to affect risk differently depending on whether the flow is uniform or radially converging. Although numerous issues remain open regarding pathogen transport in aquifers at the field scale, the methodology presented here may be useful for screening purposes and may also serve as a basis for future studies that would include greater complexity.
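A heavily simplified Monte Carlo version of such a screening calculation might look as follows; the lognormal travel-time surrogate, the attachment rate, and the exponential dose-response factor are all illustrative assumptions, not the authors' model:

```python
import math
import random

def infection_risk(n_paths, k_att, mean_lnt, sd_lnt, dose_factor, seed=3):
    """Monte Carlo sketch: sample advective travel times from a
    (hypothetical) lognormal surrogate for TDRW advection segments,
    attenuate pathogens by colloid-filtration attachment exp(-k * t),
    and convert the surviving fraction to an infection probability
    with an exponential dose-response. All parameters illustrative.
    """
    rng = random.Random(seed)
    surv = 0.0
    for _ in range(n_paths):
        t = math.exp(rng.gauss(mean_lnt, sd_lnt))   # travel time
        surv += math.exp(-k_att * t)                # attachment survival
    surv /= n_paths                                 # mean surviving fraction
    return 1.0 - math.exp(-dose_factor * surv)      # exponential dose-response

risk = infection_risk(n_paths=10_000, k_att=0.5, mean_lnt=1.0,
                      sd_lnt=0.5, dose_factor=2.0)
```

Increasing the attachment rate removes more pathogens along each path, so the computed risk falls, which is the qualitative behavior a screening tool of this kind is meant to expose.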
Vascular structure determines pulmonary blood flow distribution
NASA Technical Reports Server (NTRS)
Hlastala, M. P.; Glenny, R. W.
1999-01-01
Scientific knowledge develops through the evolution of new concepts. This process is usually driven by new methodologies that provide observations not previously available. Understanding of pulmonary blood flow determinants advanced significantly in the 1960s and is now changing rapidly again, because of increased spatial resolution of regional pulmonary blood flow measurements.
The Gene Flow Project at the US Environmental Protection Agency, Western Ecology Division is developing methodologies for ecological risk assessments of transgene flow using Agrostis and Brassica engineered with CP4 EPSPS genes that confer resistance to glyphosate herbicide. In ...
CFD methodology and validation for turbomachinery flows
NASA Astrophysics Data System (ADS)
Hirsch, Ch.
1994-05-01
The essential problem today, in the application of 3D Navier-Stokes simulations to the design and analysis of turbomachinery components, is the validation of the numerical approximation and of the physical models, in particular the turbulence modelling. Although most of the complex 3D flow phenomena occurring in turbomachinery bladings can be captured with relatively coarse meshes, many detailed flow features depend on mesh size and on the turbulence and transition models. A brief review of the present state of the art of CFD methodology is given, with emphasis on the quality and accuracy of numerical approximations in viscous flow computations. Considerations related to the influence of the mesh on solution accuracy are stressed. The basic problems of turbulence and transition modelling are discussed next, with a short summary of the main turbulence models and their applications to representative turbomachinery flows. Validations indicate that none of the available turbulence models is able to predict all the detailed flow behavior in complex flow interactions. In order to identify the phenomena that can be captured on coarser meshes, a detailed understanding of the complex 3D flow in compressors and turbines is necessary. Examples of global validations for different flow configurations representative of compressor and turbine aerodynamics are presented, including secondary and tip clearance flows.
Nakamura, Shinichiro; Kondo, Yasushi; Matsubae, Kazuyo; Nakajima, Kenichi; Nagasaka, Tetsuya
2011-02-01
Identification of the flow of materials and substances associated with a product system provides useful information for Life Cycle Analysis (LCA), and contributes to extending the scope of complementarity between LCA and Materials Flow Analysis/Substances Flow Analysis (MFA/SFA), the two major tools of industrial ecology. This paper proposes a new methodology based on input-output analysis for identifying the physical input-output flow of individual materials associated with the production of a unit of a given product, the unit physical input-output by materials (UPIOM). While the Sankey diagram has been a standard tool for the visualization of MFA/SFA, as the complexity of the flows under consideration increases, which will be the case when economy-wide intersectoral flows of materials are involved, the Sankey diagram may become too complex for effective visualization. An alternative way to visually represent material flows is proposed, which makes use of triangulation of the flow matrix based on degrees of fabrication. The proposed methodology is applied to the flows of pig iron and of iron and steel scrap associated with the production of a passenger car in Japan. Its usefulness for identifying a specific MFA pattern from the original IO table is demonstrated.
H-P adaptive methods for finite element analysis of aerothermal loads in high-speed flows
NASA Technical Reports Server (NTRS)
Chang, H. J.; Bass, J. M.; Tworzydlo, W.; Oden, J. T.
1993-01-01
The commitment to develop the National Aerospace Plane and Maneuvering Reentry Vehicles has generated resurgent interest in the technology required to design structures for hypersonic flight. The principal objective of this research and development effort has been to formulate and implement a new class of computational methodologies for accurately predicting fine-scale phenomena associated with this class of problems. The initial focus of this effort was to develop optimal h-refinement and p-enrichment adaptive finite element methods which utilize a posteriori estimates of the local errors to drive the adaptive methodology. Over the past year this work has focused on two issues related to the overall performance of a flow solver: the formulation and implementation (in two dimensions) of an implicit/explicit flow solver compatible with the hp-adaptive methodology, and the design and implementation of a computational algorithm for automatically selecting optimal directions in which to enrich the mesh. These concepts and algorithms have been implemented in a two-dimensional finite element code and used to solve three hypersonic flow benchmark problems (Holden Mach 14.1, Edney shock-on-shock interaction Mach 8.03, and the viscous backstep Mach 4.08).
Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.
2009-01-01
An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.
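The complex-variable verification mentioned above can be illustrated with a minimal sketch (not the authors' code): the complex-step method recovers a derivative to machine precision without subtractive cancellation, which is why it serves as a reference for checking discrete adjoint sensitivities. The objective function here is an arbitrary stand-in for a flow-dependent cost functional.

```python
import numpy as np

def objective(x):
    # Toy surrogate for a flow-dependent objective (stands in for the solver).
    return np.sin(x) * np.exp(-0.1 * x**2)

def complex_step_derivative(f, x, h=1e-30):
    # Complex-step: d/dx f ~= Im[f(x + i*h)] / h, free of subtractive cancellation,
    # so the step h can be taken far below machine epsilon.
    return np.imag(f(x + 1j * h)) / h

x0 = 1.3
# Hand-derived analytic derivative of the toy objective, for comparison.
analytic = (np.cos(x0) - 0.2 * x0 * np.sin(x0)) * np.exp(-0.1 * x0**2)
cs = complex_step_derivative(objective, x0)
print(abs(cs - analytic))  # agreement to roughly machine precision
```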
4D Subject-Specific Inverse Modeling of the Chick Embryonic Heart Outflow Tract Hemodynamics
Goenezen, Sevan; Chivukula, Venkat Keshav; Midgett, Madeline; Phan, Ly; Rugonyi, Sandra
2015-01-01
Blood flow plays a critical role in regulating embryonic cardiac growth and development, with altered flow leading to congenital heart disease. Progress in the field, however, is hindered by a lack of quantification of hemodynamic conditions in the developing heart. In this study, we present a methodology to quantify blood flow dynamics in the embryonic heart using subject-specific computational fluid dynamics (CFD) models. While the methodology is general, we focused on a model of the chick embryonic heart outflow tract (OFT), which distally connects the heart to the arterial system, and is the region of origin of many congenital cardiac defects. Using structural and Doppler velocity data collected from optical coherence tomography (OCT), we generated 4D (3D + time) embryo-specific CFD models of the heart OFT. To replicate the blood flow dynamics over time during the cardiac cycle, we developed an iterative inverse-method optimization algorithm, which determines the CFD model boundary conditions such that differences between computed and measured velocities at one point within the OFT lumen are minimized. Results from our CFD model agree with previously measured hemodynamics in the OFT. Further, computed and measured velocities differ by less than 15% at locations not used in the optimization, validating the model. The presented methodology can be used to quantify embryonic cardiac hemodynamics under normal and altered blood flow conditions, enabling an in-depth quantitative study of how blood flow influences cardiac development. PMID:26361767
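The iterative inverse-method idea, adjusting a boundary condition until computed velocities match a measured velocity at a probe point, can be sketched with a toy one-parameter forward model standing in for the CFD solve. The model, the measured value, and the secant update below are illustrative assumptions, not the authors' algorithm:

```python
def forward_model(q_inlet):
    # Hypothetical stand-in for a CFD solve: probe velocity as a smooth,
    # monotone function of the inlet flow-rate boundary condition.
    return 0.8 * q_inlet + 0.05 * q_inlet**2

v_measured = 2.1  # Doppler velocity at the probe point (illustrative value)

# Secant iteration on the mismatch r(q) = forward_model(q) - v_measured.
q0, q1 = 0.5, 3.0
for _ in range(50):
    r0, r1 = forward_model(q0) - v_measured, forward_model(q1) - v_measured
    if abs(r1) < 1e-10:
        break
    q0, q1 = q1, q1 - r1 * (q1 - q0) / (r1 - r0)

print(q1, forward_model(q1))  # converged boundary condition and probe velocity
```

In the real method each forward evaluation is a full 4D CFD solve, so a low-iteration-count root/minimum finder of this kind keeps the cost manageable.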
NASA Technical Reports Server (NTRS)
Hartill, W. R.
1977-01-01
A hypersonic wind tunnel test method for obtaining credible aerodynamic data on a complete hypersonic vehicle (generic X-24c) with scramjet exhaust flow simulation is described. The general problems of simulating the scramjet exhaust as well as accounting for scramjet inlet flow and vehicle forces are analyzed, and candidate test methods are described and compared. The method selected as most useful makes use of a thrust-minus-drag flow-through balance with a completely metric model. Inlet flow is diverted by a fairing. The incremental effect of the fairing is determined in the testing of two reference models. The net thrust of the scramjet module is an input to be determined in large-scale module tests with scramjet combustion. Force accounting is described, and examples of force component levels are predicted. Compatibility of the test method with candidate wind tunnel facilities is described, and a preliminary model mechanical arrangement drawing is presented. The balance design and performance requirements are described in a detailed specification. Calibration procedures, model instrumentation, and a test plan for the model are outlined.
Unconventional Liquid Flow in Low-Permeability Media: Theory and Revisiting Darcy's Law
NASA Astrophysics Data System (ADS)
Liu, H. H.; Chen, J.
2017-12-01
About 80% of fracturing fluid remains in shale formations after hydraulic fracturing and the flowback process. It is critical to understand and accurately model the flow of fracturing fluids in a shale formation, because it has important implications for shale gas recovery. Owing to the strong solid-liquid interaction in low-permeability media, Darcy's law is not always adequate for describing the liquid flow process in a shale formation. This non-Darcy flow behavior (characterized by a nonlinear relationship between liquid flux and hydraulic gradient), however, has not been given enough attention in the shale gas community. The current study develops a systematic methodology to address this important issue. We developed a phenomenological model for liquid flow in shale (in which liquid flux is a power function of pressure gradient), an extension of the conventional Darcy's law, and also a methodology to estimate parameters for the phenomenological model from spontaneous imbibition tests. The validity of our new developments is verified by satisfactory comparisons of theoretical results and observations from our group and other research groups. The relative importance of this non-Darcy liquid flow for hydrocarbon production in unconventional reservoirs remains an issue that needs to be further investigated.
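A minimal sketch of the contrast between Darcy's law and a power-law flux model of the kind described; the functional form, reference gradient, and exponent below are illustrative assumptions, not the paper's calibrated model:

```python
def darcy_flux(k, mu, grad_p):
    # Conventional Darcy flux: linear in the pressure gradient.
    return (k / mu) * grad_p

def power_law_flux(k, mu, grad_p, n, g_ref=1.0):
    # Hypothetical power-law extension: flux scales as (grad_p / g_ref)**n.
    # n > 1 mimics strong solid-liquid interaction in tight media;
    # n = 1 recovers Darcy's law exactly.
    return (k / mu) * g_ref * (grad_p / g_ref) ** n

k, mu, n = 1e-19, 1e-3, 1.5   # permeability (m^2), viscosity (Pa*s), exponent
for g in (1e6, 2e6):          # pressure gradients (Pa/m)
    print(darcy_flux(k, mu, g), power_law_flux(k, mu, g, n, g_ref=1e6))
```

Doubling the gradient doubles the Darcy flux but multiplies the power-law flux by 2**n, which is the nonlinearity a spontaneous imbibition test would be used to calibrate.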
Soltani, Maryam; Kerachian, Reza
2018-04-15
In this paper, a new methodology is proposed for the real-time trading of water withdrawal and waste load discharge permits in agricultural areas along rivers. Total Dissolved Solids (TDS) is chosen as an indicator of river water quality, and the TDS load that agricultural water users discharge to the river is controlled by storing a part of the return flows in evaporation ponds. Available surface water withdrawal and waste load discharge permits are determined using a non-linear multi-objective optimization model. Total available permits are then fairly reallocated among agricultural water users, proportional to their arable lands. Water users can trade their water withdrawal and waste load discharge permits simultaneously, in a bilateral, step-by-step framework which takes advantage of differences in their water use efficiencies and agricultural return flow rates. Each trade results in either more benefit or less diverted return flow. The nucleolus solution concept from cooperative game theory is used to redistribute the benefits generated through trades in different time steps. The proposed methodology is applied to the PayePol region in the Karkheh River catchment, southwest Iran. Predicting that 1922.7 million cubic meters (MCM) of annual flow is available to agricultural lands at the beginning of the cultivation year, the real-time optimization model estimates the total annual benefit to reach 46.07 million US Dollars (USD), which requires 6.31 MCM of return flow to be diverted to the evaporation ponds. Fair reallocation of the permits changes these values to 35.38 million USD and 13.69 MCM, respectively. Results illustrate the effectiveness of the proposed methodology in real-time water and waste load allocation and simultaneous trading of permits. Copyright © 2018 Elsevier Ltd. All rights reserved.
High-Order Moving Overlapping Grid Methodology in a Spectral Element Method
NASA Astrophysics Data System (ADS)
Merrill, Brandon E.
A moving overlapping mesh methodology that achieves spectral accuracy in space and up to second-order accuracy in time is developed for the solution of the unsteady incompressible flow equations in three-dimensional domains. The targeted applications are in the aerospace and mechanical engineering domains and involve problems in turbomachinery, rotary aircraft, wind turbines and others. The methodology is built within the dual-session communication framework initially developed for stationary overlapping meshes. It employs semi-implicit spectral element discretization of the equations in each subdomain and explicit treatment of subdomain interfaces with spectrally accurate spatial interpolation and high-order accurate temporal extrapolation, and requires few, if any, iterations, yet maintains the global accuracy and stability of the underlying flow solver. Mesh movement is enabled through the Arbitrary Lagrangian-Eulerian formulation of the governing equations, which allows for prescription of arbitrary velocity values at discrete mesh points. The stationary and moving overlapping mesh methodologies are thoroughly validated using two- and three-dimensional benchmark problems in laminar and turbulent flows. The spatial and temporal global convergence, for both methods, is documented and is in agreement with the nominal order of accuracy of the underlying solver. The stationary overlapping mesh methodology was also validated to assess the influence of long integration times and inflow-outflow global boundary conditions on performance. In a benchmark of fully developed turbulent pipe flow, the turbulence statistics are validated against available data. Moving overlapping mesh simulations are validated on the problems of a two-dimensional oscillating cylinder and a three-dimensional rotating sphere. The aerodynamic forces acting on these moving rigid bodies are determined, and all results are compared with published data.
Scaling tests, with both methodologies, show near-linear strong scaling, even for moderately large processor counts. The moving overlapping mesh methodology is used to investigate the effect of an upstream turbulent wake on a three-dimensional oscillating NACA0012 extruded airfoil. A direct numerical simulation (DNS) at Reynolds number 44,000 is performed for steady inflow incident upon the airfoil oscillating between angles of attack of 5.6° and 25° with reduced frequency k = 0.16. Results are contrasted with a subsequent DNS of the same oscillating airfoil in the turbulent wake generated by a stationary upstream cylinder.
Yang, Zhong-jin; Price, Chrystal D.; Bosco, Gerardo; Tucci, Micheal; El-Badri, Nagwa S.; Mangar, Devanand; Camporesi, Enrico M.
2008-01-01
Background: Cerebral blood flow (CBF) is autoregulated to meet the brain's metabolic requirements. Oxycyte® is a perfluorocarbon emulsion that acts as a highly effective oxygen carrier compared to blood. The aim of this study was to determine the effects of Oxycyte® on regional CBF (rCBF) by evaluating the effects of stepwise isovolemic hemodilution with Oxycyte® on CBF. Methodology: Male rats were intubated and ventilated with 100% O2 under isoflurane anesthesia. Regional (striatum) CBF (rCBF) was measured with a laser Doppler flowmeter (LDF). Stepwise isovolemic hemodilution was performed by withdrawing 4 ml of blood and substituting the same volume of 5% albumin, or 2 ml Oxycyte® plus 2 ml albumin, at 20-minute intervals until the hematocrit (Hct) values reached 5%. Principal Findings: In the albumin-treated group, rCBF progressively increased to approximately twice its baseline level (208±30%) when Hct levels were less than 10%. In the Oxycyte®-treated group, on the other hand, rCBF increased by significantly smaller increments, and this group's mean rCBF was only slightly higher than baseline (118±18%) when Hct levels were less than 10%. Similarly, in the albumin-treated group, rCBF started to increase when hemodilution with albumin caused the CaO2 to decrease below 17.5 ml/dl. Thereafter, the increase in rCBF was accompanied by a nearly proportional decrease in the CaO2 level. In the Oxycyte®-treated group, the increase in rCBF was significantly smaller than in the albumin-treated group when the CaO2 level dropped below 10 ml/dl (142±20% vs. 186±26%), and rCBF returned to almost baseline levels (106±15%) when the CaO2 level was below 7 ml/dl. Conclusions/Significance: Hemodilution with Oxycyte® was accompanied by higher CaO2 and PO2 than in the control group treated with albumin alone. This effect may be partially responsible for maintaining relatively constant CBF and preventing the elevated blood flow observed with albumin. PMID:18431491
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buttner, William J; Hartmann, Kevin S; Schmidt, Kara
Certification of hydrogen sensors to standards often prescribes using large-volume test chambers [1, 2]. However, feedback from stakeholders such as sensor manufacturers and end-users indicates that chamber test methods are often viewed as too slow and expensive for routine assessment. Flow-through test methods are potentially an efficient, cost-effective alternative for sensor performance assessment. A large number of sensors can be tested simultaneously, in series or in parallel, with an appropriate flow-through test fixture. The recent development of sensors with response times of less than 1 s mandates improvements in equipment and methodology to properly capture the performance of this new generation of fast sensors; flow methods are a viable approach for accurate response and recovery time determinations, but there are potential drawbacks. According to ISO 26142 [1], flow-through test methods may not properly simulate ambient applications. In chamber test methods, gas transport to the sensor can be dominated by diffusion, which is viewed by some users as mimicking deployment in rooms and other confined spaces. Alternatively, in flow-through methods, forced flow transports the gas to the sensing element. The advective flow dynamics may induce changes in sensor behaviour relative to the quasi-quiescent conditions that prevail in chamber test methods. One goal of the current activity in the JRC and NREL sensor laboratories [3, 4] is to develop a validated flow-through apparatus and methods for hydrogen sensor performance testing. In addition to minimizing the impact on sensor behaviour induced by differences in flow dynamics, challenges associated with flow-through methods include the ability to control environmental parameters (humidity, pressure and temperature) during the test and changes in the test gas composition induced by chemical reactions with upstream sensors.
Guidelines on flow-through test apparatus design and protocols for the evaluation of hydrogen sensor performance are being developed. Various commercial sensor platforms (e.g., thermal conductivity, catalytic and metal semiconductor) were used to demonstrate the advantages and issues of the flow-through methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, R.G.; Verkman, A.S.
1989-01-24
A quantitative description of transmembrane water transport requires specification of osmotic (Pf) and diffusional (Pd) water permeability coefficients. Methodology has been developed to measure Pf and Pd simultaneously on the basis of the sensitivity and rapid response of the fluorophore aminonaphthalenetrisulfonic acid (ANTS) to solution H2O/D2O content. Cells loaded with ANTS in an H2O buffer were subjected to an inward osmotic gradient with a D2O buffer in a stopped-flow apparatus. The time courses of cell volume (giving Pf) and H2O/D2O content (giving Pd) were recorded with dual photomultiplier detection of scattered light intensity and ANTS fluorescence, respectively. The method was validated by using sealed red cell ghosts and artificial liposomes reconstituted with the pore-forming agent gramicidin D. At 25 °C, red cell ghost Pf was 0.021 cm/s with Pd 0.005 cm/s (H2O/D2O exchange time 7.9 ms). Pf and Pd were inhibited by 90% and 45%, respectively, upon addition of 0.5 mM HgCl2. The activation energy for Pd increased from 5.1 kcal/mol to 10 kcal/mol with addition of HgCl2 (18-35 °C). In 90% phosphatidylcholine (PC)/10% cholesterol liposomes prepared by bath sonication and exclusion chromatography, Pf and Pd were 5.1 × 10⁻⁴ and 6.3 × 10⁻⁴ cm/s, respectively (23 °C). Addition of gramicidin D (0.1 micrograms/mg of PC) resulted in a further increment in Pf and Pd of 7 × 10⁻⁴ and 3 × 10⁻⁴ cm/s, respectively. These results validate the new methodology and demonstrate its utility for rapid determination of Pf/Pd in biological membranes and in liposomes reconstituted with water channels.
Creation of a small high-throughput screening facility.
Flak, Tod
2009-01-01
The creation of a high-throughput screening facility within an organization is a difficult task, requiring a substantial investment of time, money, and organizational effort. Major issues to consider include the selection of equipment, the establishment of data analysis methodologies, and the formation of a group having the necessary competencies. If done properly, it is possible to build a screening system in incremental steps, adding new pieces of equipment and data analysis modules as the need grows. Based upon our experience with the creation of a small screening service, we present some guidelines to consider in planning a screening facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronfman, B. H.
Time-series analysis provides a useful tool in the evaluation of public policy outputs. It is shown that the general Box and Jenkins method, when extended to allow for multiple interventions, enables researchers to examine changes in the drift and level of a series simultaneously, and to select the best-fit model for the series. As applied to urban renewal allocations, the results show significant changes in the level of the series, corresponding to changes in party control of the Executive. No support is found for the "incrementalism" hypothesis, as no significant changes in drift are found.
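The level-versus-drift distinction can be illustrated with a simple interrupted-series regression. This sketch uses ordinary least squares with a step dummy and a step-by-trend interaction rather than the full Box-Jenkins ARIMA machinery, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
t = np.arange(n)
step = (t >= 60).astype(float)                        # intervention at t = 60
series = 10.0 + 4.0 * step + rng.normal(0, 0.5, n)    # level shift, no drift change

# Design: intercept, trend, step (level change), t*step (drift change).
X = np.column_stack([np.ones(n), t, step, t * step])
beta, *_ = np.linalg.lstsq(X, series, rcond=None)
print(beta)  # beta[2] near 4 (level shift); beta[3] near 0 (no drift change)
```

In the study's terms, a significant `beta[2]` with an insignificant `beta[3]` is exactly the pattern reported: the series jumps in level with changes in party control, with no support for a change in drift.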
Fuzzy Q-Learning for Generalization of Reinforcement Learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1996-01-01
Fuzzy Q-Learning, introduced earlier by the author, is an extension of Q-Learning into fuzzy environments. GARIC is a methodology for fuzzy reinforcement learning. In this paper, we introduce GARIC-Q, a new method for performing incremental dynamic programming using a society of intelligent agents which are controlled at the top level by Fuzzy Q-Learning, while at the local level each agent learns and operates based on GARIC. GARIC-Q improves the speed and applicability of Fuzzy Q-Learning through generalization of the input space using fuzzy rules, and bridges the gap between Q-Learning and rule-based intelligent systems.
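A minimal, hypothetical sketch of the core fuzzy Q-Learning idea: Q-values are attached to fuzzy rules and updated in proportion to rule activation, so that learning generalizes over a continuous state. This is an illustration of the general technique, not the GARIC-Q implementation:

```python
import numpy as np

# Triangular fuzzy sets partitioning a one-dimensional state space.
centers = np.array([0.0, 0.5, 1.0])

def memberships(x, width=0.5):
    # Degree to which each fuzzy rule fires for state x.
    return np.clip(1.0 - np.abs(x - centers) / width, 0.0, None)

n_actions = 2
q = np.zeros((len(centers), n_actions))  # one Q-row per fuzzy rule
alpha, gamma = 0.2, 0.9                  # learning rate, discount factor

def fuzzy_q(x):
    # Rule-weighted Q-values: fuzzy sets generalize across the continuous state.
    w = memberships(x)
    return w @ q / w.sum()

# A single illustrative update: reward 1.0 for action 1 taken near x = 0.45.
x, a, r, x_next = 0.45, 1, 1.0, 0.5
w = memberships(x)
w = w / w.sum()
td_error = r + gamma * fuzzy_q(x_next).max() - fuzzy_q(x)[a]
q[:, a] += alpha * w * td_error          # each rule learns in proportion to its firing
print(fuzzy_q(0.45))                     # action 1 value now positive near x = 0.45
```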
Methodology of Computer-Aided Design of Variable Guide Vanes of Aircraft Engines
ERIC Educational Resources Information Center
Falaleev, Sergei V.; Melentjev, Vladimir S.; Gvozdev, Alexander S.
2016-01-01
The paper presents a methodology which helps to avoid a great deal of costly experimental research. This methodology includes the thermo-gas-dynamic design of an engine and its mounts, the profiling of the compressor flow path, and the cascade design of guide vanes. Employing a method elaborated by Howell, we provide a theoretical solution to the task of…
Modeling for free surface flow with phase change and its application to fusion technology
NASA Astrophysics Data System (ADS)
Luo, Xiaoyong
The development of predictive capabilities for free surface flow with phase change is essential to evaluate liquid wall protection schemes for various fusion chambers. With inertial fusion energy (IFE) concepts such as HYLIFE-II, rapid condensation onto cold liquid surfaces is required when using liquid curtains to protect reactor walls from blasts and intense neutron radiation. With magnetic fusion energy (MFE) concepts, droplets are injected onto the free surface of the liquid to minimize evaporation by minimizing the surface temperature. This dissertation presents a numerical methodology for free surface flow with phase change to help resolve feasibility issues encountered in the aforementioned fusion engineering fields, especially spray droplet condensation efficiency in IFE and droplet heat transfer enhancement on free-surface liquid divertors in MFE. The numerical methodology is developed within the framework of incompressible flow with a phase change model. A new second-order projection method is presented in conjunction with approximate-factorization (AF) techniques for the incompressible Navier-Stokes equations. A sub-cell concept is introduced, and the Ghost Fluid Method is extended in a modified mass transfer model to accurately calculate the mass transfer across the interface. The Crank-Nicolson method is used for the diffusion term to eliminate the viscous stability restriction. A third-order ENO scheme is used for the convective term to guarantee the accuracy of the method. The level set method is used to accurately capture the free surface of the flow and the deformation of the droplets. This numerical investigation identifies the physics characterizing transient heat and mass transfer of the droplets and the free surface flow. The results show that the numerical methodology is quite successful in modeling the free surface with phase change, even when severe deformations such as breaking and merging occur.
The versatility of the numerical methodology shows that the work can easily handle complex physical conditions that occur in the fusion science and engineering.
NASA Technical Reports Server (NTRS)
Tan, Choon-Sooi; Suder, Kenneth (Technical Monitor)
2003-01-01
A framework for an effective computational methodology for characterizing the stability and the impact of distortion in high-speed multi-stage compressors is being developed. The methodology consists of using a few isolated-blade-row Navier-Stokes solutions for each blade row to construct a body force database. The purpose of the body force database is to replace each blade row in a multi-stage compressor with a body force distribution that produces the same pressure rise and flow turning. To do this, each body force database is generated in such a way that it can respond to changes in local flow conditions. Once the database is generated, no further Navier-Stokes computations are necessary. The process is repeated for every blade row in the multi-stage compressor. The body forces are then embedded as source terms in an Euler solver. The method is designed to compute the performance in a flow that has radial as well as circumferential non-uniformity with a length scale larger than a blade pitch; thus it can potentially be used to characterize the stability of a compressor under design. It is these two latter features, as well as the accompanying procedure for obtaining the body force representation, that distinguish the present methodology from the streamline curvature method. The overall computational procedures have been developed. A dimensional analysis was carried out to determine the local flow conditions for parameterizing the magnitudes of the local body force representation of blade rows. An Euler solver was modified to embed the body forces as source terms. The results of the dimensional analysis show that the body forces can be parameterized in terms of the two relative flow angles, the relative Mach number, and the Reynolds number. For flow in a high-speed transonic blade row, they can be parameterized in terms of the local relative Mach number alone.
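The table-lookup character of the body force database can be sketched as follows. Per the dimensional analysis above, a transonic blade row can be parameterized by the local relative Mach number alone, so this illustration uses a one-dimensional table; all values are hypothetical, not from any actual blade-row solution:

```python
import numpy as np

# Illustrative database: local relative Mach number -> body-force magnitude
# (per unit volume), tabulated from a few hypothetical single-blade-row solutions.
mach_table  = np.array([0.3, 0.5, 0.7, 0.9, 1.1])
force_table = np.array([120.0, 210.0, 340.0, 520.0, 610.0])

def body_force(local_mach):
    # Source-term magnitude for the Euler solver, interpolated from the table
    # instead of re-running a Navier-Stokes blade-row computation.
    return np.interp(local_mach, mach_table, force_table)

print(body_force(0.6))  # midway between the 0.5 and 0.7 entries, ~275
```

In the full method the table would be multi-dimensional (flow angles, Mach, Reynolds number), but the principle is the same: the expensive solver is queried once to build the database, and the Euler solver then reads body forces from it at each local flow condition.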
Sabharwal, Sanjeeve; Carter, Alexander; Darzi, Lord Ara; Reilly, Peter; Gupte, Chinmay M
2015-06-01
Approximately 76,000 people a year sustain a hip fracture in the UK, and the estimated cost to the NHS is £1.4 billion a year. Health economic evaluations (HEEs) are one of the methods employed by decision makers to deliver healthcare policy supported by clinical and economic evidence. The objective of this study was to (1) identify and characterize HEEs for the management of patients with hip fractures, and (2) examine their methodological quality. A literature search was performed in MEDLINE, EMBASE and the NHS Economic Evaluation Database. Studies that met the specified definition of a HEE and evaluated hip fracture management were included. Methodological quality was assessed using the Consensus on Health Economic Criteria (CHEC). Twenty-seven publications met the inclusion criteria and were included in our descriptive and methodological analysis. Methodological domains that performed poorly included use of an appropriate time horizon (66.7% of studies), incremental analysis of costs and outcomes (63%), future discounting (44.4%), sensitivity analysis (40.7%), declaration of conflicts of interest (37%) and discussion of ethical considerations (29.6%). Publication of HEEs for patients with hip fractures has increased in recent years. Most of these studies fail to adopt a societal perspective, and key aspects of their methodology are poor. Future HEEs in this field must adhere to established principles of methodology, so that better quality research can inform health policy on the management of patients with a hip fracture. Copyright © 2014 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.
The design of a wind tunnel VSTOL fighter model incorporating turbine powered engine simulators
NASA Technical Reports Server (NTRS)
Bailey, R. O.; Maraz, M. R.; Hiley, P. E.
1981-01-01
A wind-tunnel model of a supersonic VSTOL fighter aircraft configuration has been developed for use in the evaluation of airframe-propulsion system aerodynamic interactions. The model may be employed with conventional test techniques, where configuration aerodynamics are measured in a flow-through mode and incremental nozzle-airframe interactions are measured in a jet-effects mode, and with the Compact Multimission Aircraft Propulsion Simulator which is capable of the simultaneous simulation of inlet and exhaust nozzle flow fields so as to allow the evaluation of the extent of inlet and nozzle flow field coupling. The basic configuration of the twin-engine model has a geometrically close-coupled canard and wing, and a moderately short nacelle with nonaxisymmetric vectorable exhaust nozzles near the wing trailing edge, and may be converted to a canardless configuration with an extremely short nacelle. Testing is planned to begin in the summer of 1982.
CFD Analysis of nanofluid forced convection heat transport in laminar flow through a compact pipe
NASA Astrophysics Data System (ADS)
Yu, Kitae; Park, Cheol; Kim, Sedon; Song, Heegun; Jeong, Hyomin
2017-08-01
In the present paper, developing laminar forced convection flows were numerically investigated using a water-Al2O3 nanofluid through a circular compact pipe of 4.5 mm diameter. Each model has a steady state and uniform heat flux (UHF) at the wall. All numerical experiments were conducted at Re = 1050, and the nanofluid models were defined by the alumina volume fraction. Single-phase fluid models were defined through calculations of the nanofluid's physical and thermal properties; the two-phase (mixture granular) model used a 100 nm particle diameter. The results show that the Nusselt number and heat transfer rate improve as the Al2O3 volume fraction increases. All numerical flow simulations were performed with FLUENT.
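The single-phase property calculations the abstract mentions are typically done with classical mixture rules. The sketch below assumes the standard mixture rule for density, the volumetric rule for heat capacity, the Maxwell model for thermal conductivity, and the Brinkman model for viscosity; the paper's exact correlations are not stated in the abstract, so these choices are assumptions:

```python
def nanofluid_properties(phi, base, particle):
    """Effective properties of a dilute nanofluid from classical
    mixture rules.  base/particle: dicts with rho [kg/m3],
    cp [J/(kg K)], k [W/(m K)]; base also needs mu [Pa s].
    phi: particle volume fraction."""
    rho = phi * particle["rho"] + (1 - phi) * base["rho"]
    # volumetric heat-capacity mixture rule
    cp = (phi * particle["rho"] * particle["cp"]
          + (1 - phi) * base["rho"] * base["cp"]) / rho
    # Maxwell model for a dilute suspension of spheres
    kp, kb = particle["k"], base["k"]
    k = kb * (kp + 2 * kb + 2 * phi * (kp - kb)) / (kp + 2 * kb - phi * (kp - kb))
    # Brinkman viscosity model
    mu = base["mu"] / (1 - phi) ** 2.5
    return {"rho": rho, "cp": cp, "k": k, "mu": mu}

water = {"rho": 998.2, "cp": 4182.0, "k": 0.613, "mu": 1.003e-3}
al2o3 = {"rho": 3970.0, "cp": 765.0, "k": 40.0}
props = nanofluid_properties(0.02, water, al2o3)  # 2 vol% alumina
```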
Flow Regime Based Climatologies of Lightning Probabilities for Spaceports and Airports
NASA Technical Reports Server (NTRS)
Bauman, William H., III; Sharp, David; Spratt, Scott; Lafosse, Richard A.
2008-01-01
The objective of this work was to provide forecasters with a tool to indicate the warm season climatological probability of one or more lightning strikes within a circle at a site within a specified time interval. This paper described the AMU work conducted in developing flow regime based climatologies of lightning probabilities for the SLF and seven airports in the NWS MLB CWA in east-central Florida. The paper also described the GUI developed by the AMU that is used to display the data for the operational forecasters. There were challenges working with gridded lightning data as well as the code that accompanied the gridded data. The AMU modified the provided code to be able to produce the climatologies of lightning probabilities based on eight flow regimes for 5-, 10-, 20-, and 30-n mi circles centered on eight sites in 1-, 3-, and 6-hour increments.
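A flow-regime-based lightning climatology of this kind reduces to stratified relative frequencies: for each regime, the fraction of intervals with at least one strike inside the circle. A minimal sketch; the data layout is hypothetical, and the AMU's actual gridded-data processing is far more involved:

```python
from collections import defaultdict

def lightning_climatology(records):
    """records: iterable of (flow_regime, struck) pairs, one per
    time interval at a site, where struck is True if >= 1 strike
    occurred inside the circle during that interval.  Returns the
    empirical probability of >= 1 strike, stratified by regime."""
    hits = defaultdict(int)
    total = defaultdict(int)
    for regime, struck in records:
        total[regime] += 1
        hits[regime] += bool(struck)
    return {regime: hits[regime] / total[regime] for regime in total}

# Hypothetical daily records for two flow regimes
records = [("SW", True), ("SW", False), ("SW", True),
           ("NE", False), ("NE", True)]
probs = lightning_climatology(records)
```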
Compressible turbulent mixing: Effects of Schmidt number.
Ni, Qionglin
2015-05-01
We investigated by numerical simulations the effects of Schmidt number on passive scalar transport in forced compressible turbulence. The range of Schmidt number (Sc) was 1/25∼25. In the inertial-convective range the scalar spectrum seemed to obey the k^(-5/3) power law. For Sc≫1, a k^(-1) power law appeared in the viscous-convective range, while for Sc≪1, a k^(-17/3) power law was identified in the inertial-diffusive range. The scaling constant computed from the mixed third-order structure function of the velocity-scalar increment grew with Sc, and the effect of compressibility made it smaller than the 4/3 value for incompressible turbulence. At small amplitudes, the probability distribution function (PDF) of scalar fluctuations collapsed to the Gaussian distribution whereas, at large amplitudes, it decayed more quickly than a Gaussian. At large scales, the PDF of the scalar increment behaved similarly to that of the scalar fluctuation; in contrast, at small scales it resembled the PDF of the scalar gradient. Furthermore, the scalar dissipation occurring at large magnitudes was found to grow with Sc. Due to low molecular diffusivity, in the Sc≫1 flow the scalar field rolled up and mixed thoroughly, whereas in the Sc≪1 flow high molecular diffusivity destroyed the small-scale structures, leaving only the large-scale, cloudlike structures. Spectral analysis found that the spectral densities of scalar advection and dissipation in both Sc≫1 and Sc≪1 flows probably followed the k^(-5/3) scaling, indicating that in compressible turbulence the processes of advection and dissipation, except that of scalar-dilatation coupling, might conform to the Kolmogorov picture. It was also shown that at high wave numbers, the magnitudes of spectral coherency in both Sc≫1 and Sc≪1 flows decayed faster than the theoretical prediction of k^(-2/3) for incompressible flows.
Finally, the comparison with incompressible results showed that the scalar in compressible turbulence with Sc=1 lacked a conspicuous bump structure in its spectrum, but was more intermittent in the dissipative range.
Flow Separation Control on A Full-Scale Vertical Tail Model Using Sweeping Jet Actuators
NASA Technical Reports Server (NTRS)
Andino, Marlyn Y.; Lin, John C.; Washburn, Anthony E.; Whalen, Edward A.; Graff, Emilio C.; Wygnanski, Israel J.
2015-01-01
This paper describes test results of a joint NASA/Boeing research effort to advance Active Flow Control (AFC) technology to enhance aerodynamic efficiency. A full-scale Boeing 757 vertical tail model equipped with sweeping jet AFC was tested at the National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel at NASA Ames Research Center. The flow separation control optimization was performed at 100 knots, a maximum rudder deflection of 30 deg, and sideslip angles of 0 deg and -7.5 deg. Greater than 20% increments in side force were achieved at the two sideslip angles with a 31-actuator AFC configuration. Flow physics and flow separation control associated with the AFC are presented in detail. AFC caused significant increases in suction pressure on the actuator side and an associated side force enhancement. The momentum coefficient (C_mu) is shown to be a useful parameter for scaling up sweeping jet AFC from sub-scale tests to full-scale applications. Reducing the number of actuators at a constant total C_mu of approximately 0.5% and tripling the actuator spacing did not significantly affect the flow separation control effectiveness.
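The momentum coefficient used for scale-up has a standard definition: jet momentum flux normalized by freestream dynamic pressure times a reference area. A sketch with illustrative numbers only (none taken from the test):

```python
def momentum_coefficient(m_dot, v_jet, rho_inf, v_inf, s_ref):
    """C_mu = (m_dot * V_jet) / (q_inf * S_ref), where
    q_inf = 0.5 * rho_inf * V_inf**2 is the freestream dynamic
    pressure.  Standard AFC definition; inputs are illustrative."""
    q_inf = 0.5 * rho_inf * v_inf ** 2
    return (m_dot * v_jet) / (q_inf * s_ref)

# Hypothetical values: 1 kg/s jet at 200 m/s, 50 m/s freestream,
# unit density, 10 m^2 reference area.
c_mu = momentum_coefficient(1.0, 200.0, 1.0, 50.0, 10.0)
```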
Particle Size Effects on Flow Properties of PS304 Plasma Spray Feedstock Powder Blend
NASA Technical Reports Server (NTRS)
Stanford, Malcolm K.; DellaCorte, Christopher; Eylon, Daniel
2002-01-01
The effects of BaF2-CaF2 particle size and size distribution on PS304 feedstock powder flowability have been investigated. Angular BaF2-CaF2 eutectic powders were produced by comminution and classified by screening to obtain 38 to 45 microns, 45 to 106 microns, 63 to 106 microns, 45 to 53 microns, 63 to 75 microns, and 90 to 106 microns particle size distributions. The fluorides were added incrementally from 0 to 10 wt% to the other powder constituents of the PS304 feedstock: nichrome, chromia, and silver powders. The flow rate of the powder blends decreased linearly with increasing fluoride concentration. Flow was degraded by decreasing BaF2-CaF2 particle size and by broadening of the BaF2-CaF2 particle size distribution. A semiempirical relationship is offered to describe the PS304 powder blend flow behavior. The Hausner ratio confirmed the funnel flow test results, but was slightly less sensitive to differences in BaF2-CaF2 particle size and size distribution. These findings may have applicability to other powders that do not flow easily, such as ceramic powders.
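The Hausner ratio mentioned above is simply the ratio of tapped to freely settled (bulk) density; a sketch follows. The 1.25 threshold in the comment is a common rule of thumb from powder technology, not a value from this paper:

```python
def hausner_ratio(bulk_density, tapped_density):
    """Hausner ratio = tapped density / bulk (freely settled)
    density.  Values near 1.0 indicate free flow; ratios above
    roughly 1.25 are commonly read as poor flowability."""
    return tapped_density / bulk_density

# Hypothetical densities in g/cm^3 for a powder blend
hr = hausner_ratio(bulk_density=0.5, tapped_density=0.6)
```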
NASA Technical Reports Server (NTRS)
Rued, Klaus
1987-01-01
The requirements for fundamental experimental studies of the influence of free stream turbulence, pressure gradients and wall cooling are discussed. Under turbine-like free stream conditions, comprehensive tests of transitional boundary layers with laminar, reversing and turbulent flow increments were performed to decouple the effects of the parameters and to determine the effects during mutual interaction.
40 CFR Appendix III to Part 86 - Constant Volume Sampler Flow Calibration
Code of Federal Regulations, 2014 CFR
2014-07-01
... ETI °F ±.1 °F. Pressure depression upstream of LFE EPI ″H2O ±.1 ″H2O. Pressure drop across the LFE matrix EDP ″H2O ±.005 ″H2O. Air temperature at CVS pump inlet PTI °F ±.5 °F. Pressure depression at CVS... condition in an increment of pump inlet depression (about 4″ H2O) that will yield a minimum of six data...
40 CFR Appendix III to Part 86 - Constant Volume Sampler Flow Calibration
Code of Federal Regulations, 2010 CFR
2010-07-01
... ETI °F ±.1 °F. Pressure depression upstream of LFE EPI ″H2O ±.1 ″H2O. Pressure drop across the LFE matrix EDP ″H2O ±.005 ″H2O. Air temperature at CVS pump inlet PTI °F ±.5 °F. Pressure depression at CVS... condition in an increment of pump inlet depression (about 4″ H2O) that will yield a minimum of six data...
40 CFR Appendix III to Part 86 - Constant Volume Sampler Flow Calibration
Code of Federal Regulations, 2013 CFR
2013-07-01
... ETI °F ±.1 °F. Pressure depression upstream of LFE EPI ″H2O ±.1 ″H2O. Pressure drop across the LFE matrix EDP ″H2O ±.005 ″H2O. Air temperature at CVS pump inlet PTI °F ±.5 °F. Pressure depression at CVS... condition in an increment of pump inlet depression (about 4″ H2O) that will yield a minimum of six data...
40 CFR Appendix III to Part 86 - Constant Volume Sampler Flow Calibration
Code of Federal Regulations, 2011 CFR
2011-07-01
... ETI °F ±.1 °F. Pressure depression upstream of LFE EPI ″H2O ±.1 ″H2O. Pressure drop across the LFE matrix EDP ″H2O ±.005 ″H2O. Air temperature at CVS pump inlet PTI °F ±.5 °F. Pressure depression at CVS... condition in an increment of pump inlet depression (about 4″ H2O) that will yield a minimum of six data...
40 CFR Appendix III to Part 86 - Constant Volume Sampler Flow Calibration
Code of Federal Regulations, 2012 CFR
2012-07-01
... ETI °F ±.1 °F. Pressure depression upstream of LFE EPI ″H2O ±.1 ″H2O. Pressure drop across the LFE matrix EDP ″H2O ±.005 ″H2O. Air temperature at CVS pump inlet PTI °F ±.5 °F. Pressure depression at CVS... condition in an increment of pump inlet depression (about 4″ H2O) that will yield a minimum of six data...
Subsonic Aerodynamic Characteristics of a Circular Body Earth-to-Orbit Vehicle
NASA Technical Reports Server (NTRS)
Lepsch, Roger A., Jr.; Ware, George M.; MacConochie, Ian O.
1996-01-01
A test of a generic reusable earth-to-orbit transport was conducted in the 7- by 10-Foot High-Speed Tunnel at the Langley Research Center at Mach number 0.3. The model had a body with a circular cross section and a thick clipped delta wing as the major lifting surface. For directional control, three different vertical fin arrangements were investigated: a conventional aft-mounted center vertical fin, wingtip fins, and a nose-mounted vertical fin. The configuration was longitudinally stable about the estimated center-of-gravity position of 0.72 body length and had sufficient pitch-control authority for stable trim over a wide range of angle of attack, regardless of fin arrangement. The maximum trimmed lift/drag ratio for the aft center-fin configuration was less than 5, whereas the other configurations had values above 6. The aft center-fin configuration was directionally stable for all angles of attack tested. The wingtip and nose fins were not intended to produce directional stability but to be active controllers for artificial stabilization. Small rolling-moment values resulted from yaw control of the nose fin. Large adverse rolling-moment increments resulted from tip-fin controller deflection above 13 deg angle of attack. Flow visualization indicated that the adverse rolling-moment increments were probably caused by the influence of the deflected tip-fin controller on wing flow separation.
Mininni, P D; Alexakis, A; Pouquet, A
2008-03-01
We analyze the data stemming from a forced incompressible hydrodynamic simulation on a grid of 2048^3 regularly spaced points, with a Taylor Reynolds number of R_λ ~ 1300. The forcing is given by the Taylor-Green vortex, which shares similarities with the von Kármán flow used in several laboratory experiments; the computation is run for ten turnover times in the turbulent steady state. At this Reynolds number the anisotropic large scale flow pattern, the inertial range, the bottleneck, and the dissipative range are clearly visible, thus providing a good test case for the study of turbulence as it appears in nature. Triadic interactions, the locality of energy fluxes, and longitudinal structure functions of the velocity increments are computed. A comparison with runs at lower Reynolds numbers is performed and shows the emergence of scaling laws for the relative amplitude of local and nonlocal interactions in spectral space. Furthermore, the scaling of the Kolmogorov constant, and of skewness and flatness of velocity increments is consistent with previous experimental results. The accumulation of energy in the small scales associated with the bottleneck seems to occur on a span of wave numbers that is independent of the Reynolds number, possibly ruling out an inertial range explanation for it. Finally, intermittency exponents seem to depart from standard models at high R_λ, leaving the interpretation of intermittency an open problem.
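Longitudinal structure functions of velocity increments, and the skewness and flatness derived from them, can be sketched for a 1-D velocity sample as follows. This is a toy version of the diagnostics named in the abstract; the actual analysis operates on 3-D fields:

```python
import numpy as np

def structure_function(u, p, r):
    """Structure function of order p at separation r (in grid
    points) for a 1-D velocity sample u, with periodic wrap."""
    du = np.roll(u, -r) - u          # velocity increments
    return np.mean(du ** p)

def skewness_flatness(u, r):
    """Skewness S3/S2^1.5 and flatness S4/S2^2 of increments."""
    s2 = structure_function(u, 2, r)
    return (structure_function(u, 3, r) / s2 ** 1.5,
            structure_function(u, 4, r) / s2 ** 2)

# For Gaussian noise the increments are Gaussian, so skewness ≈ 0
# and flatness ≈ 3; turbulent fields deviate from these values.
rng = np.random.default_rng(0)
skew, flat = skewness_flatness(rng.standard_normal(200_000), 4)
```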
Algorithm Optimally Orders Forward-Chaining Inference Rules
NASA Technical Reports Server (NTRS)
James, Mark
2008-01-01
People typically develop knowledge bases in a somewhat ad hoc manner, incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules, since they are so often order sensitive. This is relevant to tasks like the Deep Space Network in that it allows the knowledge base to be developed incrementally and then automatically ordered for efficiency. Although data-flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach to exhaustively computing data-flow information cannot be applied directly to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base, optimally ordering a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, resulting in significantly faster execution. The algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, where it produced a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a completely unordered knowledge base, the improvement is much greater.
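The producer/consumer ordering described can be approximated with an ordinary topological sort: a rule that produces a fact should precede every rule that consumes it. A minimal sketch under that assumption; SHINE's actual algorithm also handles cycles and the ubiquitous execution of rules, which this ignores:

```python
from graphlib import TopologicalSorter

def order_rules(rules):
    """rules: mapping name -> (antecedents, consequents), where
    antecedents are the facts a rule consumes and consequents the
    facts it produces.  Returns an ordering in which producers
    precede their consumers (acyclic rule sets only)."""
    producers = {}
    for name, (_, outs) in rules.items():
        for fact in outs:
            producers.setdefault(fact, set()).add(name)
    deps = {
        name: {p for fact in ins for p in producers.get(fact, ())}
        for name, (ins, _) in rules.items()
    }
    return list(TopologicalSorter(deps).static_order())

# Hypothetical rules: seed produces x, infer_b turns x into y,
# infer_c turns y into z.
rules = {
    "infer_c": ({"y"}, {"z"}),
    "seed": (set(), {"x"}),
    "infer_b": ({"x"}, {"y"}),
}
order = order_rules(rules)
```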
A New TCP Congestion Control Supporting RTT-Fairness
NASA Astrophysics Data System (ADS)
Ogura, Kazumine; Nemoto, Yohei; Su, Zhou; Katto, Jiro
This paper focuses on RTT-fairness of multiple TCP flows over the Internet, and proposes a new TCP congestion control named “HRF (Hybrid RTT-Fair)-TCP”. Today, it is a serious problem that flows with smaller RTTs utilize more bandwidth than others when multiple flows with different RTT values compete in the same network. This means that a user with a longer RTT may not be able to obtain sufficient bandwidth under current methods. This RTT-fairness issue has been discussed in many TCP papers. An example is the CR (Constant Rate) algorithm, which achieves RTT-fairness by multiplying TCP-Reno's window increment by the square of the RTT. However, the method halves its window size on a detected packet loss, just as TCP-Reno does, which degrades its efficiency in certain networks. On the other hand, recently proposed TCP versions essentially require throughput efficiency and TCP-friendliness with TCP-Reno. Therefore, we try to keep these advantages in our TCP design in addition to RTT-fairness. In this paper, we build intuitive analytical models in which we separate resource utilization into two cases: utilization of the bottleneck link capacity, and utilization of the buffer space at the bottleneck link router. These models take into account three characteristic algorithms (Reno, Constant Rate, Constant Increase) in the window increment phase, where a sender receives an acknowledgement successfully. Their validity is proved by both simulations and implementations. From these analyses, we propose HRF-TCP, which switches between two modes according to observed RTT values and achieves RTT fairness. Experiments are carried out to validate the proposed method. Finally, HRF-TCP outperforms conventional methods in RTT-fairness, efficiency and friendliness with TCP-Reno.
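The contrast between Reno's and CR's window increment phases can be sketched directly from the description above (per-ACK increments in segments; the tuning constant c is an assumption of this sketch, not a value from the paper):

```python
def reno_increment(cwnd):
    """TCP-Reno congestion avoidance: +1 segment per RTT,
    i.e. 1/cwnd per received ACK."""
    return 1.0 / cwnd

def constant_rate_increment(cwnd, rtt, c=1.0):
    """CR algorithm as described in the abstract: the Reno
    increment is multiplied by RTT^2, so flows with different
    RTTs grow their sending rates at the same pace (c is a
    free tuning constant)."""
    return c * rtt ** 2 / cwnd

# A 200 ms flow gets a 16x larger per-ACK increment than a 50 ms
# flow, compensating for receiving ACKs 4x less often per second.
ratio = constant_rate_increment(10, 0.2) / constant_rate_increment(10, 0.05)
```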
Werner, Kent; Bosson, Emma; Berglund, Sten
2006-12-01
Safety assessment related to the siting of a geological repository for spent nuclear fuel deep in the bedrock requires identification of potential flow paths and the associated travel times for radionuclides originating at repository depth. Using the Laxemar candidate site in Sweden as a case study, this paper describes modeling methodology, data integration, and the resulting water flow models, focusing on the Quaternary deposits and the upper 150 m of the bedrock. Example simulations identify flow paths to groundwater discharge areas and flow paths in the surface system. The majority of the simulated groundwater flow paths end up in the main surface waters and along the coastline, even though the particles used to trace the flow paths are introduced with a uniform spatial distribution at a relatively shallow depth. The calculated groundwater travel time, determining the time available for decay and retention of radionuclides, is on average longer to the coastal bays than to other biosphere objects at the site. Further, it is demonstrated how GIS-based modeling can be used to limit the number of surface flow paths that need to be characterized for safety assessment. Based on the results, the paper discusses an approach for coupling the present models to a model for groundwater flow in the deep bedrock.
Building the Material Flow Networks of Aluminum in the 2007 U.S. Economy.
Chen, Wei-Qiang; Graedel, T E; Nuss, Philip; Ohno, Hajime
2016-04-05
Based on the combination of the U.S. economic input-output table and the stocks and flows framework for characterizing anthropogenic metal cycles, this study presents a methodology for building material flow networks of bulk metals in the U.S. economy and applies it to aluminum. The results, which we term the Input-Output Material Flow Networks (IO-MFNs), achieve a complete picture of aluminum flow in the entire U.S. economy and for any chosen industrial sector (illustrated for the Automobile Manufacturing sector). The results are compared with information from our former study on U.S. aluminum stocks and flows to demonstrate the robustness and value of this new methodology. We find that the IO-MFN approach has the following advantages: (1) it helps to uncover the network of material flows in the manufacturing stage in the life cycle of metals; (2) it provides a method that may be less time-consuming but more complete and accurate in estimating new scrap generation, process loss, domestic final demand, and trade of final products of metals, than existing material flow analysis approaches; and, most importantly, (3) it enables the analysis of the material flows of metals in the U.S. economy from a network perspective, rather than merely that of a life cycle chain.
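The IO-MFN construction builds on the standard Leontief input-output step: gross outputs x solve x = Ax + d, and the interindustry flow from sector i to sector j is A[i,j]·x[j]. A minimal sketch with a hypothetical two-sector economy (the paper's actual model couples the full U.S. IO table with a stocks-and-flows framework):

```python
import numpy as np

def total_requirements(A, final_demand):
    """Leontief quantity model: A is the direct-requirements
    matrix (input per unit of output), final_demand the vector d.
    Returns gross outputs x solving (I - A) x = d."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A, final_demand)

def interindustry_flows(A, x):
    """Flow from sector i to sector j = A[i, j] * x[j]."""
    return A * x  # broadcasting multiplies each column j by x[j]

# Hypothetical 2-sector economy
A = np.array([[0.1, 0.2],
              [0.3, 0.0]])
d = np.array([10.0, 5.0])
x = total_requirements(A, d)
F = interindustry_flows(A, x)
```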
Mittal, R.; Dong, H.; Bozkurttas, M.; Najjar, F.M.; Vargas, A.; von Loebbecke, A.
2010-01-01
A sharp interface immersed boundary method for simulating incompressible viscous flow past three-dimensional immersed bodies is described. The method employs a multi-dimensional ghost-cell methodology to satisfy the boundary conditions on the immersed boundary and the method is designed to handle highly complex three-dimensional, stationary, moving and/or deforming bodies. The complex immersed surfaces are represented by grids consisting of unstructured triangular elements; while the flow is computed on non-uniform Cartesian grids. The paper describes the salient features of the methodology with special emphasis on the immersed boundary treatment for stationary and moving boundaries. Simulations of a number of canonical two- and three-dimensional flows are used to verify the accuracy and fidelity of the solver over a range of Reynolds numbers. Flow past suddenly accelerated bodies are used to validate the solver for moving boundary problems. Finally two cases inspired from biology with highly complex three-dimensional bodies are simulated in order to demonstrate the versatility of the method. PMID:20216919
A lava flow simulation model for the development of volcanic hazard maps for Mount Etna (Italy)
NASA Astrophysics Data System (ADS)
Damiani, M. L.; Groppelli, G.; Norini, G.; Bertino, E.; Gigliuto, A.; Nucita, A.
2006-05-01
Volcanic hazard assessment is of paramount importance for safeguarding the resources exposed to volcanic hazards. In this paper we present ELFM, a lava flow simulation model for the evaluation of lava flow hazard on Mount Etna (Sicily, Italy), the most important active volcano in Europe. The major contributions of the paper are: (a) a detailed specification of the lava flow simulation model and of an algorithm implementing it; (b) the definition of a methodological framework for applying the model to the specific volcano. Concerning the former, we propose an extended version of an existing stochastic model that has so far been applied only to the assessment of volcanic hazard on Lanzarote and Tenerife (Canary Islands). Concerning the methodological framework, we argue that model validation is needed to assess the effectiveness of the lava flow simulation model. To that end, a strategy has been devised for the generation of simulation experiments and the evaluation of their outcomes.
Process optimization of an auger pyrolyzer with heat carrier using response surface methodology.
Brown, J N; Brown, R C
2012-01-01
A 1 kg/h auger reactor utilizing mechanical mixing of a steel shot heat carrier was used to pyrolyze red oak wood biomass. Response surface methodology was employed, using a circumscribed central composite design of experiments, to optimize the system. Factors investigated were: heat carrier inlet temperature and mass flow rate, rotational speed of the screws in the reactor, and volumetric flow rate of sweep gas. Conditions for maximum bio-oil and minimum char yields were a high flow rate of sweep gas (3.5 standard L/min), high heat carrier temperature (∼600 °C), high auger speed (63 RPM) and high heat carrier mass flow rate (18 kg/h). Regression models for bio-oil and char yields are described, including the identification of a novel interaction effect between heat carrier mass flow rate and auger speed. Results suggest that auger reactors, which are rarely described in the literature, are well suited for bio-oil production. The reactor achieved liquid yields greater than 73 wt.%. Copyright © 2011 Elsevier Ltd. All rights reserved.
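Response surface methodology fits a second-order polynomial over the coded factors of the central composite design and then locates the optimum on that surface. A sketch for two factors; the paper used four factors, and the design points below are the standard two-factor circumscribed CCD, not the paper's design:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Second-order response surface in two coded factors:
    y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2.
    Returns the least-squares coefficient vector."""
    x1, x2 = X[:, 0], X[:, 1]
    D = np.column_stack([np.ones_like(x1), x1, x2,
                         x1 ** 2, x2 ** 2, x1 * x2])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return coef

# Circumscribed CCD for 2 factors: factorial, axial (alpha=sqrt 2),
# and center points.
a = 2 ** 0.5
ccd = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1],
                [a, 0], [-a, 0], [0, a], [0, -a], [0, 0]])
```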
Spatial modeling of potential woody biomass flow
Woodam Chung; Nathaniel Anderson
2012-01-01
The flow of woody biomass to end users is determined by economic factors, especially the amount available across a landscape and the delivery costs to bioenergy facilities. The objective of this study was to develop a methodology to quantify landscape-level stocks and potential biomass flows using currently available spatial databases and a road network analysis tool. We applied this...
Navier-Stokes simulations of slender axisymmetric shapes in supersonic, turbulent flow
NASA Astrophysics Data System (ADS)
Moran, Kenneth J.; Beran, Philip S.
1994-07-01
Computational fluid dynamics is used to study flows about slender, axisymmetric bodies at very high speeds. Numerical experiments are conducted to simulate a broad range of flight conditions. Mach number is varied from 1.5 to 8 and Reynolds number from 1 × 10^6/m to 10^8/m. The primary objective is to develop and validate a computational methodology for the accurate simulation of a wide variety of flow structures. Accurate results are obtained for detached bow shocks, recompression shocks, corner-point expansions, base-flow recirculations, and turbulent boundary layers. Accuracy is assessed through comparison with theory and experimental data; computed surface pressure, shock structure, base-flow structure, and velocity profiles are within measurement accuracy throughout the range of conditions tested. The methodology is both practical and general: general in its applicability, and practical in its performance. To achieve high accuracy, modifications to previously reported techniques are implemented in the scheme. These modifications improve computed results in the vicinity of symmetry lines and in the base-flow region, including the turbulent wake.
Iannetta, Danilo; Okushima, Dai; Inglis, Erin Calaine; Kondo, Narihiko; Murias, Juan M; Koga, Shunsaku
2018-05-03
It was recently demonstrated that an O2 extraction reserve, as assessed by the near-infrared spectroscopy (NIRS)-derived deoxygenation signal ([HHb]), exists in the superficial region of the vastus lateralis (VL) muscle during an occlusion performed at the end of a ramp-incremental test. However, it is unknown whether this reserve is present, and whether it differs in magnitude, in other portions and depths of the quadriceps muscles. We tested the hypothesis that an O2 extraction reserve would exist in other regions of this muscle and be greater in deep compared to more superficial portions. Superficial and deep VL (VL-s and VL-d, respectively) as well as superficial rectus femoris (RF-s) were monitored by a combination of low- and high-power time-resolved (TRS) NIRS. During the occlusion immediately after the ramp-incremental test there was a significant overshoot in the [HHb] signal (P<0.05). However, the magnitude of this increase was greater in VL-d (93.2±42.9%) compared to VL-s (55.0±19.6%) and RF-s (47.8±14.0%) (P<0.05). The present study demonstrated that an O2 extraction reserve exists in different pools of active muscle fibers of the quadriceps at the end of a ramp exercise to exhaustion. The greater magnitude of the reserve observed in the deeper portion of VL, however, suggests that this portion of muscle may have a greater surplus of oxygenated blood, likely due to a greater population of slow-twitch fibers. These findings add to the notion that the plateau in the [HHb] signal towards the end of a ramp-incremental exercise does not indicate the upper limit of O2 extraction.
McLaughlin, Samuel B; Wullschleger, Stan D; Nosal, Miloslav
2003-11-01
To evaluate indicators of whole-tree physiological responses to climate stress, we determined seasonal, daily and diurnal patterns of growth and water use in 10 yellow poplar (Liriodendron tulipifera L.) trees in a stand recently released from competition. Precise measurements of stem increment and sap flow made with automated electronic dendrometers and thermal dissipation probes, respectively, indicated close temporal linkages between water use and patterns of stem shrinkage and swelling during daily cycles of water depletion and recharge of extensible outer-stem tissues. These cycles also determined net daily basal area increment. Multivariate regression models based on a 123-day data series showed that daily diameter increments were related negatively to vapor pressure deficit (VPD), but positively to precipitation and temperature. The same model form with slight changes in coefficients yielded coefficients of determination of about 0.62 (0.57-0.66) across data subsets that included widely variable growth rates and VPDs. Model R2 was improved to 0.75 by using 3-day running mean daily growth data. Rapid recovery of stem diameter growth following short-term, diurnal reductions in VPD indicated that water stored in extensible stem tissues was part of a fast recharge system that limited hydration changes in the cambial zone during periods of water stress. There were substantial differences in the seasonal dynamics of growth among individual trees, and analyses indicated that faster-growing trees were more positively affected by precipitation, solar irradiance and temperature and more negatively affected by high VPD than slower-growing trees. There were no negative effects of ozone on daily growth rates in a year of low ozone concentrations.
Classifying low flow hydrological regimes at a regional scale
NASA Astrophysics Data System (ADS)
Kirkby, M. J.; Gallart, F.; Kjeldsen, T. R.; Irvine, B. J.; Froebrich, J.; Lo Porto, A.; de Girolamo, A.; Mirage Team
2011-12-01
The paper uses a simple water balance model that partitions the precipitation between actual evapotranspiration, quick flow and delayed flow, and has sufficient complexity to capture the essence of climate and vegetation controls on this partitioning. Using this model, monthly flow duration curves have been constructed from climate data across Europe to address the relative frequency of ecologically critical low flow stages in semi-arid rivers, when flow commonly persists only in disconnected pools in the river bed. The hydrological model is based on a dynamic partitioning of precipitation to estimate water available for evapotranspiration and plant growth and for residual runoff. The duration curve for monthly flows has then been analysed to give an estimate of bankfull flow based on recurrence interval. Arguing from observed ratios of cross-sectional areas at flood and low flows, hydraulic geometry suggests that disconnected flow under "pool" conditions is approximately 0.1% of bankfull flow. Flow duration curves define a measure of bankfull discharge on the basis of frequency; the corresponding frequency for pools is then read from the duration curve, using this (0.1%) ratio to estimate pool discharge from bankfull discharge. The flow duration curve then provides an estimate of the frequency of poorly connected pool conditions, corresponding to this discharge, that constrain survival of river-dwelling arthropods and fish. The methodology has been applied across Europe at 15-km resolution, and its potential for application under alternative climatic scenarios is demonstrated.
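Reading the pool-condition frequency off a monthly flow duration curve, with pool discharge taken as 0.1% of bankfull as argued above, can be sketched as follows (the flow series and bankfull value are hypothetical):

```python
import numpy as np

def exceedance_probability(flows, q):
    """Empirical flow duration curve read at discharge q: the
    fraction of months in which flow >= q."""
    return float(np.mean(np.asarray(flows, dtype=float) >= q))

def pool_frequency(flows, bankfull, pool_ratio=0.001):
    """Frequency of disconnected-pool conditions, taking pool
    discharge as pool_ratio (0.1% by default) of bankfull flow."""
    return float(np.mean(np.asarray(flows, dtype=float)
                         <= pool_ratio * bankfull))

# Hypothetical monthly flows (m^3/s) and bankfull discharge
flows = [0.0, 0.005, 0.5, 1.0, 2.0, 10.0, 20.0, 50.0]
freq = pool_frequency(flows, bankfull=10.0)
```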
Chen, Yen-Ju; Lee, Yen-I; Chang, Wen-Cheng; Hsiao, Po-Jen; You, Jr-Shian; Wang, Chun-Chieh; Wei, Chia-Min
2017-01-01
Hot deformation of Nd-Fe-B magnets has been studied for more than three decades. With a good combination of forming process parameters, the remanence and (BH)max values of Nd-Fe-B magnets can be greatly increased due to the formation of anisotropic microstructures during hot deformation. In this work, a methodology is proposed for visualizing the material flow in hot-deformed Nd-Fe-B magnets via finite element simulation. Material flow in hot-deformed Nd-Fe-B magnets could be predicted by simulation, in agreement with experimental results. By utilizing this methodology, the correlation between strain distribution and the enhancement of magnetic properties can be better understood. PMID:28970869
Camacho-Sandoval, Rosa; Sosa-Grande, Eréndira N; González-González, Edith; Tenorio-Calvo, Alejandra; López-Morales, Carlos A; Velasco-Velázquez, Marco; Pavón-Romero, Lenin; Pérez-Tapia, Sonia Mayra; Medina-Rivero, Emilio
2018-06-05
Physicochemical and structural properties of proteins used as active pharmaceutical ingredients of biopharmaceuticals are determinant to carry out their biological activity. In this regard, the assays intended to evaluate functionality of biopharmaceuticals provide confirmatory evidence that they contain the appropriate physicochemical properties and structural conformation. The validation of the methodologies used for the assessment of critical quality attributes of biopharmaceuticals is a key requirement for manufacturing under GMP environments. Herein we present the development and validation of a flow cytometry-based methodology for the evaluation of adalimumab's affinity towards membrane-bound TNFα (mTNFα) on recombinant CHO cells. This in vitro methodology measures the interaction between an in-solution antibody and its target molecule onto the cell surface through a fluorescent signal. The characteristics evaluated during the validation exercise showed that this methodology is suitable for its intended purpose. The assay demonstrated to be accurate (r² = 0.92, slope = 1.20), precise (%CV ≤ 18.31) and specific (curve fitting, r² = 0.986-0.997) to evaluate binding of adalimumab to mTNFα. The results obtained here provide evidence that detection by flow cytometry is a viable alternative for bioassays used in the pharmaceutical industry. In addition, this methodology could be standardized for the evaluation of other biomolecules acting through the same mechanism of action. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Methodology update for estimating volume to service flow ratio.
DOT National Transportation Integrated Search
2015-12-01
Volume/service flow ratio (VSF) is calculated by the Highway Performance Monitoring System (HPMS) software as an indicator of peak hour congestion. It is an essential input to the Kentucky Transportation Cabinet's (KYTC) key planning applications, ...
Bhattacharya, S.; Byrnes, A.P.; Watney, W.L.; Doveton, J.H.
2008-01-01
Characterizing the reservoir interval into flow units is an effective way to subdivide the net-pay zone into layers for reservoir simulation. Commonly used flow unit identification techniques require a reliable estimate of permeability in the net pay on a foot-by-foot basis. Most of the wells do not have cores, and the literature is replete with different kinds of correlations, transforms, and prediction methods for profiling permeability in pay. However, for robust flow unit determination, predicted permeability at noncored wells requires validation and, if necessary, refinement. This study outlines the use of a spreadsheet-based permeability validation technique to characterize flow units in wells from the Norcan East field, Clark County, Kansas, that produce from Atokan-aged fine- to very fine-grained quartzarenite sandstones interpreted to have been deposited in brackish-water, tidally dominated restricted tidal-flat, tidal-channel, tidal-bar, and estuary bay environments within a small incised-valley-fill system. The methodology outlined enables the identification of the fieldwide free-water level and validates and refines predicted permeability at 0.5-ft (0.15-m) intervals by iteratively reconciling differences in water saturation calculated from wire-line logs and a capillary-pressure formulation that models fine- to very fine-grained sandstone with diagenetic clay and silt or shale laminae. The effectiveness of this methodology was confirmed by successfully matching primary and secondary production histories using a flow unit-based reservoir model of the Norcan East field without permeability modifications. The methodologies discussed should prove useful for robust flow unit characterization of different kinds of reservoirs. Copyright © 2008. The American Association of Petroleum Geologists. All rights reserved.
NASA Astrophysics Data System (ADS)
Foulon, Étienne; Rousseau, Alain N.; Gagnon, Patrick
2018-02-01
Low flow conditions are governed by short-to-medium term weather conditions or long term climate conditions. This prompts the question: given climate scenarios, is it possible to assess future extreme low flow conditions from climate data indices (CDIs)? Or should we rely on the conventional approach of using outputs of climate models as inputs to a hydrological model? Several CDIs were computed using 42 climate scenarios over the years 1961-2100 for two watersheds located in Québec, Canada. The relationship between the CDIs and hydrological data indices (HDIs; 7- and 30-day low flows for two hydrological seasons) were examined through correlation analysis to identify the indices governing low flows. Results of the Mann-Kendall test, with a modification for autocorrelated data, clearly identified trends. A partial correlation analysis allowed attributing the observed trends in HDIs to trends in specific CDIs. Furthermore, results showed that, even during the spatial validation process, the methodological framework was able to assess trends in low flow series from: (i) trends in the effective drought index (EDI) computed from rainfall plus snowmelt minus PET amounts over ten to twelve months of the hydrological snow cover season or (ii) the cumulative difference between rainfall and potential evapotranspiration over five months of the snow free season. For 80% of the climate scenarios, trends in HDIs were successfully attributed to trends in CDIs. Overall, this paper introduces an efficient methodological framework to assess future trends in low flows given climate scenarios. The outcome may prove useful to municipalities concerned with source water management under changing climate conditions.
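The trend detection step above rests on the Mann-Kendall test. Below is a minimal sketch of the classic (unmodified) test; the study applies a variant corrected for autocorrelated data, and that correction, along with the tie-handling term in the variance, is omitted here.

```python
import math

def mann_kendall(x):
    """Classic Mann-Kendall trend test (no autocorrelation correction).

    Returns the S statistic and its normal-approximation Z score.
    Positive Z indicates an upward trend, negative a downward trend.
    """
    n = len(x)
    # S counts concordant minus discordant pairs across the series
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    # Variance of S under the null hypothesis (no ties assumed)
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A strictly increasing series attains the maximum S = n*(n-1)/2
s, z = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8])
print(s, round(z, 2))  # 28 3.34
```

A series of 7- or 30-day low flows per hydrological season would be passed in place of the toy sequence.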
Modelling white-water rafting suitability in a hydropower regulated Alpine River.
Carolli, Mauro; Zolezzi, Guido; Geneletti, Davide; Siviglia, Annunziato; Carolli, Fabiano; Cainelli, Oscar
2017-02-01
Cultural and recreational river ecosystem services and their relations with the flow regime are still poorly investigated. We develop a modelling-based approach to assess recreational flow requirements and the spatially distributed river suitability for white-water rafting, a typical service offered by mountain streams, with potential conflicts of interest with hydropower regulation. The approach is based on the principles of habitat suitability modelling using water depth as the main attribute, with preference curves defined through interviews with local rafting guides. The methodology allows the computation of streamflow thresholds for conditions of suitability and optimality of a river reach in relation to rafting. Rafting suitability response to past, present and future flow management scenarios can be predicted on the basis of a hydrological model, which is incorporated in the methodology and is able to account for anthropic effects. Rafting suitability is expressed through a novel metric, the "Rafting hydro-suitability index" (RHSI), which quantifies the cumulative duration of suitable and optimal conditions for rafting. The approach is applied to the Noce River (NE Italy), an Alpine river regulated by hydropower production and affected by hydropeaking, which influences suitability at a sub-daily scale. A dedicated algorithm is developed within the hydrological model to resemble hydropeaking conditions with daily flow data. In the Noce River, peak flows associated with hydropeaking support rafting activities in late summer, highlighting the dual nature of hydropeaking in regulated rivers. Rafting suitability is slightly reduced under present, hydropower-regulated flow conditions compared to an idealized flow regime characterised by no water abstractions. Localized water abstractions for small, run-of-the-river hydropower plants are predicted to negatively affect rafting suitability.
The proposed methodology can be extended to support decision making for flow management in hydropower regulated streams, as it has the potential to quantify the response of different ecosystem services to flow regulation. Copyright © 2016 Elsevier B.V. All rights reserved.
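The abstract describes the RHSI only qualitatively. The sketch below assumes it cumulates the hours in which hourly discharge exceeds suitability and optimality thresholds; both the thresholds and the flow series are invented for illustration, since the real thresholds come from the depth-based preference curves.

```python
def rafting_hours(flows_m3s, q_suitable, q_optimal):
    """Count hours of suitable and optimal rafting conditions.

    flows_m3s : hourly discharge series (m^3/s)
    q_suitable, q_optimal : illustrative streamflow thresholds; the
    study derives them from habitat-suitability modelling with
    guide-defined preference curves.
    """
    suitable = sum(1 for q in flows_m3s if q >= q_suitable)
    optimal = sum(1 for q in flows_m3s if q >= q_optimal)
    return suitable, optimal

# Hydropeaking-like series: low base flow with afternoon peak releases
flows = [8, 8, 9, 25, 30, 28, 10, 9]
print(rafting_hours(flows, q_suitable=20, q_optimal=27))  # (3, 2)
```

Comparing such counts between regulated and abstraction-free scenarios gives the suitability difference the paper reports.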
Mars Science Laboratory CHIMRA/IC/DRT Flight Software for Sample Acquisition and Processing
NASA Technical Reports Server (NTRS)
Kim, Won S.; Leger, Chris; Carsten, Joseph; Helmick, Daniel; Kuhn, Stephen; Redick, Richard; Trujillo, Diana
2013-01-01
The design methodologies of using sequence diagrams, multi-process functional flow diagrams, and hierarchical state machines were successfully applied in designing three MSL (Mars Science Laboratory) flight software modules responsible for handling actuator motions of the CHIMRA (Collection and Handling for In Situ Martian Rock Analysis), IC (Inlet Covers), and DRT (Dust Removal Tool) mechanisms. The methodologies were essential to specify complex interactions with other modules, support concurrent foreground and background motions, and handle various fault protections. Studying task scenarios with multi-process functional flow diagrams yielded great insight to overall design perspectives. Since the three modules require three different levels of background motion support, the methodologies presented in this paper provide an excellent comparison. All three modules are fully operational in flight.
NASA Astrophysics Data System (ADS)
Papathoma-Köhle, Maria
2016-08-01
The assessment of the physical vulnerability of elements at risk as part of the risk analysis is an essential aspect for the development of strategies and structural measures for risk reduction. Understanding, analysing and, if possible, quantifying physical vulnerability is a prerequisite for designing strategies and adopting tools for its reduction. The most common methods for assessing physical vulnerability are vulnerability matrices, vulnerability curves and vulnerability indicators; however, in most cases, these methods are used in a conflicting way rather than in combination. The article focuses on two of these methods: vulnerability curves and vulnerability indicators. Vulnerability curves express physical vulnerability as a function of the intensity of the process and the degree of loss, considering, in individual cases only, some structural characteristics of the affected buildings. However, a considerable number of studies argue that vulnerability assessment should focus on the identification of the variables that influence the vulnerability of an element at risk (vulnerability indicators). In this study, an indicator-based methodology (IBM) for mountain hazards including debris flow (Kappes et al., 2012) is applied to a case study for debris flows in South Tyrol, where in the past a vulnerability curve had been developed. The relatively "new" indicator-based method is scrutinised and recommendations for its improvement are outlined. The comparison of the two methodological approaches and their results is challenging since the approaches deal with vulnerability in different ways. However, it is still possible to highlight their weaknesses and strengths, show clearly that both methodologies are necessary for the assessment of physical vulnerability and provide a preliminary "holistic methodological framework" for physical vulnerability assessment showing how the two approaches may be used in combination in the future.
Recent developments of axial flow compressors under transonic flow conditions
NASA Astrophysics Data System (ADS)
Srinivas, G.; Raghunandana, K.; Satish Shenoy, B.
2017-05-01
The objective of this paper is to give a holistic view of the most advanced technologies and procedures practiced in the field of turbomachinery design. The compressor flow solver, together with its turbulence model, is the CFD tool used to solve viscous flow problems. Popular techniques like Jameson's rotated difference scheme were used to solve the potential flow equation under transonic conditions for two-dimensional aerofoils and later three-dimensional wings. The gradient-based method is also popular, especially for compressor blade shape optimization. Other available optimization techniques are evolutionary algorithms (EAs) and response surface methodology (RSM). It is observed that, in order to improve the compressor flow solver and obtain agreeable results, careful attention needs to be paid to viscous relations, grid resolution, turbulence modeling and artificial viscosity in CFD. Advanced techniques like Jameson's rotated difference scheme had the most substantial impact on wing and aerofoil design. For compressor blade shape optimization, evolutionary algorithms are simpler than gradient-based techniques because they can treat the parameters simultaneously by searching from multiple points in the given design space. Response surface methodology (RSM) is a method used to build empirical models of observed responses and to study experimental data systematically. This methodology analyses the relationship between expected responses (output) and design variables (input). RSM solves the function systematically through a series of mathematical and statistical processes. RSM has recently been implemented successfully for turbomachinery blade optimization. Well-designed, high-performance axial flow compressors find application in air-breathing jet engines.
Müller, Dirk; Pulm, Jannis; Gandjour, Afschin
2012-01-01
To compare cost-effectiveness modeling analyses of strategies to prevent osteoporotic and osteopenic fractures either based on fixed thresholds using bone mineral density or based on variable thresholds including bone mineral density and clinical risk factors. A systematic review was performed by using the MEDLINE database and reference lists from previous reviews. On the basis of predefined inclusion/exclusion criteria, we identified relevant studies published since January 2006. Articles included for the review were assessed for their methodological quality and results. The literature search resulted in 24 analyses, 14 of them using a fixed-threshold approach and 10 using a variable-threshold approach. On average, 70% of the criteria for methodological quality were fulfilled, but almost half of the analyses did not include medication adherence in the base case. The results of variable-threshold strategies were more homogeneous and showed more favorable incremental cost-effectiveness ratios compared with those based on a fixed threshold with bone mineral density. For analyses with fixed thresholds, incremental cost-effectiveness ratios varied from €80,000 per quality-adjusted life-year in women aged 55 years to cost saving in women aged 80 years. For analyses with variable thresholds, the range was €47,000 to cost savings. Risk assessment using variable thresholds appears to be more cost-effective than selecting high-risk individuals by fixed thresholds. Although the overall quality of the studies was fairly good, future economic analyses should further improve their methods, particularly in terms of including more fracture types, incorporating medication adherence, and including or discussing unrelated costs during added life-years. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Messias, Leonardo H. D.; Gobatto, Claudio A.; Beck, Wladimir R.; Manchado-Gobatto, Fúlvia B.
2017-01-01
In 1993, Uwe Tegtbur proposed a useful physiological protocol named the lactate minimum test (LMT). This test consists of three distinct phases. Firstly, subjects must perform high intensity efforts to induce hyperlactatemia (phase 1). Subsequently, 8 min of recovery are allowed for transposition of lactate from myocytes (for instance) to the bloodstream (phase 2). Right after the recovery, subjects are submitted to an incremental test until exhaustion (phase 3). The blood lactate concentration is expected to fall during the first stages of the incremental test and, as the intensity increases in subsequent stages, to rise again, forming a “U”-shaped blood lactate kinetic. The minimum point of this curve, named the lactate minimum intensity (LMI), provides an estimation of the intensity that represents the balance between the appearance and clearance of arterial blood lactate, known as the maximal lactate steady state intensity (iMLSS). Furthermore, in addition to the iMLSS estimation, studies have also determined anaerobic parameters (e.g., peak, mean, and minimum force/power) during phase 1 and also the maximum oxygen consumption in phase 3; therefore, the LMT is considered a robust physiological protocol. Although encouraging reports have been published in both human and animal models, there are still some controversies regarding three main factors: (1) the influence of methodological aspects on the LMT parameters; (2) LMT effectiveness for monitoring training effects; and (3) the LMI as a valid iMLSS estimator. Therefore, the aim of this review is to provide a balanced discussion of the scientific evidence on the aforementioned issues, and insights for future investigations are suggested. In summary, further analysis is necessary to settle these three issues, since the LMT is relevant in several contexts of health sciences. PMID:28642717
Bollaerts, Kaatje; De Smedt, Tom; Donegan, Katherine; Titievsky, Lina; Bauchau, Vincent
2018-03-26
New vaccines are launched based on their benefit-risk (B/R) profile anticipated from clinical development. Proactive post-marketing surveillance is necessary to assess whether the vaccination uptake and the B/R profile are as expected and, ultimately, whether further public health or regulatory actions are needed. There are several, typically not integrated, facets of post-marketing vaccine surveillance: the surveillance of vaccination coverage, vaccine safety, effectiveness and impact. With this work, we aim to assess the feasibility and added value of using an interactive dashboard as a potential methodology for near real-time monitoring of vaccine coverage and pre-specified health benefits and risks of vaccines. We developed a web application with an interactive dashboard for B/R monitoring. The dashboard is demonstrated using simulated electronic healthcare record data mimicking the introduction of rotavirus vaccination in the UK. The interactive dashboard allows end users to select certain parameters, including expected vaccine effectiveness, age groups, and time periods and allows calculation of the incremental net health benefit (INHB) as well as the incremental benefit-risk ratio (IBRR) for different sets of preference weights. We assessed the potential added value of the dashboard by user testing amongst a range of stakeholders experienced in the post-marketing monitoring of vaccines. The dashboard was successfully implemented and demonstrated. The feedback from the potential end users was generally positive, although reluctance to using composite B/R measures was expressed. The use of interactive dashboards for B/R monitoring is promising and received support from various stakeholders. In future research, the use of such an interactive dashboard will be further tested with real-life data as opposed to simulated data.
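The abstract names two composite benefit-risk measures without giving formulas. A common formulation, assumed here rather than taken from the paper, is INHB = ΔBenefit − w·ΔRisk for a preference weight w, and IBRR = ΔBenefit/ΔRisk:

```python
def inhb(delta_benefit, delta_risk, weight):
    """Incremental net health benefit: health benefits gained minus
    preference-weighted risks incurred (formula assumed, not stated
    in the abstract)."""
    return delta_benefit - weight * delta_risk

def ibrr(delta_benefit, delta_risk):
    """Incremental benefit-risk ratio."""
    return delta_benefit / delta_risk

# Illustrative numbers only: 500 hospitalisations averted vs 3 excess
# adverse events per 100,000 vaccinated, with risk weighted 10x.
print(inhb(500, 3, weight=10))   # 470
print(round(ibrr(500, 3), 1))    # 166.7
```

The dashboard's slider for "preference weights" would correspond to varying `weight` here and re-plotting the result.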
Too far ahead of the IT curve?
Glaser, John P
2007-01-01
Peachtree Healthcare has major IT infrastructure problems, and CEO Max Berndt is struggling to find the right fix. He can go with a single set of systems and applications that will provide consistency across Peachtree's facilities but may not give doctors enough flexibility. Or he can choose service-oriented architecture (SOA), a modular design that will allow Peachtree to standardize incrementally and selectively but poses certain risks as a newer technology. What should he do? Four experts comment on this fictional case study, authored by John P. Glaser, CIO for Partners HealthCare System. George C. Halvorson, the chairman and CEO of Kaiser Permanente, warns against using untested methodologies such as SOA in a health care environment, where lives are at stake. He says Peachtree's management must clarify its overall IT vision before devising a plan to achieve each of its objectives. Monte Ford, the chief information officer at American Airlines, says Peachtree can gradually replace its old systems with SOA. An incremental approach, he points out, would not only minimize risk but also enhance flexibility and control, and would allow IT to shift priorities along the way. Randy Heffner, a vice president at Forrester Research who focuses on technology architectures for computer-based business systems, thinks SOA's modular approach to business design would best meet Peachtree's need for flexibility. He says that Peachtree's CIO sees SOA as a new product category but should instead view it as a methodology. John A. Kastor, a professor at the University of Maryland School of Medicine, questions the goal of standardized care. He argues that it would be difficult to persuade doctors, many of whom are fiercely independent, to follow rigid patterns in their work.
Evolution of 3-D geologic framework modeling and its application to groundwater flow studies
Blome, Charles D.; Smith, David V.
2012-01-01
In this Fact Sheet, the authors discuss the evolution of project 3-D subsurface framework modeling, research in hydrostratigraphy and airborne geophysics, and methodologies used to link geologic and groundwater flow models.
DOT National Transportation Integrated Search
1981-08-01
management techniques to operate local transit systems more efficiently and economically. In particular, the ability to accurately ascertain route specific passenger flows or passenger demands has become essential for adequate resource allocation and...
Numerical Simulation of Flow in a Whirling Annular Seal and Comparison with Experiments
NASA Technical Reports Server (NTRS)
Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.
1995-01-01
The turbulent flow field in a simulated annular seal with a large clearance/radius ratio (0.015) and a whirling rotor was simulated using an advanced 3D CFD code, SCISEAL. A circular whirl orbit with synchronous whirl was imposed on the rotor center. The flow field was rendered quasi-steady by making a transformation to a rotating frame. A standard k-epsilon model with wall functions was used to treat the turbulence. Experimentally measured values of flow parameters were used to specify the seal inlet and exit boundary conditions. The computed flow field in terms of the velocity and pressure is compared with the experimental measurements inside the seal. The agreement between the numerical results and experimental data with correction is fair to good. The capability of current advanced CFD methodology to analyze this complex flow field is demonstrated. The methodology can also be extended to other whirl frequencies. Half- (or sub-) synchronous (fluid film unstable motion) and synchronous (rotor centrifugal force unbalance) whirls are the most unstable whirl modes in turbomachinery seals, and the flow code's capability of simulating the flows in steady as well as whirling seals will prove to be extremely useful in the design, analyses, and performance predictions of annular as well as other types of seals.
Roshani, G H; Nazemi, E; Roshani, M M
2017-05-01
Changes of fluid properties (especially density) strongly affect the performance of radiation-based multiphase flow meters and can cause errors in recognizing the flow pattern and determining void fraction. In this work, we proposed a methodology based on a combination of multi-beam gamma ray attenuation and dual modality densitometry techniques using RBF neural networks in order to recognize the flow regime and determine the void fraction in gas-liquid two phase flows independent of liquid phase changes. The proposed system consists of one 137Cs source, two transmission detectors and one scattering detector. The registered counts in the two transmission detectors were used as the inputs of one primary Radial Basis Function (RBF) neural network for recognizing the flow regime independent of liquid phase density. Then, after flow regime identification, three RBF neural networks were utilized for determining the void fraction independent of liquid phase density. Registered counts in the scattering detector and first transmission detector were used as the inputs of these three RBF neural networks. Using this simple methodology, all the flow patterns were correctly recognized and the void fraction was predicted independent of liquid phase density with a mean relative error (MRE) of less than 3.28%. Copyright © 2017 Elsevier Ltd. All rights reserved.
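The abstract does not give the network details. Below is a generic Gaussian RBF regressor of the kind described, with output weights fitted by linear least squares; the two-column input and the "void fraction" target are synthetic stand-ins for the registered detector counts.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF design matrix: one column per hidden unit."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def rbf_fit(X, y, centers, sigma):
    """Fit output weights by linear least squares (standard RBF training)."""
    Phi = rbf_design(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, sigma, w):
    return rbf_design(X, centers, sigma) @ w

# Toy stand-in for the paper's task: map two detector counts to a
# void fraction (synthetic linear target; real inputs are counts from
# the transmission and scattering detectors).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1]
centers = X[::20]                      # 10 centers picked from the data
w = rbf_fit(X, y, centers, sigma=0.5)
pred = rbf_predict(X, centers, sigma=0.5, w=w)
print(float(np.abs(pred - y).mean()))  # mean absolute error (small)
```

In the paper's pipeline, one such network classifies the regime and three further networks, one per regime, regress the void fraction.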
Tavakoli, Ali; Nikoo, Mohammad Reza; Kerachian, Reza; Soltani, Maryam
2015-04-01
In this paper, a new fuzzy methodology is developed to optimize water and waste load allocation (WWLA) in rivers under uncertainty. An interactive two-stage stochastic fuzzy programming (ITSFP) method is utilized to handle parameter uncertainties, which are expressed as fuzzy boundary intervals. An iterative linear programming (ILP) is also used for solving the nonlinear optimization model. To accurately consider the impacts of the water and waste load allocation strategies on the river water quality, a calibrated QUAL2Kw model is linked with the WWLA optimization model. The soil, water, atmosphere, and plant (SWAP) simulation model is utilized to determine the quantity and quality of each agricultural return flow. To control pollution loads of agricultural networks, it is assumed that a part of each agricultural return flow can be diverted to an evaporation pond and also another part of it can be stored in a detention pond. In detention ponds, contaminated water is exposed to solar radiation for disinfecting pathogens. Results of applying the proposed methodology to the Dez River system in the southwestern region of Iran illustrate its effectiveness and applicability for water and waste load allocation in rivers. In the planning phase, this methodology can be used for estimating the capacities of return flow diversion system and evaporation and detention ponds.
NASA Astrophysics Data System (ADS)
J-Me, Teh; Noh, Norlaili Mohd.; Aziz, Zalina Abdul
2015-05-01
In the chip industry today, the key goal of a chip development organization is to develop and market chips within a short time frame to gain foothold on market share. This paper proposes a design flow around the area of parasitic extraction to improve the design cycle time. The proposed design flow utilizes the usage of metal fill emulation as opposed to the current flow which performs metal fill insertion directly. By replacing metal fill structures with an emulation methodology in earlier iterations of the design flow, this is targeted to help reduce runtime in fill insertion stage. Statistical design of experiments methodology utilizing the randomized complete block design was used to select an appropriate emulated metal fill width to improve emulation accuracy. The experiment was conducted on test cases of different sizes, ranging from 1000 gates to 21000 gates. The metal width was varied from 1 x minimum metal width to 6 x minimum metal width. Two-way analysis of variance and Fisher's least significant difference test were used to analyze the interconnect net capacitance values of the different test cases. This paper presents the results of the statistical analysis for the 45 nm process technology. The recommended emulated metal fill width was found to be 4 x the minimum metal width.
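Fisher's least significant difference test mentioned above compares treatment means after an ANOVA. A sketch with illustrative numbers, not the paper's data:

```python
import math
from scipy import stats

def fishers_lsd(mse, df_error, n_per_group, alpha=0.05):
    """Fisher's least significant difference: the smallest difference
    between two treatment means (equal replication n) that is
    significant at level alpha, given the ANOVA error mean square."""
    t_crit = stats.t.ppf(1 - alpha / 2, df_error)
    return t_crit * math.sqrt(2 * mse / n_per_group)

# Illustrative values: error mean square from a two-way ANOVA on net
# capacitance, 12 error degrees of freedom, 4 blocks per metal width.
lsd = fishers_lsd(mse=0.8, df_error=12, n_per_group=4)
print(round(lsd, 3))  # 1.378
```

Any pair of width means differing by more than the LSD would be declared significantly different, which is how a recommended fill width is singled out.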
DOT National Transportation Integrated Search
2010-02-01
This project developed a methodology to couple a new pollutant dispersion model with a traffic : assignment process to contain air pollution while maximizing mobility. The overall objective of the air : quality modeling part of the project is to deve...
The Tacitness of Tacitus. A Methodological Approach to European Thought. No. 46.
ERIC Educational Resources Information Center
Bierschenk, Bernhard
This study concerned the analysis of verbal flows by means of volume-elasticity measures and the analysis of information flow structures and their representations in the form of a metaphysical cube. A special purpose system of computer programs (PERTEX) was used to establish the language space in which the textual flow patterns occurred containing…
The Use of Logistics in the Quality Parameters Control System of Material Flow
ERIC Educational Resources Information Center
Karpova, Natalia P.; Toymentseva, Irina A.; Shvetsova, Elena V.; Chichkina, Vera D.; Chubarkova, Elena V.
2016-01-01
The relevance of the research problem stems from the need to justify the use of logistics methodologies in the quality parameter control process for material flows. The goal of the article is to develop theoretical principles and practical recommendations for the logistical control of material flow quality parameters. A leading…
Code of Federal Regulations, 2010 CFR
2010-07-01
... the hourly stack flow rate (in scfh). Only one methodology for determining NOX mass emissions shall be...-diluent continuous emissions monitoring system and a flow monitoring system in the common stack, record... maintain a flow monitoring system and diluent monitor in the duct to the common stack from each unit; or...
NASA Astrophysics Data System (ADS)
Zeng, Guang; Cao, Shuchao; Liu, Chi; Song, Weiguo
2018-06-01
It is important to study pedestrian stepping behavior and characteristics for facility design and pedestrian flow studies, owing to pedestrians' bipedal movement. In this paper, step data are extracted from pedestrian trajectories in a single-file experiment. It is found that step length and step frequency decrease by 75% and 33%, respectively, when global density increases from 0.46 ped/m to 2.28 ped/m. As headway increases, both first increase and then remain constant once the headway exceeds 1.16 m and 0.91 m, respectively. Step length and frequency under different headways are well described by normal distributions. Meanwhile, relationships between step length and frequency exist under different headways. Step frequency decreases with increasing step length, but the decrease tendency depends on the headway, and two tendencies appear: when the headway is between about 0.6 m and 1.0 m, the decrease rate of step frequency increases with step length, while it decreases when the headway is beyond about 1.0 m or below about 0.6 m. A model is built based on the experimental results, and in fundamental diagrams the simulation results agree well with those of the experiment. The study can be helpful for understanding pedestrian stepping behavior and designing public facilities.
Houston-Galveston Navigation Channel Shoaling Study
2014-12-01
compared to data collected at other times during the year. If this is the case, sediment would tend to collect farther down the channel toward Red Fish Reef. [Figure: percent of shoaling by location along the channel (1000 m increments); panels: Bayport, Red Fish Reef (ERDC/CHL TR-14-14).] ...in the bay tends to evacuate. For the wind and flow conditions that were investigated previously, the net drift in the channel upland from Red Fish Reef
Improving Stochastic Communication Network Performance: Reliability vs. Throughput
1991-12-01
increased to one, 2) arc survivabilities will be increased in increments of one tenth, and 3) the costs to increase arc survivabilities were equal and... This reliability value is then used to maximize the associated expected flow. For Network A, a budget of (800) produces a tradeoff point at (.58, 37)... Network B, for a budget of 2000, which allows a network reliability of one to be achieved, and a budget of 1200, which allows for a maximum 57
Coolant monitoring apparatus for nuclear reactors
Tokarz, Richard D.
1983-01-01
A system for monitoring coolant conditions within a pressurized vessel. A length of tubing extends outward from the vessel from an open end containing a first line restriction at the location to be monitored. The flowing fluid is cooled and condensed before passing through a second line restriction. Measurement of pressure drop at the second line restriction gives an indication of fluid condition at the first line restriction. Multiple lengths of tubing with open ends at incremental elevations can measure coolant level within the vessel.
2007-01-01
where H is the scaling exponent, also called the Hurst exponent. In 1941, Kolmogorov suggested that the velocity increment in high-Reynolds-number turbulent flows should scale with the mean (time-averaged) energy dissipation and the separation length scale. The Hurst exponent H is equal to 1/3. For... the internal solitons change the power exponent of the power spectra drastically, especially in the low-wavenumber domain, and break down the power law
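The Kolmogorov (1941) scaling invoked above can be stated explicitly; these are the standard results, not findings specific to this study:

```latex
% K41: velocity increments over a separation r scale as
\delta u(r) \sim \left( \langle \varepsilon \rangle \, r \right)^{1/3},
\qquad
\langle |\delta u(r)|^{p} \rangle \propto r^{p/3},
% so the scaling (Hurst) exponent is H = 1/3, and the energy
% spectrum follows the -5/3 law:
E(k) = C \, \langle \varepsilon \rangle^{2/3} \, k^{-5/3}.
```

A departure of the measured spectral slope from -5/3 at low wavenumbers is what the record describes as the solitons "breaking down the power law".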
Estimating QALY gains in applied studies: a review of cost-utility analyses published in 2010.
Wisløff, Torbjørn; Hagen, Gunhild; Hamidi, Vida; Movik, Espen; Klemp, Marianne; Olsen, Jan Abel
2014-04-01
Reimbursement agencies in several countries now require health outcomes to be measured in terms of quality-adjusted life-years (QALYs), leading to an immense increase in publications reporting QALY gains. However, there is a growing concern that the various 'multi-attribute utility' (MAU) instruments designed to measure the Q in the QALY yield disparate values, implying that results from different instruments are incommensurable. By reviewing cost-utility analyses published in 2010, we aim to contribute to improved knowledge on how QALYs are currently calculated in applied analyses; how transparently QALY measurement is presented; and how large the expected incremental QALY gains are. We searched Embase, MEDLINE and NHS EED for all cost-utility analyses published in 2010. All analyses that had estimated QALYs gained from health interventions were included. Of the 370 studies included in this review, 48% were pharmacoeconomic evaluations. Active comparators were used in 71% of studies. The median incremental QALY gain was 0.06, which translates to 3 weeks in best imaginable health. The EQ-5D-3L is the dominant instrument used. However, reporting of how QALY gains are estimated is generally inadequate. In 55% of the studies there was no reference to which MAU instrument or direct valuation method QALY data came from. The methods used for estimating expected QALY gains are not transparently reported in published papers. Given the wide variation in utility scores that different methodologies may assign to an identical health state, it is important for journal editors to require a more transparent way of reporting the estimation of incremental QALY gains.
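The conversion behind "median 0.06 QALY, which translates to 3 weeks in best imaginable health" is simple arithmetic, since one QALY equals one year lived at utility 1.0:

```python
# Median incremental gain from the review, expressed as time at
# full health (utility = 1.0).
median_gain_qalys = 0.06
weeks = median_gain_qalys * 52.18  # weeks in an average year
print(round(weeks, 1))  # 3.1
```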
Battery Capacity Fading Estimation Using a Force-Based Incremental Capacity Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.
Traditionally, health monitoring techniques in lithium-ion batteries rely on voltage and current measurements. A novel method of using a mechanical rather than electrical signal in the incremental capacity analysis (ICA) method is introduced in this paper. This method derives the incremental capacity curves based on measured force (ICF) instead of voltage (ICV). The force is measured on the surface of a cell under compression in a fixture that replicates a battery pack assembly and preloading. The analysis is performed on data collected from cycling encased prismatic lithium-ion nickel-manganese-cobalt oxide (NMC) cells. For the NMC chemistry, the ICF method can complement or replace the ICV method for the following reasons. The identified ICV peaks are centered around 40% of state of charge (SOC), while the peaks of the ICF method are centered around 70% of SOC, indicating that the ICF can be used more often because it is more likely that an electric vehicle (EV) or a plug-in hybrid electric vehicle (PHEV) will traverse the 70% SOC range than the 40% SOC range. In addition, the signal-to-noise ratio (SNR) of the force signal is four times larger than that of the voltage signal using laboratory-grade sensors. The proposed ICF method is shown to achieve 0.42% accuracy in capacity estimation during a low C-rate constant-current discharge. Future work will investigate the application of the capacity estimation technique under charging and operation at high C-rates by addressing the transient behavior of force so that an online methodology for capacity estimation can be developed.
Battery Capacity Fading Estimation Using a Force-Based Incremental Capacity Analysis
Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.; ...
2016-05-27
Traditionally, health monitoring techniques in lithium-ion batteries rely on voltage and current measurements. A novel method of using a mechanical rather than electrical signal in the incremental capacity analysis (ICA) method is introduced in this paper. This method derives the incremental capacity curves based on measured force (ICF) instead of voltage (ICV). The force is measured on the surface of a cell under compression in a fixture that replicates a battery pack assembly and preloading. The analysis is performed on data collected from cycling encased prismatic lithium-ion nickel-manganese-cobalt oxide (NMC) cells. For the NMC chemistry, the ICF method can complement or replace the ICV method for the following reasons. The identified ICV peaks are centered around 40% of state of charge (SOC), while the peaks of the ICF method are centered around 70% of SOC, indicating that the ICF can be used more often because it is more likely that an electric vehicle (EV) or a plug-in hybrid electric vehicle (PHEV) will traverse the 70% SOC range than the 40% SOC range. In addition, the signal-to-noise ratio (SNR) of the force signal is four times larger than that of the voltage signal using laboratory-grade sensors. The proposed ICF method is shown to achieve 0.42% accuracy in capacity estimation during a low C-rate constant-current discharge. Future work will investigate the application of the capacity estimation technique under charging and operation at high C-rates by addressing the transient behavior of force so that an online methodology for capacity estimation can be developed.
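The core of an incremental capacity curve is a derivative of throughput capacity with respect to a measured signal, either voltage (ICV) or force (ICF). A minimal sketch on synthetic data, assuming smooth monotone voltage and force curves (the paper's smoothing and peak-identification steps are omitted):

```python
# Sketch only: incremental capacity from synthetic discharge data.
# The functional forms of v(q) and f(q) below are invented, not NMC data.
import numpy as np

def incremental_capacity(q, x):
    """dQ/dx along a discharge curve, where x is voltage (ICV) or force (ICF)."""
    return np.gradient(q, x)   # supports a non-uniform coordinate array

# Synthetic monotone discharge curves.
q = np.linspace(0.0, 5.0, 201)                 # capacity throughput [Ah]
v = 4.2 - 0.14 * q - 0.02 * np.sin(3.0 * q)    # terminal voltage [V], decreasing
f = 180.0 + 25.0 * q + 4.0 * np.sin(2.0 * q)   # surface force [N], increasing

icv = incremental_capacity(q, v)   # dQ/dV [Ah/V]; negative since V falls as Q rises
icf = incremental_capacity(q, f)   # dQ/dF [Ah/N]; positive since F grows with Q
```

In practice the peak locations of these curves (near 40% SOC for ICV, near 70% SOC for ICF, per the abstract) are tracked over cycling to estimate capacity fade.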
Eigenspace perturbations for uncertainty estimation of single-point turbulence closures
NASA Astrophysics Data System (ADS)
Iaccarino, Gianluca; Mishra, Aashwin Ananda; Ghili, Saman
2017-02-01
Reynolds-averaged Navier-Stokes (RANS) models represent the workhorse for predicting turbulent flows in complex industrial applications. However, RANS closures introduce a significant degree of epistemic uncertainty in predictions due to the potential lack of validity of the assumptions utilized in model formulation. Estimating this uncertainty is a fundamental requirement for building confidence in such predictions. We outline a methodology to estimate this structural uncertainty, incorporating perturbations to the eigenvalues and the eigenvectors of the modeled Reynolds stress tensor. The mathematical foundations of this framework are derived and explicated. The framework is then applied to a set of separated turbulent flows, compared with numerical and experimental data, and contrasted against the predictions of the eigenvalue-only perturbation methodology. It is shown that for separated flows, this framework yields a significant enhancement over the established eigenvalue perturbation methodology in explaining the discrepancy against experimental observations and high-fidelity simulations. Furthermore, uncertainty bounds of potential engineering utility can be estimated by performing five specific RANS simulations, limiting the computational expenditure of such an exercise.
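The eigenvalue-perturbation idea can be sketched as follows: decompose the Reynolds stress anisotropy tensor, blend its eigenvalues toward a limiting state of the realizability (barycentric) triangle, and reconstruct the stress. The blending factor `delta`, the sample tensor, and the chosen target state are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of eigenvalue perturbation of a modeled Reynolds stress.
import numpy as np

def perturb_reynolds_stress(R, target_eigs, delta):
    """Blend the anisotropy eigenvalues of R toward target_eigs (ascending order)."""
    k = 0.5 * np.trace(R)                        # turbulent kinetic energy
    b = R / (2.0 * k) - np.eye(3) / 3.0          # anisotropy tensor (trace-free)
    eigvals, eigvecs = np.linalg.eigh(b)         # eigvals returned ascending
    eigvals = (1.0 - delta) * eigvals + delta * np.asarray(target_eigs)
    b_pert = eigvecs @ np.diag(eigvals) @ eigvecs.T
    return 2.0 * k * (b_pert + np.eye(3) / 3.0)  # perturbed Reynolds stress

# Example: nudge a mildly anisotropic stress toward the one-component limit,
# whose anisotropy eigenvalues are (-1/3, -1/3, 2/3).
R = np.diag([0.5, 0.3, 0.2])
R_pert = perturb_reynolds_stress(R, target_eigs=(-1/3, -1/3, 2/3), delta=0.2)
```

Because both eigenvalue sets are trace-free, the perturbation preserves the turbulent kinetic energy; the paper's contribution additionally perturbs the eigenvectors, which this sketch leaves untouched.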
Costs of Addressing Heroin Addiction in Malaysia and 32 Comparable Countries Worldwide
Ruger, Jennifer Prah; Chawarski, Marek; Mazlan, Mahmud; Luekens, Craig; Ng, Nora; Schottenfeld, Richard
2012-01-01
Objective: Develop and apply new costing methodologies to estimate costs of opioid dependence treatment in countries worldwide. Data Sources/Study Setting: Micro-costing methodology developed and data collected during a randomized controlled trial (RCT) involving 126 patients (July 2003–May 2005) in Malaysia. Gross-costing methodology developed to estimate costs of treatment replication in 32 countries with data collected from publicly available sources. Study Design: Fixed, variable, and societal cost components of the Malaysian RCT micro-costed, and an analytical framework created and employed for gross-costing in 32 countries selected by three criteria relative to Malaysia: major heroin problem, geographic proximity, and comparable gross domestic product (GDP) per capita. Principal Findings: Medication, and urine and blood testing, accounted for the greatest percentage of total costs for both naltrexone (29–53 percent) and buprenorphine (33–72 percent) interventions. In 13 countries, buprenorphine treatment could be provided for under $2,000 per patient. For all countries except the United Kingdom and Singapore, incremental costs per person were below $1,000 when comparing buprenorphine to naltrexone. An estimated 100 percent of opiate users in Cambodia and Lao People's Democratic Republic could be treated for $8 and $30 million, respectively. Conclusions: Buprenorphine treatment can be provided at low cost in countries across the world. This study's new costing methodologies provide tools for health systems worldwide to determine the feasibility and cost of similar interventions. PMID:22091732
Costs of addressing heroin addiction in Malaysia and 32 comparable countries worldwide.
Ruger, Jennifer Prah; Chawarski, Marek; Mazlan, Mahmud; Luekens, Craig; Ng, Nora; Schottenfeld, Richard
2012-04-01
Develop and apply new costing methodologies to estimate costs of opioid dependence treatment in countries worldwide. Micro-costing methodology developed and data collected during randomized controlled trial (RCT) involving 126 patients (July 2003-May 2005) in Malaysia. Gross-costing methodology developed to estimate costs of treatment replication in 32 countries with data collected from publicly available sources. Fixed, variable, and societal cost components of Malaysian RCT micro-costed and analytical framework created and employed for gross-costing in 32 countries selected by three criteria relative to Malaysia: major heroin problem, geographic proximity, and comparable gross domestic product (GDP) per capita. Medication, and urine and blood testing accounted for the greatest percentage of total costs for both naltrexone (29-53 percent) and buprenorphine (33-72 percent) interventions. In 13 countries, buprenorphine treatment could be provided for under $2,000 per patient. For all countries except United Kingdom and Singapore, incremental costs per person were below $1,000 when comparing buprenorphine to naltrexone. An estimated 100 percent of opiate users in Cambodia and Lao People's Democratic Republic could be treated for $8 and $30 million, respectively. Buprenorphine treatment can be provided at low cost in countries across the world. This study's new costing methodologies provide tools for health systems worldwide to determine the feasibility and cost of similar interventions.
Brenzel, Logan; Young, Darwin; Walker, Damian G
2015-05-07
Few detailed facility-based costing studies of routine immunization (RI) programs have been conducted in recent years, with planners, managers and donors relying on older information or data from planning tools. To fill gaps and improve quality of information, a multi-country study on costing and financing of routine immunization and new vaccines (EPIC) was conducted in Benin, Ghana, Honduras, Moldova, Uganda and Zambia. This paper provides the rationale for the launch of the EPIC study, as well as outlines methods used in a Common Approach on facility sampling, data collection, cost and financial flow estimation for both the routine program and new vaccine introduction. Costing relied on an ingredients-based approach from a government perspective. Estimating incremental economic costs of new vaccine introduction in contexts with excess capacity are highlighted. The use of more disaggregated System of Health Accounts (SHA) coding to evaluate financial flows is presented. The EPIC studies resulted in a sample of 319 primary health care facilities, with 65% of facilities in rural areas. The EPIC studies found wide variation in total and unit costs within each country, as well as between countries. Costs increased with level of scale and socio-economic status of the country. Governments are financing an increasing share of total RI financing. This study provides a wealth of high quality information on total and unit costs and financing for RI, and demonstrates the value of in-depth facility approaches. The paper discusses the lessons learned from using a standardized approach, as well as proposes further areas of methodology development. The paper discusses how results can be used for resource mobilization and allocation, improved efficiency of services at the country level, and to inform policies at the global level. Efforts at routinizing cost analysis to support sustainability efforts would be beneficial.
A new methodology for hydro-abrasive erosion tests simulating penstock erosive flow
NASA Astrophysics Data System (ADS)
Aumelas, V.; Maj, G.; Le Calvé, P.; Smith, M.; Gambiez, B.; Mourrat, X.
2016-11-01
Hydro-abrasive resistance is an important property requirement for the hydroelectric power plant penstock coating systems used by EDF. The selection of durable coating systems requires an experimental characterization of coating performance. This can be achieved by performing accelerated and representative laboratory tests. In the case of severe erosion induced by a penstock flow, there is no suitable method or standard representative of real erosive flow conditions. The presented study aims at developing a new methodology and an associated laboratory experimental device. The objective of the laboratory apparatus is to subject coated test specimens to wear conditions similar to the ones generated at the penstock lower generatrix in actual flow conditions. Thirteen preselected coating solutions were first tested in a 45-hour erosion test. A ranking of the thirteen coating solutions was then determined after characterisation. To complete this first evaluation and to determine the wear kinetics of the four best coating solutions, additional erosion tests were conducted with a longer duration of 216 hours. A comparison of this new method with standardized tests and with real service operating flow conditions is also discussed. To complete the final ranking based on hydro-abrasive erosion tests, some trial tests were carried out on penstock samples to check the application method of the selected coating systems. The paper gives some perspectives related to erosion test methodologies for materials and coating solutions for hydraulic applications. The developed test method can also be applied in other fields.
Numerical and experimental investigation of a beveled trailing-edge flow field and noise emission
NASA Astrophysics Data System (ADS)
van der Velden, W. C. P.; Pröbsting, S.; van Zuijlen, A. H.; de Jong, A. T.; Guan, Y.; Morris, S. C.
2016-12-01
Efficient tools and methodology for the prediction of trailing-edge noise experience substantial interest within the wind turbine industry. In recent years, the Lattice Boltzmann Method has received increased attention for providing such an efficient alternative for the numerical solution of complex flow problems. Based on the fully explicit, transient, compressible solution of the Lattice Boltzmann Equation in combination with a Ffowcs Williams and Hawkings aeroacoustic analogy, an estimation of the acoustic radiation in the far field is obtained. To validate this methodology for the prediction of trailing-edge noise, the flow around a flat plate with an asymmetric 25° beveled trailing edge and obtuse corner in a low Mach number flow is analyzed. Flow field dynamics are compared to data obtained experimentally from Particle Image Velocimetry and Hot Wire Anemometry, and compare favorably in terms of mean velocity field and turbulent fluctuations. Moreover, the characteristics of the unsteady surface pressure, which are closely related to the acoustic emission, show good agreement between simulation and experiment. Finally, the prediction of the radiated sound is compared to the results obtained from acoustic phased array measurements in combination with a beamforming methodology. Vortex shedding results in a strong narrowband component centered at a constant Strouhal number in the acoustic spectrum. At higher frequency, a good agreement between simulation and experiment for the broadband noise component is obtained and a typical cardioid-like directivity is recovered.
F-16XL-2 Supersonic Laminar Flow Control Flight Test Experiment
NASA Technical Reports Server (NTRS)
Anders, Scott G.; Fischer, Michael C.
1999-01-01
The F-16XL-2 Supersonic Laminar Flow Control Flight Test Experiment was part of the NASA High-Speed Research Program. The goal of the experiment was to demonstrate extensive laminar flow, to validate computational fluid dynamics (CFD) codes and design methodology, and to establish laminar flow control design criteria. Topics include the flight test hardware and design, airplane modification, the pressure and suction distributions achieved, the laminar flow achieved, and the data analysis and code correlation.
A NON-OSCILLATORY SCHEME FOR OPEN CHANNEL FLOWS. (R825200)
In modeling shocks in open channel flows, the traditional finite difference schemes become inefficient and warrant special numerical treatment for smooth computations. This paper provides a general introduction to the non-oscillatory high-resolution methodology, coupled with the ...
Augmentative effect of pulsatility on the wall shear stress in tube flow.
Nakata, M; Tatsumi, E; Tsukiya, T; Taenaka, Y; Nishimura, T; Nishinaka, T; Takano, H; Masuzawa, T; Ohba, K
1999-08-01
Wall shear stress (WSS) has been considered to play an important role in the physiological and metabolic functions of vascular endothelial cells. We investigated the effects of the pulse rate and the maximum flow rate on the WSS to clarify the influence of pulsatility. Water was perfused in a 1/2 inch transparent straight cylinder with a nonpulsatile centrifugal pump and a pulsatile pneumatic ventricular assist device (VAD). In nonpulsatile flow (NF), the flow rate was changed from 1 to 6 L/min in 1 L/min increments to obtain standard values of WSS at each flow rate. In pulsatile flow (PF), the pulse rate was controlled at 40, 60, and 80 bpm, and the maximum flow rate was varied from 3.3 to 12.0 L/min while the mean flow rate was kept at 3 L/min. The WSS was estimated from the velocity profile at measuring points using the laser-illuminated fluorescence method. In NF, the WSS was 12.0 dyne/cm2 at 3 L/min and 33.0 dyne/cm2 at 6 L/min. In PF, changing the pulse rate while keeping the same mean and maximum flow rates did not affect the WSS. On the other hand, an increase in the maximum flow rate at a constant mean flow rate of 3 L/min augmented the mean WSS from 13.1 to 32.9 dyne/cm2. We concluded that the maximum flow rate exerts a substantial augmentative effect on WSS and is a dominant factor of pulsatility in this effect.
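For context, the laminar (Poiseuille) wall shear stress in a straight tube is tau_w = 4*mu*Q/(pi*R^3). This is a reference calculation only: at these flow rates in a 1/2 inch tube the Reynolds number is around 5000, so the flow is likely transitional or turbulent and the measured WSS exceeds the laminar value, which is consistent with the abstract's figures.

```python
# Laminar reference WSS in CGS units; not the experimental estimation method,
# which used velocity profiles from laser-illuminated fluorescence.
import math

def poiseuille_wss(Q_lpm, radius_cm, mu_poise=0.01):
    """Laminar wall shear stress [dyne/cm^2] for flow rate Q in L/min (water)."""
    Q = Q_lpm * 1000.0 / 60.0                     # L/min -> cm^3/s
    return 4.0 * mu_poise * Q / (math.pi * radius_cm ** 3)

R = 1.27 / 2.0                    # 1/2 inch tube radius in cm
tau_3 = poiseuille_wss(3.0, R)    # ~2.5 dyne/cm^2, well below the measured 12.0
tau_6 = poiseuille_wss(6.0, R)    # scales linearly with Q in the laminar regime
```

The linear scaling of the laminar formula also contrasts with the measured jump from 12.0 to 33.0 dyne/cm2 between 3 and 6 L/min.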
Optimized Delivery System Achieves Enhanced Endomyocardial Stem Cell Retention
Behfar, Atta; Latere, Jean-Pierre; Bartunek, Jozef; Homsy, Christian; Daro, Dorothee; Crespo-Diaz, Ruben J.; Stalboerger, Paul G.; Steenwinckel, Valerie; Seron, Aymeric; Redfield, Margaret M.; Terzic, Andre
2014-01-01
Background: Regenerative cell-based therapies are associated with limited myocardial retention of delivered stem cells. The objective of this study is to develop an endocardial delivery system for enhanced cell retention. Methods and Results: Stem cell retention was simulated in silico using one- and three-dimensional models of tissue distortion and compliance associated with delivery. Needle designs predicted to be optimal were accordingly engineered using nitinol, a nickel-titanium alloy displaying shape memory and superelasticity. Biocompatibility was tested with human mesenchymal stem cells. Experimental validation was performed with species-matched cells directly delivered into Langendorff-perfused porcine hearts or administered percutaneously into the endocardium of infarcted pigs. Cell retention was quantified by flow cytometry and real-time quantitative polymerase chain reaction methodology. Models, computing the optimal distribution of distortion calibrated to favor tissue compliance, predicted that a 75°-curved needle featuring small-to-large graded side holes would ensure the highest cell retention profile. In isolated hearts, the nitinol curved needle catheter (C-Cath) design ensured 3-fold superior stem cell retention compared to a standard needle. In the setting of chronic infarction, percutaneous delivery of stem cells with C-Cath yielded 37.7±7.1% retention versus 10.0±2.8% achieved with a traditional needle, without impact on biocompatibility or safety. Conclusions: Modeling-guided development of a nitinol-based curved needle delivery system with incremental side holes achieved enhanced myocardial stem cell retention. PMID:24326777
Continuous Improvement of a Groundwater Model over a 20-Year Period: Lessons Learned.
Andersen, Peter F; Ross, James L; Fenske, Jon P
2018-04-17
Groundwater models developed for specific sites generally become obsolete within a few years due to changes in: (1) modeling technology; (2) site/project personnel; (3) project funding; and (4) modeling objectives. Consequently, new models are sometimes developed for the same sites using the latest technology and data, but without potential knowledge gained from the prior models. When it occurs, this practice is particularly problematic because, although technology, data, and observed conditions change, development of the new numerical model may not consider the conceptual model's underpinnings. As a contrary situation, we present the unique case of a numerical flow and trichloroethylene (TCE) transport model that was first developed in 1993 and since revised and updated annually by the same personnel. The updates are prompted by an increase in the amount of data, exposure to a wider range of hydrologic conditions over increasingly longer timeframes, technological advances, evolving modeling objectives, and revised modeling methodologies. The history of updates shows smooth, incremental changes in the conceptual model and modeled aquifer parameters that result from both increase and decrease in complexity. Myriad modeling objectives have included demonstrating the ineffectiveness of a groundwater extraction/injection system, evaluating potential TCE degradation, locating new monitoring points, and predicting likelihood of exceedance of groundwater standards. The application emphasizes an original tenet of successful groundwater modeling: iterative adjustment of the conceptual model based on observations of actual vs. model response.
NASA Astrophysics Data System (ADS)
Keylock, C. J.; Nishimura, K.; Peinke, J.
2012-03-01
Kolmogorov's classic theory of turbulence assumed an independence between velocity increments and the value of the velocity itself. However, recent work has called this assumption into question, which has implications for the structure of atmospheric, oceanic and fluvial flows. Here we propose a conceptually simple analytical framework for studying velocity-intermittency coupling that is similar in essence to the popular quadrant analysis method for studying near-wall flows. However, we study the dominant (longitudinal) velocity component along with a measure of the roughness of the signal, given mathematically by its series of Hölder exponents. Thus, we permit a possible dependence between velocity and intermittency. We compare boundary layer data obtained in a wind tunnel to turbulent jets and wake flows. These flow classes all have distinct characteristics, which allow them to be readily distinguished using our technique, and the results are robust to changes in flow Reynolds number. Classification of environmental flows is then possible based on their similarities to the idealized flow classes, and we demonstrate this using laboratory data for flow in a parallel-channel confluence. Our results have clear implications for sediment transport in a range of geophysical applications, as they suggest that the recently proposed impulse-based methods for studying bed load transport are particularly relevant in domains such as gravel-bed river flows, where the boundary layer is disrupted and wake interactions predominate.
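The quadrant-style classification the abstract describes pairs the velocity fluctuation u' with the fluctuation of the pointwise Hölder exponent. A conceptual sketch follows; estimating Hölder exponents in practice requires a wavelet or multifractal method, so a generic "roughness" series stands in for them here, and all numbers are synthetic.

```python
# Conceptual sketch of velocity-intermittency quadrant labeling.
import numpy as np

def velocity_intermittency_quadrants(u, h):
    """Label each sample 1-4 by the signs of the fluctuation pair (u', h')."""
    up, hp = u - u.mean(), h - h.mean()
    # Q1: u'>=0, h'>=0; Q2: u'<0, h'>=0; Q3: u'<0, h'<0; Q4: u'>=0, h'<0
    return np.where(up >= 0, np.where(hp >= 0, 1, 4),
                             np.where(hp >= 0, 2, 3))

u = np.array([1.2, 0.8, 1.1, 0.7, 1.0])        # synthetic velocity samples
h = np.array([0.30, 0.40, 0.28, 0.35, 0.33])   # stand-in roughness exponents
labels = velocity_intermittency_quadrants(u, h)
```

Comparing the occupancy of the four quadrants across flows is what lets the method distinguish boundary layers, jets, and wakes.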
NASA Astrophysics Data System (ADS)
Raimonet, M.; Oudin, L.; Rabouille, C.; Garnier, J.; Silvestre, M.; Vautard, R.; Thieu, V.
2016-12-01
Water quality management of fresh and marine aquatic systems requires modelling tools along the land-ocean continuum in order to evaluate the effect of climate change on nutrient transfer and on potential ecosystem dysfunctioning (e.g. eutrophication, anoxia). In addition to the direct effects of climate change on water temperature, it is essential to consider the indirect effects of precipitation and temperature changes on hydrology, since nutrient transfers are particularly sensitive to the partition of streamflow between surface flow and baseflow. Yet the determination of surface flow and baseflow, their spatial repartition over drainage basins, and their relative potential evolution under climate change remains challenging. In this study, we developed a generic approach to determine 10-day surface flow and baseflow using a regionalized hydrological model applied at high spatial resolution (unitary catchments of roughly 10 km²). Streamflow data at gauged basins were used to calibrate hydrological model parameters that were then applied to neighboring ungauged basins to estimate streamflow at the scale of the French territory. The proposed methodology allowed us to represent spatialized surface flow and baseflow that are consistent with climatic and geomorphological settings. The methodology was then used to determine the effect of climate change on the spatial repartition of surface flow and baseflow in the Seine drainage basin. Results showed large discrepancies in both the amount and the spatial repartition of changes in surface flow and baseflow according to the several GCMs and RCMs used to derive projected climatic forcing. Consequently, it is expected that the impact of climate change on nutrient transfer might also be quite heterogeneous for the Seine River. This methodology could be applied in any drainage basin where at least several gauged hydrometric stations are available.
The estimated surface flow and baseflow can then be used in hydro-ecological models in order to evaluate direct and indirect impacts of climate change on nutrient transfers and potential ecosystem dysfunctioning along the land-ocean continuum.
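The abstract does not specify how the surface-flow/baseflow partition is computed; as an illustration, a standard one-parameter recursive digital filter (Lyne-Hollick type, with an assumed alpha of 0.925) separating quickflow from baseflow in a streamflow series:

```python
# Illustrative baseflow separation; not the paper's actual partition method.
def separate_baseflow(q, alpha=0.925):
    """One-parameter recursive filter: returns the baseflow series for q."""
    quick = [0.0]
    for t in range(1, len(q)):
        f = alpha * quick[-1] + 0.5 * (1.0 + alpha) * (q[t] - q[t - 1])
        quick.append(min(max(f, 0.0), q[t]))    # constrain 0 <= quickflow <= q
    return [qt - ft for qt, ft in zip(q, quick)]  # baseflow = total - quickflow

# 10-day synthetic streamflow series [m^3/s]: a storm peak then recession.
q = [5.0, 5.2, 12.0, 25.0, 18.0, 11.0, 8.0, 6.5, 5.8, 5.4]
baseflow = separate_baseflow(q)
```

In practice such filters are applied in forward and backward passes and calibrated per catchment; the single forward pass here just shows the recursion.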
Development of an Unstructured Mesh Code for Flows About Complete Vehicles
NASA Technical Reports Server (NTRS)
Peraire, Jaime; Gupta, K. K. (Technical Monitor)
2001-01-01
This report describes the research work undertaken at the Massachusetts Institute of Technology under NASA Research Grant NAG4-157. The aim of this research is to identify effective algorithms and methodologies for the efficient and routine solution of flow simulations about complete vehicle configurations. For over ten years we have received support from NASA to develop unstructured mesh methods for Computational Fluid Dynamics. As a result of this effort, a methodology based on the use of unstructured adapted meshes of tetrahedra and finite volume flow solvers has been developed. A number of gridding algorithms, flow solvers, and adaptive strategies have been proposed. The most successful algorithms developed form the basis of the unstructured mesh system FELISA. The FELISA system has been used extensively for the analysis of transonic and hypersonic flows about complete vehicle configurations. The system is highly automatic and allows for the routine aerodynamic analysis of complex configurations starting from CAD data. The code has been parallelized and utilizes efficient solution algorithms. For hypersonic flows, a version of the code that incorporates real gas effects has been produced. The FELISA system is also a component of the STARS aeroservoelastic system developed at NASA Dryden. One of the latest developments before the start of this grant was to extend the system to include viscous effects. This required the development of viscous mesh generators, capable of generating the anisotropic grids required to represent boundary layers, and viscous flow solvers. We show some sample hypersonic viscous computations using the developed viscous mesh generators and solvers. Although these initial results were encouraging, it became apparent that in order to develop a fully functional capability for viscous flows, several advances in solution accuracy, robustness and efficiency were required.
In this grant we set out to investigate some novel methodologies that could lead to the required improvements. In particular we focused on two fronts: (1) finite element methods and (2) iterative algebraic multigrid solution techniques.
Dynamic characterisation of the specific surface area for fracture networks
NASA Astrophysics Data System (ADS)
Cvetkovic, V.
2017-12-01
One important application of chemical transport modelling is the geological disposal of high-level nuclear waste, for which crystalline rock is a prime candidate, for instance in Scandinavia. Interconnected heterogeneous fractures of sparsely fractured rock such as granite act as conduits for the transport of dissolved tracers. Fluid flow is known to be highly channelized in such rocks. Channels imply narrow flow paths, adjacent to essentially stagnant water in the fracture and/or the rock matrix. Tracers are transported along channelised flow paths and retained by minerals and/or stagnant water, depending on their sorption properties; this mechanism is critical for rocks to act as a barrier and ultimately provide safety for a geological repository. Sorbing tracers are retained by diffusion and sorption on mineral surfaces, whereas non-sorbing tracers can be retained only by diffusion into the stagnant water of fractures. The retention and transport properties of a sparsely fractured rock will primarily depend on the specific surface area (SSA) of the fracture network, which is determined by the heterogeneous structure and flow. The main challenge when characterising SSA on the field scale is its dependence on the flow dynamics. We first define SSA as a physical quantity and clarify its importance for chemical transport. A methodology for dynamic characterisation of SSA in fracture networks is proposed that relies on three sets of data: (i) flow rate data obtained by a flow-logging procedure; (ii) transmissivity data obtained by pumping tests; (iii) fracture network data obtained from outcrop and geophysical observations. The proposed methodology utilises these data directly as well as indirectly through flow and particle tracking simulations in three-dimensional discrete fracture networks. The methodology is exemplified using specific data from the Swedish site Laxemar.
The potential impact of uncertainties is of particular significance and is illustrated for radionuclide attenuation. Effects of internal fracture heterogeneity vs fracture network heterogeneity, and of rock deformation, on the statistical properties of SSA are briefly discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reckinger, Scott James; Livescu, Daniel; Vasilyev, Oleg V.
A comprehensive numerical methodology has been developed that handles the challenges introduced by the compressible nature of Rayleigh-Taylor instability (RTI) systems, which include sharp interfacial density gradients on strongly stratified background states, acoustic wave generation and removal at computational boundaries, and stratification-dependent vorticity production. The computational framework is used to simulate two-dimensional single-mode RTI to very late times for a wide range of flow compressibility and variable-density effects. The results show that flow compressibility acts to reduce the growth of RTI for low Atwood numbers, as predicted by linear stability analysis.
NPAC-Nozzle Performance Analysis Code
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.
1997-01-01
A simple and accurate nozzle performance analysis methodology has been developed. The geometry modeling requirements are minimal and very flexible, thus allowing rapid design evaluations. The solution techniques accurately couple the continuity, momentum, energy, and state equations, along with other relations, permitting fast and accurate calculations of nozzle gross thrust. The control volume and internal flow analyses are capable of accounting for the effects of over/under expansion, flow divergence, wall friction, heat transfer, and mass addition/loss across surfaces. The results from the nozzle performance methodology are shown to be in excellent agreement with experimental data for a variety of nozzle designs over a range of operating conditions.
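At the exit plane, the control-volume coupling described above reduces to the standard gross-thrust relation F_g = mdot*V_e + (p_e - p_amb)*A_e. The numbers below are illustrative, not from NPAC validation cases.

```python
# Gross-thrust relation for a nozzle exit plane; example values are invented.
def gross_thrust(mdot, v_exit, p_exit, p_amb, a_exit):
    """Gross thrust [N]: exit momentum flux plus the pressure-area term."""
    return mdot * v_exit + (p_exit - p_amb) * a_exit

# Hypothetical slightly underexpanded nozzle.
F = gross_thrust(mdot=50.0, v_exit=600.0,        # kg/s, m/s
                 p_exit=110e3, p_amb=101.325e3,  # Pa
                 a_exit=0.30)                    # m^2
# Momentum term 30 kN plus pressure term ~2.6 kN.
```

Corrections for flow divergence, wall friction, and heat transfer, which NPAC accounts for, would each subtract from or adjust these two terms.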
Inertial flow regimes of the suspension of finite size particles
NASA Astrophysics Data System (ADS)
Lashgari, Iman; Picano, Francesco; Brandt, Luca
2015-03-01
We study inertial flow regimes of suspensions of finite-size neutrally buoyant particles. These suspensions pass through three different regimes as the Reynolds number, Re, and particle volume fraction, Φ, are varied. At low values of Re and Φ, the flow is laminar-like, with viscous stress the dominant term in the stress budget. At high Re and relatively small Φ, the flow is turbulent-like, with the Reynolds stress making the largest contribution to the total stress. At high Φ, the flow regime takes the form of inertial shear thickening, characterized by a significant enhancement of the wall shear stress that is due not to an increase in the Reynolds stress but to the particle stress. We further analyze the local behavior of the suspension in the three regimes by studying particle dispersion and collisions. Turbulent cases show higher levels of particle dispersion and higher values of the collision kernel (the radial distribution function times the particle relative velocity as a function of the interparticle distance) than the inertial shear-thickening regime, providing additional evidence of two different transport mechanisms in the Bagnoldian regime. Support from the European Research Council (ERC) is acknowledged.
Propelled microprobes in turbulence
NASA Astrophysics Data System (ADS)
Calzavarini, E.; Huang, Y. X.; Schmitt, F. G.; Wang, L. P.
2018-05-01
The temporal statistics of incompressible fluid velocity and passive scalar fields in developed turbulent conditions are investigated by means of direct numerical simulations along the trajectories of self-propelled pointlike probes drifting in a flow. Such probes are characterized by a propulsion velocity that is fixed in intensity and direction; however, like vessels in a current, they are continuously deflected from their intended course by the local sweeping of the fluid flow. The time series recorded by these moving probes represent the simplest realization of transect measurements in a fluid flow environment. We investigate the nontrivial combination of Lagrangian and Eulerian statistical properties displayed by the transect time series. We show that, as a result of the homogeneity and isotropy of the flow, the single-point acceleration statistics of the probes follow a predictable trend as the propulsion speed varies, a feature that is also present in the scalar time-derivative fluctuations. Further, by focusing on two-time statistics, we characterize how the Lagrangian-to-Eulerian transition occurs as the propulsion velocity increases. The analysis of the intermittency of temporal increments highlights in a striking way the opposite trends displayed by the fluid velocity and passive scalars.
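The probe kinematics described above amount to integrating dx/dt = u(x) + v_p ê, a fixed propulsion speed v_p along an intended heading ê, through a prescribed velocity field. A minimal sketch using a steady 2D Taylor-Green cell as a stand-in for the DNS field (all names and parameters are illustrative assumptions):

```python
import numpy as np

def taylor_green(x, y):
    """Steady 2D Taylor-Green cell, standing in for the turbulent field."""
    return np.cos(x) * np.sin(y), -np.sin(x) * np.cos(y)

def probe_track(x0, y0, v_p, heading, dt=1e-3, n_steps=20000):
    """Integrate dx/dt = u(x) + v_p * e_hat for a probe with fixed propulsion
    speed v_p and intended heading; the flow continuously deflects its course."""
    ex, ey = np.cos(heading), np.sin(heading)
    track = np.empty((n_steps, 2))
    x, y = x0, y0
    for k in range(n_steps):
        ux, uy = taylor_green(x, y)
        x += (ux + v_p * ex) * dt   # forward Euler; RK4 would sharpen accuracy
        y += (uy + v_p * ey) * dt
        track[k] = x, y
    return track
```

With v_p = 0 the probe reduces to a Lagrangian tracer; when v_p greatly exceeds the flow velocity, its track approaches a straight Eulerian transect. That sweep from one limit to the other is the Lagrangian-to-Eulerian transition the abstract analyzes.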
CT scanning and flow measurements of shale fractures after multiple shearing events
Crandall, Dustin; Moore, Johnathan; Gill, Magdalena; ...
2017-11-05
A shearing apparatus was used in conjunction with a Hassler-style core holder to incrementally shear fractured shale cores while maintaining various confining pressures. Computed tomography scans were performed after each shearing event, and were used to obtain information on evolving fracture geometry. Fracture transmissivity was measured after each shearing event to understand the hydrodynamic response to the evolving fracture structure. The digital fracture volumes were used to perform laminar single phase flow simulations (local cubic law with a tapered plate correction model) to qualitatively examine small scale flow path variations within the altered fractures. Fractures were found to generally increase in aperture after several shear slip events, with corresponding transmissivity increases. Lower confining pressure resulted in a fracture more prone to episodic mechanical failure and sudden changes in transmissivity. Conversely, higher confining pressures resulted in a system where, after an initial setting of the fracture surfaces, changes to the fracture geometry and transmissivity occurred gradually. Flow paths within the fractures are largely controlled by the location and evolution of zero aperture locations. Lastly, a reduction in the number of primary flow pathways through the fracture, and an increase in their width, was observed during all shearing tests.
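The local cubic law referenced above treats each cell of the aperture map as a parallel-plate conduit with transmissivity h³/12μ and solves a Darcy-like pressure equation over the field. A minimal finite-difference sketch (unit viscosity and grid spacing, no tapered-plate correction; the function name and boundary setup are illustrative assumptions):

```python
import numpy as np

def local_cubic_law(h, p_in=1.0, p_out=0.0, n_iter=5000):
    """Steady local-cubic-law solve, div((h^3/12) grad p) = 0, on an aperture
    field h of shape (ny, nx): fixed pressures on the left/right edges,
    no-flow on top/bottom. Returns the pressure field and the fracture
    transmissivity Q / (p_in - p_out) in unit-viscosity, unit-spacing units."""
    ny, nx = h.shape
    k = h ** 3 / 12.0                          # cell transmissivity, cubic law
    # harmonic-mean transmissivities on cell faces (zero aperture blocks flow)
    kx = 2 * k[:, 1:] * k[:, :-1] / (k[:, 1:] + k[:, :-1] + 1e-30)
    ky = 2 * k[1:, :] * k[:-1, :] / (k[1:, :] + k[:-1, :] + 1e-30)
    p = np.tile(np.linspace(p_in, p_out, nx), (ny, 1))
    for _ in range(n_iter):                    # Jacobi iteration
        num = np.zeros_like(p)
        den = np.zeros_like(p)
        num[:, 1:] += kx * p[:, :-1]; den[:, 1:] += kx
        num[:, :-1] += kx * p[:, 1:]; den[:, :-1] += kx
        num[1:, :] += ky * p[:-1, :]; den[1:, :] += ky
        num[:-1, :] += ky * p[1:, :]; den[:-1, :] += ky
        p = num / np.maximum(den, 1e-30)
        p[:, 0], p[:, -1] = p_in, p_out        # re-impose Dirichlet edges
    q_out = kx[:, -1] * (p[:, -2] - p[:, -1])  # flux through the outlet faces
    return p, q_out.sum() / (p_in - p_out)
```

Zero-aperture (contact) cells get zero face transmissivity, so flow must reroute around them — the mechanism behind the channelized pathways controlled by zero-aperture locations that the abstract describes.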
A Comprehensive Comparison of Current Operating Reserve Methodologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim; Ibanez, Eduardo; Gao, Wenzhong
Electric power systems are currently experiencing a paradigm shift from a traditionally static system to a system that is becoming increasingly more dynamic and variable. Emerging technologies are forcing power system operators to adapt to their performance characteristics. These technologies, such as distributed generation and energy storage systems, have changed the traditional idea of a distribution system with power flowing in one direction into a distribution system with bidirectional flows. Variable generation, in the form of wind and solar generation, also increases the variability and uncertainty in the system. As such, power system operators are revisiting the ways in which they treat this evolving power system, namely by modifying their operating reserve methodologies. This paper presents an in-depth analysis of different operating reserve methodologies and investigates their impacts on power system reliability and economic efficiency.
Setting limits through global budgeting: hospital cost containment in Rhode Island.
Hackey, R B
1996-01-01
Since 1974, hospitals in Rhode Island have participated in annual negotiations with state officials and representatives from Blue Cross to determine the allowed increase in statewide hospital costs (the "Maxicap") for the next fiscal year, based on projected increases in hospital revenues and changes in patient volume and operating expenses. Individual hospital budgets may be above or below the Maxicap as long as the total increase in hospital costs for all hospitals in the state does not exceed the negotiated amount. At a time when regulatory solutions are increasingly under fire, continued support for Rhode Island's approach to hospital cost containment from third-party payers, providers, and public officials stands in stark contrast to other states, where rate setting was either dismantled or discredited as a cost control strategy. A negotiated global cap on hospital expenditures offers an alternative to formula-based state rate-setting methodologies, one that could be incorporated as part of an all-payer reimbursement methodology or as an incremental step toward more comprehensive reform.
NASA Astrophysics Data System (ADS)
Schneider, Kai; Kadoch, Benjamin; Bos, Wouter
2017-11-01
The angle between two subsequent particle displacement increments is evaluated as a function of the time lag. The directional change of particles can thus be quantified at different scales, and multiscale statistics can be performed. Flow-dependent and geometry-dependent features can be distinguished. The mean angle satisfies scaling behaviors for short time lags based on the smoothness of the trajectories. For intermediate time lags a power-law behavior can be observed for some turbulent flows, which can be related to Kolmogorov scaling. The long-time behavior depends on the confinement geometry of the flow. We show that the shape of the probability distribution function of the directional change is well described by a Fisher distribution. Results for two-dimensional (direct and inverse cascade) and three-dimensional turbulence, with and without confinement, illustrate the properties of the proposed multiscale statistics. The presented Monte Carlo simulations allow geometry-dependent and flow-independent features to be disentangled. Finally, we also analyze trajectories of football players, which are, in general, not randomly distributed on the field.
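The statistic above is computed from two consecutive displacement increments of the same trajectory at lag τ. A minimal sketch of the estimator (the function name and lag choices are illustrative):

```python
import numpy as np

def mean_directional_change(traj, lags):
    """Mean angle between consecutive displacement increments of a sampled
    trajectory traj of shape (n, d), for each time lag tau (in samples):
    theta(t, tau) = angle( x(t+tau)-x(t), x(t+2*tau)-x(t+tau) )."""
    traj = np.asarray(traj, dtype=float)
    means = []
    for tau in lags:
        a = traj[tau:-tau] - traj[:-2 * tau]   # first increment
        b = traj[2 * tau:] - traj[tau:-tau]    # second increment
        cosang = (a * b).sum(axis=1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
        means.append(np.arccos(np.clip(cosang, -1.0, 1.0)).mean())
    return np.array(means)
```

Smooth trajectories give mean angles that vanish with the lag, while uncorrelated (random-walk) motion gives a lag-independent mean of π/2; these two limits bracket the turbulent scaling range discussed above.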
Effect of homogenous-heterogeneous reactions on MHD Prandtl fluid flow over a stretching sheet
NASA Astrophysics Data System (ADS)
Khan, Imad; Malik, M. Y.; Hussain, Arif; Salahuddin, T.
An analysis is performed to explore the effects of homogeneous-heterogeneous reactions on the two-dimensional flow of a Prandtl fluid over a stretching sheet. In the present analysis, we use the developed model of homogeneous-heterogeneous reactions in boundary layer flow. The mathematical formulation of the flow phenomenon yields nonlinear partial differential equations. Using scaling transformations, the governing partial differential equations (the momentum equation and the homogeneous-heterogeneous reaction equations) are transformed into nonlinear ordinary differential equations (ODEs). The resulting nonlinear ODEs are then solved by the shooting method. The behavior of the physical quantities of interest (velocity, concentration, and drag force coefficient) is examined quantitatively and qualitatively, under the prescribed physical constraints, through figures and tables. It is observed that the velocity profile increases with the fluid parameters α and β, while the Hartmann number reduces it. The homogeneous and heterogeneous reaction parameters have opposite effects on the concentration profile. The concentration profile shows retarding behavior for large values of the Schmidt number. The skin friction coefficient increases with increments in the Hartmann number H and the fluid parameter α.
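The shooting method invoked above converts the boundary value problem into an initial value problem: guess the missing initial slope, march the ODE outward, and adjust the guess until the far-field condition is met. A minimal sketch using the classical Blasius boundary-layer equation f''' + ½ f f'' = 0, f(0) = f'(0) = 0, f'(∞) = 1 as a generic stand-in for the paper's Prandtl-fluid system (all names are illustrative):

```python
import numpy as np

def rhs(y):
    """Blasius equation written as a first-order system: y = (f, f', f'')."""
    f, fp, fpp = y
    return np.array([fp, fpp, -0.5 * f * fpp])

def far_field_slope(fpp0, eta_max=10.0, n=2000):
    """RK4-march from eta = 0 with guessed curvature f''(0) = fpp0 and
    return f'(eta_max), which should match the free-stream value 1."""
    h = eta_max / n
    y = np.array([0.0, 0.0, fpp0])
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[1]

def shoot(lo=0.1, hi=1.0, tol=1e-10):
    """Bisect on f''(0) until f'(inf) = 1; the far-field slope is
    monotonic in the initial curvature, so bisection suffices."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if far_field_slope(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The converged initial curvature f''(0) ≈ 0.3321 is the classical Blasius value. The same guess-march-correct loop applies to the paper's coupled momentum and reaction ODEs, with one shooting parameter per missing initial condition.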
Arterial flow regulator enables transplantation and growth of human fetal kidneys in rats.
Chang, N K; Gu, J; Gu, S; Osorio, R W; Concepcion, W; Gu, E
2015-06-01
Here we introduce a novel method of transplanting human fetal kidneys into adult rats. To overcome the technical challenges of fetal-to-adult organ transplantation, we devised an arterial flow regulator (AFR), consisting of a volume adjustable saline-filled cuff, which enables low-pressure human fetal kidneys to be transplanted into high-pressure adult rat hosts. By incrementally withdrawing saline from the AFR over time, blood flow entering the human fetal kidney was gradually increased until full blood flow was restored 30 days after transplantation. Human fetal kidneys were shown to dramatically increase in size and function. Moreover, rats which had all native renal mass removed 30 days after successful transplantation of the human fetal kidney were shown to have a mean survival time of 122 days compared to 3 days for control rats that underwent bilateral nephrectomy without a prior human fetal kidney transplant. These in vivo human fetal kidney models may serve as powerful platforms for drug testing and discovery. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.