Sample records for data-flow processing

  1. Work flow of signal processing data of ground penetrating radar case of rigid pavement measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handayani, Gunawan

    The signal processing of Ground Penetrating Radar (GPR) data requires a particular work flow to obtain good results. Although GPR data look similar to seismic reflection data, GPR data carry signatures that seismic reflection data do not, owing to the coupling between the antennae and the ground surface. Because of this, GPR data should be treated differently from the seismic data processing work flow, even though most of the processing steps still follow the seismic reflection work flow (filtering, predictive deconvolution, etc.). This paper presents the work flow for processing GPR data from rigid pavement measurements. The processing steps start from the raw data, proceed through the de-wow process and DC removal, and continue with the standard processes for suppressing noise, i.e., filtering. Some radargram features particular to rigid pavement, along with pile foundations, are presented.
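
    The de-wow and DC-removal steps named in this work flow are simple trace-wise filters. A minimal sketch (not the authors' code), assuming a 2-D radargram array of traces x samples and an assumed, tunable running-mean window:

    ```python
    import numpy as np

    def remove_dc(radargram: np.ndarray) -> np.ndarray:
        """Remove the DC shift by subtracting each trace's mean amplitude."""
        return radargram - radargram.mean(axis=1, keepdims=True)

    def dewow(radargram: np.ndarray, window: int = 51) -> np.ndarray:
        """Suppress the low-frequency 'wow' by subtracting a running mean
        along each trace; the window length is an assumed, tunable value."""
        kernel = np.ones(window) / window
        trend = np.apply_along_axis(
            lambda tr: np.convolve(tr, kernel, mode="same"), 1, radargram)
        return radargram - trend

    # 200 traces of 512 samples each, a stand-in for raw GPR data
    raw = np.random.randn(200, 512)
    preprocessed = dewow(remove_dc(raw))
    ```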

  2. Extensible packet processing architecture

    DOEpatents

    Robertson, Perry J.; Hamlet, Jason R.; Pierson, Lyndon G.; Olsberg, Ronald R.; Chun, Guy D.

    2013-08-20

    A technique for distributed packet processing includes sequentially passing packets associated with packet flows between a plurality of processing engines along a flow-through data bus linking the plurality of processing engines in series. At least one packet within a given packet flow is marked by a given processing engine to signify by the given processing engine to the other processing engines that the given processing engine has claimed the given packet flow for processing. A processing function is applied to each of the packet flows within the processing engines and the processed packets are output on a time-shared, arbitered data bus coupled to the plurality of processing engines.
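
    A toy illustration of the claiming scheme described in the abstract (all names hypothetical; the patent does not specify code): each engine on the serial bus marks the first packet of any flow it claims, and the other engines skip packets of flows claimed elsewhere.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Packet:
        flow_id: int
        payload: bytes
        claimed_by: int | None = None  # mark written by the claiming engine

    class Engine:
        def __init__(self, engine_id: int, n_engines: int):
            self.engine_id, self.n_engines = engine_id, n_engines

        def process(self, pkt: Packet) -> None:
            # Hash-based claim rule is an assumption for this sketch.
            if pkt.claimed_by is None and pkt.flow_id % self.n_engines == self.engine_id:
                pkt.claimed_by = self.engine_id  # signal the claim to other engines
            if pkt.claimed_by == self.engine_id:
                pass  # apply this engine's processing function to the flow here

    engines = [Engine(i, 4) for i in range(4)]
    for pkt in (Packet(7, b"a"), Packet(7, b"b"), Packet(9, b"c")):
        for eng in engines:  # packets pass sequentially along the bus
            eng.process(pkt)
    ```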

  3. Advanced Information Processing System (AIPS) proof-of-concept system functional design I/O network system services

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The functional design of the Input/Output (I/O) services for the Advanced Information Processing System (AIPS) proof-of-concept system is described. The data flow diagrams, which show the functional processes in I/O services and the data that flow among them, are included. A complete list of the data identified on the data flow diagrams and in the process descriptions is provided.

  4. Adapting high-level language programs for parallel processing using data flow

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.
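
    EASY-FLOW itself is not reproduced here, but deducing a data flow graph from subprogram calls can be sketched generically: treat each call's declared inputs and outputs as graph edges, so calls with no data dependency between them may run in parallel. All names below are illustrative.

    ```python
    # Each call: (subprogram name, inputs consumed, outputs produced)
    calls = [
        ("read",   [],        ["raw"]),
        ("filter", ["raw"],   ["clean"]),
        ("stats",  ["clean"], ["report"]),
        ("plot",   ["clean"], ["figure"]),
    ]

    # Deduce data-flow edges: producer of a datum -> its consumer
    producers = {out: name for name, _, outs in calls for out in outs}
    edges = [(producers[i], name)
             for name, ins, _ in calls for i in ins if i in producers]
    print(edges)  # 'stats' and 'plot' are independent once 'filter' is done
    ```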

  5. A scalable neuroinformatics data flow for electrophysiological signals using MapReduce.

    PubMed

    Jayapandian, Catherine; Wei, Annan; Ramesh, Priya; Zonjy, Bilal; Lhatoo, Samden D; Loparo, Kenneth; Zhang, Guo-Qiang; Sahoo, Satya S

    2015-01-01

    Data-driven neuroscience research is providing new insights into the progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated by sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research into serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow, which uses new data partitioning techniques to store and analyze electrophysiological signals in a distributed computing infrastructure. The Cloudwave data flow uses the MapReduce parallel programming model to implement an integrated signal data processing pipeline that scales with the large volumes of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy-focused extensible data representation format called the Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow is evaluated using a 30-node cluster installed with the open-source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volumes of signal data by leveraging Hadoop DataNodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications.
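
    The partition-then-aggregate structure of such a pipeline can be sketched in plain Python (a stand-in for Hadoop MapReduce, not the Cloudwave code; the window size and RMS feature are assumptions):

    ```python
    from collections import defaultdict
    import numpy as np

    signals = {"ch1": np.random.randn(10_000), "ch2": np.random.randn(10_000)}
    WINDOW = 1_000  # assumed partition size, in samples

    def map_phase(channel, samples):
        """Emit (channel, feature) pairs, one per fixed-size window."""
        for start in range(0, len(samples), WINDOW):
            window = samples[start:start + WINDOW]
            yield channel, float(np.sqrt(np.mean(window ** 2)))  # RMS

    def reduce_phase(pairs):
        """Group the mapped pairs by channel and average the feature."""
        grouped = defaultdict(list)
        for key, value in pairs:
            grouped[key].append(value)
        return {key: sum(v) / len(v) for key, v in grouped.items()}

    pairs = [kv for ch, s in signals.items() for kv in map_phase(ch, s)]
    print(reduce_phase(pairs))  # mean windowed RMS per channel
    ```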

  6. A scalable neuroinformatics data flow for electrophysiological signals using MapReduce

    PubMed Central

    Jayapandian, Catherine; Wei, Annan; Ramesh, Priya; Zonjy, Bilal; Lhatoo, Samden D.; Loparo, Kenneth; Zhang, Guo-Qiang; Sahoo, Satya S.

    2015-01-01

    Data-driven neuroscience research is providing new insights into the progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated by sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research into serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow, which uses new data partitioning techniques to store and analyze electrophysiological signals in a distributed computing infrastructure. The Cloudwave data flow uses the MapReduce parallel programming model to implement an integrated signal data processing pipeline that scales with the large volumes of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy-focused extensible data representation format called the Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow is evaluated using a 30-node cluster installed with the open-source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volumes of signal data by leveraging Hadoop DataNodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications. PMID:25852536

  7. Environmental Data Flow Six Sigma Process Improvement Savings Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paige, Karen S

    An overview of the Environmental Data Flow Six Sigma improvement project covers LANL’s environmental data processing following receipt from the analytical laboratories. The Six Sigma project identified thirty-three process improvements, many of which focused on cutting costs or reducing the time it took to deliver data to clients.

  8. The United States Military Entrance Processing Command (USMEPCOM) Uses Six Sigma Process to Develop and Improve Data Quality

    DTIC Science & Technology

    2007-06-01

    [The indexed abstract is extraction residue from briefing charts. Recoverable topics: MIRS applicant accession data services, the SSN process flow with result codes, and USMIRS data-flow testing with accession partners (ARISS, MCRISS, OPM, DMDC, NDSL, ViroMed, WinCAT).]

  9. Free turbulent shear flows. Volume 2: Summary of data

    NASA Technical Reports Server (NTRS)

    Birch, S. F.

    1973-01-01

    The proceedings of a conference on free turbulent shear flows are presented. Objectives of the conference are as follows: (1) collect and process data for a variety of free mixing problems, (2) assess present theoretical capability for predicting mean velocity, concentration, and temperature distributions in free turbulent flows, (3) identify and recommend experimental studies to advance knowledge of free shear flows, and (4) increase understanding of basic turbulent mixing process for application to free shear flows. Examples of specific cases of jet flow are included.

  10. Visual Modelling of Data Warehousing Flows with UML Profiles

    NASA Astrophysics Data System (ADS)

    Pardillo, Jesús; Golfarelli, Matteo; Rizzi, Stefano; Trujillo, Juan

    Data warehousing involves complex processes that transform source data through several stages to deliver suitable information ready to be analysed. Though many techniques for visual modelling of data warehouses from the static point of view have been devised, only a few attempts have been made to model the data flows involved in a data warehousing process. Besides, each attempt was mainly aimed at a specific application, such as ETL, OLAP, what-if analysis, or data mining. Data flows are typically very complex in this domain; for this reason, we argue, designers would greatly benefit from a technique for uniformly modelling data warehousing flows for all applications. In this paper, we propose an integrated visual modelling technique for data cubes and data flows. This technique is based on UML profiling; its feasibility is evaluated by means of a prototype implementation.

  11. Computation of Flow Through Water-Control Structures Using Program DAMFLO.2

    USGS Publications Warehouse

    Sanders, Curtis L.; Feaster, Toby D.

    2004-01-01

    As part of its mission to collect, analyze, and store streamflow data, the U.S. Geological Survey computes flow through several dam structures throughout the country. Flows are computed using hydraulic equations that describe flow through sluice and Tainter gates, crest gates, lock gates, spillways, locks, pumps, and siphons, which are calibrated using flow measurements. The program DAMFLO.2 was written to compute, tabulate, and plot flow through dam structures using data that describe the physical properties of dams and various hydraulic parameters and ratings that use time-varying data, such as lake elevations or gate openings. The program uses electronic computer files of time-varying data, such as lake elevation or gate openings, retrieved from the U.S. Geological Survey Automated Data Processing System. Computed time-varying flow data from DAMFLO.2 are output in flat files, which can be entered into the Automated Data Processing System database. All computations are made in units of feet and seconds. DAMFLO.2 uses the procedures and language developed by the SAS Institute Inc.
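
    DAMFLO.2's ratings are calibrated against flow measurements; as a generic illustration of the kind of hydraulic equation involved (not the program's exact rating), free flow through a sluice gate can be sketched with the orifice equation, keeping the program's feet-and-seconds units:

    ```python
    import math

    def sluice_gate_flow(c_d: float, gate_width_ft: float,
                         gate_opening_ft: float, head_ft: float) -> float:
        """Orifice-type rating Q = Cd * A * sqrt(2 g h), in ft^3/s.
        Cd is an assumed discharge coefficient fixed by calibration."""
        g = 32.2  # ft/s^2
        area = gate_width_ft * gate_opening_ft
        return c_d * area * math.sqrt(2.0 * g * head_ft)

    # A 20 ft wide gate, opened 2.5 ft, under 12 ft of head:
    print(f"{sluice_gate_flow(0.61, 20.0, 2.5, 12.0):.0f} ft^3/s")
    ```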

  12. Knowledge-based modularization and global optimization of artificial neural network models in hydrological forecasting.

    PubMed

    Corzo, Gerald; Solomatine, Dimitri

    2007-05-01

    Natural phenomena are multistationary and are composed of a number of interacting processes, so a single model handling all processes often suffers from inaccuracies. A solution is to partition the data in relation to such processes using available domain knowledge or expert judgment, to train separate models for each of the processes, and to merge them in a modular model (committee). In this paper, a problem of water flow forecasting in watershed hydrology is considered in which the flow process can be represented as consisting of two subprocesses, base flow and excess flow, so that these two processes can be separated. Several approaches to data separation are studied, and two case studies with different forecast horizons are considered. Parameters of the algorithms responsible for data partitioning are optimized using genetic algorithms and global pattern search. It was found that modularization of ANN models using domain knowledge makes the models more accurate, compared with a global model trained on the whole data set, especially when the forecast horizon (and hence the complexity of the modelled processes) is increased.
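
    The base flow / excess flow split can be illustrated with a standard one-parameter digital filter (the Lyne-Hollick filter is one common choice; the paper's own separation techniques may differ), after which one model per subprocess forms the modular committee:

    ```python
    import numpy as np

    def baseflow_separation(q: np.ndarray, alpha: float = 0.925):
        """One-pass Lyne-Hollick filter: split the total flow q into
        base flow and excess (quick) flow; alpha is an assumed value."""
        quick = np.zeros_like(q)
        for t in range(1, len(q)):
            quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
            quick[t] = min(max(quick[t], 0.0), q[t])  # keep 0 <= quick <= q
        return q - quick, quick

    q = np.abs(np.cumsum(np.random.randn(365))) + 1.0  # toy daily flow series
    base, excess = baseflow_separation(q)
    # Modular committee: forecast = model_base(base inputs) + model_excess(excess inputs)
    ```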

  13. Intershot Analysis of Flows in DIII-D

    NASA Astrophysics Data System (ADS)

    Meyer, W. H.; Allen, S. L.; Samuell, C. M.; Howard, J.

    2016-10-01

    Analysis of the DIII-D flow diagnostic data requires demodulation of interference images and inversion of the resultant line-integrated emissivity and flow (phase) images. Four response matrices are pre-calculated: the emissivity line integral and the line integrals of the scalar product of the lines-of-sight with the orthogonal unit vectors of parallel flow. Equilibrium data determine the relative weights of the component matrices used in the final flow inversion matrix. Serial processing has been used for the 800x600-pixel image from the lower-divertor-viewing flow camera. The full-cross-section-viewing camera will require parallel processing of its 2160x2560-pixel image. We will discuss using a POSIX thread pool and a Tesla K40c GPU in the processing of these data. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.

  14. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  15. 40 CFR 98.295 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... value shall be the best available estimate(s) of the parameter(s), based on all available process data or data used for accounting purposes. (c) For each missing value collected during the performance test (hourly CO2 concentration, stack gas volumetric flow rate, or average process vent flow from mine...

  16. Data assimilation with soil water content sensors and pedotransfer functions in soil water flow modeling

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on a set of simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Soil water content monitoring data can be used to reduce the errors in models. Data assimilation (...

  17. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR FLOW AND CUSTODY OF FIELD DATA FORMS (UA-C-5.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the flow of field data forms through the data processing system and to define who is responsible for the data at any time. It applies to field data forms collected and processed by NHEXAS Arizona. This procedure was followed to ensure consi...

  18. Cyber Situational Awareness through Operational Streaming Analysis

    DTIC Science & Technology

    2011-04-07

    Our system makes use of two specific data sources from network traffic: raw packet data and NetFlow connection summary records (described below). We implemented an operational prototype system using the following two data feeds. a) NetFlow Data: Our system processes the NetFlow records of all Internet gateway traffic for a large enterprise network. It uses the standard Cisco NetFlow version 5 protocol, which defines a flow as a

  19. Field Data Collection Methods and Data Processing of the Influence of Low Momentum Ratio and the Rate of Sediment Transport Forcing on Confluence Hydrodynamics, Morphodynamics and Mixing

    NASA Astrophysics Data System (ADS)

    Moradi, Gelare; Cardot, Romain; Lane, Stuart; Rennie, Colin

    2017-04-01

    River confluences are zones where two or more rivers join and form a single channel downstream of their junction. Because of their essential role in the dynamics of fluvial networks, attention to their hydrodynamics and morphodynamics has increased during the last three decades. Despite this improved understanding of complex flow behavior and morphological aspects, few studies have focused on low-momentum-ratio river confluences and mixing processes, and most of those have relied on laboratory experiments and numerical models. A combination of field data collection and data processing is therefore required to study the effect of low momentum ratio on flow dynamics, river morphology, and the rate of mixing at river confluences. This poster presents flow discharge and velocity data from two upper Rhône river confluences in Switzerland that are characterized by low momentum ratio and a varied rate of poorly sorted sediment transport. The data set was collected mostly with spatially distributed acoustic Doppler current profiler (aDcp) measurements. The morphological changes are studied using a combination of high-resolution aerial imagery obtained by a Phantom drone and acoustic bathymetric surveys. The mixing processes are investigated by measuring surface water temperature with a thermal camera mounted on an eBee drone, whereas sediment pathways can be explored through the 'bottom-tracking' feature of the aDcp device. The collected data are processed using MATLAB code, Pix4D, and visualization software. The processed data can then be used to describe the flow behavior, morphological aspects, and mixing processes at river confluences characterized by low momentum ratio, and to test laboratory-derived conceptual models of flow processes at such junctions. The results can be extended to a wider range of forcing conditions to provide detailed data on the three-dimensional flow field and the morphology, and to validate numerical models.

  20. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR FLOW AND CUSTODY OF FIELD DATA FORMS (UA-C-5.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the flow of field data forms through the data processing system and to define who is responsible for the data at any time. It applies to field data forms collected and processed by Arizona NHEXAS. This procedure was followed to ensure cons...

  1. Process Flow Features as a Host-Based Event Knowledge Representation

    DTIC Science & Technology

    2012-06-14

    ...an executing process during a window of time called a process flow. Process flows are calculated from key process data structures extracted from... [The remainder of the indexed text is front-matter residue: Davies-Bouldin and Dunn index figures for sliding windows of 5, 10, and 20 on Windows 7.]

  2. Clustering execution in a processing system to increase power savings

    DOEpatents

    Bose, Pradip; Buyuktosunoglu, Alper; Jacobson, Hans M.; Vega, Augusto J.

    2018-03-20

    Embodiments relate to clustering execution in a processing system. An aspect includes accessing a control flow graph that defines a data dependency and an execution sequence of a plurality of tasks of an application that executes on a plurality of system components. The execution sequence of the tasks in the control flow graph is modified as a clustered control flow graph that clusters active and idle phases of a system component while maintaining the data dependency. The clustered control flow graph is sent to an operating system, where the operating system utilizes the clustered control flow graph for scheduling the tasks.

  3. Advanced Recording and Preprocessing of Physiological Signals. [data processing equipment for flow measurement of blood flow by ultrasonics

    NASA Technical Reports Server (NTRS)

    Bentley, P. B.

    1975-01-01

    The measurement of the volume flow-rate of blood in an artery or vein requires both an estimate of the flow velocity and its spatial distribution and the corresponding cross-sectional area. Transcutaneous measurements of these parameters can be performed using ultrasonic techniques that are analogous to the measurement of moving objects by use of a radar. Modern digital data recording and preprocessing methods were applied to the measurement of blood-flow velocity by means of the CW Doppler ultrasonic technique. Only the average flow velocity was measured and no distribution or size information was obtained. Evaluations of current flowmeter design and performance, ultrasonic transducer fabrication methods, and other related items are given. The main thrust was the development of effective data-handling and processing methods by application of modern digital techniques. The evaluation resulted in useful improvements in both the flowmeter instrumentation and the ultrasonic transducers. Effective digital processing algorithms that provided enhanced blood-flow measurement accuracy and sensitivity were developed. Block diagrams illustrative of the equipment setup are included.
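
    The velocity estimate behind a CW Doppler flowmeter follows directly from the Doppler equation; a minimal sketch with the usual assumed sound speed in tissue (values illustrative, not from the report):

    ```python
    import math

    def doppler_velocity(f_shift_hz: float, f0_hz: float,
                         angle_deg: float, c: float = 1540.0) -> float:
        """CW Doppler estimate v = c * fd / (2 * f0 * cos(theta));
        c = 1540 m/s is the conventional assumed speed of sound in tissue."""
        return c * f_shift_hz / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))

    # e.g. a 1.3 kHz shift on a 5 MHz carrier at a 60 degree beam angle:
    print(f"{doppler_velocity(1300.0, 5e6, 60.0):.2f} m/s")  # ~0.40 m/s
    ```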

  4. A Scalable, Open Source Platform for Data Processing, Archiving and Dissemination

    DTIC Science & Technology

    2016-01-01

    The Object Oriented Data Technology (OODT) big data toolkit developed by NASA and the Work-flow INstance Generation and Selection (WINGS) scientific work-flow system were applied to several challenge big data problems, demonstrating the utility of OODT-WINGS in addressing them. [Keywords: open source software, Apache, Object Oriented Data Technology, OODT, semantic work-flows, WINGS, big data, work-flow management.]

  5. Efficient packet forwarding using cyber-security aware policies

    DOEpatents

    Ros-Giralt, Jordi

    2017-04-04

    For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor.

  6. Efficient packet forwarding using cyber-security aware policies

    DOEpatents

    Ros-Giralt, Jordi

    2017-10-25

    For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor.

  7. Evolutionary analysis of groundwater flow: Application of multivariate statistical analysis to hydrochemical data in the Densu Basin, Ghana

    NASA Astrophysics Data System (ADS)

    Yidana, Sandow Mark; Bawoyobie, Patrick; Sakyi, Patrick; Fynn, Obed Fiifi

    2018-02-01

    An evolutionary trend has been postulated through the analysis of hydrochemical data from a crystalline rock aquifer system in the Densu Basin, Southern Ghana. Hydrochemical data from 63 groundwater samples, taken from the two main groundwater outlet types (boreholes and hand-dug wells), were used to postulate an evolutionary theory for the basin. Sequential factor and hierarchical cluster analyses were used to disaggregate the data into three factors and five clusters (spatial associations). These were used to characterize the controls on groundwater hydrochemistry and its evolution in the terrain. The dissolution of soluble salts and cation exchange processes are the dominant processes controlling groundwater hydrochemistry in the terrain. The trend of evolution of this set of processes follows the pattern of groundwater flow predicted by a calibrated transient groundwater model of the area. The data suggest that anthropogenic activities represent the second most important process in the hydrochemistry, and silicate mineral weathering the third. Groundwater associations resulting from Q-mode hierarchical cluster analysis indicate an evolutionary pattern consistent with the general groundwater flow pattern in the basin. These key findings are at variance with the results of previous investigations and indicate that, when carefully done, groundwater hydrochemical data can be very useful for conceptualizing groundwater flow in basins.

  8. SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...
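
    The EnKF analysis step the abstract refers to has a standard textbook form; a minimal sketch (not the SSDA code) of correcting an ensemble of soil water states with point sensor readings:

    ```python
    import numpy as np

    def enkf_update(ens: np.ndarray, obs: np.ndarray, H: np.ndarray,
                    obs_var: float) -> np.ndarray:
        """Stochastic EnKF analysis. ens: (n_state, n_members);
        obs: (n_obs,); H: (n_obs, n_state) observation operator."""
        n_obs, n_mem = len(obs), ens.shape[1]
        X = ens - ens.mean(axis=1, keepdims=True)         # state anomalies
        HX = H @ ens
        HXp = HX - HX.mean(axis=1, keepdims=True)         # obs-space anomalies
        P_hh = HXp @ HXp.T / (n_mem - 1) + obs_var * np.eye(n_obs)
        K = (X @ HXp.T / (n_mem - 1)) @ np.linalg.inv(P_hh)   # Kalman gain
        obs_pert = obs[:, None] + np.sqrt(obs_var) * np.random.randn(n_obs, n_mem)
        return ens + K @ (obs_pert - HX)

    # 10-layer soil moisture state, 50 members, sensors in layers 0 and 4
    ens = 0.30 + 0.05 * np.random.randn(10, 50)
    H = np.zeros((2, 10)); H[0, 0] = H[1, 4] = 1.0
    ens = enkf_update(ens, np.array([0.27, 0.33]), H, 0.01 ** 2)
    ```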

  9. Clustering execution in a processing system to increase power savings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Pradip; Buyuktosunoglu, Alper; Jacobson, Hans M.

    Embodiments relate to clustering execution in a processing system. An aspect includes accessing a control flow graph that defines a data dependency and an execution sequence of a plurality of tasks of an application that executes on a plurality of system components. The execution sequence of the tasks in the control flow graph is modified as a clustered control flow graph that clusters active and idle phases of a system component while maintaining the data dependency. The clustered control flow graph is sent to an operating system, where the operating system utilizes the clustered control flow graph for scheduling the tasks.

  10. Flow process in combustors

    NASA Technical Reports Server (NTRS)

    Gouldin, F. C.

    1982-01-01

    Fluid mechanical effects on combustion processes in steady flow combustors, especially gas turbine combustors were investigated. Flow features of most interest were vorticity, especially swirl, and turbulence. Theoretical analyses, numerical calculations, and experiments were performed. The theoretical and numerical work focused on noncombusting flows, while the experimental work consisted of both reacting and nonreacting flow studies. An experimental data set, e.g., velocity, temperature and composition, was developed for a swirl flow combustor for use by combustion modelers for development and validation work.

  11. An improved algorithm of image processing technique for film thickness measurement in a horizontal stratified gas-liquid two-phase flow

    NASA Astrophysics Data System (ADS)

    Kuntoro, Hadiyan Yusuf; Hudaya, Akhmad Zidni; Dinaryanto, Okto; Majid, Akmal Irfan; Deendarlianto

    2016-06-01

    Due to the importance of two-phase flow research for industrial safety analysis, many researchers have developed various methods and techniques to study two-phase flow phenomena in industrial cases, such as those in the chemical, petroleum, and nuclear industries. One of these developing techniques is image processing. It is widely used in two-phase flow research owing to its non-intrusive capability to process large amounts of visualization data containing many complexities. Moreover, it allows capturing direct visual information about the flow that is difficult to obtain by other methods and techniques. The main objective of this paper is to present an improved algorithm of the image processing technique, developed from a preceding algorithm, for stratified flow cases. The present algorithm can measure the film thickness (h_L) of stratified flow as well as the geometrical properties of the interfacial waves with lower processing time and random-access memory (RAM) usage than the preceding algorithm. The measurement results are also aimed at developing a high-quality database of stratified flow, which is scanty. In the present work, the measurement results showed satisfactory agreement with previous works.
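
    The film-thickness measurement itself reduces to locating the gas-liquid interface in each image column; a minimal thresholding sketch (the paper's improved algorithm is certainly more elaborate; the threshold and calibration values are assumptions):

    ```python
    import numpy as np

    def film_thickness(column: np.ndarray, threshold: float,
                       mm_per_pixel: float) -> float:
        """Estimate h_L from one backlit image column, assumed ordered from
        the channel bottom upward with dark liquid below a bright gas region:
        count pixels up to the first one brighter than the threshold."""
        interface_idx = int(np.argmax(column > threshold))
        return interface_idx * mm_per_pixel

    frame = np.random.rand(480, 640)       # stand-in for a grabbed video frame
    h_profile = [film_thickness(frame[:, j], 0.5, 0.05) for j in range(640)]
    ```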

  12. An improved algorithm of image processing technique for film thickness measurement in a horizontal stratified gas-liquid two-phase flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuntoro, Hadiyan Yusuf, E-mail: hadiyan.y.kuntoro@mail.ugm.ac.id; Majid, Akmal Irfan; Deendarlianto, E-mail: deendarlianto@ugm.ac.id

    Due to the importance of two-phase flow research for industrial safety analysis, many researchers have developed various methods and techniques to study two-phase flow phenomena in industrial cases, such as those in the chemical, petroleum, and nuclear industries. One of these developing techniques is image processing. It is widely used in two-phase flow research owing to its non-intrusive capability to process large amounts of visualization data containing many complexities. Moreover, it allows capturing direct visual information about the flow that is difficult to obtain by other methods and techniques. The main objective of this paper is to present an improved algorithm of the image processing technique, developed from a preceding algorithm, for stratified flow cases. The present algorithm can measure the film thickness (h_L) of stratified flow as well as the geometrical properties of the interfacial waves with lower processing time and random-access memory (RAM) usage than the preceding algorithm. The measurement results are also aimed at developing a high-quality database of stratified flow, which is scanty. In the present work, the measurement results showed satisfactory agreement with previous works.

  13. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1992-01-01

    Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.

  14. Multiverse data-flow control.

    PubMed

    Schindler, Benjamin; Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Peikert, Ronald

    2013-06-01

    In this paper, we present a data-flow system which supports comparative analysis of time-dependent data and interactive simulation steering. The system creates data on-the-fly to allow for the exploration of different parameters and the investigation of multiple scenarios. Existing data-flow architectures provide no generic approach to handle modules that perform complex temporal processing such as particle tracing or statistical analysis over time. Moreover, there is no solution to create and manage module data, which is associated with alternative scenarios. Our solution is based on generic data-flow algorithms to automate this process, enabling elaborate data-flow procedures, such as simulation, temporal integration or data aggregation over many time steps in many worlds. To hide the complexity from the user, we extend the World Lines interaction techniques to control the novel data-flow architecture. The concept of multiple, special-purpose cursors is introduced to let users intuitively navigate through time and alternative scenarios. Users specify only what they want to see, the decision which data are required is handled automatically. The concepts are explained by taking the example of the simulation and analysis of material transport in levee-breach scenarios. To strengthen the general applicability, we demonstrate the investigation of vortices in an offline-simulated dam-break data set.

  15. PADDLEFISH BUCCAL FLOW VELOCITY DURING RAM SUSPENSION FEEDING AND RAM VENTILATION

    PubMed

    Cech; Cheer

    1994-01-01

    A micro-thermistor probe was inserted into the buccal cavity of freely swimming paddlefish to measure flow velocity during ram ventilation, ram suspension feeding and prey processing. Swimming speed was measured from videotapes recorded simultaneously with the buccal flow velocity measurements. Both swimming velocity and buccal flow velocity were significantly higher during suspension feeding than during ram ventilation. As the paddlefish shifted from ventilation to feeding, buccal flow velocity increased to approximately 60 % of the swimming velocity. During prey processing, buccal flow velocity was significantly higher than the swimming velocity, indicating that prey processing involves the generation of suction. The Reynolds number (Re) for flow at the level of the paddlefish gill rakers during feeding is about 30, an order of magnitude lower than the Re calculated previously for pump suspension-feeding blackfish. These data, combined with data available from the literature, indicate that the gill rakers of ram suspension-feeding teleost fishes may operate at a substantially lower Re than the rakers of pump suspension feeders.
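
    The raker-scale Reynolds number quoted above is a straightforward calculation; a sketch with illustrative (assumed, not from the paper) values reproducing its order of magnitude:

    ```python
    def reynolds(velocity_m_s: float, length_m: float,
                 kinematic_viscosity: float = 1.0e-6) -> float:
        """Re = v * L / nu; nu ~ 1e-6 m^2/s for water (assumed)."""
        return velocity_m_s * length_m / kinematic_viscosity

    # ~0.3 m/s buccal flow past a ~0.1 mm raker length scale gives Re ~ 30
    print(reynolds(0.3, 1.0e-4))
    ```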

  16. Thermal-hydraulic behavior of a mixed chevron single-pass plate-and-frame heat exchanger

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manglik, R.M.; Muley, A.

    1995-12-31

    Effective heat exchange is very critical for improving the process efficiency and operating economy of chemical and process plants. Here, experimental friction factor and heat transfer data for single-phase water flows in a plate-and-frame heat exchanger are presented. A mixed chevron plate arrangement with β = 30°/60° in a single-pass U-type, counterflow configuration is employed. The friction factor and heat transfer data are for isothermal flow and cooling conditions, respectively, and the flow rates correspond to transition and turbulent flow regimes (300 < Re < 6,000 and 2.4 < Pr < 4.5). Based on these data, Nusselt number and friction factor correlations for fully developed turbulent flows (Re ≥ 1,000) are presented. The results highlight the effects of β on the thermal-hydraulic performance, transition to turbulent flows, and the relative impact of using symmetric or mixed chevron plate arrangements.

  17. Developing guidelines for elementary flow nomenclature

    EPA Science Inventory

    In general, a flow in life cycle inventory data refers to an input or output to a process. Flows may be of two broad types: elementary flows or intermediate (known as “technosphere”) flows according to ISO 14044 (ISO 14044 2006). Elementary flows may be defined as mat...

  18. Recent advances in the Lesser Antilles observatories Part 1 : Seismic Data Acquisition Design based on EarthWorm and SeisComP

    NASA Astrophysics Data System (ADS)

    Saurel, Jean-Marie; Randriamora, Frédéric; Bosson, Alexis; Kitou, Thierry; Vidal, Cyril; Bouin, Marie-Paule; de Chabalier, Jean-Bernard; Clouard, Valérie

    2010-05-01

    Lesser Antilles observatories are in charge of monitoring the volcanoes and earthquakes of the Eastern Caribbean region. During the past two years, our seismic networks have evolved toward fully digital technology. These changes, which include modern three-component sensors, high-dynamic-range digitizers, and high-speed terrestrial and satellite telemetry, improve data quality but also increase the data flows to process and store. Moreover, the generalization of data exchange to build a wide virtual seismic network around the Caribbean domain requires great flexibility to provide and receive data flows in various formats. Like many observatories, we have decided to use the most popular and robust open-source data acquisition systems in use in today's observatory community: EarthWorm and SeisComP. The former is renowned for its ability to process real-time seismic data flows, with a high number of tunable modules (filters, triggers, automatic pickers, locators). The latter is renowned for its ability to exchange seismic data using the international SEED standard (Standard for Exchange of Earthquake Data), either by producing archive files or by managing output and input SEEDLink flows. The French Antilles Seismological and Volcanological Observatories have chosen to take advantage of the best features of each package to design a new data flow scheme and to integrate it into our global observatory data management system, WebObs [Beauducel et al., 2004]; see the companion paper (Part 2). We assigned tasks to the different packages according to their main abilities: EarthWorm first performs the integration of data from heterogeneous sources; SeisComP takes this homogeneous EarthWorm data flow, adds other sources, and produces SEED archives and a SEED data flow; EarthWorm is then used again to process this clean and complete SEEDLink data flow, mainly producing triggers, automatic locations, and alarms; WebObs provides a friendly human interface, both to the administrator for station management and to the regular user for everyday real-time analysis of the seismic data (event classification database, location scripts, automatic shakemaps, and a regional catalog with associated hypocenter maps).

  19. Comparative analysis between Payen and Daedalia Planum lava fields

    NASA Astrophysics Data System (ADS)

    Giacomini, Lorenza; Massironi, Matteo; Pasquarè, Giorgio; Carli, Cristian; Martellato, Elena; Frigeri, Alessandro; Cremonese, Gabriele; Bistacchi, Andrea; Federico, Costanzo

    The Payen volcanic complex is a large Quaternary fissural structure belonging to the back-arc extensional area of the Andes in the Mendoza Province (Argentina). From the eastern portion of this volcanic structure, huge pahoehoe lava flows were emitted, extending more than 180 km from the feeding vents. These huge flows propagated over the nearly flat surface of the Pampean foreland (ca. 0.3° slope). The very low viscosity of the olivine basalt lavas, coupled with the inflation process, is the most probable explanation for their considerable length. In an inflation process, a thin viscoelastic crust produced at an early stage is later inflated by the underlying fluid core, which remains hot and fluid thanks to the thermal-shield effect of the crust. Inflation leaves typical morphological fingerprints such as tumuli, lava lobes, lava rises, and lava ridges. In order to compare the morphology of the Argentinean Payen flows with lava flows on Mars, MOLA, THEMIS, MOC, MRO/HiRISE, and MEX/OMEGA data have been analysed, providing a multi-scale characterisation of Martian flows. Mars Global Surveyor/MOLA data were used to investigate the topographic environment over which flows propagated on Mars, in order to detect very low-angle slopes where inflation processes could have developed. Mars Odyssey/THEMIS and Mars Global Surveyor/MOC data were then used to detect Martian lava flows with inflation "fingerprints", whereas OMEGA data were used to draw some inferences about their composition. Finally, the recently acquired MRO/HiRISE images can provide further details and constraints on surface morphologies and lava fronts. All these data were used to analyse the Daedalia Planum lava field, about 300 km southwest of Arsia Mons, and clear morphological similarities with the longest flows of the Payen lava field were found. These striking morphological analogies suggest that the inflation process is also quite common in the Daedalia field. This is further supported by simple calculations of effusion rates under the assumption of non-inflated flows, which would require improbably huge rates for the long Daedalia Planum flows; lower effusion rates coupled with a very efficient spreading process are more likely. Nonetheless, the comparison of the typology, frequency, and dimensions of inflation-related features in the Payen and Daedalia Planum fields suggests that even the effusion rates responsible for inflated flows on Mars are far higher than those on Earth.

  20. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1993-01-01

    Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse, and reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.

  1. 40 CFR 98.426 - Data reporting requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... flow meter in your process chain in relation to the points of CO2 stream capture, dehydration... measure CO2 concentration. (7) The location of the flow meter in your process chain in relation to the... through subsequent flow meter(s) in metric tons. (iii) The total annual CO2 mass supplied in metric tons...

  2. A Process Dynamics and Control Experiment for the Undergraduate Laboratory

    ERIC Educational Resources Information Center

    Spencer, Jordan L.

    2009-01-01

    This paper describes a process control experiment. The apparatus includes a three-vessel glass flow system with a variable flow configuration, means for feeding dye solution controlled by a stepper-motor driven valve, and a flow spectrophotometer. Students use impulse response data and nonlinear regression to estimate three parameters of a model…

  3. Flow processes in overexpanded chemical rocket nozzles. Part 1: Flow separation

    NASA Technical Reports Server (NTRS)

    Schmucker, R. H.

    1984-01-01

    An investigation was made of published nozzle flow separation data in order to determine the parameters which affect the separation conditions. A comparison of experimental data with empirical and theoretical separation prediction methods leads to the selection of suitable equations for the separation criterion. The results were used to predict flow separation of the main space shuttle engine.
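
    Schmucker's result is usually quoted as a wall-pressure ratio at separation; a sketch of the commonly cited empirical form (coefficients as often reported in the literature; consult the report for the exact criterion it selects):

    ```python
    def separation_pressure_ratio(mach_wall: float) -> float:
        """Empirical criterion attributed to Schmucker: separation occurs
        when p_wall / p_ambient < (1.88 * M_wall - 1) ** -0.64 (assumed form)."""
        return (1.88 * mach_wall - 1.0) ** -0.64

    for m in (2.0, 3.0, 4.0):
        print(f"M = {m}: p_sep/p_a = {separation_pressure_ratio(m):.3f}")
    ```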

  4. Flow processes in overexpanded chemical rocket nozzles. Part 1: Flow separation

    NASA Technical Reports Server (NTRS)

    Schmucker, R. H.

    1973-01-01

    An investigation was made of published nozzle flow separation data in order to determine the parameters which affect the separation condition. A comparison of experimental data with empirical and theoretical separation prediction methods leads to the selection of suitable equations for the separation criterion. The results were used to predict flow separation of the main space shuttle engine.

  5. User's guide to the Variably Saturated Flow (VSF) process to MODFLOW

    USGS Publications Warehouse

    Thoms, R. Brad; Johnson, Richard L.; Healy, Richard W.

    2006-01-01

    A new process for simulating three-dimensional (3-D) variably saturated flow (VSF) using Richards' equation has been added to the 3-D modular finite-difference ground-water model MODFLOW. Five new packages are presented here as part of the VSF Process--the Richards' Equation Flow (REF1) Package, the Seepage Face (SPF1) Package, the Surface Ponding (PND1) Package, the Surface Evaporation (SEV1) Package, and the Root Zone Evapotranspiration (RZE1) Package. Additionally, a new Adaptive Time-Stepping (ATS1) Package is presented for use by both the Ground-Water Flow (GWF) Process and VSF. The VSF Process allows simulation of flow in unsaturated media above the ground-water zone and facilitates modeling of ground-water/surface-water interactions. Model performance is evaluated by comparison to an analytical solution for one-dimensional (1-D) constant-head infiltration (Dirichlet boundary condition), field experimental data for a 1-D constant-head infiltration, laboratory experimental data for two-dimensional (2-D) constant-flux infiltration (Neumann boundary condition), laboratory experimental data for 2-D transient drainage through a seepage face, and numerical model results (VS2DT) of a 2-D flow-path simulation using realistic surface boundary conditions. A hypothetical 3-D example case also is presented to demonstrate the new capability using periodic boundary conditions (for example, daily precipitation) and varied surface topography over a larger spatial scale (0.133 square kilometer). The new model capabilities retain the modular structure of the MODFLOW code and preserve MODFLOW's existing capabilities as well as compatibility with commercial pre-/post-processors. The overall success of the VSF Process in simulating mixed boundary conditions and variable soil types demonstrates its utility for future hydrologic investigations. This report presents a new flow package implementing the governing equations for variably saturated ground-water flow, four new boundary condition packages unique to unsaturated flow, the Adaptive Time-Stepping Package for use with both the GWF Process and the new VSF Process, detailed descriptions of the input and output files for each package, and six simulation examples verifying model performance.
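
    The governing equation the REF1 Package solves is Richards' equation; in its common mixed form (sign and sink-term conventions may differ from the report):

    ```latex
    \frac{\partial \theta}{\partial t}
      = \nabla \cdot \left[ K(h)\, \nabla (h + z) \right] - W
    ```

    where θ is the volumetric water content, h is the pressure head, z is elevation, K(h) is the unsaturated hydraulic conductivity, and W is a sink term (e.g., root-zone evapotranspiration).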

  6. VisFlow - Web-based Visualization Framework for Tabular Data with a Subset Flow Model.

    PubMed

    Yu, Bowen; Silva, Claudio T

    2017-01-01

    Data flow systems allow the user to design a flow diagram that specifies the relations between system components which process, filter or visually present the data. Visualization systems may benefit from user-defined data flows as an analysis typically consists of rendering multiple plots on demand and performing different types of interactive queries across coordinated views. In this paper, we propose VisFlow, a web-based visualization framework for tabular data that employs a specific type of data flow model called the subset flow model. VisFlow focuses on interactive queries within the data flow, overcoming the limitation of interactivity from past computational data flow systems. In particular, VisFlow applies embedded visualizations and supports interactive selections, brushing and linking within a visualization-oriented data flow. The model requires all data transmitted by the flow to be a data item subset (i.e. groups of table rows) of some original input table, so that rendering properties can be assigned to the subset unambiguously for tracking and comparison. VisFlow features the analysis flexibility of a flow diagram, and at the same time reduces the diagram complexity and improves usability. We demonstrate the capability of VisFlow on two case studies with domain experts on real-world datasets showing that VisFlow is capable of accomplishing a considerable set of visualization and analysis tasks. The VisFlow system is available as open source on GitHub.

  7. OAM-labeled free-space optical flow routing.

    PubMed

    Gao, Shecheng; Lei, Ting; Li, Yangjin; Yuan, Yangsheng; Xie, Zhenwei; Li, Zhaohui; Yuan, Xiaocong

    2016-09-19

    Space-division multiplexing allows unprecedented scaling of bandwidth density for optical communication. Routing spatial channels among transmission ports is critical for future scalable optical networks; however, there is still no characteristic parameter to label the overlapped optical carriers. Here we propose a free-space optical flow routing (OFR) scheme that uses optical orbital angular momentum (OAM) states to label optical flows and simultaneously steer each flow according to its OAM state. With an OAM multiplexer and a reconfigurable OAM demultiplexer, massive individual optical flows can be routed to the demanded optical ports. In the routing process, the OAM beams act as data carriers while their topological charges act as the carriers' labels. Using this scheme, we experimentally demonstrate switching, multicasting, and filtering network functions by simultaneously steering 10 input optical flows on demand to 10 output ports. The demonstration of data-carrying OFR with non-return-to-zero signals shows that this process enables synchronous processing of massive spatial channels and a flexible optical network.

  8. SAR processing on the MPP

    NASA Technical Reports Server (NTRS)

    Batcher, K. E.; Eddey, E. E.; Faiss, R. O.; Gilmore, P. A.

    1981-01-01

    The processing of synthetic aperture radar (SAR) signals using the massively parallel processor (MPP) is discussed. The fast Fourier transform convolution procedures employed in the algorithms are described. The MPP architecture comprises an array unit (ARU), which processes arrays of data; an array control unit, which controls the operation of the ARU and performs scalar arithmetic; a program and data management unit, which controls the flow of data; and a unique staging memory (SM), which buffers and permutes data. The ARU contains a 128 by 128 array of bit-serial processing elements (PEs). Two-by-four subarrays of PEs are packaged in a custom VLSI HCMOS chip. The staging memory is a large multidimensional-access memory which buffers and permutes data flowing within the system. Efficient SAR processing is achieved via ARU communication paths and SM data manipulation. Real-time processing capability can be realized via a multiple-ARU, multiple-SM configuration.
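
    The FFT convolution at the heart of SAR compression can be sketched serially (the MPP runs it across the ARU's 128x128 processing elements; this NumPy stand-in only shows the matched-filtering idea, with an invented chirp):

    ```python
    import numpy as np

    def fft_convolve(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
        """FFT-based fast convolution, the core of range/azimuth compression."""
        n = len(signal) + len(kernel) - 1
        nfft = 1 << (n - 1).bit_length()       # next power of two
        spec = np.fft.fft(signal, nfft) * np.fft.fft(kernel, nfft)
        return np.fft.ifft(spec)[:n]

    # Toy range compression: matched-filter an echo against the chirp replica
    t = np.linspace(0.0, 1e-3, 1024)
    chirp = np.exp(1j * np.pi * 4e6 * t ** 2)
    echo = np.concatenate([np.zeros(300), chirp, np.zeros(200)])
    compressed = fft_convolve(echo, np.conj(chirp[::-1]))
    ```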

  9. First Order Kinetics Visualized by Capillary Flow and Simple Data Acquisition

    ERIC Educational Resources Information Center

    Festersen, Lea; Gilch, Peter; Reiffers, Anna; Mundt, Ramona

    2018-01-01

    First order processes are of paramount importance for chemical kinetics. In a well-established demonstration experiment, the flow of water out of a vertical glass tube through a capillary simulates a chemical first order process. Here, a digital version of this experiment for lecture hall demonstrations is presented. To this end, water flowing out…
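
    The underlying model is first-order decay of the water level; a sketch of the expected data (the rate constant and initial level are illustrative assumptions):

    ```python
    import numpy as np

    # dh/dt = -k h  =>  h(t) = h0 * exp(-k t); a constant half-life in the
    # recorded level-vs-time data is the signature of first-order kinetics.
    k, h0 = 0.05, 30.0                  # 1/s and cm, assumed values
    t = np.linspace(0.0, 120.0, 13)     # level sampled every 10 s
    h = h0 * np.exp(-k * t)
    print(f"half-life = {np.log(2) / k:.1f} s")
    ```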

  10. Nonlinear dynamics in flow through unsaturated fractured-porous media: Status and perspectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faybishenko, Boris

    2002-11-27

    The need has long been recognized to improve predictions of flow and transport in partially saturated heterogeneous soils and fractured rock of the vadose zone for many practical applications, such as remediation of contaminated sites, nuclear waste disposal in geological formations, and climate predictions. Until recently, flow and transport processes in heterogeneous subsurface media with oscillating irregularities were assumed to be random and were not analyzed using methods of nonlinear dynamics. The goals of this paper are to review the theoretical concepts, present the results, and provide perspectives on investigations of flow and transport in unsaturated heterogeneous soils and fractured rock, using the methods of nonlinear dynamics and deterministic chaos. The results of laboratory and field investigations indicate that the nonlinear dynamics of flow and transport processes in unsaturated soils and fractured rocks arise from the dynamic feedback and competition between various nonlinear physical processes along with complex geometry of flow paths. Although direct measurements of variables characterizing the individual flow processes are not technically feasible, their cumulative effect can be characterized by analyzing time series data using the models and methods of nonlinear dynamics and chaos. Identifying flow through soil or rock as a nonlinear dynamical system is important for developing appropriate short- and long-time predictive models, evaluating prediction uncertainty, assessing the spatial distribution of flow characteristics from time series data, and improving chemical transport simulations. Inferring the nature of flow processes through the methods of nonlinear dynamics could become widely used in different areas of the earth sciences.

  11. Development of a Computational Framework for Big Data-Driven Prediction of Long-Term Bridge Performance and Traffic Flow

    DOT National Transportation Integrated Search

    2018-04-01

    Consistent efforts with dense sensor deployment and data gathering processes for bridge big data have accumulated profound information regarding bridge performance, associated environments, and traffic flows. However, direct applications of bridge bi...

  12. Research activity at the shock tube facility at NASA Ames

    NASA Astrophysics Data System (ADS)

    Sharma, Surendra P.

    1992-03-01

    Real gas phenomena dominate the relaxation processes occurring in the flow around hypersonic vehicles. The air flow around these vehicles undergoes vibrational excitation, chemical dissociation, and ionization. These chemical and kinetic phenomena absorb energy, change compressibility, cause the temperature to fall, and cause the density to rise. In high-altitude, low-density environments, the characteristic thicknesses of the shock layers can be smaller than the relaxation distances required for the gas to attain chemical and thermodynamic equilibrium. To determine the effects of chemical nonequilibrium over a realistic hypersonic vehicle, it would be desirable to conduct an experiment in which all aspects of the fluid flow are simulated. Such an experiment is extremely difficult to set up. The only practical alternative is to develop a theoretical model of the phenomena, compute the flow around the vehicle including the chemical nonequilibrium, and compare the results with experiments conducted in facilities under conditions where only a portion of the flow phenomena is simulated. Three types of experimental data are needed to assist the aerospace community in this model development process: (1) data which will enhance our phenomenological understanding of the relaxation process, (2) data on rate reactions for the relevant reactions, and (3) data on bulk properties, such as spectral radiation emitted by the gas, for a given set of aerodynamic conditions. NASA Ames is in the process of collecting such data by simulating the required aerothermochemical conditions in an electric arc driven shock tube.

  13. A vector scanning processing technique for pulsed laser velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1989-01-01

    Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1 pct) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5 pct) data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined and found to be far superior to any other known technique in achieving the objectives listed above. It employs a new data coding and reduction technique, where the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386 based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.
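
    The paper's vector scanning technique operates directly in the space domain on particle time histories; as a point of comparison, the minimal sketch below shows the conventional correlation-based alternative it improves upon: estimating the dominant displacement between two exposures by FFT cross-correlation. All arrays are synthetic and the routine is illustrative, not the VS algorithm itself.

```python
import numpy as np

def displacement_fft(frame_a, frame_b):
    """Dominant pixel shift s such that frame_b ~= frame_a rolled by s,
    found from the peak of the FFT-based cross-correlation."""
    spec = np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))
    corr = np.fft.fftshift(np.real(np.fft.ifft2(spec)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return center - np.array(peak)          # (dy, dx) in pixels

# Two synthetic particle images, the second shifted by (3, -2) pixels.
rng = np.random.default_rng(0)
a = np.zeros((64, 64))
idx = rng.integers(8, 56, size=(30, 2))
a[idx[:, 0], idx[:, 1]] = 1.0
b = np.roll(a, shift=(3, -2), axis=(0, 1))
print(displacement_fft(a, b))               # expect [ 3 -2 ]
```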

  14. 30 CFR 203.84 - What is in a net revenue and relief justification report?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... cash flow data for 12 qualifying months, using the format specified in the “Guidelines for the...) The cash flow table you submit must include historical data for: (1) Lease production subject to...) Transportation and processing costs. (b) Do not include in your cash flow table the non-allowable costs listed at...

  15. 30 CFR 203.84 - What is in a net revenue and relief justification report?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... cash flow data for 12 qualifying months, using the format specified in the “Guidelines for the...) The cash flow table you submit must include historical data for: (1) Lease production subject to...) Transportation and processing costs. (b) Do not include in your cash flow table the non-allowable costs listed at...

  16. 30 CFR 203.84 - What is in a net revenue and relief justification report?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... justification report? This report presents cash flow data for 12 qualifying months, using the format specified... having some production. (a) The cash flow table you submit must include historical data for: (1) Lease... allowable costs; and (5) Transportation and processing costs. (b) Do not include in your cash flow table the...

  17. 30 CFR 203.84 - What is in a net revenue and relief justification report?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... cash flow data for 12 qualifying months, using the format specified in the “Guidelines for the...) The cash flow table you submit must include historical data for: (1) Lease production subject to...) Transportation and processing costs. (b) Do not include in your cash flow table the non-allowable costs listed at...

  18. 30 CFR 203.84 - What is in a net revenue and relief justification report?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Transportation and processing costs. (b) Do not include in your cash flow table the non-allowable costs listed at... cash flow data for 12 qualifying months, using the format specified in the “Guidelines for the... cash flow table you submit must include historical data for: (1) Lease production subject to royalty...

  19. Visualizing human communication in business process simulations

    NASA Astrophysics Data System (ADS)

    Groehn, Matti; Jalkanen, Janne; Haho, Paeivi; Nieminen, Marko; Smeds, Riitta

    1999-03-01

    In this paper a description of business process simulation is given. A crucial part of simulating business processes is the analysis of the social contacts between the participants. We introduce a tool for collecting log data and show how these log data can be effectively analyzed using two different kinds of methods: discussion flow charts and self-organizing maps. Discussion flow charts reveal the communication patterns, and self-organizing maps are a very effective way of clustering the participants into development groups.
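
    A minimal sketch of the second method named above, assuming nothing about the authors' tool: a small self-organizing map in plain numpy that clusters hypothetical participant communication profiles onto a 2-D grid, so that participants with similar profiles land in nearby cells. All data and dimensions are invented.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map: each grid node holds a weight vector
    pulled toward the samples, with a neighborhood that shrinks over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in data[rng.permutation(len(data))]:
            d = np.linalg.norm(weights - x, axis=-1)
            by, bx = np.unravel_index(np.argmin(d), d.shape)  # best matching unit
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

# Toy "participant profiles" (e.g., message counts to 4 discussion channels).
profiles = np.random.default_rng(1).random((40, 4))
som = train_som(profiles)
def bmu(x):
    return np.unravel_index(
        np.argmin(np.linalg.norm(som - x, axis=-1)), som.shape[:2])
print([bmu(p) for p in profiles[:5]])  # nearby cells = similar participants
```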

  20. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR FLOW AND CUSTODY OF UA LABORATORY DATA (UA-C-8.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the flow and custody of laboratory data generated by NHEXAS Arizona through data processing and delivery to the project data manager for creation of the master database. This procedure was followed to ensure consistent data retrieval during...

  1. Multi-processor including data flow accelerator module

    DOEpatents

    Davidson, George S.; Pierce, Paul E.

    1990-01-01

    An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
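
    The firing rule described in this abstract -- a cell executes once tag checks show all operand slots full, and its result is distributed along linking slots -- can be miniaturized in software. The sketch below is an illustrative Python analogue, not the patented circuitry; the cell layout and primitive table are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """One memory cell of the sketch: operand slots, a primitive name,
    and links naming which (cell, slot) pairs receive the result."""
    primitive: str
    arity: int = 2
    slots: dict = field(default_factory=dict)   # slot index -> operand value
    links: list = field(default_factory=list)   # [(cell_id, slot_index), ...]

PRIMITIVES = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def deliver(cells, queue, cid, slot, value):
    """Store an operand; when all slots are tagged full, schedule the cell."""
    cell = cells[cid]
    cell.slots[slot] = value
    if len(cell.slots) == cell.arity:           # tag check: all operands present
        queue.append(cid)

# (a + b) * c as a two-cell dependency graph.
cells = {0: Cell("add", links=[(1, 0)]), 1: Cell("mul")}
queue = []
deliver(cells, queue, 0, 0, 2)   # a
deliver(cells, queue, 0, 1, 3)   # b
deliver(cells, queue, 1, 1, 4)   # c
while queue:
    cell = cells[queue.pop(0)]
    result = PRIMITIVES[cell.primitive](cell.slots[0], cell.slots[1])
    if cell.links:
        for target, slot in cell.links:
            deliver(cells, queue, target, slot, result)
    else:
        print(f"{cell.primitive} fired -> {result}")   # mul fired -> 20
```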

  2. Shuttle/payload communications and data systems interface analysis

    NASA Technical Reports Server (NTRS)

    Huth, G. K.

    1980-01-01

    The payload/orbiter functional command signal flow and telemetry signal flow are discussed. Functional descriptions of the various orbiter communication/avionic equipment involved in processing a command to a payload, either from the ground through the orbiter or by the payload specialist on the orbiter, are included. Functional descriptions of the various orbiter communication/avionic equipment involved in processing telemetry data by the orbiter and transmitting the processed data to the ground are presented. The results of the attached payload/orbiter signal processing and data handling system evaluation are described. The causes of the majority of attached payload/orbiter interface problems are delineated. A refined set of required flux density values for a detached payload to communicate with the orbiter is presented.

  3. Toward an E-Government Semantic Platform

    NASA Astrophysics Data System (ADS)

    Sbodio, Marco Luca; Moulin, Claude; Benamou, Norbert; Barthès, Jean-Paul

    This chapter describes the major aspects of an e-government platform in which semantics underpins more traditional technologies in order to enable new capabilities and to overcome technical and cultural challenges. The design and development of such an e-government Semantic Platform has been conducted with the financial support of the European Commission through the Terregov research project: "Impact of e-government on Territorial Government Services" (Terregov 2008). The goal of this platform is to let local government and government agencies offer online access to their services in an interoperable way, and to allow them to participate in orchestrated processes involving services provided by multiple agencies. Implementing a business process through an electronic procedure is indeed a core goal in any networked organization. However, the field of e-government brings specific constraints to the operations allowed in procedures, especially concerning the flow of private citizens' data: for legal reasons, in most countries such data are allowed to circulate only directly from agency to agency. In order to promote transparency and responsibility in e-government while respecting the specific constraints on data flows, Terregov supports the creation of centrally controlled orchestrated processes: the orchestration is managed centrally, while the data themselves flow directly across agencies.

  4. Re-evaluation of heat flow data near Parkfield, CA: Evidence for a weak San Andreas Fault

    USGS Publications Warehouse

    Fulton, P.M.; Saffer, D.M.; Harris, Reid N.; Bekins, B.A.

    2004-01-01

    Improved interpretations of the strength of the San Andreas Fault near Parkfield, CA based on thermal data require quantification of processes causing significant scatter and uncertainty in existing heat flow data. These effects include topographic refraction, heat advection by topographically-driven groundwater flow, and uncertainty in thermal conductivity. Here, we re-evaluate the heat flow data in this area by correcting for full 3-D terrain effects. We then investigate the potential role of groundwater flow in redistributing fault-generated heat, using numerical models of coupled heat and fluid flow for a wide range of hydrologic scenarios. We find that a large degree of the scatter in the data can be accounted for by 3-D terrain effects, and that for plausible groundwater flow scenarios frictional heat generated along a strong fault is unlikely to be redistributed by topographically-driven groundwater flow in a manner consistent with the 3-D corrected data. Copyright 2004 by the American Geophysical Union.

  5. New techniques for experimental generation of two-dimensional blade-vortex interaction at low Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Booth, E., Jr.; Yu, J. C.

    1986-01-01

    An experimental investigation of two-dimensional blade-vortex interaction was conducted at NASA Langley Research Center. The first phase was a flow visualization study to document the approach process of a two-dimensional vortex as it encountered a loaded blade model. To accomplish the flow visualization study, a method for generating two-dimensional vortex filaments was required. The numerical study used to define a new vortex generation process is documented, along with the use of this process in the flow visualization study. Additionally, the photographic techniques and data analysis methods used in the flow visualization study are examined.

  6. Characterizing the primary material sources and dominant erosional processes for post-fire debris-flow initiation in a headwater basin using multi-temporal terrestrial laser scanning data

    USGS Publications Warehouse

    Staley, Dennis M.; Waslewicz, Thad A.; Kean, Jason W.

    2014-01-01

    Wildfire dramatically alters the hydrologic response of a watershed such that even modest rainstorms can produce hazardous debris flows. Relative to shallow landslides, the primary sources of material and dominant erosional processes that contribute to post-fire debris-flow initiation are poorly constrained. Improving our understanding of how and where material is eroded from a watershed during a post-fire debris-flow requires (1) precise measurements of topographic change to calculate volumetric measurements of erosion and deposition, and (2) the identification of relevant morphometrically defined process domains to spatially constrain these measurements of erosion and deposition. In this study, we combine the morphometric analysis of a steep, small (0.01 km2) headwater drainage basin with measurements of topographic change using high-resolution (2.5 cm) multi-temporal terrestrial laser scanning data made before and after a post-fire debris flow. The results of the morphometric analysis are used to define four process domains: hillslope-divergent, hillslope-convergent, transitional, and channelized incision. We determine that hillslope-divergent and hillslope-convergent process domains represent the primary sources of material over the period of analysis in the study basin. From these results we conclude that raindrop-impact induced erosion, ravel, surface wash, and rilling are the primary erosional processes contributing to post-fire debris-flow initiation in the small, steep headwater basin. Further work is needed to determine (1) how these results vary with increasing drainage basin size, (2) how these data might scale upward for use with coarser resolution measurements of topography, and (3) how these results change with evolving sediment supply conditions and vegetation recovery.
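
    A hedged sketch of the core measurement step: difference two DEMs, mask change below a level of detection, and total erosion and deposition volumes per process domain. The grids, cell size, domain map, and detection threshold below are synthetic stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical 1 m DEMs before and after the debris flow (synthetic), plus
# a map assigning each cell to a morphometrically defined process domain.
rng = np.random.default_rng(2)
dem_pre = rng.normal(100.0, 5.0, (200, 200))
dem_post = dem_pre + rng.normal(-0.02, 0.05, dem_pre.shape)
domains = rng.integers(0, 4, dem_pre.shape)
CELL_AREA = 1.0   # m^2 per grid cell (assumed)
LOD = 0.05        # level of detection (m); ignore change below survey noise

dod = dem_post - dem_pre                  # DEM of difference
dod[np.abs(dod) < LOD] = 0.0              # mask sub-detection change
names = ["hillslope-divergent", "hillslope-convergent",
         "transitional", "channelized"]
for d, name in enumerate(names):
    sel = domains == d
    erosion = -dod[sel][dod[sel] < 0].sum() * CELL_AREA
    deposition = dod[sel][dod[sel] > 0].sum() * CELL_AREA
    print(f"{name}: erosion {erosion:.1f} m^3, deposition {deposition:.1f} m^3")
```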

  7. Retooling Laser Speckle Contrast Analysis Algorithm to Enhance Non-Invasive High Resolution Laser Speckle Functional Imaging of Cutaneous Microcirculation

    NASA Astrophysics Data System (ADS)

    Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.

    2017-01-01

    Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. Functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address such weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving window averaging enhanced signal-to-noise ratio. Functional quantitative study of blood flow kinetics was performed on single gated microvessels using a free hand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to study a wide range of tissue systems including fine vascular network in murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond standard limits of the optical system.
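
    The quantity underneath such imaging is the spatial speckle contrast K = sigma/mean over a sliding window, with 1/K^2 commonly used as a relative flow index (faster flow blurs the speckle and lowers K). The sketch below computes that map on a synthetic frame; it is a simplified stand-in, not the authors' inverse-variance algorithm.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(raw, win=7):
    """Spatial speckle contrast K = sigma/mean over a sliding window."""
    patches = sliding_window_view(raw, (win, win))
    mean = patches.mean(axis=(-1, -2))
    std = patches.std(axis=(-1, -2))
    return std / np.maximum(mean, 1e-9)

# Synthetic raw speckle frame: a horizontal "vessel" of faster flow has
# reduced local variance, which the contrast map picks up.
rng = np.random.default_rng(3)
frame = rng.exponential(1.0, (128, 128))
frame[60:68, :] = rng.exponential(1.0, (8, 128)) * 0.3 + 0.7

K = speckle_contrast(frame)
flow_index = 1.0 / np.maximum(K, 1e-3) ** 2
print("vessel:", flow_index[60:62].mean(), "background:", flow_index[:40].mean())
```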

  8. Retooling Laser Speckle Contrast Analysis Algorithm to Enhance Non-Invasive High Resolution Laser Speckle Functional Imaging of Cutaneous Microcirculation

    PubMed Central

    Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.

    2017-01-01

    Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. Functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address such weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving window averaging enhanced signal-to-noise ratio. Functional quantitative study of blood flow kinetics was performed on single gated microvessels using a free hand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to study a wide range of tissue systems including fine vascular network in murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond standard limits of the optical system. PMID:28106129

  9. National Pipeline Mapping System (NPMS) : repository standards

    DOT National Transportation Integrated Search

    1997-07-01

    This draft document contains 7 sections. They are as follows: 1. General Topics, 2. Data Formats, 3. Metadata, 4. Attribute Data, 5. Data Flow, 6. Descriptive Process, and 7. Validation and Processing of Submitted Data. These standards were created w...

  10. Influence of lateral groundwater flow in a shallow aquifer on eco-hydrological process in a shrub-grass coexistence semiarid area

    NASA Astrophysics Data System (ADS)

    Wang, Siru; Sun, Jinhua; Lei, Huimin; Zhu, Qiande; Jiang, Sanyuan

    2017-04-01

    Topography has a considerable influence on eco-hydrological processes resulting from the patterns of solar radiation distribution and lateral water flow. However, little quantitative information is available on the contribution of lateral groundwater flow to ecological processes such as vegetation growth and evapotranspiration. To fill this gap, we used a simple eco-hydrological model based on water balance with a 3D groundwater module that uses Darcy's law. This model was applied to a non-contributing area of 50 km2 dominated by grassland and shrubland with an underlying shallow aquifer. It was calibrated using manually and remotely sensed vegetation data and water flux data observed by the eddy covariance systems of two flux towers, as well as water table data obtained from HOBO recorders in 40 wells. The results demonstrate that the maximum hydraulic gradient and the maximum flux of lateral groundwater flow reached 0.156 m m-1 and 0.093 m3 s-1, respectively. The average annual maximum LAI in grassland, predominantly in low-lying areas, improved by about 5.9%, while that in shrubland, predominantly in high-lying areas, remained the same when lateral groundwater flow was considered, compared to the case without it. The results also show that LAI is positively and nonlinearly related to evapotranspiration, and that the greater the magnitude of evapotranspiration, the smaller the rate of increase of LAI. These findings suggest that lateral groundwater flow should not be neglected when simulating eco-hydrological processes in areas with a shallow aquifer.
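
    A minimal sketch of the lateral-flow ingredient such a module adds, assuming a 1-D transect: Darcy flux from the water-table gradient, scaled by saturated thickness and strip width to a discharge. Conductivity, geometry, and the head profile are invented placeholders, not the study's calibrated values.

```python
import numpy as np

K = 5e-5                                   # hydraulic conductivity (m/s), assumed
x = np.linspace(0.0, 5000.0, 51)           # transect distance (m)
head = 120.0 - 20.0 * (x / x[-1]) ** 1.5   # water-table elevation (m), synthetic
thickness = np.full_like(x, 8.0)           # saturated thickness (m), assumed
width = 100.0                              # flow-strip width (m), assumed

grad = np.gradient(head, x)                # dh/dx (m/m)
q = -K * grad                              # Darcy flux (m/s), positive downslope
Q = q * thickness * width                  # discharge through the strip (m^3/s)
print(f"max |dh/dx| = {np.max(np.abs(grad)):.4f} m/m, max Q = {Q.max():.2e} m^3/s")
```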

  11. The design and instrumentation of the Purdue annular cascade facility with initial data acquisition and analysis

    NASA Technical Reports Server (NTRS)

    Stauter, R. C.; Fleeter, S.

    1982-01-01

    Three-dimensional aerodynamic data, required to validate and/or indicate necessary refinements to inviscid and viscous analyses of the flow through turbomachine blade rows, are discussed. Instrumentation and capabilities for pressure measurement, probe insertion and traversing, and flow visualization are reviewed. Advanced measurement techniques, including laser Doppler anemometers, are considered. Data processing is reviewed. Predictions were correlated with the experimental data. A flow visualization technique using helium-filled soap bubbles was demonstrated.

  12. Landsat imagery and its treatment in a publicly available data portal to monitor flow velocity variations of Greenland outlet glaciers

    NASA Astrophysics Data System (ADS)

    Scheinert, M.; Rosenau, R.; Ebermann, B.; Horwath, M.

    2016-12-01

    Utilizing the freely available Landsat archive we have set up a monitoring system to process and provide flow-velocity fields for more than 300 outlet glaciers along the margin of the Greenland ice sheet. We will present the major processing steps. These include, among others, an improved orthorectification that is based on the Global Digital Elevation Map V2 (GDEM-V2) of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). For those Landsat 7 products affected by the scan line corrector (SLC) failure, a destriping correction was applied. An adaptive, recursive filter approach was applied in order to remove outliers. Altogether, the enhanced processing leads to a higher accuracy of the flow-velocity fields. By mid-2016 we had incorporated more than 37,000 optical scenes from the multiple sensors of Landsat 1 to 8, covering the period from 1972 to 2015. So far, for almost 300 glaciers we have processed more than 100,000 flow-velocity fields for the time span up to 2012; for the period up to 2015, velocity fields were inferred only for the fastest-flowing glaciers. However, new recordings of Landsat 7 and Landsat 8 as well as the availability of further scenes through the Landsat Global Archive Consolidation (LGAC) effort will help to enlarge the database. After a further quality check, we can provide more than 40,000 flow-velocity fields for public access. More products will be added continuously while the largely automated processing is ongoing. The long time span makes it possible to determine trends of the flow velocity over different (long) periods. A major achievement is that the high temporal resolution facilitates the analysis of seasonal flow-velocity variations. We will discuss prominent examples of the non-uniform pattern of ice flow velocity changes. For this, the monitoring system and its web-based data portal provide a powerful tool. The portal allows users to study flow-velocity changes in time and space, to identify distinctive patterns, and to detect and analyze rapid changes such as surge events in detail. The presentation will demonstrate how the data portal enables users to interactively calculate profiles or time series for locations selected on the map; the user can also choose from different options to download the examined data.

  13. Knowledge Representation and Communication: Imparting Current State Information Flow to CPR Stakeholders

    PubMed Central

    de la Cruz, Norberto B.; Spiece, Leslie J.

    2000-01-01

    Understanding and communicating the who, what, where, when, why, and how of the clinics and services for which the computerized patient record (CPR) will be built is an integral part of the implementation process. Formal methodologies have been developed to diagram information flow -- flow charts, state-transition diagrams (STDs), and data flow diagrams (DFDs). For documentation of the processes at our ambulatory CPR pilot site, flowcharting was selected as the preferred method based upon its versatility and understandability.

  14. A novel processing platform for post tape out flows

    NASA Astrophysics Data System (ADS)

    Vu, Hien T.; Kim, Soohong; Word, James; Cai, Lynn Y.

    2018-03-01

    As the computational requirements for post tape out (PTO) flows increase at the 7nm and below technology nodes, there is a need to increase the scalability of the computational tools in order to reduce the turn-around time (TAT) of the flows. Utilization of design hierarchy has been one proven method to provide sufficient partitioning to enable PTO processing. However, as the data is processed through the PTO flow, its effective hierarchy is reduced. The reduction is necessary to achieve the desired accuracy. Also, the sequential nature of the PTO flow is inherently non-scalable. To address these limitations, we are proposing a quasi-hierarchical solution that combines multiple levels of parallelism to increase the scalability of the entire PTO flow. In this paper, we describe the system and present experimental results demonstrating the runtime reduction through scalable processing with thousands of computational cores.

  15. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR FLOW AND CUSTODY OF UNIVERSITY OF ARIZONA LABORATORY DATA (UA-C-8.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the flow and custody of laboratory data generated by the Arizona Border Study through data processing and delivery to the project data manager for creation of the master database. This procedure was followed to ensure consistent data retrie...

  16. Data flow modeling techniques

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.

  17. A Whale of a Tale: Creating Spacecraft Telemetry Data Analysis Products for the Deep Impact Mission

    NASA Technical Reports Server (NTRS)

    Sturdevant, Kathryn

    2006-01-01

    A description of the Whale product generation utility and its means of analyzing project data for Deep Impact Missions is presented. The topics include: 1) Whale Definition; 2) Whale Overview; 3) Whale Challenges; 4) Network Configuration; 5) Network Diagram; 6) Whale Data Flow: Design Decisions; 7) Whale Data Flow Diagram; 8) Whale Data Flow; 9) Whale Team and Users; 10) Creeping Requirements; 11) Whale Competition; 12) Statistics: Processing Time; 13) CPU and Disk Usage; 14) The Ripple Effect of More Data; and 15) Data Validation and the Automation Challenge.

  18. New Techniques for Deep Learning with Geospatial Data using TensorFlow, Earth Engine, and Google Cloud Platform

    NASA Astrophysics Data System (ADS)

    Hancher, M.

    2017-12-01

    Recent years have seen promising results from many research teams applying deep learning techniques to geospatial data processing. In that same timeframe, TensorFlow has emerged as the most popular framework for deep learning in general, and Google has assembled petabytes of Earth observation data from a wide variety of sources and made them available in analysis-ready form in the cloud through Google Earth Engine. Nevertheless, developing and applying deep learning to geospatial data at scale has been somewhat cumbersome to date. We present a new set of tools and techniques that simplify this process. Our approach combines the strengths of several underlying tools: TensorFlow for its expressive deep learning framework; Earth Engine for data management, preprocessing, postprocessing, and visualization; and other tools in Google Cloud Platform to train TensorFlow models at scale, perform additional custom parallel data processing, and drive the entire process from a single familiar Python development environment. These tools can be used to easily apply standard deep neural networks, convolutional neural networks, and other custom model architectures to a variety of geospatial data structures. We discuss our experiences applying these and related tools to a range of machine learning problems, including classic problems like cloud detection, building detection, land cover classification, as well as more novel problems like illegal fishing detection. Our improved tools will make it easier for geospatial data scientists to apply modern deep learning techniques to their own problems, and will also make it easier for machine learning researchers to advance the state of the art of those techniques.
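
    A hedged, minimal sketch in the spirit described: a small tf.keras CNN classifying multi-band image patches (e.g., cloud vs. not-cloud). Band count, patch size, and the random training data are placeholders; the Earth Engine export and cloud-scale training steps are omitted.

```python
import numpy as np
import tensorflow as tf

NUM_BANDS, PATCH = 6, 32
model = tf.keras.Sequential([
    tf.keras.Input(shape=(PATCH, PATCH, NUM_BANDS)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(cloud), illustrative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stand-in training patches; a real pipeline would stream TFRecords exported
# from Earth Engine into tf.data datasets instead of random arrays.
x = np.random.rand(64, PATCH, PATCH, NUM_BANDS).astype("float32")
y = (x[:, :, :, 0].mean(axis=(1, 2)) > 0.5).astype("float32")
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(x[:4], verbose=0).ravel())
```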

  19. Control of Technology Transfer at JPL

    NASA Technical Reports Server (NTRS)

    Oliver, Ronald

    2006-01-01

    Controlled Technology: 1) Design: preliminary or critical design data, schematics, technical flow charts, SNV code/diagnostics, logic flow diagrams, wirelist, ICDs, detailed specifications or requirements. 2) Development: constraints, computations, configurations, technical analyses, acceptance criteria, anomaly resolution, detailed test plans, detailed technical proposals. 3) Production: process or how-to: assemble, operate, repair, maintain, modify. 4) Manufacturing: technical instructions, specific parts, specific materials, specific qualities, specific processes, specific flow. 5) Operations: how to operate, contingency or standard operating plans, Ops handbooks. 6) Repair: repair instructions, troubleshooting schemes, detailed schematics. 7) Test: specific procedures, data, analysis, detailed test plan and retest plans, detailed anomaly resolutions, detailed failure causes and corrective actions, troubleshooting, trended test data, flight readiness data. 8) Maintenance: maintenance schedules and plans, methods for regular upkeep, overhaul instructions. 9) Modification: modification instructions, upgrade kit parts, including software.

  20. 40-Gbps optical backbone network deep packet inspection based on FPGA

    NASA Astrophysics Data System (ADS)

    Zuo, Yuan; Huang, Zhiping; Su, Shaojing

    2014-11-01

    In the era of information, big data, which contains huge amounts of information, brings problems such as high-speed transmission, storage, and real-time analysis and processing. As the principal medium for data transmission, the Internet is a significant focus for big data processing research. With the large-scale usage of the Internet, network data streaming is increasing rapidly. Speeds on today's main fiber-optic communication links have reached 40 Gbps, even 100 Gbps, so data on the optical backbone network shows the features of massive data. Generally, data services are provided via IP packets on the optical backbone network, which is built on SDH (Synchronous Digital Hierarchy); the method of mapping IP packets directly into the SDH payload is named POS (Packet over SDH) technology. Aiming at the problem of real-time processing of high-speed massive data, this paper presents an ATCA-based processing platform, with FPGAs as the processing kernel, for 40 Gbps POS signal data-stream recognition and packet content capture. This platform offers pre-processing for clustering algorithms, service traffic identification, and data mining to support subsequent big data storage and analysis with high efficiency. The operational procedure is also proposed. Four channels of 10 Gbps POS signals, decomposed by the FPGA-based analysis module, are input to the flow classification module and the TCAM-based pattern matching component. Based on the properties of payload length and network flows, buffer management is added to the platform to retain the key flow information. Following data stream analysis, DPI (deep packet inspection), and flow load balancing, the data are transmitted to the back-end machine through the Gigabit Ethernet ports on the back board. Practice shows that the proposed platform is superior to traditional implementations based on ASICs and NPs.
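
    A software analogue of the flow-classification step, hedged and much simplified: packets are keyed by their 5-tuple so every packet of a flow lands in the same bucket, with a trivial payload match standing in for the TCAM rules. Field names and the bucket count are illustrative, not the platform's design.

```python
from collections import defaultdict
import hashlib

def flow_key(pkt, buckets=1024):
    """Hash the 5-tuple so all packets of one flow share a bucket index."""
    tup = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    digest = hashlib.md5(repr(tup).encode()).hexdigest()
    return int(digest, 16) % buckets

flows = defaultdict(list)
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 40000, "dport": 80,
     "proto": 6, "payload": b"GET / HTTP/1.1"},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 40000, "dport": 80,
     "proto": 6, "payload": b"Host: example.com"},
]
for pkt in packets:
    flows[flow_key(pkt)].append(pkt["payload"])

for bucket, payloads in flows.items():
    # DPI stage: a trivial pattern match standing in for hardware TCAM rules.
    hit = any(b"HTTP" in p for p in payloads)
    print(bucket, "HTTP" if hit else "other", len(payloads), "packets")
```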

  1. File-based data flow in the CMS Filter Farm

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Andronidis, A.; Bawej, T.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.

  2. Regional variation of flow duration curves in the eastern United States: Process-based analyses of the interaction between climate and landscape properties

    Treesearch

    Wafa Chouaib; Peter V. Caldwell; Younes Alila

    2018-01-01

    This paper advances the physical understanding of regional variation in the flow duration curve (FDC). It provides a process-based analysis of the interaction between climate and landscape properties to explain disparities in FDC shapes. We used (i) long-term measured flow and precipitation data over 73 catchments from the eastern US. (ii) We calibrated the...
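
    A minimal sketch of constructing an FDC and reading common low-flow signatures from it: sort the flows in descending order and pair each with its exceedance probability. The flow record below is a synthetic stand-in for a measured daily series.

```python
import numpy as np

rng = np.random.default_rng(4)
q = rng.lognormal(0.0, 1.0, 3650)            # stand-in for ~10 years of daily flows

q_sorted = np.sort(q)[::-1]                  # largest flow first
n = q_sorted.size
exceedance = np.arange(1, n + 1) / (n + 1)   # Weibull plotting position

# Signatures read straight off the curve:
q50 = np.interp(0.50, exceedance, q_sorted)  # median flow
q95 = np.interp(0.95, exceedance, q_sorted)  # flow exceeded 95% of the time
print(f"Q50 = {q50:.3f}, Q95 = {q95:.3f}, Q95/Q50 = {q95 / q50:.3f}")
```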

  3. Time-resolved 3D MR velocity mapping at 3T: improved navigator-gated assessment of vascular anatomy and blood flow.

    PubMed

    Markl, Michael; Harloff, Andreas; Bley, Thorsten A; Zaitsev, Maxim; Jung, Bernd; Weigang, Ernst; Langer, Mathias; Hennig, Jürgen; Frydrychowicz, Alex

    2007-04-01

    To evaluate an improved image acquisition and data-processing strategy for assessing aortic vascular geometry and 3D blood flow at 3T. In a study with five normal volunteers and seven patients with known aortic pathology, prospectively ECG-gated cine three-dimensional (3D) MR velocity mapping with improved navigator gating, real-time adaptive k-space ordering and dynamic adjustment of the navigator acceptance criteria was performed. In addition to morphological information and three-directional blood flow velocities, phase-contrast (PC)-MRA images were derived from the same data set, which permitted 3D isosurface rendering of vascular boundaries in combination with visualization of blood-flow patterns. Analysis of navigator performance and image quality revealed improved scan efficiencies of 63.6%+/-10.5% and temporal resolution (<50 msec) compared to previous implementations. Semiquantitative evaluation of image quality by three independent observers demonstrated excellent general image appearance with moderate blurring and minor ghosting artifacts. Results from volunteer and patient examinations illustrate the potential of the improved image acquisition and data-processing strategy for identifying normal and pathological blood-flow characteristics. Navigator-gated time-resolved 3D MR velocity mapping at 3T in combination with advanced data processing is a powerful tool for performing detailed assessments of global and local blood-flow characteristics in the aorta to describe or exclude vascular alterations. Copyright (c) 2007 Wiley-Liss, Inc.
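
    The quantitative step behind such velocity mapping is the linear map from measured phase to velocity through the velocity-encoding parameter (venc). The sketch below illustrates that decoding on invented numbers; it is not the authors' acquisition or navigator-gating logic.

```python
import numpy as np

venc = 150.0                                # cm/s giving a phase shift of pi
rng = np.random.default_rng(5)
true_v = np.clip(rng.normal(60.0, 25.0, (16, 16)), -venc, venc)   # cm/s
phase = true_v / venc * np.pi + rng.normal(0, 0.02, true_v.shape) # noisy phase (rad)

v = phase / np.pi * venc                    # velocity decoded from phase (cm/s)
print(f"peak velocity: {v.max():.1f} cm/s (true {true_v.max():.1f})")
```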

  4. Analytical modeling and sensor monitoring for optimal processing of advanced textile structural composites by resin transfer molding

    NASA Technical Reports Server (NTRS)

    Loos, Alfred C.; Macrae, John D.; Hammond, Vincent H.; Kranbuehl, David E.; Hart, Sean M.; Hasko, Gregory H.; Markus, Alan M.

    1993-01-01

    A two-dimensional model of the resin transfer molding (RTM) process was developed which can be used to simulate the infiltration of resin into an anisotropic fibrous preform. Frequency dependent electromagnetic sensing (FDEMS) has been developed for in situ monitoring of the RTM process. Flow visualization tests were performed to obtain data which can be used to verify the sensor measurements and the model predictions. Results of the tests showed that FDEMS can accurately detect the position of the resin flow-front during mold filling, and that the model predicted flow-front patterns agreed well with the measured flow-front patterns.

  5. Intraflow width variations in Martian and terrestrial lava flows

    NASA Astrophysics Data System (ADS)

    Peitersen, Matthew N.; Crown, David A.

    1997-03-01

    Flow morphology is used to interpret emplacement processes for lava flows on Earth and Mars. Accurate measurements of flow geometry are essential, particularly for planetary flows where neither compositional sampling nor direct observations of active flows may be possible. Width behavior may indicate a flow's response to topography, its emplacement regime, and its physical properties. Variations in width with downflow distance from the vent may therefore provide critical clues to flow emplacement processes. Flow width is also one of the few characteristics that can be readily measured from planetary mission data with accuracy. Recent analyses of individual flows at two terrestrial and four Martian sites show that widths within an individual flow vary by up to an order of magnitude. Width is generally thought to be correlated to topography; however, recent studies show that this relationship is neither straightforward nor easily quantifiable.

  6. A vector scanning processing technique for pulsed laser velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1989-01-01

    Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.

  7. Sequential geophysical and flow inversion to characterize fracture networks in subsurface systems

    DOE PAGES

    Mudunuru, Maruti Kumar; Karra, Satish; Makedonska, Nataliia; ...

    2017-09-05

    Subsurface applications, including geothermal, geological carbon sequestration, and oil and gas, typically involve maximizing either the extraction of energy or the storage of fluids. Fractures form the main pathways for flow in these systems, and locating these fractures is critical for predicting flow. However, fracture characterization is a highly uncertain process, and data from multiple sources, such as flow and geophysical measurements, are needed to reduce this uncertainty. We present a nonintrusive, sequential inversion framework for integrating data from geophysical and flow sources to constrain fracture networks in the subsurface. In this framework, we first estimate bounds on the statistics for the fracture orientations using microseismic data. These bounds are estimated through a combination of a focal mechanism (physics-based approach) and clustering analysis (statistical approach) of seismic data. Then, the fracture lengths are constrained using flow data. In conclusion, the efficacy of this inversion is demonstrated through a representative example.

  8. Sequential geophysical and flow inversion to characterize fracture networks in subsurface systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudunuru, Maruti Kumar; Karra, Satish; Makedonska, Nataliia

    Subsurface applications, including geothermal, geological carbon sequestration, and oil and gas, typically involve maximizing either the extraction of energy or the storage of fluids. Fractures form the main pathways for flow in these systems, and locating these fractures is critical for predicting flow. However, fracture characterization is a highly uncertain process, and data from multiple sources, such as flow and geophysical measurements, are needed to reduce this uncertainty. We present a nonintrusive, sequential inversion framework for integrating data from geophysical and flow sources to constrain fracture networks in the subsurface. In this framework, we first estimate bounds on the statistics for the fracture orientations using microseismic data. These bounds are estimated through a combination of a focal mechanism (physics-based approach) and clustering analysis (statistical approach) of seismic data. Then, the fracture lengths are constrained using flow data. In conclusion, the efficacy of this inversion is demonstrated through a representative example.

  9. Cooling the vertical surface by conditionally single pulses

    NASA Astrophysics Data System (ADS)

    Karpov, Pavel; Nazarov, Alexander; Serov, Anatoly; Terekhov, Victor

    2017-10-01

    Sprays with a periodic supply of the droplet phase offer great opportunities for controlling heat exchange processes. By varying the pulse duration and the repetition frequency, optimal evaporative cooling conditions can be achieved while minimizing the liquid flow rate. The paper presents experimental data on local heat transfer on a large subcooled surface, obtained on an original setup with a multinozzle controlled system of impact irrigation by a gas-droplet flow. The contribution of the spray parameters (flow rate, pulse duration, repetition frequency) to the enhancement of integral heat transfer was studied. Data on the instantaneous distribution of the heat flux helped to describe the processes occurring on the studied surface; these data could characterize the regime of "island" film cooling.

  10. Quantification of chemical transport processes from the soil to surface runoff.

    PubMed

    Tian, Kun; Huang, Chi-Hua; Wang, Guang-Qian; Fu, Xu-Dong; Parker, Gary

    2013-01-01

    There is a good conceptual understanding of the processes that govern chemical transport from the soil to surface runoff, but few studies have actually quantified these processes separately. Thus, we designed a laboratory flow cell and experimental procedures to quantify the chemical transport from soil to runoff water in the following individual processes: (i) convection with a vertical hydraulic gradient, (ii) convection via surface flow or the Bernoulli effect, (iii) diffusion, and (iv) soil loss. We applied different vertical hydraulic gradients by setting the flow cell to generate different seepage or drainage conditions. Our data confirmed the general form of the convection-diffusion equation. However, we now have additional quantitative data that describe the contribution of each individual chemical loading process in different surface runoff and soil hydrological conditions. The results of this study will be useful for enhancing our understanding of different geochemical processes in the surface soil mixing zone. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
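
    A minimal numerical sketch of the convection-diffusion balance such experiments quantify: an explicit upwind finite-difference step for a 1-D concentration profile, dc/dt = D d2c/dx2 - v dc/dx. Grid, coefficients, and the initial chemical distribution are illustrative, not the study's measurements.

```python
import numpy as np

nx, dx, dt = 200, 0.005, 0.004           # grid cells, cell size (m), step (s)
D, v = 1e-5, 2e-4                        # diffusivity (m^2/s), velocity (m/s)
assert D * dt / dx**2 <= 0.5 and v * dt / dx <= 1.0   # explicit-scheme stability

c = np.zeros(nx)
c[:20] = 1.0                             # chemical initially in the top layer
for _ in range(5000):
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    adv = (c - np.roll(c, 1)) / dx       # upwind difference, valid for v > 0
    c = c + dt * (D * lap - v * adv)
    c[0], c[-1] = c[1], c[-2]            # crude no-flux boundaries

com = np.sum(c * np.arange(nx) * dx) / c.sum()
print(f"center of mass moved to x = {com:.3f} m")
```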

  11. Alternatives to current flow cytometry data analysis for clinical and research studies.

    PubMed

    Gondhalekar, Carmen; Rajwa, Bartek; Patsekin, Valery; Ragheb, Kathy; Sturgis, Jennifer; Robinson, J Paul

    2018-02-01

    Flow cytometry has well-established methods for data analysis based on traditional data collection techniques. These techniques typically involved manual insertion of tube samples into an instrument that, historically, could only measure 1-3 colors. The field has since evolved to incorporate new technologies for faster and highly automated sample preparation and data collection. For example, the use of microwell plates on benchtop instruments is now a standard on virtually every new instrument, and so users can easily accumulate multiple data sets quickly. Further, because the user must carefully define the layout of the plate, this information is already defined when considering the analytical process, expanding the opportunities for automated analysis. Advances in multi-parametric data collection, as demonstrated by the development of hyperspectral flow-cytometry, 20-40 color polychromatic flow cytometry, and mass cytometry (CyTOF), are game-changing. As data and assay complexity increase, so too does the complexity of data analysis. Complex data analysis is already a challenge to traditional flow cytometry software. New methods for reviewing large and complex data sets can provide rapid insight into processes difficult to define without more advanced analytical tools. In settings such as clinical labs where rapid and accurate data analysis is a priority, rapid, efficient and intuitive software is needed. This paper outlines opportunities for analysis of complex data sets using examples of multiplexed bead-based assays, drug screens and cell cycle analysis - any of which could become integrated into the clinical environment. Copyright © 2017. Published by Elsevier Inc.

  12. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples. PMID:21352538
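
    A hedged sketch of the dataflow-programming idea, deliberately not using PaPy's actual API: reusable components connected into a pipeline, evaluated lazily and in parallel over a worker pool. The record format and functions are invented for illustration.

```python
from multiprocessing import Pool

def parse(lines):
    """Component 1: stream sequences out of FASTA-like records."""
    for line in lines:
        if not line.startswith(">"):
            yield line.strip()

def gc_content(seq):
    """Component 2: pure per-item transform, safe to run in parallel."""
    return seq, (seq.count("G") + seq.count("C")) / max(len(seq), 1)

if __name__ == "__main__":
    records = [">a", "ACGTGC", ">b", "GGGCCC", ">c", "ATATAT"]
    with Pool(2) as pool:
        # imap streams results in order: lazy evaluation plus parallelism,
        # the trade-off the toolkit lets users tune per pipe via batch size.
        for seq, gc in pool.imap(gc_content, parse(records), chunksize=2):
            print(seq, f"GC={gc:.2f}")
```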

  13. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.

    PubMed

    Cieślik, Marcin; Mura, Cameron

    2011-02-25

    Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.

  14. Use of soil moisture dynamics and patterns for the investigation of runoff generation processes with emphasis on preferential flow

    NASA Astrophysics Data System (ADS)

    Blume, T.; Zehe, E.; Bronstert, A.

    2007-08-01

    Spatial patterns as well as temporal dynamics of soil moisture have a major influence on runoff generation. The investigation of these dynamics and patterns can thus yield valuable information on hydrological processes, especially in data-scarce or previously ungauged catchments. The combination of spatially scarce but temporally high resolution soil moisture profiles with episodic and thus temporally scarce moisture profiles at additional locations provides information on spatial as well as temporal patterns of soil moisture at the hillslope transect scale. This approach is better suited to difficult terrain (dense forest, steep slopes) than geophysical techniques and at the same time less cost-intensive than a high resolution grid of continuously measuring sensors. Rainfall simulation experiments with dye tracers, combined with continuous monitoring of the soil moisture response, allow for visualization of flow processes in the unsaturated zone at these locations. Data were analyzed at different spatio-temporal scales using various graphical methods, such as space-time colour maps (for the event and plot scale) and indicator maps (for the long-term and hillslope scale). Annual dynamics of soil moisture and decimeter-scale variability were also investigated. The proposed approach proved to be successful in the investigation of flow processes in the unsaturated zone and showed the importance of preferential flow in the Malalcahuello Catchment, a data-scarce catchment in the Andes of Southern Chile. Fast response times of stream flow indicate that preferential flow observed at the plot scale might also be of importance at the hillslope or catchment scale. Flow patterns were highly variable in space but persistent in time. The most likely explanation for preferential flow in this catchment is a combination of hydrophobicity, small scale heterogeneity in rainfall due to redistribution in the canopy and strong gradients in unsaturated conductivities leading to self-reinforcing flow paths.

  15. Use of soil moisture dynamics and patterns at different spatio-temporal scales for the investigation of subsurface flow processes

    NASA Astrophysics Data System (ADS)

    Blume, T.; Zehe, E.; Bronstert, A.

    2009-07-01

    Spatial patterns as well as temporal dynamics of soil moisture have a major influence on runoff generation. The investigation of these dynamics and patterns can thus yield valuable information on hydrological processes, especially in data-scarce or previously ungauged catchments. The combination of spatially scarce but temporally high resolution soil moisture profiles with episodic and thus temporally scarce moisture profiles at additional locations provides information on spatial as well as temporal patterns of soil moisture at the hillslope transect scale. This approach is better suited to difficult terrain (dense forest, steep slopes) than geophysical techniques and at the same time less cost-intensive than a high resolution grid of continuously measuring sensors. Rainfall simulation experiments with dye tracers, combined with continuous monitoring of the soil moisture response, allow for visualization of flow processes in the unsaturated zone at these locations. Data were analyzed at different spatio-temporal scales using various graphical methods, such as space-time colour maps (for the event and plot scale) and binary indicator maps (for the long-term and hillslope scale). Annual dynamics of soil moisture and decimeter-scale variability were also investigated. The proposed approach proved to be successful in the investigation of flow processes in the unsaturated zone and showed the importance of preferential flow in the Malalcahuello Catchment, a data-scarce catchment in the Andes of Southern Chile. Fast response times of stream flow indicate that preferential flow observed at the plot scale might also be of importance at the hillslope or catchment scale. Flow patterns were highly variable in space but persistent in time. The most likely explanation for preferential flow in this catchment is a combination of hydrophobicity, small scale heterogeneity in rainfall due to redistribution in the canopy and strong gradients in unsaturated conductivities leading to self-reinforcing flow paths.

  16. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.

  17. A Bayesian Model for Highly Accelerated Phase-Contrast MRI

    PubMed Central

    Rich, Adam; Potter, Lee C.; Jin, Ning; Ash, Joshua; Simonetti, Orlando P.; Ahmad, Rizwan

    2015-01-01

    Purpose Phase-contrast magnetic resonance imaging (PC-MRI) is a noninvasive tool to assess cardiovascular disease by quantifying blood flow; however, low data acquisition efficiency limits the spatial and temporal resolutions, real-time application, and extensions to 4D flow imaging in clinical settings. We propose a new data processing approach called Reconstructing Velocity Encoded MRI with Approximate message passing aLgorithms (ReVEAL) that accelerates the acquisition by exploiting data structure unique to PC-MRI. Theory and Methods ReVEAL models physical correlations across space, time, and velocity encodings. The proposed Bayesian approach exploits the relationships in both magnitude and phase among velocity encodings. A fast iterative recovery algorithm is introduced based on message passing. For validation, prospectively undersampled data are processed from a pulsatile flow phantom and five healthy volunteers. Results ReVEAL is in good agreement, quantified by peak velocity and stroke volume (SV), with reference data for acceleration rates R ≤ 10. For SV, Pearson r ≥ 0.996 for phantom imaging (n = 24) and r ≥ 0.956 for prospectively accelerated in vivo imaging (n = 10) for R ≤ 10. Conclusion ReVEAL enables accurate quantification of blood flow from highly undersampled data. The technique is extensible to 4D flow imaging, where higher acceleration may be possible due to additional redundancy. PMID:26444911

  18. Standard services for the capture, processing, and distribution of packetized telemetry data

    NASA Technical Reports Server (NTRS)

    Stallings, William H.

    1989-01-01

    Standard functional services for the capture, processing, and distribution of packetized data are discussed with particular reference to the future implementation of packet processing systems, such as those for the Space Station Freedom. The major functions are listed under the following major categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.

  19. An evaluation of Dynamic TOPMODEL for low flow simulation

    NASA Astrophysics Data System (ADS)

    Coxon, G.; Freer, J. E.; Quinn, N.; Woods, R. A.; Wagener, T.; Howden, N. J. K.

    2015-12-01

    Hydrological models are essential tools for drought risk management, often providing input to water resource system models, aiding our understanding of low flow processes within catchments, and providing low flow predictions. However, simulating low flows and droughts is challenging, as hydrological systems often demonstrate threshold effects in connectivity, non-linear groundwater contributions, and a greater influence of water resource system elements during low flow periods. These dynamic processes are typically not well represented in commonly used hydrological models due to data and model limitations. Furthermore, calibrated or behavioural models may not be effectively evaluated during more extreme drought periods. A better understanding of the processes that occur during low flows, and of how these are represented within models, is thus required if we want to be able to provide robust and reliable predictions of future drought events. In this study, we assess the performance of Dynamic TOPMODEL for low flow simulation. Dynamic TOPMODEL was applied to a number of UK catchments in the Thames region using time series of observed rainfall and potential evapotranspiration data that captured multiple historic droughts over a period of several years. The model performance was assessed against the observed discharge time series using a limits of acceptability framework, which included uncertainty in the discharge time series. We evaluate the models against multiple signatures of catchment low-flow behaviour and investigate differences in model performance between catchments, across model diagnostics, and for different low flow periods. We also consider the impact of surface water and groundwater abstractions and discharges on the observed discharge time series and how this affects the model evaluation. From this analysis of model performance, we suggest future improvements to Dynamic TOPMODEL to improve the representation of low flow processes within the model structure.
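
    A minimal sketch of the limits-of-acceptability idea used for the evaluation, assuming a flat plus/minus 20% uncertainty band around observed discharge in place of the paper's full discharge-uncertainty analysis:

        import numpy as np

        def limits_of_acceptability_score(q_sim, q_obs, rel_err=0.2):
            """Fraction of time steps where the simulation falls inside observation bounds.

            Bounds here are a flat +/-20% band around observed discharge; in practice
            they would come from a rating-curve uncertainty analysis.
            """
            lower, upper = q_obs * (1 - rel_err), q_obs * (1 + rel_err)
            inside = (q_sim >= lower) & (q_sim <= upper)
            return inside.mean()

        q_obs = np.array([5.0, 3.2, 1.8, 0.9, 0.7])   # observed low-flow recession (m3/s)
        q_sim = np.array([5.4, 3.0, 2.3, 0.8, 0.6])   # model output
        print(limits_of_acceptability_score(q_sim, q_obs))  # 0.8: four of five steps acceptable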

  20. Regenerator matrix physical property data

    NASA Technical Reports Server (NTRS)

    Fucinari, C. A.

    1980-01-01

    Among several cellular ceramic structures manufactured by various suppliers for regenerator application in a gas turbine engine, three have the best potential for achieving durability and performance objectives for use in gas turbines, Stirling engines, and waste heat recovery systems: (1) an aluminum-silicate sinusoidal flow passage made from a corrugated wrapped-paper process; (2) an extruded isosceles-triangle flow passage; and (3) a second-generation matrix incorporating a square flow passage formed by an embossing process. Key physical and thermal property data presented for these configurations include: heat transfer and pressure drop characteristics, compressive strength, tensile strength and elasticity, thermal expansion characteristics, chemical attack, and thermal stability.

  1. Software Aids Visualization of Computed Unsteady Flow

    NASA Technical Reports Server (NTRS)

    Kao, David; Kenwright, David

    2003-01-01

    Unsteady Flow Analysis Toolkit (UFAT) is a computer program that synthesizes motions of time-dependent flows represented by very large sets of data generated in computational fluid dynamics simulations. Prior to the development of UFAT, it was necessary to rely on static, single-snapshot depictions of time-dependent flows generated by flow-visualization software designed for steady flows. Whereas it typically takes weeks to analyze the results of a large-scale unsteady-flow simulation by use of steady-flow visualization software, the analysis time is reduced to hours when UFAT is used. UFAT can be used to generate graphical objects of flow visualization results using multi-block curvilinear grids in the format of a previously developed NASA data-visualization program, PLOT3D. These graphical objects can be rendered using FAST, another popular flow-visualization program developed at NASA. Flow-visualization techniques that can be exploited by use of UFAT include time-dependent tracking of particles, detection of vortex cores, extraction of stream ribbons and surfaces, and tetrahedral decomposition for optimal particle tracking. Unique computational features of UFAT include capabilities for automatic (batch) processing, restart, memory mapping, and parallel processing. These capabilities significantly reduce analysis time and storage requirements, relative to those of prior flow-visualization software. UFAT can be executed on a variety of supercomputers.
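
    The core of time-dependent particle tracking is numerical integration of particle positions through an unsteady velocity field. A minimal sketch, with an analytic field standing in for interpolated PLOT3D grid data (the integrator choice and step size are assumptions, not UFAT's internals):

        import numpy as np

        def advect(p0, velocity, t0, t1, dt=0.01):
            """Trace one particle through a time-dependent velocity field.

            Fourth-order Runge-Kutta in time; velocity(p, t) returns the local
            velocity vector, here an analytic stand-in for interpolated grid data.
            """
            p, t = np.asarray(p0, float), t0
            while t < t1:
                h = min(dt, t1 - t)
                k1 = velocity(p, t)
                k2 = velocity(p + 0.5 * h * k1, t + 0.5 * h)
                k3 = velocity(p + 0.5 * h * k2, t + 0.5 * h)
                k4 = velocity(p + h * k3, t + h)
                p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
                t += h
            return p

        # unsteady toy field: a vortex whose strength oscillates in time
        field = lambda p, t: np.array([-p[1], p[0]]) * (1 + 0.5 * np.sin(t))
        print(advect([1.0, 0.0], field, 0.0, 2.0))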

  2. Communication-Efficient Arbitration Models for Low-Resolution Data Flow Computing

    DTIC Science & Technology

    1988-12-01

    phase can be formally described as follows: Graph Partitioning Problem (NP-complete; Garey & Johnson): Given graph G = (V, E), weights w(v) for each v ∈ V...Technical Report, MIT/LCS/TR-218, Cambridge, Mass. Agerwala, Tilak, February 1982, "Data Flow Systems", Computer, pp. 10-13. Babb, Robert G., July 1984, "Parallel Processing with Large-Grain Data Flow Techniques," IEEE Computer 17, 7, pp. 55-61. Babb, Robert G., II, Lise Storc, and William C. Ragsdale

  3. Visualization of Concrete Slump Flow Using the Kinect Sensor

    PubMed Central

    Park, Minbeom

    2018-01-01

    Workability is regarded as one of the important parameters of high-performance concrete, and monitoring it is essential for concrete quality management at construction sites. The conventional workability test methods are based on length and time measured with a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete using a 3D depth sensor during the slump flow test, the stream images are processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image essentially represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test will be able to stimulate research related to concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow. PMID:29510510
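
    The reduction at the heart of the method collapses a stream of depth frames into a single image of spreading radius versus time. A rough sketch of such a reduction (not the authors' 4D slump algorithm; the azimuthal averaging and array conventions are assumptions):

        import numpy as np

        def slump_image(depth_frames, center):
            """Collapse a stream of depth frames into one radius-versus-time image.

            Each row is the azimuthally averaged concrete height profile of one
            frame (radius axis in pixels); spread diameter and flow time can then
            be read off a single 2-D image.
            """
            rows = []
            for frame in depth_frames:          # frame: 2-D height map from the sensor
                yy, xx = np.indices(frame.shape)
                r = np.hypot(xx - center[0], yy - center[1]).astype(int)
                counts = np.maximum(np.bincount(r.ravel()), 1)
                rows.append(np.bincount(r.ravel(), weights=frame.ravel()) / counts)
            return np.array(rows)               # shape: (n_frames, max_radius + 1)

        # toy stream: a cone of fresh concrete spreading outward over 5 frames
        frames = []
        yy, xx = np.indices((64, 64))
        for k in range(5):
            spread = 10 + 4 * k
            frames.append(np.clip(1.0 - np.hypot(xx - 32, yy - 32) / spread, 0, None))
        print(slump_image(frames, center=(32, 32)).shape)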

  4. Visualization of Concrete Slump Flow Using the Kinect Sensor.

    PubMed

    Kim, Jung-Hoon; Park, Minbeom

    2018-03-03

    Workability is regarded as one of the important parameters of high-performance concrete, and monitoring it is essential for concrete quality management at construction sites. The conventional workability test methods are based on length and time measured with a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete using a 3D depth sensor during the slump flow test, the stream images are processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image essentially represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test will be able to stimulate research related to concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow.

  5. AVIRIS ground data-processing system

    NASA Technical Reports Server (NTRS)

    Reimer, John H.; Heyada, Jan R.; Carpenter, Steve C.; Deich, William T. S.; Lee, Meemong

    1987-01-01

    The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has been under development at JPL for the past four years. During this time, a dedicated ground data-processing system has been designed and implemented to store and process the large amounts of data expected. This paper reviews the objectives of this ground data-processing system and describes the hardware. An outline of the data flow through the system is given, and the software and incorporated algorithms developed specifically for the systematic processing of AVIRIS data are described.

  6. Traffic Flow Density Distribution Based on FEM

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Cui, Jianming

    In the analysis of normal traffic flow, static or dynamic models based on fluid mechanics are usually used for numerical analysis. However, this approach involves massive modeling and data-handling problems, and its accuracy is not high. The Finite Element Method (FEM) is a product of the combination of modern mathematics, mechanics, and computer technology, and it has been widely applied in various domains such as engineering. Based on existing traffic flow theory, ITS, and the development of FEM, a simulation theory of the FEM that solves the problems existing in traffic flow analysis is put forward. Based on this theory, and using existing Finite Element Analysis (FEA) software, the traffic flow is simulated and analyzed with fluid mechanics and dynamics. The massive data-processing problem of manual modeling and numerical analysis is thereby solved, and the authenticity of the simulation is enhanced.

  7. Software Tool Integrating Data Flow Diagrams and Petri Nets

    NASA Technical Reports Server (NTRS)

    Thronesbery, Carroll; Tavana, Madjid

    2010-01-01

    Data Flow Diagram - Petri Net (DFPN) is a software tool for analyzing other software to be developed. The full name of this program reflects its design, which combines the benefit of data-flow diagrams (which are typically favored by software analysts) with the power and precision of Petri-net models, without requiring specialized Petri-net training. (A Petri net is a particular type of directed graph, a description of which would exceed the scope of this article.) DFPN assists a software analyst in drawing and specifying a data-flow diagram, then translates the diagram into a Petri net, then enables graphical tracing of execution paths through the Petri net for verification, by the end user, of the properties of the software to be developed. In comparison with prior means of verifying the properties of software to be developed, DFPN makes verification by the end user more nearly certain, thereby making it easier to identify and correct misconceptions earlier in the development process, when correction is less expensive. After the verification by the end user, DFPN generates a printable system specification in the form of descriptions of processes and data.
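
    The DFD-to-Petri-net translation can be illustrated with a toy net in which processes become transitions and data stores become places; tracing an execution path then amounts to checking enabledness and firing transitions. A minimal sketch with invented process names:

        # Minimal translation of a data-flow diagram into a Petri net:
        # DFD processes become transitions, data stores/flows become places.
        dfd = {"validate": (["raw_input"], ["clean_data"]),
               "compute":  (["clean_data"], ["result"])}

        marking = {"raw_input": 1, "clean_data": 0, "result": 0}

        def enabled(transition):
            inputs, _ = dfd[transition]
            return all(marking[p] > 0 for p in inputs)

        def fire(transition):
            """Consume one token from each input place, produce one on each output."""
            inputs, outputs = dfd[transition]
            for p in inputs:
                marking[p] -= 1
            for p in outputs:
                marking[p] += 1

        # trace one execution path for verification
        for t in ["validate", "compute"]:
            assert enabled(t), t + " is not enabled"
            fire(t)
        print(marking)  # {'raw_input': 0, 'clean_data': 0, 'result': 1}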

  8. Research on key technologies of data processing in internet of things

    NASA Astrophysics Data System (ADS)

    Zhu, Yangqing; Liang, Peiying

    2017-08-01

    Data in the Internet of Things (IOT) are characterized by polymorphism, heterogeneity, large volume, and real-time processing requirements. Traditional structured, static batch processing methods do not meet these requirements. This paper studies a middleware that can integrate heterogeneous IOT data, converting different data formats into a unified format. A data processing model for the IOT is designed based on the Storm stream computing architecture, and existing Internet security technology is integrated to build a security system for IOT data processing, providing a reference for the efficient transmission and processing of IOT data.
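
    The middleware idea, mapping each incoming format onto one unified record, can be sketched as a small adapter table; the formats and field names below are hypothetical:

        import json

        # Hypothetical adapters: each source format is mapped onto one unified record.
        def from_csv(line):
            dev, ts, val = line.split(",")
            return {"device": dev, "time": int(ts), "value": float(val)}

        def from_json(text):
            d = json.loads(text)
            return {"device": d["id"], "time": d["ts"], "value": d["reading"]}

        ADAPTERS = {"csv": from_csv, "json": from_json}

        def normalize(fmt, payload):
            """Middleware entry point: heterogeneous input, unified output format."""
            return ADAPTERS[fmt](payload)

        print(normalize("csv", "sensor-7,1502000000,21.5"))
        print(normalize("json", '{"id": "sensor-9", "ts": 1502000060, "reading": 22.1}'))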

  9. Image processing of aerodynamic data

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1985-01-01

    The use of digital image processing techniques in analyzing and evaluating aerodynamic data is discussed. An image processing system that converts images derived from digital data or from transparent film into black and white, full color, or false color pictures is described. Applications to black and white images of a model wing with a NACA 64-210 section in simulated rain and to computed flow properties for transonic flow past a NACA 0012 airfoil are presented. Image processing techniques are used to visualize the variations of water film thicknesses on the wing model and to illustrate the contours of computed Mach numbers for the flow past the NACA 0012 airfoil. Since the computed data for the NACA 0012 airfoil are available only at discrete spatial locations, an interpolation method is used to provide values of the Mach number over the entire field.
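
    The interpolation step, filling a regular grid from values known only at scattered points, might look like the following sketch (scipy's griddata used as a stand-in for the paper's unspecified interpolation method; the field values are synthetic):

        import numpy as np
        from scipy.interpolate import griddata

        # Scattered computed values, standing in for Mach numbers known only
        # at discrete field points around the airfoil.
        rng = np.random.default_rng(0)
        pts = rng.random((200, 2))                        # (x, y) sample locations
        mach = 0.5 + 0.3 * np.sin(4 * pts[:, 0]) * pts[:, 1]

        # Interpolate onto a regular grid so the whole field can be contoured.
        gx, gy = np.mgrid[0:1:100j, 0:1:100j]
        field = griddata(pts, mach, (gx, gy), method="linear")
        print(field.shape)                                # (100, 100)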

  10. Lava-flow characterization at Pisgah Volcanic Field, California, with multiparameter imaging radar

    USGS Publications Warehouse

    Gaddis, L.R.

    1992-01-01

    Multi-incidence-angle (in the 25° to 55° range) radar data acquired by the NASA/JPL Airborne Synthetic Aperture Radar (AIRSAR) at three wavelengths simultaneously and displayed at three polarizations are examined for their utility in characterizing lava flows at Pisgah volcanic field, California. Pisgah lava flows were erupted in three phases; flow textures consist of hummocky pahoehoe, smooth pahoehoe, and aa (with and without thin sedimentary cover). Backscatter data shown as a function of relative age of Pisgah flows indicate that dating of lava flows on the basis of average radar backscatter may yield ambiguous results if primary flow textures and modification processes are not well understood. -from Author

  11. Heat flow bounds over the Cascadia margin derived from bottom simulating reflectors and implications for thermal models of subduction

    NASA Astrophysics Data System (ADS)

    Phrampus, Benjamin J.; Harris, Robert N.; Tréhu, Anne M.

    2017-09-01

    Understanding the thermal structure of the Cascadia subduction zone is important for understanding megathrust earthquake processes and seismogenic potential. Currently our understanding of the thermal structure of Cascadia is limited by a lack of high spatial resolution heat flow data and by poor understanding of thermal processes such as hydrothermal fluid circulation in the subducting basement, sediment thickening and dewatering, and frictional heat generation on the plate boundary. Here, using a data set of publicly available seismic lines combined with new interpretations of bottom simulating reflector (BSR) distributions, we derive heat flow estimates across the Cascadia margin. Thermal models that account for hydrothermal circulation predict BSR-derived heat flow bounds better than purely conductive models, but still over-predict surface heat flows. We show that when the thermal effects of in-situ sedimentation and of sediment thickening and dewatering due to accretion are included, models with hydrothermal circulation become consistent with our BSR-derived heat flow bounds.

  12. ANALYSIS AND REDUCTION OF LANDSAT DATA FOR USE IN A HIGH PLAINS GROUND-WATER FLOW MODEL.

    USGS Publications Warehouse

    Thelin, Gail; Gaydas, Leonard; Donovan, Walter; Mladinich, Carol

    1984-01-01

    Data obtained from 59 Landsat scenes were used to estimate the areal extent of irrigated agriculture over the High Plains region of the United States for a ground-water flow model. This model provides information on current trends in the amount and distribution of water used for irrigation. The analysis and reduction process required that each Landsat scene be ratioed, interpreted, and aggregated. Data reduction by aggregation was an efficient technique for handling the volume of data analyzed. This process bypassed problems inherent in geometrically correcting and mosaicking the data at pixel resolution and combined the individual Landsat classification into one comprehensive data set.

  13. Eruption rate, area, and length relationships for some Hawaiian lava flows

    NASA Technical Reports Server (NTRS)

    Pieri, David C.; Baloga, Stephen M.

    1986-01-01

    The relationships between the morphological parameters of lava flows and the process parameters of lava composition, eruption rate, and eruption temperature were investigated using literature data on Hawaiian lava flows. Two simple models for lava flow heat loss by Stefan-Boltzmann radiation were employed to derive an eruption rate versus planimetric area relationship. For the Hawaiian basaltic flows, the eruption rate is highly correlated with the planimetric area. Moreover, this observed correlation is superior to those from other obvious combinations of eruption rate and flow dimensions. The correlations obtained on the basis of the two theoretical models suggest that the surface of the Hawaiian flows radiates at an effective temperature much lower than that of the inner parts of the flowing lava, which is in agreement with field observations. The data also indicate that the eruption rate versus planimetric area correlations can be markedly degraded when data from different vents, volcanoes, and epochs are combined.
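
    The kind of heat-balance argument described, in paraphrase rather than the authors' exact formulation, equates the heat advected by the erupted lava to Stefan-Boltzmann radiative losses over the flow's planimetric area, giving a linear eruption rate versus area relation:

        Q \rho c \, \Delta T \;\approx\; \epsilon \sigma T_e^{4} A
        \qquad \Longrightarrow \qquad
        Q \;\approx\; \frac{\epsilon \sigma T_e^{4}}{\rho c \, \Delta T}\, A

    where Q is the eruption rate, A the planimetric area, T_e the effective radiating surface temperature, \epsilon the emissivity, \sigma the Stefan-Boltzmann constant, \rho and c the lava density and specific heat, and \Delta T the cooling interval. A radiating temperature T_e well below the interior temperature, as the correlations suggest, lowers the proportionality constant without changing the linearity.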

  14. Restoration of the Apollo 15 Heat Flow Experiment Data from 1975 to 1977

    NASA Technical Reports Server (NTRS)

    Nagihara, S.; Nakamura, Y.; Taylor, P. T.; Williams, D. R.; Kiefer, W. S.

    2017-01-01

    The Apollo 15 Heat Flow Experiment (HFE) was conducted from July 1971 through January 1977. Two heat flow probes were deployed roughly 8.5 meters apart. Probe 1 and Probe 2 penetrated to 1.4-meters and 1-meter depths into the lunar regolith, respectively. Temperatures at different depths and the surface were logged with 7.25-minute intervals and transmitted to Earth. At the conclusion of the experiment, only data obtained from July 1971 through December 1974 were processed and archived at the National Space Science Data Center (NSSDC) by the principal investigator of the experiment, Marcus Langseth of Columbia University. Langseth died in 1997. It is not known what happened to the HFE data tapes he used. Current researchers have strong interests in re-examining the HFE data for the full duration of the experiment. We have recovered and processed large portions of the Apollo 15 HFE data from 1975 through 1977 by assembling data and metadata from various sources.

  15. Aerothermal modeling program. Phase 2, element B: Flow interaction experiment

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; Mongia, H. C.; Murthy, S. N. B.; Sullivan, J. P.

    1987-01-01

    NASA has instituted an extensive effort to improve the design process and data base for the hot section components of gas turbine engines. The purpose of element B is to establish a benchmark quality data set that consists of measurements of the interaction of circular jets with swirling flow. Such flows are typical of those that occur in the primary zone of modern annular combustion liners. Extensive computations of the swirling flows are to be compared with the measurements for the purpose of assessing the accuracy of current physical models used to predict such flows.

  16. Automation in high-content flow cytometry screening.

    PubMed

    Naumann, U; Wand, M P

    2009-09-01

    High-content flow cytometric screening (FC-HCS) is a 21st Century technology that combines robotic fluid handling, flow cytometric instrumentation, and bioinformatics software, so that relatively large numbers of flow cytometric samples can be processed and analysed in a short period of time. We revisit a recent application of FC-HCS to the problem of cellular signature definition for acute graft-versus-host-disease. Our focus is on automation of the data processing steps using recent advances in statistical methodology. We demonstrate that effective results, on par with those obtained via manual processing, can be achieved using our automatic techniques. Such automation of FC-HCS has the potential to drastically improve diagnosis and biomarker identification.

  17. Design of Flow Systems for Improved Networking and Reduced Noise in Biomolecular Signal Processing in Biocomputing and Biosensing Applications

    PubMed Central

    Verma, Arjun; Fratto, Brian E.; Privman, Vladimir; Katz, Evgeny

    2016-01-01

    We consider flow systems that have been utilized for small-scale biomolecular computing and digital signal processing in binary-operating biosensors. Signal measurement is optimized by designing a flow-reversal cuvette and analyzing the experimental data to theoretically extract the pulse shape, as well as reveal the level of noise it possesses. Noise reduction is then carried out numerically. We conclude that this can be accomplished physically via the addition of properly designed well-mixing flow-reversal cell(s) as an integral part of the flow system. This approach should enable improved networking capabilities and potentially not only digital but analog signal-processing in such systems. Possible applications in complex biocomputing networks and various sense-and-act systems are discussed. PMID:27399702

  18. ITS physical architecture.

    DOT National Transportation Integrated Search

    2002-04-01

    The Physical Architecture identifies the physical subsystems and, architecture flows between subsystems that will implement the processes and support the data flows of the ITS Logical Architecture. The Physical Architecture further identifies the sys...

  19. Material flow data for numerical simulation of powder injection molding

    NASA Astrophysics Data System (ADS)

    Duretek, I.; Holzer, C.

    2017-01-01

    The powder injection molding (PIM) process is a cost-efficient and important net-shape manufacturing process that is not completely understood. For the application of simulation programs to the powder injection molding process, apart from suitable physical models, exact material data and in particular knowledge of the flow behavior are essential in order to get precise numerical results. The flow processes of highly filled polymers are complex. The effects that occur, such as shear flow with yield stress, wall slip, and elastic effects, are very hard to separate. Furthermore, the occurrence of phase separation due to the multi-phase composition of compounds is quite probable. In this work, the flow behavior of a 316L stainless steel feedstock for powder injection molding was investigated. Additionally, the influence of pre-shearing on the flow behavior of PIM feedstocks under practical conditions was examined and evaluated with a special PIM injection molding machine rheometer. In order to better understand key factors of PIM during the injection step, 3D non-isothermal numerical simulations were conducted with a commercial injection molding simulation software using experimental feedstock properties, and the simulation results were compared with the experimental results. The mold filling studies amply illustrate the effect of mold temperature on the filling behavior during the mold filling stage. Moreover, the rheological measurements showed that at low shear rates no zero-shear viscosity was observed; instead, the viscosity continued to increase strongly. This flow behavior could be described very well with the Cross-WLF approach with Herschel-Bulkley extension.
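
    For reference, the Cross-WLF model named in the abstract has a standard form; the Herschel-Bulkley extension is represented here by a simple additive yield-stress term, which is one common variant rather than necessarily the authors' exact formulation. Parameter values are placeholders:

        import numpy as np

        def cross_wlf_hb(gamma_dot, T, D1, A1, A2, T_star, tau_star, n, tau_y):
            """Cross-WLF viscosity with a simple Herschel-Bulkley yield-stress term.

            Parameter values and the exact form of the yield extension are
            feedstock-specific; this is only the generic model shape.
            """
            eta0 = D1 * np.exp(-A1 * (T - T_star) / (A2 + (T - T_star)))  # WLF zero-shear level
            eta_cross = eta0 / (1 + (eta0 * gamma_dot / tau_star) ** (1 - n))
            return eta_cross + tau_y / gamma_dot  # viscosity diverges as shear rate -> 0

        rates = np.logspace(-2, 3, 6)  # shear rates, 1/s
        print(cross_wlf_hb(rates, T=453.0, D1=1e12, A1=30.0, A2=51.6,
                           T_star=373.0, tau_star=1e5, n=0.3, tau_y=500.0))

    The divergence of the yield term at vanishing shear rate mirrors the reported absence of a zero-shear viscosity plateau.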

  20. Regional stochastic generation of streamflows using an ARIMA (1,0,1) process and disaggregation

    USGS Publications Warehouse

    Armbruster, Jeffrey T.

    1979-01-01

    An ARIMA (1,0,1) model was calibrated and used to generate long annual flow sequences at three sites in the Juniata River basin, Pennsylvania. The model preserves the mean, variance, and cross correlations of the observed station data. In addition, it has a desirable blend of both high and low frequency characteristics and therefore is capable of preserving the Hurst coefficient, h. The generated annual flows are disaggregated into monthly sequences using a modification of the Valencia-Schaake model. The low-flow frequency and flow duration characteristics of the generated monthly flows, with length equal to the historical data, compare favorably with the historical data. Once the models were verified, 100-year sequences were generated and analyzed for their low flow characteristics. One-, three-, and six-month low-flow frequencies at recurrence intervals greater than 10 years are generally found to be lower than flows computed from the historical flows. A method is proposed for synthesizing flows at ungaged sites. (Kosco-USGS)
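
    The generating model itself is compact. A sketch of an ARIMA(1,0,1) (equivalently ARMA(1,1)) annual-flow generator, with placeholder parameters standing in for values calibrated to the Juniata stations; disaggregation into monthly sequences (the Valencia-Schaake step) would follow as a separate stage:

        import numpy as np

        def arma11(n, mu, phi, theta, sigma, seed=0):
            """Generate an ARMA(1,1) = ARIMA(1,0,1) annual flow sequence.

            x_t - mu = phi * (x_{t-1} - mu) + e_t - theta * e_{t-1};
            parameters would be calibrated to the observed station statistics.
            """
            rng = np.random.default_rng(seed)
            e = rng.normal(0.0, sigma, n)
            x = np.empty(n)
            x[0] = mu + e[0]
            for t in range(1, n):
                x[t] = mu + phi * (x[t - 1] - mu) + e[t] - theta * e[t - 1]
            return x

        flows = arma11(100, mu=120.0, phi=0.95, theta=0.6, sigma=15.0)  # 100-year trace
        print(flows.mean(), flows.std())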

  1. Visualization of flow during cleaning process on a liquid nanofibrous filter

    NASA Astrophysics Data System (ADS)

    Bílek, P.

    2017-10-01

    This paper deals with visualization of flow during the cleaning process on a nanofibrous filter. Cleaning is a very important part of the filtration process: it extends the lifetime of the filter and improves its filtration properties. Cleaning is carried out on flat-sheet filters, where particles are deposited on the filter surface and form a filtration cake. The cleaning process dislodges the deposited filtration cake, which is loosened from the membrane surface into the retentate flow. The blocked pores in the filter are opened again and the hydrodynamic properties are restored. The presented optical method makes it possible to observe the flow behaviour in a thin laser sheet on the inlet side of a tested filter during the cleaning process, to estimate the local concentration of solid particles, and to obtain new information about the cleaning process. The article describes the cleaning process on nanofibrous membranes for waste water treatment. The hydrodynamic data were compared to the images of the cleaning process.

  2. An integrated study to evaluate debris flow hazard in alpine environment

    NASA Astrophysics Data System (ADS)

    Tiranti, Davide; Crema, Stefano; Cavalli, Marco; Deangeli, Chiara

    2018-05-01

    Debris flows are among the most dangerous natural processes affecting the alpine environment due to their magnitude (volume of transported material) and their long runout. The presence of structures and infrastructures on alluvial fans can lead to severe problems in terms of interactions between debris flows and human activities. Risk mitigation in these areas requires identifying the magnitude, triggers, and propagation of debris flows. Here, we propose an integrated methodology to characterize these phenomena. The methodology consists of three complementary procedures. Firstly, we adopt a classification method based on the propensity of the catchment bedrocks to produce clayey-grained material. The classification allows us to identify the most likely rheology of the process. Secondly, we calculate a sediment connectivity index to estimate the topographic control on the possible coupling between the sediment source areas and the catchment channel network. This step allows for the assessment of the debris supply that is most likely available for the channelized processes. Finally, with the data obtained in the previous steps, we model the propagation and depositional pattern of debris flows with a 3D code based on Cellular Automata. The results of the numerical runs allow us to identify the depositional patterns and the areas potentially involved in the flow processes. This integrated methodology is applied to a test-bed catchment located in the Northwestern Alps. The results indicate that this approach can be regarded as a useful tool to estimate debris flow related potential hazard scenarios in an alpine environment in an expeditious way, without requiring an exhaustive knowledge of the investigated catchment, including data on historical debris flow events.

  3. Identification of varying time scales in sediment transport using the Hilbert-Huang Transform method

    NASA Astrophysics Data System (ADS)

    Kuai, Ken Z.; Tsai, Christina W.

    2012-02-01

    Sediment transport processes vary at a variety of time scales, from seconds, hours, and days to months and years. Multiple time scales exist in the system of flow, sediment transport, and bed elevation change processes. As such, identification and selection of appropriate time scales for flow and sediment processes can assist in formulating a system of flow and sediment governing equations representative of the dynamic interaction of flow and particles at the desired details. Recognizing the importance of different varying time scales in the fluvial processes of sediment transport, we introduce the Hilbert-Huang Transform method (HHT) to the field of sediment transport for time scale analysis. The HHT uses the Empirical Mode Decomposition (EMD) method to decompose a time series into a collection of Intrinsic Mode Functions (IMFs), and uses the Hilbert Spectral Analysis (HSA) to obtain instantaneous frequency data. The EMD extracts the variability of data with different time scales and improves the analysis of data series. The HSA can display the succession of time-varying time scales, which cannot be captured by the often-used Fast Fourier Transform (FFT) method. This study is one of the earlier attempts to introduce this state-of-the-art technique for the multiple time scale analysis of sediment transport processes. Three practical applications of the HHT method for data analysis of both suspended sediment and bedload transport time series are presented. The analysis results show the strong impact of flood waves on the variations of flow and sediment time scales at a large sampling time scale, as well as the impact of flow turbulence on those time scales at a smaller sampling time scale. Our analysis reveals that the existence of multiple time scales in sediment transport processes may be attributed to the fractal nature of sediment transport. The HHT analysis demonstrates that the bedload motion time scale is better represented by the ratio of the water depth to the settling velocity, h/w. In the final part, HHT results are compared with an available time scale formula from the literature.
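
    A minimal sketch of the HHT pipeline, using an EMD implementation from the third-party PyEMD package (an assumption; any EMD routine would do) and scipy's Hilbert transform to estimate the instantaneous frequency of each IMF:

        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD  # assumed third-party package (pip: EMD-signal)

        t = np.linspace(0, 10, 2000)
        # stand-in sediment flux record: slow flood-wave trend plus a faster component
        x = np.sin(2 * np.pi * 0.2 * t) + 0.3 * np.sin(2 * np.pi * 3.0 * t)

        imfs = EMD().emd(x)  # decompose into Intrinsic Mode Functions

        for k, imf in enumerate(imfs):
            analytic = hilbert(imf)
            phase = np.unwrap(np.angle(analytic))
            inst_freq = np.gradient(phase, t) / (2 * np.pi)  # instantaneous frequency, Hz
            print(k, float(np.median(inst_freq)))  # dominant time scale per IMF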

  4. Resilience of riverbed vegetation to uprooting by flow

    NASA Astrophysics Data System (ADS)

    Perona, P.; Crouzy, B.

    2018-03-01

    Riverine ecosystem biodiversity is largely maintained by ecogeomorphic processes including vegetation renewal via uprooting and recovery times to flow disturbances. Plant roots thus heavily contribute to engineering resilience to perturbation of such ecosystems. We show that vegetation uprooting by flow occurs as a fatigue-like mechanism, which statistically requires a given exposure time to imposed riverbed flow erosion rates before the plant collapses. We formulate a physically based stochastic model for the actual plant rooting depth and the time-to-uprooting, which allows us to define plant resilience to uprooting for generic time-dependent flow erosion dynamics. This theory shows that plant resilience to uprooting depends on the time-to-uprooting and that root mechanical anchoring acts as a process memory stored within the plant-soil system. The model is validated against measured data of time-to-uprooting of Avena sativa seedlings with various root lengths under different flow conditions. This allows for assessing the natural variance of the uprooting-by-flow process and to compute the prediction entropy, which quantifies the relative importance of the deterministic and the random components affecting the process.

  5. Two-lane traffic-flow model with an exact steady-state solution.

    PubMed

    Kanai, Masahiro

    2010-12-01

    We propose a stochastic cellular-automaton model for two-lane traffic flow based on the misanthrope process in one dimension. The misanthrope process is a stochastic process allowing for an exact steady-state solution; hence, we have an exact flow-density diagram for two-lane traffic. In addition, we introduce two parameters that indicate, respectively, the driver's driving-lane preference and passing-lane priority. Due to the additional parameters, the model shows a deviation of the density ratio for driving-lane use and a biased lane efficiency in flow. A mean-field approach then explicitly describes the asymmetric flow in terms of the hop rates, the driving-lane preference, and the passing-lane priority. The simulation results are in good agreement with observational data, and we thus estimate these parameters. We conclude that the proposed model successfully produces two-lane traffic flow, particularly with the driving-lane preference and the passing-lane priority.

  6. File-Based Data Flow in the CMS Filter Farm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J.M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
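
    The file-based bookkeeping pattern, in which small JSON documents are published atomically so watchers never observe partial writes, can be sketched as follows; the file naming and field names are invented, not the CMS schema:

        import json, os, tempfile

        def write_json_doc(directory, name, payload):
            """Publish one small bookkeeping document atomically as JSON."""
            path = os.path.join(directory, name)
            tmp = path + ".tmp"
            with open(tmp, "w") as f:
                json.dump(payload, f)
            os.replace(tmp, path)  # atomic rename: watchers never see partial files

        d = tempfile.mkdtemp()
        write_json_doc(d, "run000123_ls0042_stream.jsn",
                       {"run": 123, "lumisection": 42, "events": 18452, "errors": 0})
        print(open(os.path.join(d, "run000123_ls0042_stream.jsn")).read())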

  7. In-situ Condition Monitoring of Components in Small Modular Reactors Using Process and Electrical Signature Analysis. Final report, volume 1. Development of experimental flow control loop, data analysis and plant monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhyaya, Belle; Hines, J. Wesley; Damiano, Brian

    The research and development under this project was focused on the following three major objectives: Objective 1: Identification of critical in-vessel SMR components for remote monitoring and development of their low-order dynamic models, along with a simulation model of an integral pressurized water reactor (iPWR). Objective 2: Development of an experimental flow control loop with motor-driven valves and pumps, incorporating a data acquisition and on-line monitoring interface. Objective 3: Development of stationary and transient signal processing methods for electrical signatures, machinery vibration, and for characterizing process variables for equipment monitoring; this objective includes the development of a data analysis toolbox. The following is a summary of the technical accomplishments under this project:
    - A detailed literature review of various SMR types and electrical signature analysis of motor-driven systems was completed. A bibliography of literature is provided at the end of this report. Assistance was provided by ORNL in identifying some key references.
    - A review of literature on pump-motor modeling and digital signal processing methods was performed.
    - An existing flow control loop was upgraded with new instrumentation, data acquisition hardware, and software. The upgrade included the installation of a new submersible pump driven by a three-phase induction motor. All the sensors were calibrated before full-scale experimental runs were performed.
    - A MATLAB-Simulink model of a three-phase induction motor and pump system was completed. The model was used to simulate normal operation and fault conditions in the motor-pump system, and to identify changes in the electrical signatures.
    - A simulation model of an integral PWR (iPWR) was updated and the MATLAB-Simulink model was validated for known transients. The pump-motor model was interfaced with the iPWR model for testing the impact of primary flow perturbations (upsets) on plant parameters and the pump electrical signatures. Additionally, the reactor simulation is being used to generate normal operation data and data with instrumentation faults and process anomalies. A frequency controller was interfaced with the motor power supply in order to vary the electrical supply frequency. The experimental flow control loop was used to generate operational data under varying motor performance characteristics. Coolant leakage events were simulated by varying the bypass loop flow rate. The accuracy of the motor power calculation was improved by incorporating the power factor, computed from motor current and voltage in each phase of the induction motor.
    - A variety of experimental runs were made for steady-state and transient pump operating conditions. Process, vibration, and electrical signatures were measured using a submersible pump with variable supply frequency. High correlation was seen between the motor current and the pump discharge pressure signal; similarly high correlation was exhibited between pump motor power and flow rate. Wide-band analysis indicated high coherence (in the frequency domain) between motor current and vibration signals.
    - Wide-band operational data from a PWR were acquired from AMS Corporation and used to develop time-series models and to estimate signal spectra and sensor time constants. All the data were from different pressure transmitters in the system, including the primary and secondary loops. These signals were pre-processed using the wavelet transform to filter both low-frequency and high-frequency bands. This technique of signal pre-processing provides minimum distortion of the data and results in a more optimal estimation of the time constants of plant sensors using time-series modeling techniques.

  8. Testing seismic amplitude source location for fast debris-flow detection at Illgraben, Switzerland

    NASA Astrophysics Data System (ADS)

    Walter, Fabian; Burtin, Arnaud; McArdell, Brian W.; Hovius, Niels; Weder, Bianca; Turowski, Jens M.

    2017-06-01

    Heavy precipitation can mobilize tens to hundreds of thousands of cubic meters of sediment in steep Alpine torrents in a short time. The resulting debris flows (mixtures of water, sediment and boulders) move downstream with velocities of several meters per second and have a high destruction potential. Warning protocols for affected communities rely on raising awareness about the debris-flow threat, precipitation monitoring and rapid detection methods. The latter, in particular, is a challenge because debris-flow-prone torrents have their catchments in steep and inaccessible terrain, where instrumentation is difficult to install and maintain. Here we test amplitude source location (ASL) as a processing scheme for seismic network data for early warning purposes. We use debris-flow and noise seismograms from the Illgraben catchment, Switzerland, a torrent system which produces several debris-flow events per year. Automatic in situ detection is currently based on geophones mounted on concrete check dams and radar stage sensors suspended above the channel. The ASL approach has the advantage that it uses seismometers, which can be installed at more accessible locations where a stable connection to mobile phone networks is available for data communication. Our ASL processing uses time-averaged ground vibration amplitudes to estimate the location of the debris-flow front. Applied to continuous data streams, inversion of the seismic amplitude decay throughout the network is robust and efficient, requires no manual identification of seismic phase arrivals and eliminates the need for a local seismic velocity model. We apply the ASL technique to a small debris-flow event on 19 July 2011, which was captured with a temporary seismic monitoring network. The processing rapidly detects the debris-flow event half an hour before arrival at the outlet of the torrent and several minutes before detection by the in situ alarm system. An analysis of continuous seismic records furthermore indicates that detectability of Illgraben debris flows of this size is unaffected by changing environmental and anthropogenic seismic noise and that false detections can be greatly reduced with simple processing steps.
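
    The ASL inversion itself is simple enough to sketch: assume an amplitude decay model, and grid-search for the source position that makes all stations imply the same source amplitude. The decay exponent and attenuation coefficient below are placeholder values, not those calibrated for Illgraben:

        import numpy as np

        def asl_locate(stations, amps, grid, n=1.0, alpha=0.005):
            """Amplitude source location by grid search.

            Model: A_i = A0 * exp(-alpha * r_i) / r_i**n. For each trial source,
            the implied ln(A0) per station should agree; pick the grid node with
            the smallest spread. n and alpha are assumed attenuation parameters.
            """
            best, best_score = None, np.inf
            for p in grid:
                r = np.linalg.norm(stations - p, axis=1) + 1e-6
                lnA0 = np.log(amps) + n * np.log(r) + alpha * r
                score = lnA0.std()  # perfect fit: all stations imply the same A0
                if score < best_score:
                    best, best_score = p, score
            return best

        stations = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000.]])  # meters
        true = np.array([300.0, 700.0])
        r = np.linalg.norm(stations - true, axis=1)
        amps = np.exp(-0.005 * r) / r  # synthetic amplitudes with A0 = 1
        grid = np.array([[x, y] for x in range(0, 1001, 50) for y in range(0, 1001, 50)])
        print(asl_locate(stations, amps, grid))  # ~ [300, 700]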

  9. Conceptual-level workflow modeling of scientific experiments using NMR as a case study

    PubMed Central

    Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R

    2007-01-01

    Background: Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. Results: We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Conclusion: Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow of a biomolecular analysis experiment using NMR spectroscopy. PMID:17263870

  10. Conceptual-level workflow modeling of scientific experiments using NMR as a case study.

    PubMed

    Verdi, Kacy K; Ellis, Heidi Jc; Gryk, Michael R

    2007-01-30

    Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow of a biomolecular analysis experiment using NMR spectroscopy.

  11. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  12. Pulse-Flow Microencapsulation System

    NASA Technical Reports Server (NTRS)

    Morrison, Dennis R.

    2006-01-01

    The pulse-flow microencapsulation system (PFMS) is an automated system that continuously produces a stream of liquid-filled microcapsules for delivery of therapeutic agents to target tissues. Prior microencapsulation systems have relied on batch processes that involve transfer of batches between different apparatuses for different stages of production followed by sampling for acquisition of quality-control data, including measurements of size. In contrast, the PFMS is a single, microprocessor-controlled system that performs all processing steps, including acquisition of quality-control data. The quality-control data can be used as real-time feedback to ensure the production of large quantities of uniform microcapsules.

  13. Modular space station, phase B extension. Information management advanced development. Volume 4: Data processing assembly

    NASA Technical Reports Server (NTRS)

    Gerber, C. R.

    1972-01-01

    The computation and logical functions which are performed by the data processing assembly of the modular space station are defined. The subjects discussed are: (1) requirements analysis, (2) baseline data processing assembly configuration, (3) information flow study, (4) throughput simulation, (5) redundancy study, (6) memory studies, and (7) design requirements specification.

  14. Capillary hydrodynamics and transport processes during phase change in microscale systems

    NASA Astrophysics Data System (ADS)

    Kuznetsov, V. V.

    2017-09-01

    The characteristics of two-phase gas-liquid flow and heat transfer during flow boiling and condensation in micro-scale heat exchangers are discussed in this paper. The results of a numerical simulation of the evaporating liquid film flowing downward in a rectangular minichannel of a two-phase compact heat exchanger are presented, and the peculiarities of microscale heat transport in annular flow with phase change are discussed. The presented model accounts for the capillarity-induced transverse flow of liquid and predicts the microscale heat transport processes when nucleate boiling becomes suppressed. The simultaneous influence of forced convection, nucleate boiling, and liquid film evaporation during flow boiling in plate-fin heat exchangers is considered. An equation for predicting flow boiling heat transfer at low-flux conditions is presented and verified using experimental data.

  15. Massively parallel data processing for quantitative total flow imaging with optical coherence microscopy and tomography

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo

    2017-08-01

    We present an application of massively parallel processing of quantitative flow measurement data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements.
    Catalogue identifier: AFBT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU GPLv3
    No. of lines in distributed program, including test data, etc.: 913552
    No. of bytes in distributed program, including test data, etc.: 270876249
    Distribution format: tar.gz
    Programming language: CUDA/C, MATLAB
    Computer: Intel x64 CPU, GPU supporting CUDA technology
    Operating system: 64-bit Windows 7 Professional
    Has the code been vectorized or parallelized?: Yes, CPU code has been vectorized in MATLAB, CUDA code has been parallelized.
    RAM: Dependent on user parameters, typically between several gigabytes and several tens of gigabytes
    Classification: 6.5, 18
    Nature of problem: Speed-up of data processing in optical coherence microscopy
    Solution method: Utilization of GPU for massively parallel data processing
    Additional comments: Compiled DLL library with source code and documentation, example of utilization (MATLAB script with raw data)
    Running time: 1.8 s for one B-scan (150× faster in comparison to the CPU data processing time)

  16. Features of free software packages in flow cytometry: a comparison between four non-commercial software sources.

    PubMed

    Sahraneshin Samani, Fazel; Moore, Jodene K; Khosravani, Pardis; Ebrahimi, Marzieh

    2014-08-01

    Flow cytometers designed to analyze large particles are enabling new applications in biology. Data analysis is a critical component of the FCM process. In this article we compare features of four free software packages, including WinMDI, Cyflogic, Flowing software, and Cytobank.

  17. Application of the Hydroecological Integrity Assessment Process for Missouri Streams

    USGS Publications Warehouse

    Kennen, Jonathan G.; Henriksen, James A.; Heasley, John; Cade, Brian S.; Terrell, James W.

    2009-01-01

    Natural flow regime concepts and theories have established the justification for maintaining or restoring the range of natural hydrologic variability so that physiochemical processes, native biodiversity, and the evolutionary potential of aquatic and riparian assemblages can be sustained. A synthesis of recent research advances in hydroecology, coupled with stream classification using hydroecologically relevant indices, has produced the Hydroecological Integrity Assessment Process (HIP). HIP consists of (1) a regional classification of streams into hydrologic stream types based on flow data from long-term gaging-station records for relatively unmodified streams, (2) an identification of stream-type specific indices that address 11 subcomponents of the flow regime, (3) an ability to establish environmental flow standards, (4) an evaluation of hydrologic alteration, and (5) a capacity to conduct alternative analyses. The process starts with the identification of a hydrologic baseline (reference condition) for selected locations, uses flow data from a stream-gage network, and proceeds to classify streams into hydrologic stream types. Concurrently, the analysis identifies a set of non-redundant and ecologically relevant hydrologic indices for 11 subcomponents of flow for each stream type. Furthermore, regional hydrologic models for synthesizing flow conditions across a region and the development of flow-ecology response relations for each stream type can be added to further enhance the process. The application of HIP to Missouri streams identified five stream types ((1) intermittent, (2) perennial runoff-flashy, (3) perennial runoff-moderate baseflow, (4) perennial groundwater-stable, and (5) perennial groundwater-super stable). Two Missouri-specific computer software programs were developed: (1) a Missouri Hydrologic Assessment Tool (MOHAT) which is used to establish a hydrologic baseline, provide options for setting environmental flow standards, and compare past and proposed hydrologic alterations; and (2) a Missouri Stream Classification Tool (MOSCT) designed for placing previously unclassified streams into one of the five pre-defined stream types.
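
    Hydrologic indices of the kind HIP screens are straightforward to compute from a daily discharge series. An illustrative handful (these particular indices are examples, not the HIP index set itself):

        import numpy as np

        def flow_indices(q):
            """A few ecologically relevant flow indices from a daily discharge series.

            Index choices here are illustrative only.
            """
            q = np.asarray(q, float)
            q7 = np.convolve(q, np.ones(7) / 7, mode="valid")  # 7-day moving average
            return {
                "mean": q.mean(),
                "cv": q.std() / q.mean(),             # overall variability
                "q90": np.percentile(q, 10),          # low-flow magnitude exceeded 90% of the time
                "7day_min": q7.min(),                 # annual 7-day minimum flow
                "high_pulse_count": int(((q[1:] > 3 * np.median(q)) &
                                         (q[:-1] <= 3 * np.median(q))).sum()),
            }

        # synthetic one-year series with a seasonal cycle
        print(flow_indices(10 + 5 * np.sin(np.linspace(0, 2 * np.pi, 365))))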

  18. 40 CFR 98.426 - Data reporting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... flow meter in your process chain in relation to the points of CO2 stream capture, dehydration... concentration. (7) The location of the flow meter in your process chain in relation to the points of CO2 stream... meter(s) in metric tons. (iii) The total annual CO2 mass supplied in metric tons. (iv) The location of...

  19. 40 CFR 98.426 - Data reporting requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... flow meter in your process chain in relation to the points of CO2 stream capture, dehydration... concentration. (7) The location of the flow meter in your process chain in relation to the points of CO2 stream... meter(s) in metric tons. (iii) The total annual CO2 mass supplied in metric tons. (iv) The location of...

  20. 40 CFR 98.426 - Data reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... flow meter in your process chain in relation to the points of CO2 stream capture, dehydration... concentration. (7) The location of the flow meter in your process chain in relation to the points of CO2 stream... meter(s) in metric tons. (iii) The total annual CO2 mass supplied in metric tons. (iv) The location of...

  1. A Bayesian model for highly accelerated phase-contrast MRI.

    PubMed

    Rich, Adam; Potter, Lee C; Jin, Ning; Ash, Joshua; Simonetti, Orlando P; Ahmad, Rizwan

    2016-08-01

    Phase-contrast magnetic resonance imaging is a noninvasive tool to assess cardiovascular disease by quantifying blood flow; however, low data acquisition efficiency limits the spatial and temporal resolutions, real-time application, and extensions to four-dimensional flow imaging in clinical settings. We propose a new data processing approach called Reconstructing Velocity Encoded MRI with Approximate message passing aLgorithms (ReVEAL) that accelerates the acquisition by exploiting data structure unique to phase-contrast magnetic resonance imaging. The proposed approach models physical correlations across space, time, and velocity encodings. The proposed Bayesian approach exploits the relationships in both magnitude and phase among velocity encodings. A fast iterative recovery algorithm is introduced based on message passing. For validation, prospectively undersampled data are processed from a pulsatile flow phantom and five healthy volunteers. The proposed approach is in good agreement, quantified by peak velocity and stroke volume (SV), with reference data for acceleration rates R≤10. For SV, Pearson r≥0.99 for phantom imaging (n = 24) and r≥0.96 for prospectively accelerated in vivo imaging (n = 10) for R≤10. The proposed approach enables accurate quantification of blood flow from highly undersampled data. The technique is extensible to four-dimensional flow imaging, where higher acceleration may be possible due to additional redundancy. Magn Reson Med 76:689-701, 2016. © 2015 Wiley Periodicals, Inc.

  2. Fuel Spray Diagnostics

    NASA Technical Reports Server (NTRS)

    Humenik, F. M.; Bosque, M. A.

    1983-01-01

    A fundamental experimental data base for turbulent flow mixing models is provided, enabling better prediction of more complex turbulent chemically reacting flows. Analytical application to combustor design is also provided, along with a better fundamental understanding of the combustion process.

  3. Communication-Efficient Arbitration Models for Low-Resolution Data Flow Computing

    DTIC Science & Technology

    1988-12-01

    Given graph G = (V, E), weights w(v) for each v ∈ V and L(e) for each e ∈ E, and positive integers B and J, find a partition of V into disjoint...MIT/LCS/TR-218, Cambridge, Mass. Agerwala, Tilak, February 1982, "Data Flow Systems", Computer, pp. 10-13. Babb, Robert G., July 1984, "Parallel...Processing with Large-Grain Data Flow Techniques," IEEE Computer 17, 7, pp. 55-61. Babb, Robert G., II, Lise Storc, and William C. Ragsdale, 1986, "A Large

  4. Natural Analogs for the Unsaturated Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Simmons; A. Unger; M. Murrell

    2000-03-08

    The purpose of this Analysis/Model Report (AMR) is to document natural and anthropogenic (human-induced) analog sites and processes that are applicable to flow and transport processes expected to occur at the potential Yucca Mountain repository in order to build increased confidence in modeling processes of Unsaturated Zone (UZ) flow and transport. This AMR was prepared in accordance with ''AMR Development Plan for U0135, Natural Analogs for the UZ'' (CRWMS 1999a). Knowledge from analog sites and processes is used as corroborating information to test and build confidence in flow and transport models of Yucca Mountain, Nevada. This AMR supports the Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR) and the Yucca Mountain Site Description. The objectives of this AMR are to test and build confidence in the representation of UZ processes in numerical models utilized in the UZ Flow and Transport Model. This is accomplished by: (1) applying data from Box Canyon, Idaho in simulations of UZ flow using the same methodologies incorporated in the Yucca Mountain UZ Flow and Transport Model to assess the fracture-matrix interaction conceptual model; (2) providing a preliminary basis for analysis of radionuclide transport at Peña Blanca, Mexico as an analog of radionuclide transport at Yucca Mountain; and (3) synthesizing existing information from natural analog studies to provide corroborating evidence for representation of ambient and thermally coupled UZ flow and transport processes in the UZ Model.

  5. Particle-sampling statistics in laser anemometers Sample-and-hold systems and saturable systems

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Jensen, A. S.

    1983-01-01

    The effect of the data-processing system on the particle statistics obtained with laser anemometry of flows containing suspended particles is examined. Attention is given to the sample-and-hold processor, a pseudo-analog device which retains the last measurement until a new measurement is made, followed by time-averaging of the data. The second system considered features a dead time, i.e., a saturable system with a significant reset time and storage in a data buffer. It is noted that the saturable system operates independently of the particle arrival rate. The probabilities of a particle arrival in a given time period are calculated for both processing systems. It is shown that the system outputs depend on the mean particle flow rate, the flow correlation time, and the flow statistics, indicating that the particle density affects both systems. The results are significant for instances of good correlation between the particle density and velocity, such as occurs near the edge of a jet.
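
    As a rough illustration of the sample-and-hold behaviour discussed above, the sketch below simulates flux-biased (velocity-proportional) Poisson particle arrivals and compares the plain arithmetic mean of the samples with the sample-and-hold time average. All rates and waveforms are invented for the demonstration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic "true" velocity record: sinusoidal flow on a fine time grid.
        dt = 1e-4
        t = np.arange(0.0, 5.0, dt)
        u = 10.0 + 3.0 * np.sin(2 * np.pi * 2.0 * t)        # m/s

        # Particle arrivals: rate proportional to |u| (flux bias), thinned Poisson.
        rate0 = 50.0                                        # arrivals/s per (m/s), assumed
        p_arrival = np.clip(rate0 * np.abs(u) * dt, 0, 1)
        arrivals = rng.random(t.size) < p_arrival

        # Sample-and-hold: hold the last measured velocity between arrivals
        # (before the first arrival we simply hold the first measurement).
        held = np.empty_like(u)
        last = u[np.argmax(arrivals)]
        for i in range(t.size):
            if arrivals[i]:
                last = u[i]
            held[i] = last

        print("arithmetic mean of samples :", u[arrivals].mean())  # flux-biased high
        print("sample-and-hold time average:", held.mean())        # closer to true mean
        print("true time average           :", u.mean())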

  6. Performance study of a data flow architecture

    NASA Technical Reports Server (NTRS)

    Adams, George

    1985-01-01

    Teams of scientists studied data flow concepts, static data flow machine architecture, and the VAL language. Each team mapped its application onto the machine and coded it in VAL. The principal findings of the study were: (1) Five of the seven applications used the full power of the target machine. The galactic simulation and multigrid fluid flow teams found that a significantly smaller version of the machine (16 processing elements) would suffice. (2) A number of machine design parameters including processing element (PE) function unit numbers, array memory size and bandwidth, and routing network capability were found to be crucial for optimal machine performance. (3) The study participants readily acquired VAL programming skills. (4) Participants learned that application-based performance evaluation is a sound method of evaluating new computer architectures, even those that are not fully specified. During the course of the study, participants developed models for using computers to solve numerical problems and for evaluating new architectures. These models form the bases for future evaluation studies.

  7. Dose-Dependent Thresholds of 10-ns Electric Pulse Induced Plasma Membrane Disruption and Cytotoxicity in Multiple Cell Lines

    DTIC Science & Technology

    2011-01-01

    normalized to parallel controls. Flow Cytometry and Confocal Microscopy Upon exposure to 10-ns EP, aliquots of the cellular suspension were added to a tube...Survival data was processed and plotted using Grapher software (Golden Software, Golden, Colorado). Flow cytometry results were processed in C6 software...Accuri Cytometers, Inc., Ann Arbor, MI) and FCSExpress software (DeNovo Software, Los Angeles, CA). Final analysis and presentation of flow cytometry

  8. Estimating Discharge and Nonpoint Source Nitrate Loading to Streams From Three End-Member Pathways Using High-Frequency Water Quality Data

    NASA Astrophysics Data System (ADS)

    Miller, Matthew P.; Tesoriero, Anthony J.; Hood, Krista; Terziotti, Silvia; Wolock, David M.

    2017-12-01

    The myriad hydrologic and biogeochemical processes taking place in watersheds across space and time are integrated and reflected in the quantity and quality of water in streams and rivers. Collection of high-frequency water quality data with sensors in surface waters provides new opportunities to disentangle these processes and quantify sources and transport of water and solutes in the coupled groundwater-surface water system. A new approach for separating the streamflow hydrograph into three components was developed and coupled with high-frequency nitrate data to estimate time-variable nitrate loads from chemically dilute quick flow, chemically concentrated quick flow, and slow-flow groundwater end-member pathways for periods of up to 2 years in a groundwater-dominated and a quick-flow-dominated stream in central Wisconsin, using only streamflow and in-stream water quality data. The dilute and concentrated quick flow end-members were distinguished using high-frequency specific conductance data. Results indicate that dilute quick flow contributed less than 5% of the nitrate load at both sites, whereas 89 ± 8% of the nitrate load at the groundwater-dominated stream was from slow-flow groundwater, and 84 ± 25% of the nitrate load at the quick-flow-dominated stream was from concentrated quick flow. Concentrated quick flow nitrate concentrations varied seasonally at both sites, with peak concentrations in the winter that were 2-3 times greater than minimum concentrations during the growing season. Application of this approach provides an opportunity to assess stream vulnerability to nonpoint source nitrate loading and expected stream responses to current or changing conditions and practices in watersheds.
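
    The authors' separation method is not reproduced here; the sketch below only illustrates the general idea under stated assumptions: a one-pass recursive digital filter (Lyne-Hollick form) splits slow flow from quick flow, and a hypothetical specific-conductance threshold splits quick flow into dilute and concentrated fractions.

        import numpy as np

        def baseflow_filter(q, alpha=0.925):
            """One-pass Lyne-Hollick recursive digital filter (illustrative only)."""
            quick = np.zeros_like(q)
            for i in range(1, q.size):
                quick[i] = alpha * quick[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
                quick[i] = min(max(quick[i], 0.0), q[i])  # keep components physical
            return q - quick, quick                        # (slow flow, quick flow)

        # Synthetic daily streamflow (m^3/s) and specific conductance (uS/cm).
        q = np.array([5, 5, 20, 45, 30, 15, 8, 6, 5, 5], dtype=float)
        sc = np.array([300, 300, 120, 90, 150, 220, 280, 295, 300, 300], dtype=float)

        slow, quick = baseflow_filter(q)
        dilute_mask = sc < 150                             # hypothetical SC threshold
        dilute_quick = np.where(dilute_mask, quick, 0.0)
        concentrated_quick = quick - dilute_quick

        print("slow-flow fraction        :", slow.sum() / q.sum())
        print("dilute quick-flow fraction:", dilute_quick.sum() / q.sum())
        print("concentrated quick-flow   :", concentrated_quick.sum() / q.sum())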

  9. Understanding the Origin of Species with Genome-Scale Data: the Role of Gene Flow

    PubMed Central

    Sousa, Vitor; Hey, Jody

    2017-01-01

    As it becomes easier to sequence multiple genomes from closely related species, evolutionary biologists working on speciation are struggling to get the most out of very large population-genomic data sets. Such data hold the potential to resolve evolutionary biology’s long-standing questions about the role of gene exchange in species formation. In principle the new population genomic data can be used to disentangle the conflicting roles of natural selection and gene flow during the divergence process. However there are great challenges in taking full advantage of such data, especially with regard to including recombination in genetic models of the divergence process. Current data, models, methods and the potential pitfalls in using them will be considered here. PMID:23657479

  10. Thermal imaging for cold air flow visualisation and analysis

    NASA Astrophysics Data System (ADS)

    Grudzielanek, M.; Pflitsch, A.; Cermak, J.

    2012-04-01

    In this work we present first applications of a thermal imaging system for animated visualization and analysis of cold air flow in field studies. The development of mobile thermal imaging systems advanced very fast in the last decades. The surface temperature of objects, which is detected with long-wave infrared radiation, affords conclusions in different problems of research. Modern thermal imaging systems allow infrared picture-sequences and a following data analysis; the systems are not exclusive imaging methods like in the past. Thus, the monitoring and analysing of dynamic processes became possible. We measured the cold air flow on a sloping grassland area with standard methods (sonic anemometers and temperature loggers) plus a thermal imaging system measuring in the range from 7.5 to 14µm. To analyse the cold air with the thermal measurements, we collected the surface infrared temperatures at a projection screen, which was located in cold air flow direction, opposite the infrared (IR) camera. The intention of using a thermal imaging system for our work was: 1. to get a general idea of practicability in our problem, 2. to assess the value of the extensive and more detailed data sets and 3. to optimise visualisation. The results were very promising. Through the possibility of generating time-lapse movies of the image sequences in time scaling, processes of cold air flow, like flow waves, turbulence and general flow speed, can be directly identified. Vertical temperature gradients and near-ground inversions can be visualised very well. Time-lapse movies will be presented. The extensive data collection permits a higher spatial resolution of the data than standard methods, so that cold air flow attributes can be explored in much more detail. Time series are extracted from the IR data series, analysed statistically, and compared to data obtained using traditional systems. Finally, we assess the usefulness of the additional measurement of cold air flow with thermal imaging systems.

  11. Indirectly Estimating International Net Migration Flows by Age and Gender: The Community Demographic Model International Migration (CDM-IM) Dataset

    PubMed Central

    Nawrotzki, Raphael J.; Jiang, Leiwen

    2015-01-01

    Although data for the total number of international migrant flows is now available, no global dataset concerning demographic characteristics, such as the age and gender composition of migrant flows exists. This paper reports on the methods used to generate the CDM-IM dataset of age and gender specific profiles of bilateral net (not gross) migrant flows. We employ raw data from the United Nations Global Migration Database and estimate net migrant flows by age and gender between two time points around the year 2000, accounting for various demographic processes (fertility, mortality). The dataset contains information on 3,713 net migrant flows. Validation analyses against existing data sets and the historical, geopolitical context demonstrate that the CDM-IM dataset is of reasonably high quality. PMID:26692590
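
    A minimal sketch of the demographic accounting that underlies such net-flow estimates, using hypothetical cohort sizes and survival ratios: net migration for a cohort is its observed size at the second time point minus the survivors expected from the first.

        import numpy as np

        # Hypothetical female population by 5-year age group at t0 and t1 = t0 + 5.
        pop_t0 = np.array([100_000, 95_000, 90_000, 85_000])  # ages 0-4 .. 15-19
        pop_t1 = np.array([ 99_000, 98_500, 93_000, 84_000])
        surv   = np.array([0.998, 0.997, 0.996, 0.995])       # 5-year survival ratios

        # Cohorts age forward one group over the interval; net migration for the
        # cohort aged x..x+4 at t0 is its observed size at t1 minus expected survivors.
        expected = pop_t0[:-1] * surv[:-1]
        net_migration = pop_t1[1:] - expected
        print(net_migration)   # positive = net in-migration for that cohort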

  12. Predictions of spray combustion interactions

    NASA Technical Reports Server (NTRS)

    Shuen, J. S.; Solomon, A. S. P.; Faeth, G. M.

    1984-01-01

    Mean and fluctuating phase velocities; mean particle mass flux; particle size; and mean gas-phase Reynolds stress, composition and temperature were measured in stationary, turbulent, axisymmetric flows which conform to the boundary-layer approximations while having well-defined initial and boundary conditions: dilute particle-laden jets, nonevaporating sprays, and evaporating sprays injected into a still-air environment. Three models of the processes, typical of current practice, were evaluated. The local homogeneous flow and deterministic separated flow models did not provide very satisfactory predictions over the present data base. In contrast, the stochastic separated flow model generally provided good predictions and appears to be an attractive approach for treating nonlinear interphase transport processes in turbulent flows containing particles (drops).

  13. Integrated Water Flow Model (IWFM), A Tool For Numerically Simulating Linked Groundwater, Surface Water And Land-Surface Hydrologic Processes

    NASA Astrophysics Data System (ADS)

    Dogrul, E. C.; Brush, C. F.; Kadir, T. N.

    2006-12-01

    The Integrated Water Flow Model (IWFM) is a comprehensive input-driven application for simulating groundwater flow, surface water flow and land-surface hydrologic processes, and interactions between these processes, developed by the California Department of Water Resources (DWR). IWFM couples a 3-D finite element groundwater flow process and 1-D land surface, lake, stream flow and vertical unsaturated-zone flow processes which are solved simultaneously at each time step. The groundwater flow system is simulated as a multilayer aquifer system with a mixture of confined and unconfined aquifers separated by semiconfining layers. The groundwater flow process can simulate changing aquifer conditions (confined to unconfined and vice versa), subsidence, tile drains, injection wells and pumping wells. The land surface process calculates elemental water budgets for agricultural, urban, riparian and native vegetation classes. Crop water demands are dynamically calculated using distributed soil properties, land use and crop data, and precipitation and evapotranspiration rates. The crop mix can also be automatically modified as a function of pumping lift using logit functions. Surface water diversions and groundwater pumping can each be specified, or can be automatically adjusted at run time to balance water supply with water demand. The land-surface process also routes runoff to streams and deep percolation to the unsaturated zone. Surface water networks are specified as a series of stream nodes (coincident with groundwater nodes) with specified bed elevation, conductance and stage-flow relationships. Stream nodes are linked to form stream reaches. Stream inflows at the model boundary, surface water diversion locations, and one or more surface water deliveries per location are specified. IWFM routes stream flows through the network, calculating groundwater-surface water interactions, accumulating inflows from runoff, and allocating available stream flows to meet specified or calculated deliveries. IWFM utilizes a very straight-forward input file structure, allowing rapid development of complex simulations. A key feature of IWFM is a new algorithm for computation of groundwater flow across element faces. Enhancements to version 3.0 include automatic time-tracking of input and output data sets, linkage with the HEC-DSS database, and dynamic crop allocation using logit functions. Utilities linking IWFM to the PEST automated calibration suite are also available. All source code, executables and documentation are available for download from the DWR web site. IWFM is currently being used to develop hydrologic simulations of California's Central Valley (C2VSIM); the west side of California's San Joaquin Valley (WESTSIM); Butte County, CA; Solano County, CA; Merced County, CA; and the Oregon side of the Walla Walla River Basin.

  14. Using artificial intelligence to improve identification of nanofluid gas-liquid two-phase flow pattern in mini-channel

    NASA Astrophysics Data System (ADS)

    Xiao, Jian; Luo, Xiaoping; Feng, Zhenfei; Zhang, Jinxin

    2018-01-01

    This work combines fuzzy logic and a support vector machine (SVM) with a principal component analysis (PCA) to create an artificial-intelligence system that identifies nanofluid gas-liquid two-phase flow states in a vertical mini-channel. Flow-pattern recognition requires detailed knowledge of the process operation; computer simulation and image processing can be used to automate the description of flow patterns in nanofluid gas-liquid two-phase flow. These methods are combined here to improve the accuracy with which the flow pattern of a nanofluid gas-liquid two-phase flow is identified. To acquire images of nanofluid gas-liquid two-phase flow patterns of flow boiling, a high-speed digital camera was used to record four different types of flow-pattern images, namely annular flow, bubbly flow, churn flow, and slug flow. The textural features extracted by processing the images of nanofluid gas-liquid two-phase flow patterns are used as inputs to various identification schemes such as fuzzy logic, SVM, and SVM with PCA to identify the type of flow pattern. The results indicate that the SVM with PCA-reduced features provides the best identification accuracy and requires less calculation time than the other two schemes. The data reported herein should be very useful for the design and operation of industrial applications.
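
    A minimal sketch of the PCA-plus-SVM scheme on synthetic stand-in features (the study extracted textural features from high-speed images; the feature values and class structure below are invented):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)

        # Synthetic stand-in for textural features of four flow patterns:
        # 0=annular, 1=bubbly, 2=churn, 3=slug (labels hypothetical).
        n_per_class, n_features = 50, 12
        X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
                       for c in range(4)])
        y = np.repeat(np.arange(4), n_per_class)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        # PCA reduces the feature space before the SVM, as in the reported scheme.
        clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))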

  15. An Overview of Ares-I CFD Ascent Aerodynamic Data Development And Analysis Based on USM3D

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad; Parlette, Edward B.

    2011-01-01

    An overview of the computational results obtained from the NASA Langley developed unstructured grid, Reynolds-averaged Navier-Stokes flow solver USM3D, in support of the Ares-I project within NASA's Constellation program, is presented. The numerical data are obtained for representative flow conditions pertinent to the ascent phase of the trajectory at both wind tunnel and flight Reynolds number without including any propulsion effects. The USM3D flow solver has been designated to have the primary role within the Ares-I project in developing the computational aerodynamic data for the vehicle while other flow solvers, namely OVERFLOW and FUN3D, have supporting roles to provide complementary results for fewer cases as part of the verification process to ensure code-to-code solution consistency. Similarly, as part of the solution validation efforts, the predicted numerical results are correlated with the aerodynamic wind tunnel data that have been generated within the project in the past few years. Sample aerodynamic results and the processes established for the computational solution/data development for the evolving Ares-I design cycles are presented.

  16. [Software-based visualization of patient flow at a university eye clinic].

    PubMed

    Greb, O; Abou Moulig, W; Hufendiek, K; Junker, B; Framme, C

    2017-03-01

    This article presents a method for visualization and navigation of patient flow in outpatient eye clinics with a high level of complexity. A network-based software solution was developed targeting long-term process optimization by structural analysis and temporal coordination of process navigation. Each examination unit receives a separate waiting list of patients in which the patient flow for every patient is recorded in a timeline. Time periods and points in time can be executed by mouse clicks and the desired diagnostic procedure can be entered. Recent progress in any of these diagnostic requests, as well as a variety of information on patient progress, is collated and drawn into the corresponding timeline, which can be viewed by any of the personnel involved. The software called TimeElement has been successfully tested in practical implementation for several months. As an example the patient flow regarding time stamps of defined events for intravitreous injections on 250 patients was recorded and an average attendance time of 169.71 min was found, whereby the time was also automatically recorded for each individual stage. Recording of patient flow data is a fundamental component of patient flow management, waiting time reduction, and patient flow navigation with time and coordination, in particular regarding timeline-based visualization for each individual patient. Long-term changes in process management can be planned and evaluated by comparing patient flow data. As using the software itself causes structural changes within the organization, a questionnaire is being planned for appraisal by the personnel involved.

  17. Mashups over the Deep Web

    NASA Astrophysics Data System (ADS)

    Hornung, Thomas; Simon, Kai; Lausen, Georg

    Combining information from different Web sources often results in a tedious and repetitive process; e.g., even simple information requests might require iterating over the result list of one Web query and using each single result as input for a subsequent query. One approach to such chained queries is data-centric mashups, which allow the data flow to be modeled visually as a graph, where the nodes represent the data sources and the edges the data flow.
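
    A minimal sketch of such a chained-query data flow, with two hypothetical stand-in query functions as nodes and list-valued results flowing along the edges:

        from typing import Callable, Iterable

        def chain(source: Iterable, *stages: Callable):
            """Feed every item produced by one node into the next node (an edge)."""
            items = source
            for stage in stages:
                items = [out for item in items for out in stage(item)]
            return items

        # Hypothetical stand-ins for two Web queries.
        def search_authors(topic):          # first query: topic -> authors
            return {"data flow": ["Smith", "Lee"]}.get(topic, [])

        def search_papers(author):          # second query: author -> papers
            return {"Smith": ["Paper A"], "Lee": ["Paper B", "Paper C"]}[author]

        print(chain(["data flow"], search_authors, search_papers))
        # ['Paper A', 'Paper B', 'Paper C']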

  18. Establishing a process for conducting cross-jurisdictional record linkage in Australia.

    PubMed

    Moore, Hannah C; Guiver, Tenniel; Woollacott, Anthony; de Klerk, Nicholas; Gidding, Heather F

    2016-04-01

    To describe the realities of conducting a cross-jurisdictional data linkage project involving state and Australian Government-based data collections to inform future national data linkage programs of work. We outline the processes involved in conducting a Proof of Concept data linkage project including the implementation of national data integration principles, data custodian and ethical approval requirements, and establishment of data flows. The approval process involved nine approval and regulatory bodies and took more than two years. Data will be linked across 12 datasets involving three data linkage centres. A framework was established to allow data to flow between these centres while maintaining the separation principle that serves to protect the privacy of the individual. This will be the first project to link child immunisation records from an Australian Government dataset to other administrative health datasets for a population cohort covering 2 million births in two Australian states. Although the project experienced some delays, positive outcomes were realised, primarily the development of strong collaborations across key stakeholder groups including community engagement. We have identified several recommendations and enhancements to this now established framework to further streamline the process for data linkage studies involving Australian Government data. © 2015 Public Health Association of Australia.

  19. Microparticle Flow Sensor

    NASA Technical Reports Server (NTRS)

    Morrison, Dennis R.

    2005-01-01

    The microparticle flow sensor (MFS) is a system for identifying and counting microscopic particles entrained in a flowing liquid. The MFS includes a transparent, optoelectronically instrumented laminar-flow chamber and a computer for processing instrument-readout data. The MFS could be used to count microparticles (including micro-organisms) in diverse applications -- for example, production of microcapsules, treatment of wastewater, pumping of industrial chemicals, and identification of ownership of liquid products.

  20. Experimental and Numerical Modeling of Fluid Flow Processes in Continuous Casting: Results from the LIMMCAST-Project

    NASA Astrophysics Data System (ADS)

    Timmel, K.; Kratzsch, C.; Asad, A.; Schurmann, D.; Schwarze, R.; Eckert, S.

    2017-07-01

    The present paper reports on numerical simulations and model experiments concerned with the fluid flow in the continuous casting process of steel. This work was carried out in the LIMMCAST project in the framework of the Helmholtz alliance LIMTECH. A brief description of the LIMMCAST facilities used for the experimental modeling at HZDR is given here. Ultrasonic and inductive techniques and X-ray radioscopy were employed for flow measurements or visualizations of two-phase flow regimes occurring in the submerged entry nozzle and the mold. Corresponding numerical simulations were performed at TUBAF taking into account the dimensions and properties of the model experiments. Numerical models were successfully validated using the experimental data base. The reasonable and in many cases excellent agreement of numerical with experimental data allows the models to be extrapolated to real casting configurations. Exemplary results are presented here showing the effect of electromagnetic brakes or electromagnetic stirrers on the flow in the mold and illustrating the properties of two-phase flows resulting from an Ar injection through the stopper rod.

  1. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and which communicates with a common global data memory. A new graph-theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.

  2. Metal droplet erosion and shielding plasma layer under plasma flows typical of transient processes in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martynenko, Yu. V., E-mail: Martynenko-YV@nrcki.ru

    It is shown that the shielding plasma layer and metal droplet erosion in tokamaks are closely interrelated, because shielding plasma forms from the evaporated metal droplets, while droplet erosion is caused by the shielding plasma flow over the melted metal surface. Analysis of experimental data and theoretical models of these processes is presented.

  3. A standardised graphic method for describing data privacy frameworks in primary care research using a flexible zone model.

    PubMed

    Kuchinke, Wolfgang; Ohmann, Christian; Verheij, Robert A; van Veen, Evert-Ben; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C

    2014-12-01

    To develop a model describing core concepts and principles of data flow, data privacy and confidentiality, in a simple and flexible way, using concise process descriptions and a diagrammatic notation applied to research workflow processes. The model should help to generate robust data privacy frameworks for research done with patient data. Based on an exploration of EU legal requirements for data protection and privacy, data access policies, and existing privacy frameworks of research projects, basic concepts and common processes were extracted, described and incorporated into a model with a formal graphical representation and a standardised notation. The Unified Modelling Language (UML) notation was enriched with workflow symbols and custom symbols to enable the representation of extended data flow requirements, data privacy and data security requirements, and privacy enhancing techniques (PET), and to allow privacy threat analysis for research scenarios. Our model is built upon the concept of three privacy zones (Care Zone, Non-care Zone and Research Zone) containing databases and data transformation operators, such as data linkers and privacy filters. Using these model components, a risk gradient for moving data from a zone of high risk for patient identification to a zone of low risk can be described. The model was applied to the analysis of data flows in several general clinical research use cases and two research scenarios from the TRANSFoRm project (e.g., finding patients for clinical research and linkage of databases). The model was validated by representing research done with the NIVEL Primary Care Database in the Netherlands. The model allows analysis of data privacy and confidentiality issues for research with patient data in a structured way and provides a framework to specify a privacy compliant data flow, to communicate privacy requirements and to identify weak points for an adequate implementation of data privacy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
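
    A minimal sketch of the zone concept under stated assumptions (the zone risk ranks and field names are illustrative; the paper's notation is graphical, not code): a privacy filter admits transfers only down the risk gradient and strips direct identifiers on the way.

        from dataclasses import dataclass

        # Risk ranks for the three zones of the model (higher = greater
        # re-identification risk); names follow the abstract, ranks are assumed.
        ZONE_RISK = {"Care Zone": 2, "Non-care Zone": 1, "Research Zone": 0}

        @dataclass
        class Record:
            zone: str
            identifiers: dict   # direct identifiers, e.g. {"name": ...}
            payload: dict       # clinical/research variables

        def privacy_filter(rec: Record, target_zone: str) -> Record:
            """Move a record down the risk gradient, stripping direct identifiers."""
            if ZONE_RISK[target_zone] >= ZONE_RISK[rec.zone]:
                raise ValueError("transfers must move toward lower-risk zones")
            return Record(zone=target_zone, identifiers={}, payload=dict(rec.payload))

        r = Record("Care Zone", {"name": "J. Doe"}, {"dx": "I10"})
        print(privacy_filter(r, "Research Zone"))   # identifiers removed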

  4. A Multi-Disciplinary Approach to Remote Sensing through Low-Cost UAVs.

    PubMed

    Calvario, Gabriela; Sierra, Basilio; Alarcón, Teresa E; Hernandez, Carmen; Dalmau, Oscar

    2017-06-16

    The use of Unmanned Aerial Vehicles (UAVs) based on remote sensing has generated low cost monitoring, since the data can be acquired quickly and easily. This paper reports the experience related to agave crop analysis with a low cost UAV. The data were processed by traditional photogrammetric flow and data extraction techniques were applied to extract new layers and separate the agave plants from weeds and other elements of the environment. Our proposal combines elements of photogrammetry, computer vision, data mining, geomatics and computer science. This fusion leads to very interesting results in agave control. This paper aims to demonstrate the potential of UAV monitoring in agave crops and the importance of information processing with reliable data flow.

  5. A Multi-Disciplinary Approach to Remote Sensing through Low-Cost UAVs

    PubMed Central

    Calvario, Gabriela; Sierra, Basilio; Alarcón, Teresa E.; Hernandez, Carmen; Dalmau, Oscar

    2017-01-01

    The use of Unmanned Aerial Vehicles (UAVs) based on remote sensing has generated low cost monitoring, since the data can be acquired quickly and easily. This paper reports the experience related to agave crop analysis with a low cost UAV. The data were processed by traditional photogrammetric flow and data extraction techniques were applied to extract new layers and separate the agave plants from weeds and other elements of the environment. Our proposal combines elements of photogrammetry, computer vision, data mining, geomatics and computer science. This fusion leads to very interesting results in agave control. This paper aims to demonstrate the potential of UAV monitoring in agave crops and the importance of information processing with reliable data flow. PMID:28621740

  6. Recent Advancements in the Infrared Flow Visualization System for the NASA Ames Unitary Plan Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Garbeff, Theodore J., II; Baerny, Jennifer K.

    2017-01-01

    The following details recent efforts undertaken at the NASA Ames Unitary Plan wind tunnels to design and deploy an advanced, production-level infrared (IR) flow visualization data system. Highly sensitive IR cameras, coupled with in-line image processing, have enabled the visualization of wind tunnel model surface flow features as they develop in real-time. Boundary layer transition, shock impingement, junction flow, vortex dynamics, and buffet are routinely observed in both transonic and supersonic flow regimes, all without the need for dedicated ramps in test-section total temperature. Successful measurements have been performed on wing-body sting mounted test articles, semi-span floor mounted aircraft models, and sting mounted launch vehicle configurations. The unique requirements of imaging in production wind tunnel testing have led to advancements in the deployment of advanced IR cameras in a harsh test environment, robust data acquisition storage and workflow, real-time image processing algorithms, and evaluation of optimal surface treatments. The addition of a multi-camera IR flow visualization data system to the Ames UPWT has demonstrated itself to be a valuable analysis tool in the study of new and old aircraft/launch vehicle aerodynamics and has provided new insight for the evaluation of computational techniques.

  7. Combustion Chemistry of Fuels: Quantitative Speciation Data Obtained from an Atmospheric High-temperature Flow Reactor with Coupled Molecular-beam Mass Spectrometer.

    PubMed

    Köhler, Markus; Oßwald, Patrick; Krueger, Dominik; Whitside, Ryan

    2018-02-19

    This manuscript describes a high-temperature flow reactor experiment coupled to the powerful molecular beam mass spectrometry (MBMS) technique. This flexible tool offers a detailed observation of chemical gas-phase kinetics in reacting flows under well-controlled conditions. The vast range of operating conditions available in a laminar flow reactor enables access to extraordinary combustion applications that are typically not achievable by flame experiments. These include rich conditions at high temperatures relevant for gasification processes, the peroxy chemistry governing the low temperature oxidation regime, or investigations of complex technical fuels. The presented setup allows measurements of quantitative speciation data for reaction model validation of combustion, gasification and pyrolysis processes, while enabling a systematic general understanding of the reaction chemistry. Validation of kinetic reaction models is generally performed by investigating combustion processes of pure compounds. The flow reactor has been enhanced to be suitable for technical fuels (e.g. multi-component mixtures like Jet A-1) to allow for phenomenological analysis of occurring combustion intermediates like soot precursors or pollutants. The controlled and comparable boundary conditions provided by the experimental design allow for predictions of pollutant formation tendencies. Cold reactants, highly diluted (around 99 vol% in Ar) to suppress self-sustaining combustion reactions, are fed premixed into the reactor. The laminar flowing reactant mixture passes through a known temperature field, while the gas composition is determined at the reactor's exhaust as a function of the oven temperature. The flow reactor is operated at atmospheric pressure with temperatures up to 1,800 K. The measurements themselves are performed by decreasing the temperature monotonically at a rate of -200 K/h. With the sensitive MBMS technique, detailed speciation data are acquired and quantified for almost all chemical species in the reactive process, including radical species.

  8. Effects of Coating Materials and Processing Conditions on Flow Enhancement of Cohesive Acetaminophen Powders by High-Shear Processing With Pharmaceutical Lubricants.

    PubMed

    Wei, Guoguang; Mangal, Sharad; Denman, John; Gengenbach, Thomas; Lee Bonar, Kevin; Khan, Rubayat I; Qu, Li; Li, Tonglei; Zhou, Qi Tony

    2017-10-01

    This study has investigated the surface coating efficiency and powder flow improvement of a model cohesive acetaminophen powder by high-shear processing with pharmaceutical lubricants in two common pieces of equipment, a conical comil and a high-shear mixer. Effects of coating materials and processing parameters on powder flow and surface coating coverage were evaluated. Both Carr's index and shear cell data indicated that processing with the lubricants using the comil or high-shear mixer substantially improved the flow of the cohesive acetaminophen powder. Flow improvement was most pronounced for those processed with 1% wt/wt magnesium stearate, from "cohesive" for the V-blended sample to "easy flowing" for the optimally coated sample. Qualitative and quantitative characterizations demonstrated a greater degree of surface coverage for high-shear mixing compared with comilling; nevertheless, flow properties of the samples at the corresponding optimized conditions were comparable between the two techniques. Scanning electron microscopy images demonstrated different coating mechanisms with magnesium stearate or l-leucine (magnesium stearate forms a coating layer, whereas leucine coating increases surface roughness). Furthermore, surface coating with hydrophobic magnesium stearate did not retard the dissolution kinetics of acetaminophen. Future studies are warranted to evaluate the tableting behavior of such dry-coated pharmaceutical powders. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
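
    For reference, Carr's compressibility index, the flow metric cited above, is computed from bulk and tapped density; the densities below are illustrative, not the study's measurements.

        def carr_index(bulk_density, tapped_density):
            """Carr's compressibility index (%) -- higher means poorer flow."""
            return 100.0 * (tapped_density - bulk_density) / tapped_density

        # Illustrative densities (g/mL) before and after lubricant coating.
        print(carr_index(0.30, 0.48))   # ~37.5%: in the "cohesive"/poor-flow range
        print(carr_index(0.42, 0.50))   # 16.0%: in the "fair" flow range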

  9. SeaWiFS Science Algorithm Flow Chart

    NASA Technical Reports Server (NTRS)

    Darzi, Michael

    1998-01-01

    This flow chart describes the baseline science algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Data Processing System (SDPS). As such, it includes only processing steps used in the generation of the operational products that are archived by NASA's Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC). It is meant to provide the reader with a basic understanding of the scientific algorithm steps applied to SeaWiFS data. It does not include non-science steps, such as format conversions, and places the greatest emphasis on the geophysical calculations of the level-2 processing. Finally, the flow chart reflects the logic sequences and the conditional tests of the software so that it may be used to evaluate the fidelity of the implementation of the scientific algorithm. In many cases however, the chart may deviate from the details of the software implementation so as to simplify the presentation.

  10. Graphical Language for Data Processing

    NASA Technical Reports Server (NTRS)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as LIDAR, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
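
    A minimal sketch of the interpreted execution model described above, assuming a dictionary-based graph representation (the actual system uses .NET classes): nodes are callables, wires copy a node's output onto a downstream input port, and execution is a topological traversal.

        from collections import defaultdict, deque

        def execute(nodes, wires, inputs):
            """Interpret a process graph.

            nodes : {name: callable taking a dict of input-port values}
            wires : [(src_node, dst_node, dst_port), ...]
            inputs: {node_name: {port: value}} for source nodes.
            """
            indeg = defaultdict(int)
            out_edges = defaultdict(list)
            for src, dst, port in wires:
                indeg[dst] += 1
                out_edges[src].append((dst, port))

            ready = deque(n for n in nodes if indeg[n] == 0)
            values = {n: dict(inputs.get(n, {})) for n in nodes}
            results = {}
            while ready:                                   # topological traversal
                n = ready.popleft()
                results[n] = nodes[n](values[n])
                for dst, port in out_edges[n]:
                    values[dst][port] = results[n]         # the "virtual wire"
                    indeg[dst] -= 1
                    if indeg[dst] == 0:
                        ready.append(dst)
            return results

        nodes = {
            "load":   lambda p: list(range(p["n"])),
            "square": lambda p: [x * x for x in p["data"]],
            "sum":    lambda p: sum(p["data"]),
        }
        wires = [("load", "square", "data"), ("square", "sum", "data")]
        print(execute(nodes, wires, {"load": {"n": 5}})["sum"])   # 30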

  11. Flow velocity, water temperature, and conductivity in Shark River Slough, Everglades National Park, Florida: June 2002-July 2003

    USGS Publications Warehouse

    Riscassi, Ami L.; Schaffranek, Raymond W.

    2004-01-01

    The data described in this report were collected in the U. S. Geological Survey (USGS) Priority Ecosystems Science project investigating Forcing Effects on Flow Structure in Vegetated Wetlands of the Everglades. Data collected at five locations in Shark River Slough, Everglades National Park, during the 2002-2003 wet season are documented in the report. Methods used to process the data are described. Daily mean flow velocities, water temperatures, and specific conductance values are presented in the appendices. The quality-checked and edited data have been compiled and stored on the USGS South Florida Information Access (SOFIA) website http://sofia.usgs.gov.

  12. Representation and display of vector field topology in fluid flow data sets

    NASA Technical Reports Server (NTRS)

    Helman, James; Hesselink, Lambertus

    1989-01-01

    The visualization of physical processes in general and of vector fields in particular is discussed. An approach to visualizing flow topology that is based on the physics and mathematics underlying the physical phenomenon is presented. It involves determining critical points in the flow where the velocity vector vanishes. The critical points, connected by principal lines or planes, determine the topology of the flow. The complexity of the data is reduced without sacrificing the quantitative nature of the data set. By reducing the original vector field to a set of critical points and their connections, a representation of the topology of a two-dimensional vector field that is much smaller than the original data set but retains with full precision the information pertinent to the flow topology is obtained. This representation can be displayed as a set of points and tangent curves or as a graph. Analysis (including algorithms), display, interaction, and implementation aspects are discussed.
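
    A minimal sketch of the classification step under stated assumptions: at a critical point the local flow is governed by the Jacobian of the velocity field, and its eigenvalues distinguish saddles, nodes, and rotational (spiral/center) points.

        import numpy as np

        def classify_critical_point(jac, tol=1e-9):
            """Classify a 2-D critical point from the Jacobian of the velocity field."""
            eig = np.linalg.eigvals(jac)
            re, im = eig.real, eig.imag
            if np.all(np.abs(im) > tol):                   # complex pair: rotation
                if np.all(np.abs(re) < tol):
                    return "center"
                return "spiral (attracting)" if re[0] < 0 else "spiral (repelling)"
            if re[0] * re[1] < 0:
                return "saddle"
            return "node (attracting)" if re[0] < 0 else "node (repelling)"

        # Example: linear field v(x) = J x with a saddle at the origin.
        J = np.array([[1.0, 0.0],
                      [0.0, -1.0]])
        print(classify_critical_point(J))                                     # saddle
        print(classify_critical_point(np.array([[0.0, -1.0], [1.0, 0.0]])))   # center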

  13. Fast interactive exploration of 4D MRI flow data

    NASA Astrophysics Data System (ADS)

    Hennemuth, A.; Friman, O.; Schumann, C.; Bock, J.; Drexl, J.; Huellebrand, M.; Markl, M.; Peitgen, H.-O.

    2011-03-01

    1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas like stroke risk assessment, congenital and acquired heart disease, aneurysms or abdominal collaterals and cranial blood flow. The complexity of the 4D MRI flow datasets and the flow related image analysis tasks makes the development of fast comprehensive data exploration software for advanced flow analysis a challenging task. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult to use for clinicians. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays and flow curves offer a detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating the usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and even inexperienced users achieve good results within reasonable processing times.

  14. [The Information Flows System as an instrument for preventing technological illness].

    PubMed

    Saldutti, Elisa; Bindi, Luciano; Di Giacobbe, Andrea; Innocenzi, Mariano; Innocenzi, Ludovico

    2014-01-01

    This paper describes the project "Information Flows", its content of INAIL data on accidents and occupational diseases reported and recognized, and its usefulness for the preventive initiatives undertaken by INAIL and by the responsible structures in the individual Italian regions. We present several analyses of the data and suggest how their collection, according to criteria based on occupational medicine, industrial hygiene and epidemiology, together with careful analysis and processing of data from multiple sources, could lead to an extension of worker protection with respect to "unrecognized" occupational diseases, diseases caused by "old" risks, and the identification of occupational diseases caused by "new" risks.

  15. Laser anemometry for hot flows

    NASA Astrophysics Data System (ADS)

    Kugler, P.; Langer, G.

    1987-07-01

    The fundamental principles, instrumentation, and practical operation of LDA and laser-transit-anemometry systems for measuring velocity profiles and the degree of turbulence in high-temperature flows are reviewed and illustrated with diagrams, drawings and graphs of typical data. Consideration is given to counter, tracker, spectrum-analyzer and correlation methods of LDA signal processing; multichannel analyzer and cross correlation methods for LTA data; LTA results for a small liquid fuel rocket motor; and experiments demonstrating the feasibility of an optoacoustic demodulation scheme for LDA signals from unsteady flows.

  16. A method for computation of inviscid three-dimensional flow over blunt bodies having large embedded subsonic regions

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Hamilton, H. H., II

    1981-01-01

    A computational technique for computing the three-dimensional inviscid flow over blunt bodies having large regions of embedded subsonic flow is detailed. Results, which were obtained using the CDC Cyber 203 vector processing computer, are presented for several analytic shapes with some comparison to experimental data. Finally, windward surface pressure computations over the first third of the Space Shuttle vehicle are compared with experimental data for angles of attack between 25 and 45 degrees.

  17. Accountability Policy Implementation and the Case of Smaller School District Capacity: Three Contrasting Cases that Examine the Flow and Use of NCLB Accountability Data

    ERIC Educational Resources Information Center

    Miller, Christopher L.

    2010-01-01

    The No Child Left Behind Act increases pressure on schools and districts to use standardized state test data. Seeking to learn about the process of turning accountability data into actionable information, this paper presents findings from three case studies of small to medium sized school districts. The study examines the flow of state science…

  18. Determination of secondary flow morphologies by wavelet analysis in a curved artery model with physiological inflow

    NASA Astrophysics Data System (ADS)

    Bulusu, Kartik V.; Hussain, Shadman; Plesniak, Michael W.

    2014-11-01

    Secondary flow vortical patterns in arterial curvatures have the potential to affect several cardiovascular phenomena, e.g., progression of atherosclerosis by altering wall shear stresses, carotid atheromatous disease, thoracic aortic aneurysms and Marfan's syndrome. Temporal characteristics of secondary flow structures vis-à-vis the physiological (pulsatile) inflow waveform were explored by continuous wavelet transform (CWT) analysis of phase-locked, two-component, two-dimensional particle image velocimetry (PIV) data. Measurements were made in a 180° curved artery test section upstream of the curvature and at the 90° cross-sectional plane. Streamwise, upstream flow rate measurements were analyzed using a one-dimensional antisymmetric wavelet. Cross-stream measurements at the 90° location of the curved artery revealed interesting multi-scale, multi-strength coherent secondary flow structures. An automated process for coherent structure detection and vortical feature quantification was applied to large ensembles of PIV data. Metrics such as the number of secondary flow structures, their sizes and strengths were generated at every discrete time instance of the physiological inflow waveform. An autonomous data post-processing method incorporating the two-dimensional CWT for coherent structure detection was implemented. Loss of coherence in secondary flow structures during the systolic deceleration phase is observed, in accordance with previous research. The algorithmic approach presented herein further elucidated the sensitivity and dependence of morphological changes in secondary flow structures on the quasiperiodicity and magnitude of temporal gradients in physiological inflow conditions.
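
    The study applies a two-dimensional CWT to PIV fields; as a simplified one-dimensional analog, the sketch below convolves a synthetic velocity trace with Ricker wavelets at several scales and locates the scale-space maximum. All signal parameters are invented.

        import numpy as np

        def ricker(points, a):
            """Ricker (Mexican-hat) wavelet, often used for blob-like feature detection."""
            t = np.arange(points) - (points - 1) / 2.0
            x = t / a
            return (1 - x**2) * np.exp(-x**2 / 2)

        # Synthetic cross-stream velocity trace with a vortex-like bump plus noise.
        x = np.linspace(-5, 5, 512)
        trace = (np.exp(-((x - 1.0) ** 2) / 0.1)
                 + 0.1 * np.random.default_rng(3).normal(size=x.size))

        scales = [2, 4, 8, 16, 32]
        cwt = np.array([np.convolve(trace, ricker(101, s), mode="same")
                        for s in scales])

        # A coherent structure shows up as a local maximum across adjacent scales.
        i_scale, i_pos = np.unravel_index(np.argmax(cwt), cwt.shape)
        print("detected at x =", round(x[i_pos], 2), "scale =", scales[i_scale])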

  19. HPC enabled real-time remote processing of laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Sapra, Karan; Izard, Ryan; Duffy, Edward; Smith, Melissa C.; Wang, Kuang-Ching; Kwartowitz, David M.

    2016-03-01

    Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions has a disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the daVinci-si robotic surgical system. The video streams generate approximately 360 megabytes of data per second. Processing this large stream of data in real time on a bedside PC, in a single- or dual-node setup, has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework will also enable reliability using replication of computation. We will securely transfer the files to remote HPC clusters utilizing an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance and high-bandwidth networks. As a result, utilizing a high-speed OpenFlow-based network to access computing clusters with GPUs will improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
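
    A back-of-envelope check of the timing figures quoted above (360 MB/s, 11.9 MB frames, 30 frames per second); the fifty-fifty split of the budget between transfer and computation is an assumption for illustration, not the authors' design.

        frame_mb = 11.9                 # MB per video frame (from the abstract)
        fps = 30.0
        budget_s = 1.0 / fps            # end-to-end budget per frame
        stream_mb_s = frame_mb * fps    # ~357 MB/s, matching the stated ~360 MB/s

        # Minimum one-way link bandwidth if transfer may use at most half the
        # budget (the other half left for GPU processing) -- split is assumed.
        transfer_budget_s = budget_s / 2
        min_bandwidth_gbps = frame_mb * 8 / 1000 / transfer_budget_s

        print(f"per-frame budget: {budget_s * 1000:.1f} ms")
        print(f"raw stream rate : {stream_mb_s:.0f} MB/s")
        print(f"min link (half budget, no compression): {min_bandwidth_gbps:.1f} Gb/s")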

  20. Integration of agent-based models and GIS as a virtual urban dynamic laboratory

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Liu, Miaolong

    2007-06-01

    Based on the agent-based model (ABM) and a spatial data model, a tight-coupling method for integrating GIS and ABM is discussed in this paper. The use of object-orientation for both spatial data and spatial process models facilitates their integration, which can allow exploration and explanation of spatial-temporal phenomena such as urban dynamics. In order to better understand how tight coupling might proceed, and to evaluate the possible functional and efficiency gains from such a tight coupling, the agent-based model and the spatial data model are discussed, followed by the relationships governing the interaction of spatial data models and agent-based process models. After that, a realistic crowd-flow simulation experiment is presented. Using some tools provided by general GIS systems and a few specific programming languages, a new software system integrating GIS and MAS as a virtual laboratory applicable to simulating pedestrian flows in a crowd activity centre has been developed successfully. Under the environment supported by the software system, as an applicable case, the dynamic evolution of pedestrian flows (the dispersal of spectators) in a crowd activity centre, the Shanghai Stadium, has been simulated successfully. At the end of the paper, some new research problems are pointed out for future work.

  1. Processes that initiate turbidity currents and their influence on turbidites: A marine geology perspective

    USGS Publications Warehouse

    Piper, David J.W.; Normark, William R.

    2009-01-01

    How the processes that initiate turbidity currents influence turbidite deposition is poorly understood, and many discussions in the literature rely on concepts that are overly simplistic. Marine geological studies provide information on the initiation and flow path of turbidity currents, including their response to gradient. In case studies of late Quaternary turbidites on the eastern Canadian and western U.S. margins, initiation processes are inferred either from real-time data for historical flows or indirectly from the age and contemporary paleogeography, erosional features, and depositional record. Three major types of initiation process are recognized: transformation of failed sediment, hyperpycnal flow from rivers or ice margins, and resuspension of sediment near the shelf edge by oceanographic processes. Many high-concentration flows result from hyperpycnal supply of hyperconcentrated bedload, or liquefaction failure of coarse-grained sediment, and most tend to deposit in slope conduits and on gradients < 0.5° at the base of slope and on the mid fan. Highly turbulent flows, from transformation of retrogressive failures and from ignitive flows that are triggered by oceanographic processes, tend to cannibalize these more proximal sediments and redeposit them on lower gradients on the basin plain. Such conduit flushing provides most of the sediment in large turbidites. Initiation mechanism exerts a strong control on the duration of turbidity flows. In most basins, there is a complex feedback between different types of turbidity-current initiation, the transformation of the flows, and the associated slope morphology. As a result, there is no simple relationship between initiating process and type of deposit.

  2. Data on conceptual design of cryogenic energy storage system combined with liquefied natural gas regasification process.

    PubMed

    Lee, Inkyu; Park, Jinwoo; Moon, Il

    2017-12-01

    This paper describes data for an integrated process: a cryogenic energy storage system combined with a liquefied natural gas (LNG) regasification process. The data in this paper are associated with the article entitled "Conceptual Design and Exergy Analysis of Combined Cryogenic Energy Storage and LNG Regasification Processes: Cold and Power Integration" (Lee et al., 2017) [1]. The data include a sensitivity case-study dataset for the air flow rate and heat-exchange feasibility data in the form of composite curves. The data are expected to be helpful for cryogenic energy process development.

  3. USGS Streamgages Linked to the Medium Resolution NHD

    USGS Publications Warehouse

    Stewart, David W.; Rea, Alan; Wolock, David M.

    2006-01-01

    The locations of approximately 23,000 current and historical U.S. Geological Survey (USGS) streamgages in the United States and Puerto Rico (with the exception of Alaska) have been snapped to the medium resolution National Hydrography Dataset (NHD). The NHD contains geospatial information about mapped surface-water features, such as streams, lakes, and reservoirs, etc., creating a hydrologic network that can be used to determine what is upstream or downstream from a point of interest on the NHD network. An automated snapping process made the initial determination of the NHD location of each streamgage. These initial NHD locations were comprehensively reviewed by local USGS personnel to ensure that streamgages were snapped to the correct NHD reaches. About 75 percent of the streamgages snapped to the appropriate NHD reach location initially and 25 percent required adjustment and relocation. This process resulted in approximately 23,000 gages being successfully snapped to the NHD. This dataset contains the latitude and longitude coordinates of the point on the NHD to which the streamgage is snapped and the location of the gage house for each streamgage. A process known as indexing may be used to create reference points (event tables) to the NHD reaches, expressed as a reach code and measure (distance along the reach). Indexing is dependent on the version of NHD to which the indexing is referenced. These data are well suited for use in indexing because nearly all the streamgage NHD locations have been reviewed and adjusted if necessary, to ensure they will index to the appropriate NHD reach. Flow characteristics were computed from the daily streamflow data recorded at each streamgage for the period of record. The flow characteristics associated with each streamgage include:

    * First date (year, month, day) of streamflow data
    * Last date (year, month, day) of streamflow data
    * Number of days of streamflow data
    * Number of days of non-zero streamflow data
    * Minimum and maximum daily flow for the period of record (cubic feet per second)
    * Percentiles (1, 5, 10, 20, 25, 50, 75, 80, 90, 95, 99) of daily flow for the period of record (cubic feet per second)
    * Average and standard deviation of daily flow for the period of record (cubic feet per second)
    * Mean annual base-flow index (BFI) computed for the period of record (fraction, ranging from 0 to 1)
    * Year-to-year standard deviation of the annual base-flow index computed for the period of record (fraction)
    * Number of years of data used to compute the base-flow index (years)

    The streamflow data used to compute flow characteristics were copied from the NWIS-Web historical daily discharge archive (nadww01.er.usgs.gov:/www/htdocs/nwisweb/data/discharge) on June 15, 2005.
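
    As an illustration of the listed statistics, the sketch below computes the percentiles and a base-flow index from a synthetic daily record; the running-minimum baseflow stand-in is a crude substitute for the actual USGS BFI procedure.

        import numpy as np

        # Synthetic ten-year daily discharge record (cfs), strictly positive.
        q = np.maximum(0.0, 10 + 8 * np.sin(np.linspace(0, 20, 3650))
                       + np.random.default_rng(7).gamma(2.0, 2.0, 3650))

        pcts = [1, 5, 10, 20, 25, 50, 75, 80, 90, 95, 99]
        stats = {
            "n_days": q.size,
            "n_nonzero": int(np.count_nonzero(q)),
            "min": q.min(), "max": q.max(),
            "mean": q.mean(), "std": q.std(ddof=1),
            "percentiles": dict(zip(pcts, np.percentile(q, pcts))),
        }

        # Base-flow index: baseflow volume / total flow volume. Here baseflow is
        # taken as a trailing 30-day minimum (illustrative stand-in only).
        window = 30
        basef = np.array([q[max(0, i - window):i + 1].min() for i in range(q.size)])
        stats["BFI"] = basef.sum() / q.sum()
        print(stats["percentiles"][50], round(stats["BFI"], 3))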

  4. Data processing workflow for time of flight polarized neutrons inelastic measurements

    DOE PAGES

    Savici, Andrei T.; Zaliznyak, Igor A.; Ovidiu Garlea, V.; ...

    2017-06-01

    We discuss the data processing workflow for polarized neutron scattering measurements performed at the HYSPEC spectrometer at the Spallation Neutron Source, Oak Ridge National Laboratory. The effects of the focusing Heusler crystal polarizer and the wide-angle supermirror transmission polarization analyzer are added to the data processing flow of the non-polarized case. The implementation is done using the Mantid software package.

  5. CLARA: CLAS12 Reconstruction and Analysis Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyurjyan, Vardan; Matta, Sebastian Mancilla; Oyarzun, Ricardo

    2016-11-01

    In this paper we present the SOA-based CLAS12 event Reconstruction and Analysis (CLARA) framework. The CLARA design focuses on two main traits: real-time data stream processing, and a service-oriented architecture (SOA) in a flow-based programming (FBP) paradigm. The data-driven, data-centric architecture of CLARA provides an environment for developing agile, elastic, multilingual data processing applications. The CLARA framework presents solutions capable of processing large volumes of data interactively and substantially faster than batch systems.
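
    The flow-based-programming style described here can be illustrated loosely with a toy pipeline (this is not the CLARA API; the service names and generator-based wiring are invented for the sketch): independent services each consume an upstream event stream and emit a transformed stream, so a processing application is just a composition of services.

        # Toy flow-based pipeline: each "service" consumes an upstream stream of
        # events and emits a transformed stream; wiring them is composition.
        def source(n_events):
            for i in range(n_events):
                yield {"id": i, "raw": [i, i + 1, i + 2]}

        def reconstruct(events):
            for ev in events:
                ev["energy"] = sum(ev["raw"])          # stand-in for reconstruction
                yield ev

        def analyze(events):
            for ev in events:
                if ev["energy"] % 2 == 0:              # stand-in for a physics cut
                    yield ev

        # Data-driven execution: events stream through the chain one at a time.
        for ev in analyze(reconstruct(source(10))):
            print(ev["id"], ev["energy"])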

  6. Debris Flow Process and Climate Controls on Steepland Valley Form and Evolution

    NASA Astrophysics Data System (ADS)

    Struble, W.; Roering, J. J.

    2017-12-01

    In unglaciated mountain ranges, steepland bedrock valleys often dominate relief structure and dictate landscape response to perturbations in tectonics or climate; drainage divides have been shown to be dynamic and drainage capture is common. Landscape evolution models often use the stream power model to simulate morphologic changes, but steepland valley networks exhibit trends that deviate from predictions of this model. The prevalence of debris flows in steep channels has motivated approaches that account for commonly observed curvature of slope-area data at small drainage areas. Debris flow deposits correspond with observed curvature in slope-area data, wherein slope increases slowly as drainage area decreases; debris flow incision is implied upstream of deposits. In addition, shallow landslides and in-channel sediment entrainment in humid and arid regions, respectively, have been identified as likely debris flow triggering mechanisms, but the extent to which they set the slope of steep channels is unclear. While an untested model exists for humid landscape debris flows, field observations and models are lacking for regions with lower mean annual precipitation. The Oregon Coastal Ranges are an ideal humid setting for observing how shallow landslide-initiated debris flows abrade channel beds and/or drive exposure-driven weathering. Preliminary field observations in the Lost River Range and the eastern Sierra Nevada - semi-arid and unglaciated environments - suggest that debris flows are pervasive in steep reaches. Evidence for fluvial incision is lacking and the presence of downstream debris flow deposits and a curved morphologic signature in slope-area space suggests stream power models are insufficient for predicting and interpreting landscape dynamics. Investigation of debris flow processes in both humid and arid sites such as these seeks to identify the linkage between sediment transport and the characteristic form of steepland valleys. Bedrock weathering, fracture density, recurrence interval, bulking, and grain size may determine process-form linkages in humid and arid settings. Evaluation of debris flow processes in sites of varying climate presents the opportunity to quantify the role of debris flow incision in the evolution of steepland valleys and improve landscape evolution models.

  7. Gravity flow rate of solids through orifices and pipes

    NASA Technical Reports Server (NTRS)

    Gardner, J. F.; Smith, J. E.; Hobday, J. M.

    1977-01-01

    Lock-hopper systems are the most common means for feeding solids to and from coal conversion reactor vessels. The rate at which crushed solids flow by gravity through the vertical pipes and valves in lock-hopper systems affects the size of pipes and valves needed to meet the solids-handling requirements of the coal conversion process. Methods used to predict flow rates are described and compared with experimental data. Preliminary indications are that solids-handling systems for coal conversion processes are over-designed by a factor of 2 or 3.
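
    One standard method for this kind of estimate (not necessarily the one used in the report) is the Beverloo correlation for gravity discharge of granular solids through a circular orifice, W = C ρ_b √g (D − k d)^{5/2}. A minimal sketch, with illustrative crushed-coal parameters that are assumptions rather than values from the report:

        import math

        def beverloo_rate(rho_bulk, D_orifice, d_particle, C=0.58, k=1.4, g=9.81):
            """Mass discharge rate (kg/s) of granular solids through a circular
            orifice: W = C * rho_b * sqrt(g) * (D - k*d)**2.5, all SI units."""
            eff = D_orifice - k * d_particle   # effective opening after wall exclusion
            if eff <= 0:
                raise ValueError("orifice too small for this particle size")
            return C * rho_bulk * math.sqrt(g) * eff ** 2.5

        # Illustrative values for crushed coal (assumed, not from the report):
        print(f"{beverloo_rate(rho_bulk=800.0, D_orifice=0.15, d_particle=0.005):.1f} kg/s")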

  8. The application of flow competence evaluations to the assessment of flood-flow velocities and stresses

    NASA Technical Reports Server (NTRS)

    Komar, Paul D.

    1987-01-01

    The concept of flow competence is generally employed to evaluate the velocities, discharges, and bottom stresses of river floods, inferred from the size of the largest sediment particles transported. Flow competence has become an important tool for evaluating the hydraulics of exceptional floods on Earth, including those which eroded the Channeled Scabland of eastern Washington, and has potential for similar evaluations of the floods which carved the outflow channels on Mars. For the most part, flow-competence evaluations have been empirical, based on data compiled from a variety of sources including major terrestrial floods caused by natural processes or dam failures. Such flow-competence relationships would appear to provide a straightforward assessment of flood-flow stresses and velocities based on the maximum size of gravel and boulders transported. However, a re-examination of the data base and comparisons with measurements of selective entrainment and transport of gravel in rivers call such evaluations into question. Instead, analyses can be made of the forces acting on a grain during entrainment by pivoting, rolling, or sliding, an approach which focuses on the physical processes rather than on purely empirical relationships. These derived equations require further testing by flume and field measurements before being applied to flow-competence evaluations. Such tests are now underway.
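
    As a rough illustration of a competence-style estimate (a simple Shields-threshold calculation, not the grain-pivoting force analysis the abstract develops), the bed shear stress implied by entrainment of the largest transported clast can be written τ_c = θ_c (ρ_s − ρ) g D and converted to a shear velocity u* = √(τ_c/ρ); the parameter values below are generic assumptions.

        import math

        def competence_estimate(D, theta_c=0.045, rho_s=2650.0, rho=1000.0, g=9.81):
            """Shields-threshold stress (Pa) and shear velocity (m/s) implied by
            entrainment of a clast of intermediate diameter D (m)."""
            tau_c = theta_c * (rho_s - rho) * g * D
            u_star = math.sqrt(tau_c / rho)
            return tau_c, u_star

        # Example: a 0.5 m boulder, as might be found in an exceptional-flood deposit.
        tau, ustar = competence_estimate(0.5)
        print(f"tau_c = {tau:.0f} Pa, u* = {ustar:.2f} m/s")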

  9. Seamless integration of dose-response screening and flow chemistry: efficient generation of structure-activity relationship data of β-secretase (BACE1) inhibitors.

    PubMed

    Werner, Michael; Kuratli, Christoph; Martin, Rainer E; Hochstrasser, Remo; Wechsler, David; Enderle, Thilo; Alanine, Alexander I; Vogel, Horst

    2014-02-03

    Drug discovery is a multifaceted endeavor encompassing as its core element the generation of structure-activity relationship (SAR) data by repeated chemical synthesis and biological testing of tailored molecules. Herein, we report on the development of a flow-based biochemical assay and its seamless integration into a fully automated system comprising flow chemical synthesis, purification, and in-line quantification of compound concentration. This novel synthesis-screening platform enables SAR data on β-secretase (BACE1) inhibitors to be obtained at an unprecedented cycle time of only 1 h instead of several days. Full integration and automation of industrial processes have always led to productivity gains and cost reductions, and this work demonstrates how applying these concepts to SAR generation may lead to a more efficient drug discovery process. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Analysis of Doppler radar windshear data

    NASA Technical Reports Server (NTRS)

    Williams, F.; Mckinney, P.; Ozmen, F.

    1989-01-01

    The objective of this analysis is to process Lincoln Laboratory Doppler radar data obtained during FLOWS testing at Huntsville, Alabama, in the summer of 1986, to characterize windshear events. The processing includes plotting velocity and F-factor profiles, histogram analysis to summarize statistics, and correlation analysis to demonstrate any correlation between different data fields.

  11. Dynamic Modelling of Erosion and Deposition Processes in Debris Flows With Application to Real Debris Flow Events in Switzerland

    NASA Astrophysics Data System (ADS)

    Deubelbeiss, Y.; McArdell, B. W.; Graf, C.

    2011-12-01

    The dynamics of a debris flow can be significantly influenced by erosion and deposition processes during an event, because volume changes have a strong influence on flow properties such as flow velocity, flow heights, and runout distances. It is therefore worth exploring how to include these processes in numerical models, which are used for hazard assessment and mitigation-measure planning. However, it is still under debate what mechanism drives the erosion of material at the base of a debris flow. Different processes have been attributed to erosion: it has been proposed that erosion correlates with the stresses due to granular interactions at the front, which in turn strongly depend on particle size, or it may be related to basal shear forces. Because larger flow heights are expected to result in larger stresses, one can additionally hypothesize a correlation between erosion rate and flow height. To test different erosion laws in a numerical model and their influence on the flow behavior, we implement different relationships and compare simulation results with field data. For this we use the numerical model RAMMS (Christen et al., 2010), employing the Voellmy-fluid friction law. While it has already been shown that a correlation of erosion with velocity alone does not lead to a satisfying result (too much entrainment in the tail), a correlation with flow height combined with velocity (momentum) has been successfully applied to ice avalanches. Currently, we are testing the momentum-driven erosion rate and, for comparison, reconsidering the simple velocity-driven erosion rate. However, these laws do not consider processes on a smaller scale, such as particle fluctuations resulting in energy production, which might play an important role. Therefore, we additionally consider an erosion model that has the potential to provide new insights into the erosion process in debris flows. The model is based on an extended Voellmy model, which additionally employs an equation for the random kinetic energy (RKE, equivalent to granular temperature) produced by the random movement of particles in a debris flow (Buser and Bartelt, 2009). An advantage is that friction depends on the production of RKE, decreasing as RKE decreases. The amount of energy produced in the system might therefore be a useful indicator for the erosion rate. While the erosion model using the Voellmy approach might be successfully applicable to cases where erosion and bulking are the main processes, such as in Illgraben (CH), it might be less straightforward in mountain torrents where considerable deposition is additionally observed along the flow path, such as in Dorfbach (CH). The extended Voellmy model indirectly accounts for this process, as friction is a function of RKE, which allows material to deposit earlier. At both locations we operate debris flow observation stations, including innovative new measurement techniques indicating parameters such as flow velocity, height, and volumes at specific locations (Illgraben, Dorfbach), as well as erosion rate measurements (Illgraben). These highly valuable data allow good model calibration as well as verification of the newly implemented erosion models.

  12. FlowCal: A user-friendly, open source software tool for automatically converting flow cytometry data from arbitrary to calibrated units

    PubMed Central

    Castillo-Hair, Sebastian M.; Sexton, John T.; Landry, Brian P.; Olson, Evan J.; Igoshin, Oleg A.; Tabor, Jeffrey J.

    2017-01-01

    Flow cytometry is widely used to measure gene expression and other molecular biological processes with single cell resolution via fluorescent probes. Flow cytometers output data in arbitrary units (a.u.) that vary with the probe, instrument, and settings. Arbitrary units can be converted to the calibrated unit molecules of equivalent fluorophore (MEF) using commercially available calibration particles. However, there is no convenient, non-proprietary tool available to perform this calibration. Consequently, most researchers report data in a.u., limiting interpretation. Here, we report a software tool named FlowCal to overcome current limitations. FlowCal can be run using an intuitive Microsoft Excel interface, or customizable Python scripts. The software accepts Flow Cytometry Standard (FCS) files as inputs and is compatible with different calibration particles, fluorescent probes, and cell types. Additionally, FlowCal automatically gates data, calculates common statistics, and produces publication quality plots. We validate FlowCal by calibrating a.u. measurements of E. coli expressing superfolder GFP (sfGFP) collected at 10 different detector sensitivity (gain) settings to a single MEF value. Additionally, we reduce day-to-day variability in replicate E. coli sfGFP expression measurements due to instrument drift by 33%, and calibrate S. cerevisiae mVenus expression data to MEF units. Finally, we demonstrate a simple method for using FlowCal to calibrate fluorescence units across different cytometers. FlowCal should ease the quantitative analysis of flow cytometry data within and across laboratories and facilitate the adoption of standard fluorescence units in synthetic biology and beyond. PMID:27110723
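
    The a.u.-to-MEF calibration FlowCal performs can be illustrated independently of its API (the sketch below is a generic reconstruction of the idea, not FlowCal's code, and all bead values are invented): fit a straight line in log-log space between the measured median fluorescence of each calibration-bead subpopulation and the manufacturer-supplied MEF values, then apply the fitted transform to cell measurements.

        import numpy as np

        # Manufacturer-assigned MEF values for the bead subpopulations (assumed),
        # and the median a.u. measured for each subpopulation on this instrument.
        bead_mef = np.array([792, 2079, 6588, 16471, 47497, 137049], dtype=float)
        bead_au  = np.array([112, 301, 944, 2410, 7012, 20310], dtype=float)

        # Fit log10(MEF) = m * log10(a.u.) + b  (the standard-curve idea).
        m, b = np.polyfit(np.log10(bead_au), np.log10(bead_mef), 1)

        def au_to_mef(x):
            """Convert arbitrary-unit fluorescence to MEF via the bead standard curve."""
            return 10 ** (m * np.log10(x) + b)

        cells_au = np.array([250.0, 1300.0, 9800.0])   # hypothetical cell measurements
        print(au_to_mef(cells_au))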

  13. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E. Sonnenthal; N. Spycher

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: (1) Performance Assessment (PA); (2) Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); (3) UZ Flow and Transport Process Model Report (PMR); and (4) Near-Field Environment (NFE) PMR. The work scope for this activity is presented in the TWPs cited above, and summarized as follows: continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are required to fully document and address the requirements of the TWPs.

  14. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E. Sonnenthal

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M&O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: Performance Assessment (PA); Near-Field Environment (NFE) PMR; Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); and UZ Flow and Transport Process Model Report (PMR). The work scope for this activity is presented in the TWPs cited above, and summarized as follows: Continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are required to fully document and address the requirements of the TWPs.

  15. Delineating wetland catchments and modeling hydrologic connectivity using lidar data and aerial imagery

    NASA Astrophysics Data System (ADS)

    Wu, Qiusheng; Lane, Charles R.

    2017-07-01

    In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In reality, however, many depressions in the DEM are actual wetland landscape features with seasonal to permanent inundation patterning, characterized by nested hierarchical structures and dynamic filling-spilling-merging surface-water hydrological processes. Differentiating and appropriately processing such ecohydrologically meaningful features remains a major technical terrain-processing challenge, particularly as high-resolution spatial data are increasingly used to support modeling and geographic analysis needs. The objectives of this study were to delineate hierarchical wetland catchments and model their hydrologic connectivity using high-resolution lidar data and aerial imagery. The graph-theory-based contour tree method was used to delineate the hierarchical wetland catchments and characterize their geometric and topological properties. Potential hydrologic connectivity between wetlands and streams was simulated using the least-cost-path algorithm. The resulting flow network delineated potential flow paths connecting wetland depressions to each other or to the river network on scales finer than those available through the National Hydrography Dataset. The results demonstrated that our proposed framework is promising for improving overland flow simulation and hydrologic connectivity analysis.
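
    A least-cost path of the kind used here to link wetland depressions to the stream network can be computed on a raster cost surface with scikit-image (a minimal sketch; the cost surface below is synthetic, whereas a real application would derive it from the lidar DEM):

        import numpy as np
        from skimage.graph import route_through_array

        # Synthetic cost surface: cheap to traverse low cells, expensive on highs.
        rng = np.random.default_rng(0)
        dem = rng.random((100, 100))
        cost = 1.0 + 10.0 * dem                    # strictly positive traversal cost

        start = (5, 5)      # e.g., a wetland depression outlet (row, col)
        end = (90, 85)      # e.g., the nearest stream cell

        path, total_cost = route_through_array(
            cost, start, end, fully_connected=True, geometric=True
        )
        print(f"path length: {len(path)} cells, accumulated cost: {total_cost:.1f}")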

  16. High frequency seismic monitoring of debris flows at Chalk Cliffs (CO), USA

    NASA Astrophysics Data System (ADS)

    Coviello, Velio; Kean, Jason; Smith, Joel; Coe, Jeffrey; Arattano, Massimo; McCoy, Scott

    2015-04-01

    A growing number of studies adopt passive seismic monitoring techniques to investigate slope instabilities and landslide processes. These techniques are attractive and convenient because large areas can be monitored from a safe distance. This is particularly true when the phenomena under investigation are rapid and infrequent mass movements like debris flows. Different types of devices are used to monitor debris flow processes, but among them ground vibration detectors (GVDs) present several specific advantages that encourage their use. These advantages include: (i) the possibility of being installed outside the channel bed, (ii) high adaptability to different and harsh field conditions, and (iii) the capability to detect the debris flow front arrival tens of seconds earlier than contact and stage sensors. Ground vibration data can provide relevant information on the dynamics of debris flows, such as the timing and velocity of the main surges. However, processing of the raw seismic signal is usually needed, both to obtain a more effective representation of waveforms and to decrease the amount of data that need to be recorded and analyzed. With this objective, the methods of Amplitude and Impulses are commonly adopted to transform the raw signal to a 1-Hz signal that allows for a more useful representation of the phenomenon. In that way, peaks and other features become more visible and comparable with data obtained from other monitoring devices. In this work, we present the first debris-flow seismic recordings gathered in the Chalk Cliffs instrumented basin, central Colorado, USA. In May 2014, two 4.5-Hz, three-axial geophones were installed in the upper part of the catchment. Seismic data are sampled at 333 Hz and then recorded by a standalone recording unit. One geophone is installed directly on bedrock; the other is mounted on a 1-m boulder partially buried in colluvium. This latter sensor complements a heavily instrumented cross-section consisting of a 225 cm2 force plate recording basal impact forces at 333 Hz, a laser distance meter recording flow stage over the plate at 10 Hz, and a high definition video camera (24 frames per second). This combination of instrumentation allows for a comparison of the amplitude and spectral response of the geophones to flow depth, impact force, and video recordings. On July 4, 2014 a debris flow event occurred in the basin and was recorded by the whole monitoring system. Both the geophone installation methods and the channel bed characteristics largely influenced the seismic records. One geophone exhibits a broad frequency response during all debris flow surges, while the energy recorded by the other is mainly concentrated in the 40-80 Hz band. Furthermore, erosion and entrainment processes have a crucial effect on the recorded waveforms. The presence of channel bed sediment damps the Amplitude waveforms during the first surges, when the flow is not yet erosive. The typical proportionality between the Amplitude curve and the flow stage is observed only after the entrainment of the channel bed sediment by the debris flow, when the flow is directly on bedrock. Processing of the signal with the Impulses transformation displays the same damping effect when a high threshold is adopted. However, the use of a high threshold entails the disappearance of the first surge and causes less effective early detection of the flow. Conversely, the adoption of a lower threshold impedes the observation of the sediment damping effect.
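
    The Amplitude and Impulses reductions mentioned above are simple to state, and the sketch below is a generic reconstruction under their usual definitions (with the record's 333 Hz sampling rate assumed as the window size): Amplitude averages the rectified ground-vibration signal over each second, while Impulses counts threshold exceedances per second.

        import numpy as np

        FS = 333  # samples per second (the record's stated sampling rate)

        def amplitude_1hz(v):
            """Mean rectified amplitude in non-overlapping 1 s windows."""
            n = (len(v) // FS) * FS
            return np.abs(v[:n]).reshape(-1, FS).mean(axis=1)

        def impulses_1hz(v, threshold):
            """Count of samples exceeding the threshold in each 1 s window."""
            n = (len(v) // FS) * FS
            return (np.abs(v[:n]) > threshold).reshape(-1, FS).sum(axis=1)

        # Synthetic 60 s record: background noise plus a stronger 'surge' at 20-40 s.
        rng = np.random.default_rng(1)
        v = rng.normal(0.0, 1.0, 60 * FS)
        v[20 * FS:40 * FS] += rng.normal(0.0, 5.0, 20 * FS)

        amp = amplitude_1hz(v)
        imp = impulses_1hz(v, threshold=3.0)
        print(amp.round(2))
        print(imp)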

  17. Vortex generating flow passage design for increased film-cooling effectiveness and surface coverage. [aircraft engine blade cooling

    NASA Technical Reports Server (NTRS)

    Papell, S. S.

    1984-01-01

    The fluid mechanics of the basic discrete hole film cooling process is described as an inclined jet in crossflow, and a cusp-shaped coolant flow channel contour that increases the efficiency of the film cooling process is hypothesized. The design concept requires the channel to generate a counter-rotating vortex pair secondary flow within the jet stream by virtue of flow passage geometry. The interaction of the vortex structures generated by both geometry and crossflow was examined in terms of film cooling effectiveness and surface coverage. Comparative data obtained with this vortex generating coolant passage showed increases of up to a factor of four in both effectiveness and surface coverage over those obtained with a standard round cross section flow passage. A streakline flow visualization technique was used to support the concept of the counter-rotating vortex pair generating capability of the flow passage design.

  18. Vortex generating flow passage design for increased film-cooling effectiveness and surface coverage

    NASA Astrophysics Data System (ADS)

    Papell, S. S.

    The fluid mechanics of the basic discrete hole film cooling process is described as an inclined jet in crossflow, and a cusp-shaped coolant flow channel contour that increases the efficiency of the film cooling process is hypothesized. The design concept requires the channel to generate a counter-rotating vortex pair secondary flow within the jet stream by virtue of flow passage geometry. The interaction of the vortex structures generated by both geometry and crossflow was examined in terms of film cooling effectiveness and surface coverage. Comparative data obtained with this vortex generating coolant passage showed increases of up to a factor of four in both effectiveness and surface coverage over those obtained with a standard round cross section flow passage. A streakline flow visualization technique was used to support the concept of the counter-rotating vortex pair generating capability of the flow passage design.

  19. Making Data Flow Diagrams Accessible for Visually Impaired Students Using Excel Tables

    ERIC Educational Resources Information Center

    Sauter, Vicki L.

    2015-01-01

    This paper addresses the use of Excel tables to convey information to blind students that would otherwise be presented using graphical tools, such as Data Flow Diagrams. These tables can supplement diagrams in the classroom when introducing their use to understand the scope of a system and its main sub-processes, on exams when answering questions…

  20. Data uncertainties in material flow analysis: Municipal solid waste management system in Maputo City, Mozambique.

    PubMed

    Dos Muchangos, Leticia Sarmento; Tokai, Akihiro; Hanashima, Atsuko

    2017-01-01

    Material flow analysis can effectively trace and quantify the flows and stocks of materials, such as solid wastes, in urban environments. However, the integrity of material flow analysis results is compromised by data uncertainties, an occurrence that is particularly acute in low- and middle-income study contexts. This article investigates the uncertainties in the input data and their effects in a material flow analysis study of municipal solid waste management in Maputo City, the capital of Mozambique. The analysis is based on data collected in 2007 and 2014. Initially, the uncertainties and their ranges were identified by the data classification model of Hedbrant and Sörme, followed by the application of sensitivity analysis. The average lower and upper bounds were 29% and 71%, respectively, in 2007, increasing to 41% and 96%, respectively, in 2014. This indicates higher data quality in 2007 than in 2014. Results also show not only that data are partially missing for established flows, such as waste generation to final disposal, but also that they are limited and inconsistent for emerging flows and processes, such as waste generation to material recovery (hence the wider variation in the 2014 parameters). The sensitivity analysis further clarified the most influential parameter, the degree of influence of each parameter on the waste flows, and the interrelations among the parameters. The findings highlight the need for an integrated municipal solid waste management approach to avoid transferring or worsening the negative impacts among the parameters and flows.

  1. Ground-based LiDAR Measurements of Actively Inflating Pahoehoe Flows, Kilauea Volcano, Hawaii: Implications for Emplacement of Basaltic Units on Mars

    NASA Astrophysics Data System (ADS)

    Byrnes, J. M.; Finnegan, D. C.; Nicoll, K.; Anderson, S. W.

    2007-05-01

    Remote sensing datasets enable planetary volcanologists to extract information regarding eruption processes. Long-lived effusive eruptions at sites such as Kilauea Volcano (HI) provide opportunities to collect rich observational data sets, including detailed measurements of topography and extrusion rates, that allow comparisons between lava flow surface morphologies and emplacement conditions for use in interpreting similar morphological features associated with planetary lava flows. On Mars, the emplacement of basaltic lava flows is a volumetrically and spatially important process, creating both large-scale and small-scale surface morphologies. On Earth, low effusion rate eruptions on relatively horizontal slopes tend to create inflated lava flows that display hummocky topography. To better understand the processes involved in creating observed surface characteristics, we repeatedly measured the surface topography of an actively flowing and inflating basaltic unit within the Pu'u O'o flow field over a 5-day period. We used a ground-based laser-scanner (LiDAR) system that provided vertical and horizontal accuracies of 4 mm. Comparing DEMs from repeated laser scans yielded the magnitudes and styles of constructional processes, allowing us to quantify the relationship between pre- and post-emplacement surface topography. Our study site (roughly 200 m x 200 m) experienced about 5 m of vertical inflation over a 3 day period and created a new hummocky surface containing several tumuli. The temporal and spatial patterns of inflation were complex and showed no obvious relationship with underlying topography. High-precision morphometric measurements acquired using ground-based LiDAR affords us the opportunity to capture the essential boundary conditions necessary for evaluating and comparing high-resolution planetary data sets, such as those acquired by the MOC, HRSC, and HiRISE instruments.

  2. Modeling of liquid and gas flows in the horizontal layer with evaporation

    NASA Astrophysics Data System (ADS)

    Lyulin, Yuri; Rezanova, Ekaterina

    2017-10-01

    Mathematical modeling of two-layer flows in the "ethanol-nitrogen" system is carried out on the basis of exact solutions of a special type. The influence of the gas flow, temperature, and the Soret effect on the flow patterns and evaporation processes at the interface is investigated. The results of a comparison of experimental and theoretical data are presented; the dependence of the evaporation intensity at the interface on the gas flow rate and temperature is studied.

  3. System model of the processing of heterogeneous sensory information in a robotized complex

    NASA Astrophysics Data System (ADS)

    Nikolaev, V.; Titov, V.; Syryamkin, V.

    2018-05-01

    The scope of application and the types of robotic systems consisting of subsystems of the form "a heterogeneous sensors data processing subsystem" are analyzed. On the basis of queueing theory, a model is developed that takes into account the uneven intensity of the information flow from the sensors to the information processing subsystem. An analytical solution is obtained to assess the relationship between subsystem performance and uneven flows. The obtained solution is investigated over the range of parameter values of practical interest.
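
    As a simple illustration of the kind of queueing-theory estimate such a model rests on (a steady-state M/M/1 station, which ignores the paper's uneven arrival intensity but shows how utilization drives delay; all rates below are assumed):

        def mm1_metrics(lam, mu):
            """Steady-state M/M/1 metrics: utilization, mean number in system,
            and mean time in system, for arrival rate lam and service rate mu."""
            if lam >= mu:
                raise ValueError("unstable queue: arrival rate must be below service rate")
            rho = lam / mu
            L = rho / (1.0 - rho)          # mean number in system
            W = 1.0 / (mu - lam)           # mean time in system (Little's law: L = lam * W)
            return rho, L, W

        # Sensor messages arriving at 80/s into a processor that handles 100/s.
        rho, L, W = mm1_metrics(80.0, 100.0)
        print(f"utilization={rho:.2f}, mean in system={L:.1f}, mean delay={W*1000:.1f} ms")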

  4. SPROC: A multiple-processor DSP IC

    NASA Technical Reports Server (NTRS)

    Davis, R.

    1991-01-01

    A large, single-chip, multiple-processor, digital signal processing (DSP) integrated circuit (IC) fabricated in HP-Cmos34 is presented. The innovative architecture is best suited for analog and real-time systems characterized by both parallel signal data flows and concurrent logic processing. The IC is supported by a powerful development system that transforms graphical signal flow graphs into production-ready systems in minutes. Automatic compiler partitioning of tasks among four on-chip processors gives the IC the signal processing power of several conventional DSP chips.

  5. Spectral features of solar plasma flows

    NASA Astrophysics Data System (ADS)

    Barkhatov, N. A.; Revunov, S. E.

    2014-11-01

    This research is devoted to the identification of plasma flows in the solar wind by the spectral characteristics of solar plasma flows in the magnetohydrodynamic range. To do this, wavelet skeleton patterns of solar wind parameters recorded in Earth orbit by patrol spacecraft are constructed, and their neural network classification, differentiated by frequency band, is carried out. This analysis covers the spectral features of solar plasma flows in the form of magnetic clouds (MC), corotating interaction regions (CIR), shock waves (Shocks), and high-speed streams from coronal holes (HSS). The proposed data processing and the original correlation-spectral method for processing information about solar wind flows can be used for further classification in the online monitoring of near space. This approach will allow geoeffective structures in the solar wind flow to be detected at an early stage, in order to predict global geomagnetic disturbances.

  6. Experiences in using DISCUS for visualizing human communication

    NASA Astrophysics Data System (ADS)

    Groehn, Matti; Nieminen, Marko; Haho, Paeivi; Smeds, Riitta

    2000-02-01

    In this paper, we present further improvements to the DISCUS software, which can be used to record and analyze the flow and contents of discussion in business process simulation sessions. The tool was initially introduced at the 'Visual Data Exploration and Analysis IV' conference. The initial features of the tool enabled the visualization of discussion flow in business process simulation sessions and the creation of SOM analyses. The improvements to the tool consist of additional visualization possibilities that enable quick on-line analyses and improved graphical statistics. We have also created the very first interface to audio data and implemented two ways to visualize it. We also outline additional possibilities to use the tool in other application areas: these include usability testing and the possibility to use the tool for capturing design rationale in a product development process. The data gathered with DISCUS may be used in other applications, and further work may be done with data mining techniques.

  7. Daily river flow prediction based on Two-Phase Constructive Fuzzy Systems Modeling: A case of hydrological - meteorological measurements asymmetry

    NASA Astrophysics Data System (ADS)

    Bou-Fakhreddine, Bassam; Mougharbel, Imad; Faye, Alain; Abou Chakra, Sara; Pollet, Yann

    2018-03-01

    Accurate daily river flow forecasts are essential in many applications of water resources such as hydropower operation, agricultural planning, and flood control. This paper presents a forecasting approach to deal with a newly addressed situation where hydrological data exist for a period longer than that of meteorological data (measurements asymmetry). In fact, one of the potential solutions to the measurements asymmetry issue is data re-sampling: it is a matter of considering either only the hydrological data or only the balanced part of the hydro-meteorological data set during the forecasting process. However, the main disadvantage is that we may lose potentially relevant information from the left-out data. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model that is implemented over the non-re-sampled data. The introduced modeling approach must be capable of exploiting the available data efficiently, with higher prediction efficiency relative to a Constructive Fuzzy model trained over the re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of rainfall and 24 years of river flow daily measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) are trained. Upon validation, the second model showed competitive performance and accuracy, with the ability to preserve higher day-to-day variability for 1, 3 and 6 days ahead. In fact, for the longest lead period, the C-FSM and TPC-FSM were able to explain respectively 84.6% and 86.5% of the actual river flow variation. Overall, the results indicate that the TPC-FSM model provides a better tool to capture extreme flows in the process of streamflow prediction.

  8. Inflated flows on Daedalia Planum (Mars)? Clues from a comparative analysis with the Payen volcanic complex (Argentina)

    NASA Astrophysics Data System (ADS)

    Giacomini, L.; Massironi, M.; Martellato, E.; Pasquarè, G.; Frigeri, A.; Cremonese, G.

    2009-05-01

    Inflation is an emplacement process of lava flows in which a thin visco-elastic layer, produced at an early stage, is later inflated by an underlying fluid core. The core remains hot and fluid for extended periods of time due to the thermal-shield effect of the surface visco-elastic crust. Plentiful and widespread morphological fingerprints of inflation, like tumuli and lava rises, are found on the Payen volcanic complex (Argentina), where pahoehoe lava flows extend over the relatively flat surface of the Pampean foreland and reach at least 180 km in length. The morphology of the Argentinean Payen flows was compared with lava flows on Daedalia Planum (Mars), using Thermal Emission Imaging System (THEMIS), Mars Orbiter Laser Altimeter (MOLA), Mars Orbiter Camera (MOC), and Mars Reconnaissance Orbiter (MRO)/High-Resolution Imaging Science Experiment (HiRISE) data. THEMIS images were used to map the main geological units of Daedalia Planum and determine their stratigraphic relationships. MOLA data were used to investigate the topographic surface over which the flows propagated and assess the thickness of lava flows. Finally, MOC and MRO/HiRISE images were used to identify inflation fingerprints and assess the cratering age of Daedalia Planum's youngest flow unit, which was found to predate the caldera formation on top of Arsia Mons. The identification of similar inflation features between the Daedalia Planum and Payen lava fields suggests that moderate and long-lasting effusion rates coupled with very efficient spreading processes could have occurred cyclically at the Arsia Mons volcano during its eruptive history. Consequently, the effusion rates and rheological properties of Daedalia lava flows, when calculated without taking the inflation process into account, can be overestimated. These findings raise some doubts about the effusion rates and lava rheological properties calculated for Martian flows and recommend that these be used with caution if applied to flows not checked with high-resolution images and potentially affected by inflation. Further HiRISE data acquisition will permit additional analysis of the flow surfaces and will allow more accurate estimates of effusion rates and rheological properties of the lava flows on Mars, particularly if these data are acquired under favourable illumination.

  9. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    PubMed

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. Obtaining a space-efficient design over 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented at each pixel. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals, with balanced optimization for signal-to-noise ratio and silicon area. This custom-made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low-resource implementation of the digital processor enables on-chip processing; and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  10. A novel data reduction technique for single slanted hot-wire measurements used to study incompressible compressor tip leakage flows

    NASA Astrophysics Data System (ADS)

    Berdanier, Reid A.; Key, Nicole L.

    2016-03-01

    The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which utilizes a look-up table method applied to a simulated triple-wire sensor with application to turbomachinery environments having subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5 % for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10 % of the computational processing time.

  11. A source-controlled data center network model.

    PubMed

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks by applying SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling, and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices restrict scalability and incur high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages to the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We designed the VS on the NetFPGA platform; the statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS.
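
    The vector-address idea can be illustrated with a toy forwarding loop (a generic sketch of source routing, not the SCDCN implementation: the VA is modeled as a queue of output-port numbers that each switch consumes in turn, so no hop needs a flow-table lookup).

        # Toy source routing: the packet carries its whole path as a vector of
        # output ports; each hop pops the next port instead of doing a table lookup.
        from collections import deque

        class VectorSwitch:
            def __init__(self, name, ports):
                self.name = name
                self.ports = ports            # port number -> neighbor switch or host

            def forward(self, packet):
                port = packet["va"].popleft() # consume the next element of the VA
                nxt = self.ports[port]
                label = nxt if isinstance(nxt, str) else nxt.name
                print(f"{self.name}: out port {port} -> {label}")
                return nxt

        # Three-switch path; the destination host is a plain string.
        s3 = VectorSwitch("s3", {1: "host-B"})
        s2 = VectorSwitch("s2", {4: s3})
        s1 = VectorSwitch("s1", {2: s2})

        packet = {"va": deque([2, 4, 1]), "payload": "hello"}
        node = s1
        while not isinstance(node, str):
            node = node.forward(packet)
        print("delivered to", node)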

  12. A source-controlled data center network model

    PubMed Central

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks by applying SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling, and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices restrict scalability and incur high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages to the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We designed the VS on the NetFPGA platform; the statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS. PMID:28328925

  13. Coupling of Processes and Data in PennState Integrated Hydrologic Modeling (PIHM) System

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2007-12-01

    Full physical coupling, "natural" numerical coupling, and parsimonious but accurate data coupling are needed to comprehensively and accurately capture the interaction between different components of a hydrologic continuum. Here we present a physically based, spatially distributed hydrologic model that incorporates all three coupling strategies. Physical coupling is performed for interception, snow melt, transpiration, overland flow, subsurface flow, river flow, macropore-based infiltration and stormflow, flow through and over hydraulic structures like weirs and dams, and evaporation from interception, the ground, and overland flow. All the physically coupled components are numerically coupled through the semi-discrete form of the ordinary differential equations that define each hydrologic process, using a finite-volume-based approach. The fully implicit solution methodology using the CVODE solver solves for all the state variables simultaneously at each adaptive time step, thus providing robustness, stability, and accuracy. Accurate data coupling is aided by the use of constrained unstructured meshes, a flexible data model, and the use of PIHMgis. The spatial adaptivity of the decomposed domain and the temporal adaptivity of the numerical solver facilitate capture of the varied spatio-temporal scales that are inherent in hydrologic process interactions. The implementation of the model has been performed on the meso-scale Little Juniata Watershed. Model results are validated by comparison of streamflow at multiple locations. We discuss some of the interesting hydrologic interactions between surface, subsurface, and atmosphere witnessed during the year-long simulation, such as a) the inverse relationship between evaporation from interception storage and transpiration, b) the relative influence of forcing (precipitation, temperature, and radiation) and source (soil moisture and overland flow) on evaporation, c) the influence of local topography on gaining, losing, or "flow-through" behavior of river-aquifer interactions, and d) the role of macropores in base flow during wetting and drying conditions. In addition to its use as a potential predictive and exploratory science tool, we present a test case for the application of the model in water management by mapping a water-table decline index for the whole watershed. Also discussed will be the efficient parallelization strategy of the model for high spatio-temporal resolution simulations.
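
    The numerical pattern described (all process equations assembled as one semi-discrete ODE system and advanced together by an implicit, adaptive solver) can be sketched with SciPy standing in for CVODE; the two-bucket model below, with overland and subsurface storages exchanging water, is an invented minimal example rather than PIHM's actual equations.

        import numpy as np
        from scipy.integrate import solve_ivp

        P = 1e-3        # precipitation rate (m/h), assumed constant here
        K_INF = 0.2     # infiltration coefficient (1/h)
        K_OVER = 0.5    # overland outflow coefficient (1/h)
        K_BASE = 0.05   # subsurface (baseflow) coefficient (1/h)

        def rhs(t, y):
            """Semi-discrete coupled storages: y = [overland, subsurface] (m)."""
            s_over, s_sub = y
            infil = K_INF * s_over                 # coupling term: overland -> subsurface
            d_over = P - infil - K_OVER * s_over
            d_sub = infil - K_BASE * s_sub
            return [d_over, d_sub]

        # Implicit BDF integration with adaptive steps, solving both states at once.
        sol = solve_ivp(rhs, (0.0, 240.0), [0.0, 0.01], method="BDF", max_step=1.0)
        print("final storages (m):", np.round(sol.y[:, -1], 5))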

  14. Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.

  15. An experimental study of entrainment and transport in the turbulent near wake of a circular cylinder

    NASA Technical Reports Server (NTRS)

    Cantwell, B.; Coles, D.

    1983-01-01

    Attention is given to an experimental investigation of transport processes in the near wake of a circular cylinder, for a Reynolds number of 140,000, in which an X-array of hot wire probes mounted on a pair of whirling arms was used for flow measurement. Rotation of the arms in a uniform flow applies a wide range of relative flow angles to these X-arrays, making them inherently self-calibrating in pitch. A phase signal synchronized with the vortex-shedding process allowed a sorting of the velocity data into 16 populations, each having essentially constant phase. An ensemble average for each population yielded a sequence of pictures of the instantaneous mean flow field in which the vortices are frozen, as they would be on a photograph. The measurements also yield nonsteady mean data for velocity, intermittency, vorticity, stress, and turbulent energy production, as a function of phase. Emphasis is given in a discussion of study results to the nonsteady mean flow, which emerges as a pattern of centers and saddles in a frame of reference that moves with the eddies. The kinematics of the vortex formation process are described in terms of the formation and evolution of saddle points between vortices in the first few diameters of the near wake.

  16. A Mobile Device for Measuring Regional Cerebral Circulation

    PubMed Central

    Howard, George; Griffith, David W.; Stump, David A.; Hinschelwood, Laura

    1980-01-01

    Immobility and costs of currently available regional cerebral blood flow (rCBF) equipment usually require having a single fixed blood flow lab, which cannot be used to study non-ambulatory patients, who are often the most interesting to study. After careful study of the information flow between the steps involved in the collection, analysis, and display of data, a new rCBF machine has been developed with a mobile satellite and a host processor. The satellite is equipped with a Z-80 microprocessor which controls the data collection, screen formatting, data display, and communications with the host. The host provides the processing power necessary for moderately complex curve fitting and data storage.

  17. Flow Velocity, Water Temperature, and Conductivity in Shark River Slough, Everglades National Park, Florida: August 2001-June 2002

    USGS Publications Warehouse

    Riscassi, Ami L.; Schaffranek, Raymond W.

    2003-01-01

    The data-collection effort described in this report is in support of the U.S. Geological Survey (USGS) Place-Based Studies project investigating 'Forcing Effects on Flow Structure in Vegetated Wetlands of the Everglades.' Data collected at four locations in Shark River Slough, Everglades National Park, during the 2001-2002 wet season are documented in the report and methods used to process the data are described. Daily mean flow velocities, water temperatures, and specific conductance values are presented in the appendices of the report. The quality-checked and edited data have been compiled and stored on the USGS South Florida Information Access (SOFIA) website http://sofia.usgs.gov.

  18. Experimental measurement of oil-water two-phase flow by data fusion of electrical tomography sensors and venturi tube

    NASA Astrophysics Data System (ADS)

    Liu, Yinyan; Deng, Yuchi; Zhang, Maomao; Yu, Peining; Li, Yi

    2017-09-01

    Oil-water two-phase flows are commonly found in the production processes of the petroleum industry. Accurate online measurement of flow rates is crucial to ensure the safety and efficiency of oil exploration and production. A research team from Tsinghua University has developed an experimental apparatus for multiphase flow measurement based on an electrical capacitance tomography (ECT) sensor, an electrical resistance tomography (ERT) sensor, and a venturi tube. This work presents the phase fraction and flow rate measurements of oil-water two-phase flows based on the developed apparatus. Full-range phase fraction can be obtained by the combination of the ECT sensor and the ERT sensor. By data fusion of differential pressures measured by venturi tube and the phase fraction, the total flow rate and single-phase flow rate can be calculated. Dynamic experiments were conducted on the multiphase flow loop in horizontal and vertical pipelines and at various flow rates.
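
    The fusion step can be illustrated with the standard venturi relation under common simplifying assumptions (a homogeneous mixture density built from the tomography-derived water fraction, a discharge coefficient of 0.98, and invented geometry; this is a generic sketch, not the apparatus' actual calibration):

        import math

        def mixture_density(alpha_water, rho_water=1000.0, rho_oil=850.0):
            """Homogeneous mixture density from the water volume fraction."""
            return alpha_water * rho_water + (1.0 - alpha_water) * rho_oil

        def venturi_volume_flow(dp, rho, d_throat, d_pipe, cd=0.98):
            """Volumetric flow (m^3/s) from venturi differential pressure dp (Pa):
            Q = Cd * A_throat * sqrt(2*dp / (rho * (1 - beta**4)))."""
            beta = d_throat / d_pipe
            a_throat = math.pi * d_throat**2 / 4.0
            return cd * a_throat * math.sqrt(2.0 * dp / (rho * (1.0 - beta**4)))

        alpha = 0.6                      # water fraction from ECT/ERT (assumed value)
        rho_mix = mixture_density(alpha)
        q_total = venturi_volume_flow(dp=12e3, rho=rho_mix, d_throat=0.025, d_pipe=0.05)
        print(f"total: {q_total*3600:.2f} m^3/h, water: {q_total*alpha*3600:.2f} m^3/h")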

  19. Analysis of Particle Image Velocimetry (PIV) Data for Application to Subsonic Jet Noise Studies

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    Global velocimetry measurements were taken using Particle Image Velocimetry (PIV) in the subsonic flow exiting a 1 inch circular nozzle in an attempt to better understand the turbulence characteristics of its shear layer region. This report presents the results of the PIV analysis and data reduction portions of the test and details the processing that was done. Custom data analysis and data validation algorithms were developed and applied to a data ensemble consisting of over 750 PIV 70 mm photographs taken in the 0.85 Mach flow facility. Results are presented detailing spatial characteristics of the flow including ensemble mean and standard deviation, turbulence intensities and Reynolds stress levels, and 2-point spatial correlations.
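
    The ensemble statistics named above follow directly from the instantaneous velocity fields. A minimal sketch with synthetic data (u and v stacked as (realization, y, x) arrays; the field size and values are invented):

        import numpy as np

        rng = np.random.default_rng(2)
        # Synthetic ensemble: 750 PIV realizations of a 64 x 64 vector field.
        u = 100.0 + 5.0 * rng.standard_normal((750, 64, 64))   # streamwise (m/s)
        v = 2.0 * rng.standard_normal((750, 64, 64))           # cross-stream (m/s)

        u_mean, v_mean = u.mean(axis=0), v.mean(axis=0)
        u_p, v_p = u - u_mean, v - v_mean                      # fluctuations

        u_rms = u_p.std(axis=0, ddof=1)
        turb_intensity = u_rms / u_mean                        # streamwise intensity
        reynolds_stress = -(u_p * v_p).mean(axis=0)            # -<u'v'> per point

        print(f"mean TI: {turb_intensity.mean():.3f}")
        print(f"mean -<u'v'>: {reynolds_stress.mean():.3f} m^2/s^2")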

  20. Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian

    2017-04-01

    The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge. Sentinel-2 satellites, part of the Copernicus Earth Observation program, are intended for use in agriculture, forestry, and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task can rely on a different software technology (such as Grass GIS and ESA's SNAP) to process the input data. One important feature of the BIGEARTH platform is this possibility of interconnecting and integrating the various well-known software technologies within the same processing flow. All this integration is transparent from the user perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one, the software platform runs as a standalone application inside a virtual machine. In this case the computational resources are obviously limited, but this configuration gives an overview of the functionalities of the software platform, along with the possibility to define the flow of processing and later execute it on a more complex infrastructure. The most complex and robust configuration is based on cloud computing and allows installation on a private or public cloud infrastructure. In this configuration, the processing resources can be dynamically allocated and the execution time can be considerably improved, depending on the available virtual resources and the number of parallelizable sequences in the processing flow. The presentation highlights the benefits and issues of the proposed solution by analyzing some significant experimental use cases. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Constantin Nandra, Dorian Gorgan: "Defining Earth data batch processing tasks by means of a flexible workflow description language", ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-4, 59-66, (2016). [3] Victor Bacu, Teodor Stefanut, Dorian Gorgan: "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE Press, pp. 444-454, (2015).

  1. Structured Analysis and the Data Flow Diagram: Tools for Library Analysis.

    ERIC Educational Resources Information Center

    Carlson, David H.

    1986-01-01

    This article discusses tools developed to aid the systems analysis process (program evaluation and review technique, Gantt charts, organizational charts, decision tables, flowcharts, hierarchy plus input-process-output). Similarities and differences among techniques, library applications of analysis, structured systems analysis, and the data flow…

  2. Statistical validation and an empirical model of hydrogen production enhancement found by utilizing passive flow disturbance in the steam-reformation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Paul A.; Liao, Chang-hsien

    2007-11-15

    A passive flow disturbance has been proven to enhance the conversion of fuel in a methanol-steam reformer. This study presents a statistical validation of the experiment based on a standard 2^k factorial experiment design and the resulting empirical model of the enhanced hydrogen-producing process. A factorial experiment design was used to statistically analyze the effects and interactions of various input factors in the experiment. Three input factors, including the number of flow disturbers, catalyst size, and reactant flow rate, were investigated for their effects on the fuel conversion in the steam-reformation process. Based on the experimental results, an empirical model was developed and further evaluated with an uncertainty analysis and interior point data. (author)
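
    As a reminder of the mechanics of a 2^k design, each main effect is the difference between the mean response at a factor's high and low coded levels. The sketch below illustrates this with made-up conversion values, not the authors' measurements:

        import itertools
        import numpy as np

        # coded levels (-1/+1) for: number of flow disturbers, catalyst size, flow rate
        factors = ["disturbers", "catalyst_size", "flow_rate"]
        design = np.array(list(itertools.product([-1, 1], repeat=3)))   # full 2^3 design

        # hypothetical fuel-conversion responses (%) for the eight runs, in design order
        y = np.array([71.0, 78.5, 69.8, 77.9, 73.2, 82.1, 72.5, 81.0])

        # main effect of a factor = mean response at +1 minus mean response at -1
        for j, name in enumerate(factors):
            effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
            print(f"main effect of {name}: {effect:+.2f} % conversion")

        # a two-factor interaction uses the elementwise product of the coded columns
        ac = design[:, 0] * design[:, 2]
        print(f"disturbers x flow_rate interaction: "
              f"{y[ac == 1].mean() - y[ac == -1].mean():+.2f} % conversion")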

  3. Simulation of patient flow in multiple healthcare units using process and data mining techniques for model identification.

    PubMed

    Kovalchuk, Sergey V; Funkner, Anastasia A; Metsker, Oleg G; Yakovlev, Aleksey N

    2018-06-01

    An approach to building a hybrid simulation of patient flow is introduced with a combination of data-driven methods for automation of model identification. The approach is described with a conceptual framework and basic methods for the combination of different techniques. The implementation of the proposed approach for simulation of the acute coronary syndrome (ACS) was developed and used in an experimental study. A combination of data, text, and process mining techniques and machine learning approaches for the analysis of electronic health records (EHRs), with discrete-event simulation (DES) and queueing theory for the simulation of patient flow, was proposed. The performed analysis of EHRs for ACS patients enabled identification of several classes of clinical pathways (CPs), which were used to implement a more realistic simulation of the patient flow. The developed solution was implemented using Python libraries (SimPy, SciPy, and others). The proposed approach enables a more realistic and detailed simulation of the patient flow within a group of related departments. An experimental study shows an improved simulation of patient length of stay for the ACS patient flow obtained from EHRs in Almazov National Medical Research Centre in Saint Petersburg, Russia. The proposed approach, methods, and solutions provide a conceptual, methodological, and programming framework for the implementation of a simulation of complex and diverse scenarios within a flow of patients for different purposes: decision making, training, management optimization, and others. Copyright © 2018 Elsevier Inc. All rights reserved.
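
    Since the solution was built on SimPy, the general pattern is worth sketching. The following minimal discrete-event model of patients competing for a shared resource uses hypothetical arrival and length-of-stay parameters and is not the ACS model itself:

        import random
        import simpy

        RANDOM_SEED, N_BEDS, SIM_HOURS = 42, 3, 24 * 30

        def patient(env, beds, times_in_system):
            arrival = env.now
            with beds.request() as req:                      # queue for a free bed
                yield req                                    # wait until one is available
                yield env.timeout(random.expovariate(1 / 48.0))  # hypothetical 48 h mean stay
            times_in_system.append(env.now - arrival)        # waiting time + length of stay

        def arrivals(env, beds, times_in_system):
            while True:
                yield env.timeout(random.expovariate(1 / 6.0))   # ~1 arrival per 6 h
                env.process(patient(env, beds, times_in_system))

        random.seed(RANDOM_SEED)
        env = simpy.Environment()
        beds = simpy.Resource(env, capacity=N_BEDS)
        times_in_system = []
        env.process(arrivals(env, beds, times_in_system))
        env.run(until=SIM_HOURS)
        print(f"{len(times_in_system)} discharges, "
              f"mean time in system {sum(times_in_system) / len(times_in_system):.1f} h")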

  4. Influence of georeference for saturated excess overland flow modelling using 3D volumetric soft geo-objects

    NASA Astrophysics Data System (ADS)

    Izham, Mohamad Yusoff; Muhamad Uznir, Ujang; Alias, Abdul Rahman; Ayob, Katimon; Wan Ruslan, Ismail

    2011-04-01

    Existing 2D data structures are often insufficient for analysing the dynamics of saturation excess overland flow (SEOF) within a basin. Moreover, all stream networks and soil surface structures in GIS must be preserved within appropriate projection-plane fitting techniques, known as georeferencing. Including the 3D volumetric structure of the current soft geo-objects simulation model offers a substantial step towards representing the 3D soft geo-objects of SEOF dynamically within a basin, by visualising saturated flow and overland flow volume. This research attempts to visualise the influence of the georeference system on the dynamics of overland flow coverage and the total overland flow volume generated by the SEOF process using the VSG data structure. The data structure is driven by Green-Ampt methods and the Topographic Wetness Index (TWI). VSGs are analysed by focusing on the spatial object preservation techniques of the conformal-based Malaysian Rectified Skew Orthomorphic (MRSO) and the equidistant-based Cassini-Soldner projection planes under the existing geodetic Malaysian Revised Triangulation 1948 (MRT48) datum and the newly implemented Geocentric Datum for Malaysia (GDM2000). The simulated results visualise the deformation of SEOF coverage under the different georeference systems and their projection planes, which yield dissimilar computations of SEOF areas and overland flow volumes. The integration of georeferencing, 3D GIS and the saturation excess mechanism provides unifying evidence towards successful landslide and flood disaster management by envisioning the streamflow generating process (mainly SEOF) in a 3D environment.

  5. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties, for near-equilibrium flows (DREAM-I), or the instantaneous particle data output by the original unsteady sampling of PDSC, for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases, with significantly reduced run-times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by a factor of 2.5-3.3, based on the limited number of cases in the present study.
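
    The reported 2.5-3.3x reduction from 10 ensembled runs is consistent with the 1/sqrt(N) scatter decay of averaging independent samples. A toy illustration with synthetic (non-DSMC) data:

        import numpy as np

        rng = np.random.default_rng(1)
        n_runs, n_cells = 10, 1000

        # each "run" samples a noisy macroscopic field around a known true value
        true_value = 300.0                        # arbitrary units
        runs = true_value + 20.0 * rng.standard_normal((n_runs, n_cells))

        scatter_single = runs[0].std()                    # scatter of one raw run
        scatter_ensemble = runs.mean(axis=0).std()        # scatter after 10-run averaging

        print(f"single-run scatter:  {scatter_single:.2f}")
        print(f"10-run ensemble:     {scatter_ensemble:.2f}")
        print(f"reduction factor:    {scatter_single / scatter_ensemble:.2f}")  # ~ sqrt(10)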

  6. Multiple runoff processes and multiple thresholds control agricultural runoff generation

    NASA Astrophysics Data System (ADS)

    Saffarpour, Shabnam; Western, Andrew W.; Adams, Russell; McDonnell, Jeffrey J.

    2016-11-01

    Thresholds and hydrologic connectivity associated with runoff processes are critical concepts for understanding catchment hydrologic response at the event timescale. To date, most attention has focused on single runoff response types, and the role of multiple thresholds and flow path connectivities has not been made explicit. Here we first summarise existing knowledge on the interplay between thresholds, connectivity and runoff processes at the hillslope-small catchment scale into a single figure and use it to examine how runoff response and the catchment threshold response to rainfall affect a suite of runoff generation mechanisms in a small agricultural catchment. A 1.37 ha catchment in the Lang Lang River catchment, Victoria, Australia, was instrumented and hourly data of rainfall, runoff, shallow groundwater level and isotope water samples were collected. The rainfall, runoff and antecedent soil moisture data together with water levels at several shallow piezometers are used to identify runoff processes in the study site. We use isotope and major ion results to further support the findings of the hydrometric data. We analyse 60 rainfall events that produced 38 runoff events over two runoff seasons. Our results show that the catchment hydrologic response was typically controlled by the Antecedent Soil Moisture Index and rainfall characteristics. There was a strong seasonal effect in the antecedent moisture conditions that led to marked seasonal-scale changes in runoff response. Analysis of shallow well data revealed that streamflows early in the runoff season were dominated primarily by saturation excess overland flow from the riparian area. As the runoff season progressed, the catchment soil water storage increased and the hillslopes connected to the riparian area. The hillslopes transferred a significant amount of water to the riparian zone during and following events. Then, during a particularly wet period, this connectivity to the riparian zone, and ultimately to the stream, persisted between events for a period of 1 month. These findings are supported by isotope results which showed the dominance of pre-event water, together with significant contributions of event water early (rising limb and peak) in the event hydrograph. Based on a combination of various hydrometric analyses and some isotope and major ion data, we conclude that event runoff at this site is typically a combination of subsurface event flow and saturation excess overland flow. However, during high intensity rainfall events, flashy catchment flow was observed even though the soil moisture threshold for activation of subsurface flow was not exceeded. We hypothesise that this was due to the activation of infiltration excess overland flow and/or fast lateral flow through preferential pathways on the hillslope and saturation overland flow from the riparian zone.

  7. A Holistic Framework for Environmental Flows Determination in Hydropower Contexts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A; Bevelhimer, Mark S

    2013-05-01

    Among the ecological science community, the consensus view is that the natural flow regime sustains the ecological integrity of river systems. This prevailing viewpoint by many environmental stakeholders has progressively led to increased pressure on hydropower dam owners to change plant operations to affect downstream river flows with the intention of providing better conditions for aquatic biological communities. Identifying the necessary magnitude, frequency, duration, timing, or rate of change of stream flows to meet ecological needs in a hydropower context is challenging because the ecological responses to changes in flows may not be fully known, there are usually a multitude of competing users of flow, and implementing environmental flows usually comes at a price to energy production. Realistically, hydropower managers must develop a reduced set of goals that provide the most benefit to the identified ecological needs. As a part of the Department of Energy (DOE) Water Power Program, the Instream Flow Project (IFP) was carried out by Oak Ridge National Laboratory (ORNL), Pacific Northwest National Laboratory (PNNL), and Argonne National Laboratory (ANL) as an attempt to develop tools aimed at defining environmental flow needs for hydropower operations. The application of these tools ranges from national to site-specific scales; thus, the utility of each tool will depend on various phases of the environmental flow process. Given the complexity and sheer volume of applications used to determine environmentally acceptable flows for hydropower, a framework is needed to organize efforts into a staged process dependent upon spatial, temporal, and functional attributes. By far, the predominant domain for determining environmental flows related to hydropower is within the Federal Energy Regulatory Commission (FERC) relicensing process. This process can take multiple years and can be very expensive depending on the scale of each hydropower project. The utility of such a framework is that it can expedite the environmental flow process by 1) organizing data and applications to identify predictable relationships between flows and ecology, and 2) suggesting when and where tools should be used in the environmental flow process. In addition to regulatory procedures, a framework should also provide the coordination for a comprehensive research agenda to guide the science of environmental flows. This research program has further-reaching benefits than just environmental flow determination by providing modeling applications, data, and geospatial layers to inform potential hydropower development. We address several objectives within this document that highlight the limitations of existing environmental flow paradigms and their applications to hydropower while presenting a new framework catered towards hydropower needs. Herein, we address the following objectives: 1) Provide a brief overview of the Natural Flow Regime paradigm and existing environmental flow frameworks that have been used to determine ecologically sensitive stream flows for hydropower operations. 2) Describe a new conceptual framework to aid in determining flows needed to meet ecological objectives with regard to hydropower operations. The framework is centralized around determining predictable relationships between flow and ecological responses. 3) Provide evidence of how efforts from ORNL, PNNL, and ANL have filled some of the gaps in this broader framework, and suggest how the framework can be used to set the stage for a research agenda for environmental flow.

  8. Seismic depth imaging of sequence boundaries beneath the New Jersey shelf

    NASA Astrophysics Data System (ADS)

    Riedel, M.; Reiche, S.; Aßhoff, K.; Buske, S.

    2018-06-01

    Numerical modelling of fluid flow and transport processes relies on a well-constrained geological model, which is usually provided by seismic reflection surveys. In the New Jersey shelf area, a large number of 2D seismic profiles provide an extensive database for constructing a reliable geological model. However, for the purpose of modelling groundwater flow, the seismic data need to be depth-converted, which is usually accomplished using complementary data from borehole logs. Because of the limited availability of such data in the New Jersey shelf, we propose a two-stage processing strategy with particular emphasis on reflection tomography and pre-stack depth imaging. We apply this workflow to a seismic section crossing the entire New Jersey shelf. Owing to the tomography-based velocity modelling, the processing flow does not depend on the availability of borehole logging data. Nonetheless, we validate our results by comparing the migrated depths of selected geological horizons to borehole core data from the IODP Expedition 313 drill sites, located at three positions along our seismic line. The comparison shows that, in the top 450 m of the migrated section, most of the selected reflectors were positioned with an accuracy close to the seismic resolution limit (≈ 4 m) of the data. For deeper layers the accuracy still remains within one seismic wavelength for the majority of the tested horizons. These results demonstrate that the processed seismic data provide a reliable basis for constructing a hydrogeological model. Furthermore, the proposed workflow can be applied to other seismic profiles in the New Jersey shelf, which will lead to an even better constrained model.

  9. Evaluation of process excellence tools in improving donor flow management in a tertiary care hospital in South India

    PubMed Central

    Venugopal, Divya; Rafi, Aboobacker Mohamed; Innah, Susheela Jacob; Puthayath, Bibin T.

    2017-01-01

    BACKGROUND: Process Excellence is a value-based approach that focuses on standardizing work processes by eliminating non-value-added processes, identifying process-improvement methodologies, and maximizing the capacity and expertise of the staff. AIM AND OBJECTIVES: To evaluate the utility of Process Excellence tools in improving donor flow management in a tertiary care hospital by studying the current state of donor movement within the blood bank and providing recommendations for eliminating wait times and improving the process and workflow. MATERIALS AND METHODS: The work was done in two phases. The first phase comprised on-site observations with the help of an expert trained in Process Excellence methodology, who observed and documented various aspects of donor flow, donor turnaround time, total staff details, and operator process flow. The second phase comprised the constitution of a team to analyse the data collected. The analysed data, along with the recommendations, were presented before an expert hospital committee and the management. RESULTS: Our analysis highlighted our strengths and identified potential problems. Donor wait time was reduced by 50% after lean implementation owing to better donor management and reorganization of the infrastructure of the donor area. Receptionist tracking showed that the staff spent 62% of the total time walking and 22% on other non-value-added activities. Defining duties for each staff member reduced the time they spent on non-value-added activities. Implementation of a token system, generation of a unique identification code for donors, and bar-code labeling of the tubes and bags are among the other recommendations. CONCLUSION: Process Excellence is not a programme; it is a culture that transforms an organization and improves its quality and efficiency through new attitudes, elimination of waste, and reduction in costs. PMID:28970681

  10. Evaluation of process excellence tools in improving donor flow management in a tertiary care hospital in South India.

    PubMed

    Venugopal, Divya; Rafi, Aboobacker Mohamed; Innah, Susheela Jacob; Puthayath, Bibin T

    2017-01-01

    Process Excellence is a value-based approach that focuses on standardizing work processes by eliminating non-value-added processes, identifying process-improvement methodologies, and maximizing the capacity and expertise of the staff. The aim was to evaluate the utility of Process Excellence tools in improving donor flow management in a tertiary care hospital by studying the current state of donor movement within the blood bank and providing recommendations for eliminating wait times and improving the process and workflow. The work was done in two phases. The first phase comprised on-site observations with the help of an expert trained in Process Excellence methodology, who observed and documented various aspects of donor flow, donor turnaround time, total staff details, and operator process flow. The second phase comprised the constitution of a team to analyse the data collected. The analysed data, along with the recommendations, were presented before an expert hospital committee and the management. Our analysis highlighted our strengths and identified potential problems. Donor wait time was reduced by 50% after lean implementation owing to better donor management and reorganization of the infrastructure of the donor area. Receptionist tracking showed that the staff spent 62% of the total time walking and 22% on other non-value-added activities. Defining duties for each staff member reduced the time they spent on non-value-added activities. Implementation of a token system, generation of a unique identification code for donors, and bar-code labeling of the tubes and bags are among the other recommendations. Process Excellence is not a programme; it is a culture that transforms an organization and improves its quality and efficiency through new attitudes, elimination of waste, and reduction in costs.

  11. Computing Optic Flow with ArduEye Vision Sensor

    DTIC Science & Technology

    2013-01-01

    This report presents an optical flow processing algorithm, implemented with an ArduEye vision chip (a Stonyman vision chip on a breakout board connected to an Arduino Mega), that can be applied to the flight control of other robotic platforms. There is a significant need for small, light, less power-hungry sensors and sensory data processing algorithms in order to control such platforms. Subject terms: optical flow, ArduEye, vision-based.

  12. Analyzing Vehicle Operator Deviations

    DTIC Science & Technology

    2008-07-01

    This report documents procedures for investigating vehicle operator deviations (VODs): investigators use the accompanying flow charts (D4-D14) and Data Reporting Form (D15-D18) to document the results of the investigation and, to establish what happened, use the Entry Level Flow Chart (D3) to identify the relevant mental processes that were involved in the VOD. (Authors: Scarborough A, Bailey L, Pounds J.)

  13. Dynamic stall reattachment revisited

    NASA Astrophysics Data System (ADS)

    Mulleners, Karen

    2017-11-01

    Dynamic stall on pitching airfoils is an important practical problem that affects, for example, rotary-wing aircraft and wind turbines. It also comprises a number of interesting fundamental fluid dynamical phenomena, such as unsteady flow separation, vortex formation and shedding, unsteady flow reattachment, and dynamic hysteresis. Following up on past efforts focussing on the separation development, we revisited the flow reattachment or stall recovery process. Experimental time-resolved velocity field and surface pressure data for a two-dimensional sinusoidally pitching airfoil at various reduced frequencies were analysed using different Eulerian, Lagrangian, and modal decomposition methods. This complementary analysis resulted in the identification of the chain of events that play a role in the flow reattachment process, a detailed description of each event's role, and a characterisation of the individual events in terms of their governing time-scales and flow features.

  14. Direct match data flow machine apparatus and process for data driven computing

    DOEpatents

    Davidson, G.S.; Grafe, V.G.

    1997-08-12

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.
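
    A software analogue makes the firing rule concrete. The sketch below is an illustrative model of the matching logic, not the patented hardware: parameter, opcode, and target entries share a common address, and an instruction fires only when the tag shows both parameters present:

        from dataclasses import dataclass, field

        @dataclass
        class DataFlowNode:
            opcode_mem: dict                    # address -> operation
            target_mem: dict                    # address -> downstream output address
            param_mem: dict = field(default_factory=dict)   # address -> {'A': x, 'B': y}
            output_fifo: list = field(default_factory=list)

            def store(self, addr, slot, value):
                """Store one parameter; fire when both A and B are present (R VALID)."""
                params = self.param_mem.setdefault(addr, {})
                params[slot] = value
                if "A" in params and "B" in params:          # tag memory: both bits set
                    a, b = params.pop("A"), params.pop("B")
                    result = {"add": a + b, "mul": a * b}[self.opcode_mem[addr]]
                    self.output_fifo.append((self.target_mem[addr], result))

        node = DataFlowNode(opcode_mem={0: "add", 1: "mul"}, target_mem={0: 1, 1: 99})
        node.store(0, "A", 2.0)       # nothing fires: parameter B is still missing
        node.store(0, "B", 3.0)       # both parameters present -> instruction 0 fires
        print(node.output_fifo)       # [(1, 5.0)]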

  15. Direct match data flow machine apparatus and process for data driven computing

    DOEpatents

    Davidson, George S.; Grafe, Victor Gerald

    1997-01-01

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.

  16. Modeling erosion and sedimentation coupled with hydrological and overland flow processes at the watershed scale

    NASA Astrophysics Data System (ADS)

    Kim, Jongho; Ivanov, Valeriy Y.; Katopodes, Nikolaos D.

    2013-09-01

    A novel two-dimensional, physically based model of soil erosion and sediment transport coupled to models of hydrological and overland flow processes has been developed. The Hairsine-Rose formulation of erosion and deposition processes is used to account for size-selective sediment transport and to differentiate the bed material into original and deposited soil layers. The formulation is integrated within the framework of the hydrologic and hydrodynamic model tRIBS-OFM (Triangulated irregular network-based Real-time Integrated Basin Simulator-Overland Flow Model). The integrated model explicitly couples the hydrodynamic formulation with the advection-dominated transport equations for sediment of multiple particle sizes. To solve the system of equations, including both the Saint-Venant and the Hairsine-Rose equations, the finite volume method is employed, based on Roe's approximate Riemann solver on an unstructured grid. The formulation yields the space-time dynamics of flow, erosion, and sediment transport at fine scale. The integrated model has been successfully verified with analytical solutions and empirical data for two benchmark cases. Sensitivity tests with respect to grid resolution and the number of particle sizes used have been carried out. The model has been validated at the catchment scale for the Lucky Hills watershed located in southeastern Arizona, USA, using 10 events for which catchment-scale streamflow and sediment yield data were available. Since the model is based on physical laws and explicitly uses multiple types of watershed information, satisfactory results were obtained. The spatial output has been analyzed and the driving role of topography in erosion processes discussed. The integrated formulation is expected to reduce uncertainties associated with typical parameterizations of flow and erosion processes, promising more credible modeling of earth-surface processes.

  17. TRACC: An open source software for processing sap flux data from thermal dissipation probes

    DOE PAGES

    Ward, Eric J.; Domec, Jean-Christophe; King, John; ...

    2017-05-02

    Thermal dissipation probes (TDPs) have become a widely used method of monitoring plant water use in recent years. The use of TDPs requires calibration to a theoretical zero-flow value (ΔT0), usually based on the assumption that at least some nighttime measurements represent zero-flow conditions. Fully automating the processing of data from TDPs is made exceedingly difficult by errors arising from many sources. However, it is desirable to minimize the variation arising from different researchers' processing of data, and thus a common platform for processing data, including editing raw data and determining ΔT0, is useful and increases the transparency and replicability of TDP-based research. Here, we present the TDP data processing software TRACC (Thermal dissipation Review Assessment Cleaning and Conversion) to serve this purpose. TRACC is an open-source software package written in the language R, using graphical presentation of data and on-screen prompts with yes/no or simple numerical responses. It allows the user to select several important options, such as calibration coefficients and the exclusion of nights when the vapor pressure deficit does not approach zero. Although it is designed for users with no coding experience, the outputs of TRACC could be easily incorporated into more complex models or software.
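
    The core conversion from probe signal to sap flux is compact. The sketch below uses the widely cited Granier calibration coefficients as an assumed default (TRACC lets the user choose the coefficients) and a synthetic two-day ΔT record; TRACC itself is written in R, whereas this illustration is in Python:

        import numpy as np

        # DT: temperature difference (deg C) between the heated and reference probes;
        # a synthetic two-day record with suppressed daytime DT (i.e. daytime flow)
        rng = np.random.default_rng(2)
        hours = np.arange(48)
        DT = 10.0 - 3.0 * np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0.0, None)
        DT += 0.05 * rng.standard_normal(hours.size)

        # zero-flow baseline: here simply the period maximum, standing in for the
        # nighttime-based determination of DT0 that TDP processing performs
        DT0 = DT.max()
        K = (DT0 - DT) / DT                       # dimensionless flow index
        sap_flux = 118.99e-6 * K ** 1.231         # assumed Granier coefficients, m3 m-2 s-1

        print(f"DT0 = {DT0:.2f} deg C, midday sap flux = {sap_flux[12]:.2e} m3 m-2 s-1")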

  18. TRACC: An open source software for processing sap flux data from thermal dissipation probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Eric J.; Domec, Jean-Christophe; King, John

    Thermal dissipation probes (TDPs) have become a widely used method of monitoring plant water use in recent years. The use of TDPs requires calibration to a theoretical zero-flow value (ΔT0), usually based on the assumption that at least some nighttime measurements represent zero-flow conditions. Fully automating the processing of data from TDPs is made exceedingly difficult by errors arising from many sources. However, it is desirable to minimize the variation arising from different researchers' processing of data, and thus a common platform for processing data, including editing raw data and determining ΔT0, is useful and increases the transparency and replicability of TDP-based research. Here, we present the TDP data processing software TRACC (Thermal dissipation Review Assessment Cleaning and Conversion) to serve this purpose. TRACC is an open-source software package written in the language R, using graphical presentation of data and on-screen prompts with yes/no or simple numerical responses. It allows the user to select several important options, such as calibration coefficients and the exclusion of nights when the vapor pressure deficit does not approach zero. Although it is designed for users with no coding experience, the outputs of TRACC could be easily incorporated into more complex models or software.

  19. Interfacial Area Development in Two-Phase Fluid Flow: Transient vs. Quasi-Static Flow Conditions

    NASA Astrophysics Data System (ADS)

    Meisenheimer, D. E.; Wildenschild, D.

    2017-12-01

    Fluid-fluid interfaces are important in multiphase flow systems in the environment (e.g. groundwater remediation, geologic CO2 sequestration) and industry (e.g. air stripping, fuel cells). Interfacial area controls mass transfer, and therefore reaction efficiency, between the different phases in these systems, but interfaces also influence fluid flow processes. There is a need to better understand the relationship between interfacial area and fluid flow processes so that more robust theories and models can be built for engineers and policy makers to improve the efficacy of many multiphase flow systems important to society. Two-phase flow experiments were performed in glass bead packs under transient and quasi-static flow conditions. Specific interfacial area was calculated from 3D images of the porous media obtained using the fast x-ray microtomography capability at the Advanced Photon Source. We present data suggesting a direct relationship between the transient nature of the fluid-flow experiment (fewer equilibrium points) and increased specific interfacial area. The effect of flow condition on the Euler characteristic (a representative measure of fluid topology) will also be presented.

  20. Go With the Flow, on Jupiter and Snow. Coherence from Model-Free Video Data Without Trajectories

    NASA Astrophysics Data System (ADS)

    AlMomani, Abd AlRahman R.; Bollt, Erik

    2018-06-01

    Viewing a data set such as the clouds of Jupiter, coherence is readily apparent to human observers; the Great Red Spot is the prime example, but other great storms and persistent structures stand out as well. There are now many different definitions and perspectives mathematically describing coherent structures, but we take an image processing perspective here. We describe the inference of coherent sets of a fluidic system directly from image data, without attempting to first model the underlying flow fields; the approach is related to a concept in image processing called motion tracking. In contrast to standard spectral methods for image processing, which are generally related to a symmetric affinity matrix and hence to standard spectral graph theory, we need a non-symmetric affinity, which arises naturally from the underlying arrow of time. We develop an anisotropic, directed diffusion operator corresponding to flow on a directed graph, built from a directed affinity matrix designed with coherence in mind, and the corresponding spectral graph theory from the graph Laplacian. Our methodology is not offered as more accurate than traditional methods of finding coherent sets; rather, our approach works with alternative kinds of data sets, in the absence of a vector field. Our examples include partitioning the weather and cloud structures of Jupiter and a lake effect snow event local to Potsdam, NY, on Earth, as well as the benchmark double-gyre test system.
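
    As an idea-level sketch of the directed-graph machinery (a toy system, not the authors' operator), the following builds a non-symmetric affinity for two nearly invariant sets, forms the random-walk graph Laplacian, and partitions on the leading nontrivial eigenvector:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 60                                     # pixels of a 1D toy "image"

        # Directed affinity: W[i, j] = evidence that material at pixel i moves to
        # pixel j between frames; the arrow of time makes W non-symmetric.
        W = np.zeros((n, n))
        half = n // 2
        for i in range(n):
            if i < half:
                W[i, (i + 1) % half] = 1.0                   # coherent set A: internal cycle
            else:
                W[i, half + (i + 1 - half) % half] = 1.0     # coherent set B: internal cycle
        W += 0.01 * rng.random((n, n))             # weak noise coupling between the sets

        P = W / W.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
        L = np.eye(n) - P                          # random-walk Laplacian of the directed graph

        vals, vecs = np.linalg.eig(L)              # non-symmetric, so eig (not eigh)
        order = np.argsort(vals.real)
        phi = vecs[:, order[1]].real               # leading nontrivial eigenvector
        labels = (phi > np.median(phi)).astype(int)
        print("partition sizes:", np.bincount(labels))   # two coherent sets of ~30 pixels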

  1. As-built design specification for CAMS Development Dot Data System (CDDDS)

    NASA Technical Reports Server (NTRS)

    Wehmanen, O. A.

    1979-01-01

    The CAMS development dot data system is described. Listings and flow charts of the eight programs used to maintain the data base and the 15 subroutines used in FORTRAN programs to process the data are presented.

  2. User guide for MODPATH version 6 - A particle-tracking model for MODFLOW

    USGS Publications Warehouse

    Pollock, David W.

    2012-01-01

    MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
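
    The semianalytical scheme is Pollock's method: each velocity component varies linearly between opposite cell faces, so the within-cell trajectory is exponential in time and the exit time has a closed form. A one-cell illustration with hypothetical face velocities (in Python rather than the program's FORTRAN):

        import math

        def pollock_exit_time(xp, v1, v2, x1, x2):
            """Exit time along one axis in Pollock's semianalytical scheme.
            The face-normal velocity varies linearly from v1 at face x1 to v2 at x2,
            so dx/dt = v1 + A*(x - x1) integrates to an exponential in time."""
            A = (v2 - v1) / (x2 - x1)              # in-cell velocity gradient
            vp = v1 + A * (xp - x1)                # velocity at the particle position
            v_exit = v2 if vp > 0 else v1          # velocity at the face being approached
            if vp == 0 or v_exit * vp <= 0:
                return math.inf                    # stagnation: no exit along this axis
            if abs(A) < 1e-12:
                x_exit = x2 if vp > 0 else x1
                return (x_exit - xp) / vp          # uniform velocity: straight-line motion
            return math.log(v_exit / vp) / A       # closed-form exit time

        # hypothetical unit cell with face-normal velocities from a cell-by-cell budget
        tx = pollock_exit_time(0.2, v1=1.0, v2=2.0, x1=0.0, x2=1.0)   # x direction
        ty = pollock_exit_time(0.7, v1=0.5, v2=0.1, x1=0.0, x2=1.0)   # y direction
        t_exit = min(tx, ty)                       # particle leaves through the nearer face
        print(f"exit after t = {t_exit:.3f} through the {'x' if tx < ty else 'y'} face")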

  3. On the impact of the elastic-plastic flow upon the process of destruction of the solenoid in a super strong pulsed magnetic field

    NASA Astrophysics Data System (ADS)

    Krivosheev, S. I.; Magazinov, S. G.; Alekseev, D. I.

    2018-01-01

    When super strong magnetic fields interact with the solenoid material, a specific material flow mode forms. Magnetohydrodynamic approximation is traditionally used to describe this process. The formation of plastic shock waves in the material, under pressure rising as rapidly as 100 GPa/μs, can significantly alter the distribution of the physical parameters in the medium and affect the flow modes. In this paper, numerical simulation results are analysed and compared with the available experimental data.

  4. Long-lasting Science Returns from the Apollo Heat Flow Experiments

    NASA Astrophysics Data System (ADS)

    Nagihara, S.; Taylor, P. T.; Williams, D. R.; Zacny, K.; Hedlund, M.; Nakamura, Y.

    2012-12-01

    The Apollo astronauts deployed geothermal heat flow instruments at landing sites 15 and 17 as part of the Apollo Lunar Surface Experiments Packages (ALSEP) in July 1971 and December 1972, respectively. These instruments continuously transmitted data to the Earth until September 1977. Four decades later, the data from the two Apollo sites remain the only set of in-situ heat flow measurements obtained on an extra-terrestrial body. Researchers continue to extract additional knowledge from this dataset by utilizing new analytical techniques and by synthesizing it with data from more recent lunar orbital missions such as the Lunar Reconnaissance Orbiter. In addition, lessons learned from the Apollo experiments help contemporary researchers in designing heat flow instruments for future missions to the Moon and other planetary bodies. For example, the data from both Apollo sites showed gradual warming trends in the subsurface from 1971 to 1977. The cause of this warming has been debated in recent years. It may have resulted from fluctuation in insolation associated with the 18.6-year-cycle precession of the Moon, or sudden changes in surface thermal environment/properties resulting from the installation of the instruments and the astronauts' activities. These types of re-analyses of the Apollo data have led a panel of scientists to recommend that a heat flow probe carried on a future lunar mission reach 3 m into the subsurface, ~0.6 m deeper than the depths reached by the Apollo 17 experiment. This presentation describes the authors' current efforts for (1) restoring a part of the Apollo heat flow data that were left unprocessed by the original investigators and (2) designing a compact heat flow instrument for future robotic missions to the Moon. First, at the conclusion of the ALSEP program in 1977, heat flow data obtained at the two Apollo sites after December 1974 were left unprocessed and not properly archived through NASA. In the following decades, heat flow data from January 1975 through February 1976, as well as the metadata necessary for processing the data (the data reduction algorithm, instrument calibration data, etc.), were somehow lost. In 2010, we located 450 original master archival tapes of unprocessed data from all the ALSEP instruments for a period of April through June 1975 at the Washington National Records Center. We are currently extracting the heat flow data packets from these tapes and processing them. Second, on future lunar missions, heat flow probes will likely be deployed by a network of small robotic landers, as recommended by the latest Decadal Survey of the National Academy of Science. In such a scenario, the heat flow probe must be a compact system, and that precludes use of heavy excavation equipment such as a rotary drill for reaching the 3-m target depth. The new heat flow system under development uses a pneumatically driven penetrator. It utilizes a stem that winds out of a reel and pushes its conical tip into the regolith. Simultaneously, gas jets, emitted from the cone tip, loosen and blow away the soil. Lab experiments have demonstrated its effectiveness in lunar vacuum.

  5. Long-Lasting Science Returns from the Apollo Heat Flow Experiments

    NASA Technical Reports Server (NTRS)

    Nagihara, S.; Taylor, P. T.; Williams, D. R.; Zacny, K.; Hedlund, M.; Nakamura, Y.

    2012-01-01

    The Apollo astronauts deployed geothermal heat flow instruments at landing sites 15 and 17 as part of the Apollo Lunar Surface Experiments Packages (ALSEP) in July 1971 and December 1972, respectively. These instruments continuously transmitted data to the Earth until September 1977. Four decades later, the data from the two Apollo sites remain the only set of in-situ heat flow measurements obtained on an extra-terrestrial body. Researchers continue to extract additional knowledge from this dataset by utilizing new analytical techniques and by synthesizing it with data from more recent lunar orbital missions such as the Lunar Reconnaissance Orbiter. In addition, lessons learned from the Apollo experiments help contemporary researchers in designing heat flow instruments for future missions to the Moon and other planetary bodies. For example, the data from both Apollo sites showed gradual warming trends in the subsurface from 1971 to 1977. The cause of this warming has been debated in recent years. It may have resulted from fluctuation in insolation associated with the 18.6-year-cycle precession of the Moon, or sudden changes in surface thermal environment/properties resulting from the installation of the instruments and the astronauts' activities. These types of reanalyses of the Apollo data have led a panel of scientists to recommend that a heat flow probe carried on a future lunar mission reach 3 m into the subsurface, approx 0.6 m deeper than the depths reached by the Apollo 17 experiment. This presentation describes the authors' current efforts for (1) restoring a part of the Apollo heat flow data that were left unprocessed by the original investigators and (2) designing a compact heat flow instrument for future robotic missions to the Moon. First, at the conclusion of the ALSEP program in 1977, heat flow data obtained at the two Apollo sites after December 1974 were left unprocessed and not properly archived through NASA. In the following decades, heat flow data from January 1975 through February 1976, as well as the metadata necessary for processing the data (the data reduction algorithm, instrument calibration data, etc.), were somehow lost. In 2010, we located 450 original master archival tapes of unprocessed data from all the ALSEP instruments for a period of April through June 1975 at the Washington National Records Center. We are currently extracting the heat flow data packets from these tapes and processing them. Second, on future lunar missions, heat flow probes will likely be deployed by a network of small robotic landers, as recommended by the latest Decadal Survey of the National Academy of Science. In such a scenario, the heat flow probe must be a compact system, and that precludes use of heavy excavation equipment such as a rotary drill for reaching the 3-m target depth. The new heat flow system under development uses a pneumatically driven penetrator. It utilizes a stem that winds out of a reel and pushes its conical tip into the regolith. Simultaneously, gas jets, emitted from the cone tip, loosen and blow away the soil. Lab experiments have demonstrated its effectiveness in lunar vacuum.

  6. A framework for the modeling of gut blood flow regulation and postprandial hyperaemia

    PubMed Central

    Jeays, Adam David; Lawford, Patricia Veronica; Gillott, Richard; Spencer, Paul A; Bardhan, Karna Dev; Hose, David Rodney

    2007-01-01

    After a meal, the activity of the gut increases markedly as digestion takes place. Associated with this increase in activity is an increase in blood flow, which has been shown to depend on factors such as the caloric content and constitution of the meal. Much qualitative work has been carried out on the mechanisms by which the presence of food in a section of gut produces increased blood flow to that section, but there are still many aspects of this process that are not fully understood. In this paper we briefly review current knowledge in several relevant areas relating to gut blood flow, focusing on quantitative data where available and highlighting areas where further research is needed. We then present new data on the effect of feeding on flow in the superior mesenteric artery. Finally, we describe a framework for combining these data to produce a single model describing the mechanisms involved in postprandial hyperaemia. For a section of the model where appropriate data are available, preliminary results are presented. PMID:17457971

  7. 4D flow mri post-processing strategies for neuropathologies

    NASA Astrophysics Data System (ADS)

    Schrauben, Eric Mathew

    4D flow MRI allows for the measurement of a dynamic 3D velocity vector field. Blood flow velocities in large vascular territories can be qualitatively visualized, with the added benefit of quantitative probing. Within cranial pathologies theorized to have vascular-based contributions or effects, 4D flow MRI provides a unique platform for comprehensive assessment of hemodynamic parameters. Targeted blood-flow-derived measurements, such as flow rate, pulsatility, retrograde flow, or wall shear stress, may provide insight into the onset or characterization of more complex neuropathologies. Therefore, the thorough assessment of each parameter within the context of a given disease has important medical implications. Not surprisingly, the last decade has seen rapid growth in the use of 4D flow MRI. Data acquisition sequences are available to researchers on all major scanner platforms. However, use has been limited mostly to small research trials. One major reason that has hindered more widespread use and application in larger clinical trials is the complexity of the post-processing tasks and the lack of adequate tools for these tasks. Post-processing of 4D flow MRI must be semi-automated, fast, user-independent, robust, and reliably consistent for use in a clinical setting, within large patient studies, or across a multicenter trial. Development of proper post-processing methods, coupled with systematic investigation in normal and patient populations, pushes 4D flow MRI closer to clinical realization while elucidating potential underlying neuropathological origins. Within this framework, the work in this thesis assesses venous flow reproducibility and internal consistency in a healthy population. A preliminary analysis of venous flow parameters in healthy controls and multiple sclerosis patients is performed in a large study employing 4D flow MRI. These studies are performed in the context of the chronic cerebrospinal venous insufficiency hypothesis. Additionally, a double-gated flow acquisition and reconstruction scheme demonstrates respiratory-induced changes in internal jugular vein flow. Finally, a semi-automated intracranial vessel segmentation and flow parameter measurement software tool for fast and consistent 4D flow post-processing analysis is developed, validated, and demonstrated in vivo.
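
    Once a vessel plane has been segmented, the targeted parameters named above reduce to simple operations on the per-cardiac-phase flow waveform. A sketch with a hypothetical waveform (standard definitions assumed for the pulsatility index and retrograde fraction):

        import numpy as np

        # Q: net flow (mL/s) through a segmented vessel plane at each cardiac phase,
        # as would be obtained by integrating 4D flow MRI velocities over the lumen
        t = np.linspace(0.0, 1.0, 20, endpoint=False)          # one cardiac cycle
        Q = 4.0 + 6.0 * np.sin(2 * np.pi * t) - 1.5 * np.sin(4 * np.pi * t)  # hypothetical

        mean_flow = Q.mean()                                   # mL/s over the cycle
        pulsatility = (Q.max() - Q.min()) / mean_flow          # pulsatility index
        retrograde = -Q[Q < 0].sum() / Q[Q > 0].sum()          # retrograde flow fraction

        print(f"mean flow {mean_flow:.2f} mL/s, PI {pulsatility:.2f}, "
              f"retrograde fraction {retrograde:.3f}")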

  8. Groundwater flow and transport modeling

    USGS Publications Warehouse

    Konikow, Leonard F.; Mercer, J.W.

    1988-01-01

    Deterministic, distributed-parameter, numerical simulation models for analyzing groundwater flow and transport problems have come to be used almost routinely during the past decade. A review of the theoretical basis and practical use of groundwater flow and solute transport models is used to illustrate the state of the art. Because of errors and uncertainty in defining model parameters, models must be calibrated to obtain a best estimate of the parameters. For flow modeling, data generally are sufficient to allow calibration. For solute-transport modeling, lack of data not only limits calibration, but also causes uncertainty in process description. Where data are available, model reliability should be assessed on the basis of sensitivity tests and measures of goodness-of-fit. Some of these concepts are demonstrated by using two case histories. © 1988.

  9. Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time

    NASA Astrophysics Data System (ADS)

    Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.

    2018-03-01

    A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of machining, assembly and differentiation stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into an assembly product. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival times until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.

  10. An Optical Study of Processes in Hydrogen Flame in a Tube

    DTIC Science & Technology

    2002-07-01

    Growth of the hydrogen flame length with the hydrogen flow rate was observed, whereas for a turbulent hydrogen jet (Reynolds number Re > 10^4 [5]) the flame length remained almost constant and varied only weakly with the flow rate of hydrogen. For a subsonic jet flow, flame images were recorded and analysed. Data in the literature show how the diffusive flame length varies with the rate of hydrogen flow [4, 7].

  11. A criterion for maximum resin flow in composite materials curing process

    NASA Astrophysics Data System (ADS)

    Lee, Woo I.; Um, Moon-Kwang

    1993-06-01

    On the basis of Springer's resin flow model, a criterion for maximum resin flow in autoclave curing is proposed. Validity of the criterion was proved for two resin systems (Fiberite 976 and Hercules 3501-6 epoxy resin). The parameter required for the criterion can be easily estimated from the measured resin viscosity data. The proposed criterion can be used in establishing the proper cure cycle to ensure maximum resin flow and, thus, the maximum compaction.

  12. Accounting for heterogeneity of nutrient dynamics in riverscapes through spatially distributed models

    NASA Astrophysics Data System (ADS)

    Wollheim, W. M.; Stewart, R. J.

    2011-12-01

    Numerous types of heterogeneity exist within river systems, leading to hotspots of nutrient sources, sinks, and impacts embedded within an underlying gradient defined by river size. This heterogeneity influences the downstream propagation of anthropogenic impacts across flow conditions. We applied a river network model to explore how nitrogen saturation at river network scales is influenced by the abundance and distribution of potential nutrient processing hotspots (lakes, beaver ponds, tributary junctions, hyporheic zones) under different flow conditions. We determined that under low flow conditions, whole network nutrient removal is relatively insensitive to the number of hotspots because the underlying river network structure has sufficient nutrient processing capacity. However, hotspots become more important at higher flows and greatly influence the spatial distribution of removal within the network at all flows, suggesting that identification of heterogeneity is critical to develop predictive understanding of nutrient removal processes under changing loading and climate conditions. New temporally intensive data from in situ sensors can potentially help to better understand and constrain these dynamics.

  13. A century of studying effusive eruptions in Hawai'i: Chapter 9 in Characteristics of Hawaiian volcanoes

    USGS Publications Warehouse

    Cashman, Katherine V.; Mangan, Margaret T.; Poland, Michael P.; Takahashi, T. Jane; Landowski, Claire M.

    2014-01-01

    The Hawaiian Volcano Observatory (HVO) was established as a natural laboratory to study volcanic processes. Since the most frequent form of volcanic activity in Hawai‘i is effusive, a major contribution of the past century of research at HVO has been to describe and quantify lava flow emplacement processes. Lava flow research has taken many forms; first and foremost it has been a collection of basic observational data on active lava flows from both Mauna Loa and Kīlauea volcanoes that have occurred over the past 100 years. Both the types and quantities of observational data have changed with changing technology; thus, another important contribution of HVO to lava flow studies has been the application of new observational techniques. Also important has been a long-term effort to measure the physical properties (temperature, viscosity, crystallinity, and so on) of flowing lava. Field measurements of these properties have both motivated laboratory experiments and presaged the results of those experiments, particularly with respect to understanding the rheology of complex fluids. Finally, studies of the dynamics of lava flow emplacement have combined detailed field measurements with theoretical models to build a framework for the interpretation of lava flows in numerous other terrestrial, submarine, and planetary environments. Here, we attempt to review all these aspects of lava flow studies and place them into a coherent framework that we hope will motivate future research.

  14. Testing Small CPAS Parachutes Using HIVAS

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.; Hennings, Elsa; Bernatovich, Michael A.

    2013-01-01

    The High Velocity Airflow System (HIVAS) facility at the Naval Air Warfare Center (NAWC) at China Lake was successfully used as an alternative to flight test to determine the parachute drag performance of two small Capsule Parachute Assembly System (CPAS) canopies. A similar parachute with known performance was also tested as a control. Real-time computations of drag coefficient were unrealistically low, because HIVAS produces a non-uniform flow which decays rapidly from a high central core flow. Additional calibration runs were performed to characterize this flow, assuming radial symmetry about the centerline. The flow field was used to post-process effective flow velocities at each throttle setting and parachute diameter using the definition of the momentum flux factor. Because one parachute had significant oscillations, additional calculations were required to estimate the projected flow at off-axis angles. The resulting drag data from HIVAS compared favorably to previously estimated parachute performance based on scaled data from analogous CPAS parachutes. The data will improve drag area distributions in the next version of the CPAS Model Memo.
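
    One way to realize the momentum-flux correction is to replace the centerline value with a momentum-flux-weighted average over the area swept by the canopy. The sketch below assumes a radially symmetric, Gaussian-like core profile, which is hypothetical rather than the measured HIVAS calibration:

        import numpy as np

        def effective_velocity(v_of_r, diameter):
            """Momentum-flux-equivalent uniform velocity over a circle of the given
            diameter: V_eff = sqrt((1/A) * integral of v(r)^2 dA), radial symmetry."""
            r = np.linspace(0.0, diameter / 2, 2000)
            v = v_of_r(r)
            momentum_flux = np.trapz(v**2 * 2.0 * np.pi * r, r)   # per unit density
            area = np.pi * (diameter / 2) ** 2
            return np.sqrt(momentum_flux / area)

        # hypothetical HIVAS-like profile: fast central core decaying with radius
        core = lambda r: 60.0 * np.exp(-((r / 1.5) ** 2))         # m/s, r in m

        for D in (1.0, 3.0, 6.0):                                 # canopy diameters, m
            print(f"D = {D:.0f} m: centerline 60.0 m/s, "
                  f"effective {effective_velocity(core, D):.1f} m/s")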

  15. Building Common Ground for Environmental Flows using Traditional Techniques and Novel Engagement Approaches.

    PubMed

    Mott Lacroix, Kelly E; Xiu, Brittany C; Megdal, Sharon B

    2016-04-01

    Despite increased understanding of the science of environmental flows, identification and implementation of effective environmental flow policies remains elusive. Perhaps the greatest barrier to implementing flow policies is the framework for water management. An alternative management approach is needed when legal rights for environmental flows do not exist, or are ineffective at protecting ecosystems. The research presented here, conducted in the U.S. state of Arizona, provides an empirical example of engagement to promote social learning as an approach to finding ways to provide water for the environment where legal rights for environmental flows are inadequate. Based on our engagement process we propose that identifying and then building common ground require attention to the process of analyzing qualitative data and the methods for displaying complex information, two aspects not frequently discussed in the social learning or stakeholder engagement literature. The results and methods from this study can help communities develop an engagement process that will find and build common ground, increase stakeholder involvement, and identify innovative solutions to provide water for the environment that reflect the concerns of current water users.

  16. Standards for the Analysis and Processing of Surface-Water Data and Information Using Electronic Methods

    USGS Publications Warehouse

    Sauer, Vernon B.

    2002-01-01

    Surface-water computation methods and procedures are described in this report to provide standards from which a completely automated electronic processing system can be developed. To the greatest extent possible, the traditional U.S. Geological Survey (USGS) methodology and standards for streamflow data collection and analysis have been incorporated into these standards. Although USGS methodology and standards are the basis for this report, the report is applicable to other organizations doing similar work. The proposed electronic processing system allows field measurement data, including data stored on automatic field recording devices and data recorded by the field hydrographer (a person who collects streamflow and other surface-water data) in electronic field notebooks, to be input easily and automatically. A user of the electronic processing system easily can monitor the incoming data and verify and edit the data, if necessary. Input of the computational procedures, rating curves, shift requirements, and other special methods is an interactive process between the user and the electronic processing system, with much of this processing being automatic. Special computation procedures are provided for complex stations such as velocity-index, slope, control structures, and unsteady-flow models, such as the Branch-Network Dynamic Flow Model (BRANCH). Navigation paths are designed to lead the user through the computational steps for each type of gaging station (stage-only, stage-discharge, velocity-index, slope, rate-of-change in stage, reservoir, tide, structure, and hydraulic model stations). The proposed electronic processing system emphasizes the use of interactive graphics to provide good visual tools for unit values editing, rating curve and shift analysis, hydrograph comparisons, data-estimation procedures, data review, and other needs. Documentation, review, finalization, and publication of records are provided for with the electronic processing system, as well as archiving, quality assurance, and quality control.
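
    As a concrete illustration of the rating-curve computations such a system automates, the sketch below interpolates a stage-discharge rating on logarithmic axes and applies a shift to the input gage height before conversion; the rating table, numbers, and function name are hypothetical and not taken from the report.

    ```python
    import numpy as np

    def discharge_from_stage(stage, rating_stage, rating_q, shift=0.0):
        """Discharge from gage height via a stage-discharge rating.

        Ratings are conventionally interpolated on log-log axes; a shift
        (in stage units) translates the rating temporarily to follow a
        shifting control without redrawing the whole curve.
        """
        s = stage + shift                  # shifted effective stage
        logq = np.interp(np.log(s), np.log(rating_stage), np.log(rating_q))
        return np.exp(logq)

    # Hypothetical rating table (stage in ft, discharge in ft^3/s).
    rs = np.array([1.0, 2.0, 4.0, 8.0])
    rq = np.array([10.0, 80.0, 600.0, 4500.0])
    print(discharge_from_stage(3.2, rs, rq, shift=-0.05))
    ```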

  17. LANDSAT-D ground segment operations plan, revision A

    NASA Technical Reports Server (NTRS)

    Evans, B.

    1982-01-01

    The basic concept for the utilization of LANDSAT ground processing resources is described. Only the steady state activities that support normal ground processing are addressed. This ground segment operations plan covers all processing of the multispectral scanner and the processing of thematic mapper through data acquisition and payload correction data generation for the LANDSAT 4 mission. The capabilities embedded in the hardware and software elements are presented from an operations viewpoint. The personnel assignments associated with each functional process and the mechanisms available for controlling the overall data flow are identified.

  18. On-line Monitoring of Continuous Flow Chemical Synthesis Using a Portable, Small Footprint Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Bristow, Tony W. T.; Ray, Andrew D.; O'Kearney-McMullan, Anne; Lim, Louise; McCullough, Bryan; Zammataro, Alessio

    2014-10-01

    For on-line monitoring of chemical reactions (batch or continuous flow), mass spectrometry (MS) can provide data to (1) determine the fate of starting materials and reagents, (2) confirm the presence of the desired product, (3) identify intermediates and impurities, (4) determine steady state conditions and the point of completion, and (5) speed up process optimization. Recent developments in small-footprint, atmospheric pressure ionization portable mass spectrometers further enable this coupling, as the mass spectrometer can be easily positioned alongside the reaction system being studied. A major issue for this combination is the transfer of a sample that is representative of the reaction and also compatible with the mass spectrometer. This is particularly challenging as high concentrations of reagents and products can be encountered in organic synthesis. The application of a portable mass spectrometer for on-line characterization of flow chemical synthesis has been evaluated by coupling a Microsaic 4000 MiD to the Future Chemistry Flow Start EVO chemistry system. Specifically, the Hofmann rearrangement has been studied using the on-line mass spectrometry approach. Sample transfer from the flow reactor is achieved using a mass rate attenuator (MRA) and a sampling make-up flow from a high-pressure pump. This enables the appropriate sample dilution, transfer, and preparation for electrospray ionization. The capability of this approach to provide process understanding is described using an industrial pharmaceutical process that is currently under development. The effect of a number of key experimental parameters, such as the composition of the sampling make-up flow and the dilution factor, on the mass spectrometry data is also discussed.

  19. Experimental quantification of the fluid dynamics in blood-processing devices through 4D-flow imaging: A pilot study on a real oxygenator/heat-exchanger module.

    PubMed

    Piatti, Filippo; Palumbo, Maria Chiara; Consolo, Filippo; Pluchinotta, Francesca; Greiser, Andreas; Sturla, Francesco; Votta, Emiliano; Siryk, Sergii V; Vismara, Riccardo; Fiore, Gianfranco Beniamino; Lombardi, Massimo; Redaelli, Alberto

    2018-02-08

    The performance of blood-processing devices largely depends on the associated fluid dynamics, which hence represents a key aspect of their design and optimization. To this aim, two approaches are currently adopted: computational fluid dynamics, which yields highly resolved three-dimensional data but relies on simplifying assumptions, and in vitro experiments, which typically involve direct video acquisition of the flow field and provide 2D data only. We propose a novel method that exploits space- and time-resolved magnetic resonance imaging (4D-flow) to quantify the complex 3D flow field in blood-processing devices and to overcome these limitations. We tested our method on a real device that integrates an oxygenator and a heat exchanger. A dedicated mock loop was implemented, and novel 4D-flow sequences with sub-millimetric spatial resolution and region-dependent velocity encodings were defined. Automated in-house software was developed to quantify the complex 3D flow field within the different regions of the device: region-dependent flow rates, pressure drops, paths of the working fluid, and wall shear stresses were computed. Our analysis highlighted the effects of fine geometrical features of the device on the local fluid dynamics, which would be unlikely to be observed by current in vitro approaches. Also, the effects of non-idealities on the flow field distribution were captured, thanks to the absence of the simplifying assumptions that typically characterize numerical models. To the best of our knowledge, our approach is the first of its kind and could be extended to the analysis of a broad range of clinically relevant devices.

  20. Experimental Investigation of the Flow Structure over a Delta Wing Via Flow Visualization Methods.

    PubMed

    Shen, Lu; Chen, Zong-Nan; Wen, Chihyung

    2018-04-23

    It is well known that the flow field over a delta wing is dominated by a pair of counter-rotating leading-edge vortices (LEVs); however, the mechanisms governing them are not well understood. Flow visualization is a promising non-intrusive technique for illustrating the complex flow field spatially and temporally. A basic flow visualization setup consists of a high-powered laser and optic lenses to generate the laser sheet, a camera, a tracer particle generator, and a data processor. The wind tunnel setup, the specifications of the devices involved, and the corresponding parameter settings depend on the flow features to be captured. Conventional smoke-wire flow visualization uses a smoke wire to demonstrate the flow streaklines, but its performance is limited by poor spatial resolution in complex flow fields. Therefore, an improved smoke flow visualization technique has been developed. This technique illustrates the large-scale global LEV flow field and the small-scale shear-layer flow structure at the same time, providing a valuable reference for later detailed particle image velocimetry (PIV) measurement. In this paper, the application of the improved smoke flow visualization and PIV measurement to study the unsteady flow phenomena over a delta wing is demonstrated. The procedure and cautions for conducting the experiment are listed, including wind tunnel setup, data acquisition, and data processing. The representative results show that these two flow visualization methods are effective techniques for investigating the three-dimensional flow field qualitatively and quantitatively.

  1. Dynamic Feed Control For Injection Molding

    DOEpatents

    Kazmer, David O.

    1996-09-17

    The invention provides methods and apparatus in which mold material flows through a gate into a mold cavity that defines the shape of a desired part. An adjustable valve is provided that is operable to change dynamically the effective size of the gate to control the flow of mold material through the gate. The valve is adjustable while the mold material is flowing through the gate into the mold cavity. A sensor is provided for sensing a process condition while the part is being molded. During molding, the valve is adjusted based at least in part on information from the sensor. In the preferred embodiment, the adjustable valve is controlled by a digital computer, which includes circuitry for acquiring data from the sensor, processing circuitry for computing a desired position of the valve based on the data from the sensor and a control data file containing target process conditions, and control circuitry for generating signals to control a valve driver to adjust the position of the valve. More complex embodiments include a plurality of gates, sensors, and controllable valves. Each valve is individually controllable so that process conditions corresponding to each gate can be adjusted independently. This allows for great flexibility in the control of injection molding to produce complex, high-quality parts.

  2. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 6: Specification for EOS Central Data Processing Facility (CDPF)

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications and functions of the Central Data Processing (CDPF) Facility which supports the Earth Observatory Satellite (EOS) are discussed. The CDPF will receive the EOS sensor data and spacecraft data through the Spaceflight Tracking and Data Network (STDN) and the Operations Control Center (OCC). The CDPF will process the data and produce high density digital tapes, computer compatible tapes, film and paper print images, and other data products. The specific aspects of data inputs and data processing are identified. A block diagram of the CDPF to show the data flow and interfaces of the subsystems is provided.

  3. Real Time Global Tests of the ALICE High Level Trigger Data Transport Framework

    NASA Astrophysics Data System (ADS)

    Becker, B.; Chattopadhyay, S.; Cicalo, C.; Cleymans, J.; de Vaux, G.; Fearick, R. W.; Lindenstruth, V.; Richter, M.; Rohrich, D.; Staley, F.; Steinbeck, T. M.; Szostak, A.; Tilsner, H.; Weis, R.; Vilakazi, Z. Z.

    2008-04-01

    The High Level Trigger (HLT) system of the ALICE experiment is an online event filter and trigger system designed for input bandwidths of up to 25 GB/s at event rates of up to 1 kHz. The system is designed as a scalable PC cluster, implementing several hundred nodes. The transport of data in the system is handled by an object-oriented data flow framework operating on the basis of the publisher-subscriber principle, designed to be fully pipelined with minimal processing overhead and communication latency in the cluster. In this paper, we report the latest measurements in which this framework was operated on five different sites over a global north-south link extending more than 10,000 km, processing a "real-time" data flow.
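
    To make the publisher-subscriber principle concrete, here is a minimal, hypothetical Python sketch of a pipelined data flow in that style; it is a toy illustration only, not the ALICE HLT framework (which is a far more elaborate C++ system). Each stage consumes events from an input queue, applies a transform, and publishes the result downstream; a None sentinel flushes the pipeline.

    ```python
    import queue
    import threading

    class Publisher:
        """Minimal publisher: hands each event to every subscribed queue."""
        def __init__(self):
            self.subscribers = []

        def subscribe(self, q):
            self.subscribers.append(q)

        def publish(self, event):
            for q in self.subscribers:
                q.put(event)

    def processing_stage(inbox, downstream, transform):
        """One pipeline stage: consume, transform, pass downstream (if any)."""
        while True:
            event = inbox.get()
            if event is None:                 # shutdown sentinel
                if downstream is not None:
                    downstream.publish(None)
                break
            result = transform(event)
            if downstream is not None:
                downstream.publish(result)

    # Build a two-stage pipeline: 'calibrate' then 'collect'.
    src, mid = Publisher(), Publisher()
    q1, q2 = queue.Queue(), queue.Queue()
    src.subscribe(q1)
    mid.subscribe(q2)

    results = []
    t1 = threading.Thread(target=processing_stage,
                          args=(q1, mid, lambda e: {**e, "calibrated": True}))
    t2 = threading.Thread(target=processing_stage,
                          args=(q2, None, results.append))
    t1.start(); t2.start()
    for i in range(3):
        src.publish({"event_id": i})
    src.publish(None)                         # flush the pipeline
    t1.join(); t2.join()
    print(results)                            # three calibrated events, in order
    ```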

  4. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e., real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.

  5. PIV Measurement of Transient 3-D (Liquid and Gas Phases) Flow Structures Created by a Spreading Flame over 1-Propanol

    NASA Technical Reports Server (NTRS)

    Hassan, M. I.; Kuwana, K.; Saito, K.

    2001-01-01

    In the past, we measured the 3-D flow structures in the liquid and gas phases created by a spreading flame over liquid fuels. In that effort, we employed several different techniques, including our original laser sheet particle tracking (LSPT) technique, which is capable of measuring transient 2-D flow structures. Recently we obtained a state-of-the-art integrated particle image velocimetry (IPIV) system, whose function is similar to LSPT but which has an integrated data recording and processing system. To evaluate the accuracy of our IPIV system, we conducted a series of flame spread tests using the same experimental apparatus as in our previous flame spread studies and obtained a series of 2-D flow profiles corresponding to our previous LSPT measurements. We confirmed that both LSPT and IPIV produced similar data, but the IPIV data contain more detailed flow structures than the LSPT data. Here we present some of the newly obtained IPIV flow structure data and discuss the role of gravity in the flame-induced flow structures. Note that the application of IPIV to our flame spread problems is not straightforward; it required several preliminary tests of its accuracy, including this comparison of IPIV to LSPT.

  6. Data sharing system for lithography APC

    NASA Astrophysics Data System (ADS)

    Kawamura, Eiichi; Teranishi, Yoshiharu; Shimabara, Masanori

    2007-03-01

    We have developed a simple and cost-effective data sharing system between fabs for lithography advanced process control (APC). Lithography APC requires process flow, inter-layer information, history information, mask information, and so on, so an inter-APC data sharing system has become necessary when lots are to be processed in multiple fabs (usually two). The development and maintenance costs also have to be taken into account. The system handles the minimum information necessary to make trend predictions for the lots. Three types of data have to be shared for precise trend prediction. The first is device information for the lots, e.g., the process flow of the device and inter-layer information. The second is mask information from mask suppliers, e.g., pattern characteristics and pattern widths. The last is history data for the lots. Device information is an electronic file and easy to handle; the file format is common between APCs, and the file is uploaded into the database. As for mask information sharing, mask information described in a common format is obtained from the mask vendor via a Wide Area Network (WAN) and stored in the mask-information data server. This information is periodically transferred to one specific lithography-APC server and compiled into the database. This lithography-APC server then periodically delivers the mask information to every other lithography-APC server. The process-history data sharing system mainly consists of a function for delivering process-history data: when production lots are shipped to another fab, the product-related process-history data is delivered by the lithography-APC server of the shipping site. We have confirmed the function and effectiveness of the data sharing systems.

  7. Measuring Flow With Laser-Speckle Velocimetry

    NASA Technical Reports Server (NTRS)

    Smith, C. A.; Lourenco, L. M. M.; Krothapalli, A.

    1988-01-01

    Spatial resolution is sufficient for calculation of vorticity. In laser-speckle velocimetry, a pulsed or chopped laser beam is expanded in one dimension by a cylindrical lens to illuminate a thin, fan-shaped region of the flow being measured. The flow is seeded with small particles. A lens with its optical axis perpendicular to the illuminating beam forms an image of the illuminated particles on a photographic plate. The speckle pattern of the laser-illuminated, seeded flow is recorded in multiple-exposure photographs and processed to extract data on the velocity field. The technique is suited for the study of vortical flows like those about helicopter rotor blades or airplane wings at high angles of attack.
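
    Once the multiple-exposure photographs are reduced to a gridded velocity field, the vorticity mentioned above follows from finite differences. A minimal sketch, assuming a uniform grid; the solid-body-rotation test field used as a check is hypothetical.

    ```python
    import numpy as np

    def vorticity_z(u, v, dx, dy):
        """Out-of-plane vorticity w_z = dv/dx - du/dy on a uniform grid.

        u, v : 2D arrays of velocity components, indexed [row=y, col=x]
        dx, dy : grid spacing in x and y
        """
        dv_dx = np.gradient(v, dx, axis=1)
        du_dy = np.gradient(u, dy, axis=0)
        return dv_dx - du_dy

    # Check on a solid-body rotation u = -omega*y, v = omega*x, whose
    # vorticity should be 2*omega everywhere.
    omega = 3.0
    y, x = np.mgrid[-1:1:64j, -1:1:64j]
    wz = vorticity_z(-omega * y, omega * x,
                     x[0, 1] - x[0, 0], y[1, 0] - y[0, 0])
    print(wz.mean())   # ~6.0
    ```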

  8. Rotor wake characteristics of a transonic axial flow fan

    NASA Technical Reports Server (NTRS)

    Hathaway, M. D.; Gertz, J.; Epstein, A.; Strazisar, A. J.

    1985-01-01

    State-of-the-art turbomachinery flow analysis codes are not capable of predicting the viscous flow features within turbomachinery blade wakes. Until efficient 3D viscous flow analysis codes become a reality, there is therefore a need for models that can describe the generation and transport of blade wakes and the mixing process within the wake. To address the need for experimental data to support the development of such models, high-response pressure measurements and laser anemometer velocity measurements were obtained in the wake of a transonic axial flow fan rotor.

  9. A new data-processing approach to study particle motion using ultrafast X-ray tomography scanner: case study of gravitational mass flow

    NASA Astrophysics Data System (ADS)

    Waktola, Selam; Bieberle, Andre; Barthel, Frank; Bieberle, Martina; Hampel, Uwe; Grudzień, Krzysztof; Babout, Laurent

    2018-04-01

    In most industrial processes, granular materials are required to flow under gravity in various kinds of silo shapes, usually through an outlet in the bottom. Several interrelated parameters affect the flow, such as internal friction, bulk and packing density, hopper geometry, and material type. Due to the low spatial resolution of electrical capacitance tomography and the scanning speed limitations of standard X-ray CT systems, it is extremely challenging to measure the flow velocity and possible centrifugal effects of granular material flows effectively. However, ROFEX (ROssendorf Fast Electron beam X-ray tomography) opens new avenues of granular flow investigation due to its very high temporal resolution. This paper aims to track particle movements and evaluate local grain velocities during the silo discharge process in the case of mass flow. The study used the Seramis material, which, due to its porous nature, can also serve as a tracer particle after impregnation. The presented novel image processing and analysis approach allows not only measuring individual particle velocities satisfactorily but also tracking their lateral movement and three-dimensional rotations.

  10. Chimera Grid Tools

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  11. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, David D.; Napolitano, Jr., Leonard M.; Redinbo, G. Robert; Shreeve, William O.

    1994-01-01

    An internally fault-tolerant data error detection and correction integrated circuit device (10) and a method of operating same. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor, the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor, the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and enabled to both detect and indicate an occurrence of an internal failure. A generalization of the device to protect 64-bit data with 16-bit parity against byte-wide errors is also presented.
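
    The regenerate-and-compare self-checking idea is easy to illustrate in software. The toy sketch below splits a 32-bit word into nibbles, forms two 4-bit check nibbles by simple XOR folds, and compares regenerated parity against stored parity. It only detects errors, whereas the patented device uses a dual-orthogonal-basis code that can also correct a single bad nibble; all names here are hypothetical.

    ```python
    def nibbles(word32):
        """Split a 32-bit word into eight 4-bit nibbles, LSB first."""
        return [(word32 >> (4 * i)) & 0xF for i in range(8)]

    def gen_parity(word32):
        """Toy 8-bit parity: two 4-bit check nibbles built by XOR-folding
        the data nibbles two different ways. (Plain XOR checks like these
        only detect errors; the patented dual-orthogonal-basis code can
        additionally locate and correct one erroneous nibble.)"""
        ns = nibbles(word32)
        p0 = 0
        for n in ns:                          # XOR of all data nibbles
            p0 ^= n
        p1 = 0
        for i, n in enumerate(ns):            # position-weighted fold:
            k = i % 4                         # left-rotate nibble by k
            p1 ^= ((n << k) | (n >> (4 - k))) & 0xF
        return (p1 << 4) | p0

    def self_check(word32, stored_parity):
        """Self-validating step: regenerate parity, compare to stored."""
        return gen_parity(word32) == stored_parity

    data = 0xDEADBEEF
    par = gen_parity(data)
    assert self_check(data, par)
    assert not self_check(data ^ 0x00F00000, par)  # corrupt one nibble
    ```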

  12. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, D.D.; Napolitano, L.M. Jr.; Redinbo, G.R.; Shreeve, W.O.

    1994-03-01

    An internally fault-tolerant data error detection and correction integrated circuit device and a method of operating same are described. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor, the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor, the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and enabled to both detect and indicate an occurrence of an internal failure. A generalization of the device to protect 64-bit data with 16-bit parity against byte-wide errors is also presented. 8 figures.

  13. Seasonal rainfall-runoff relationships in a lowland forested watershed in the southeastern USA

    Treesearch

    Ileana La Torre Torres; Devendra Amatya; Ge Sun; Timothy Callahan

    2011-01-01

    Hydrological processes of lowland watersheds of the southern USA are not as well understood as those of hilly landscapes, owing to the lowlands' unique topography, soil composition, and climate. This study describes the seasonal relationships between rainfall patterns and runoff (the sum of storm flow and base flow) using 13 years (1964–1976) of rainfall and stream flow data for a low...

  14. Evaluation of infrared thermography as a diagnostic tool in CVD applications

    NASA Astrophysics Data System (ADS)

    Johnson, E. J.; Hyer, P. V.; Culotta, P. W.; Clark, I. O.

    1998-05-01

    This research focused on the feasibility of using infrared temperature measurements on the exterior of a chemical vapor deposition (CVD) reactor both to ascertain real-time information on the operating characteristics of a CVD system and to provide data that could be post-processed into quantitative information for research and development on CVD processes. Infrared thermography was used to measure temperatures on a horizontal CVD reactor of rectangular cross section, which were correlated with the internal gas flow field as measured with laser velocimetry (LV). For the reactor tested, thermal profiles were well correlated with the gas flow field inside the reactor. Correlations are presented for nitrogen and hydrogen carrier gas flows. The infrared data were available to the operators in real time, with sufficient sensitivity to the internal flow field that small variations such as misalignment of the reactor inlet could be observed. The same data were post-processed to yield temperature measurements at known locations on the reactor surface. For the experiments described herein, temperatures associated with approximately 3.3 mm² areas on the reactor surface were obtained with a precision of ±2 °C. These temperature measurements were well suited for monitoring a CVD production reactor, for developing improved thermal boundary conditions for use in CFD models of reactors, and for verifying expected thermal conditions.

  15. Predictable turn-around time for post tape-out flow

    NASA Astrophysics Data System (ADS)

    Endo, Toshikazu; Park, Minyoung; Ghosh, Pradiptya

    2012-03-01

    A typical post-tape-out flow data path at an IC fabrication facility has the following major software-based processing components: Boolean operations before the application of resolution enhancement techniques (RET) and optical proximity correction (OPC); the RET and OPC step itself (etch retargeting, sub-resolution assist feature (SRAF) insertion, and OPC); post-OPC/RET Boolean operations; and sometimes, in the same flow, simulation-based verification. A tapeout flow manager at an IC fabrication facility wants to achieve two objectives with the flow, which at times may compete: predictable completion time and the fastest turnaround time (TAT). Studies in the literature have modeled turnaround time from historical data for runs with the same recipe and later used the model to derive resource allocations for subsequent runs [3]. That approach is more feasible for predominantly simulation-dominated tools, but for edge-operation-dominated flows it may not be possible, especially if processing-acceleration methods such as pattern matching or hierarchical processing are involved. In this paper, we suggest an alternative method of providing a target turnaround time and managing job priority without any upfront resource modeling and planning. The methodology then systematically either meets the turnaround-time target or lets the user know as soon as possible that it will not. This builds on top of the Calibre Cluster Management (CalCM) resource management work previously published [1][2]. The paper describes the initial demonstration of the concept.
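
    As a rough caricature of "warn as soon as possible" without a resource model, the hypothetical sketch below linearly projects each job's finish time from its reported progress and flags jobs whose projection exceeds their target TAT. It is not the CalCM implementation; all names and numbers are invented.

    ```python
    import time
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        target_tat_h: float        # promised turnaround time, in hours
        started_at: float          # epoch seconds
        fraction_done: float       # progress reported by the flow, 0..1

    def projected_finish_h(job, now):
        """Naive linear projection from progress so far; deliberately no
        resource model, mirroring the idea of warning early rather than
        modeling runtimes up front."""
        elapsed_h = (now - job.started_at) / 3600.0
        if job.fraction_done <= 0.0:
            return float("inf")
        return elapsed_h / job.fraction_done

    def triage(jobs, now):
        """Flag jobs whose projected finish exceeds their target TAT."""
        alerts = []
        for j in sorted(jobs, key=lambda j: j.target_tat_h):
            proj = projected_finish_h(j, now)
            if proj > j.target_tat_h:
                alerts.append(f"{j.name}: projected {proj:.1f} h "
                              f"> target {j.target_tat_h} h")
        return alerts

    now = time.time()
    jobs = [Job("metal1_opc", 37.0, now - 10 * 3600, fraction_done=0.20),
            Job("via1_opc", 24.0, now - 2 * 3600, fraction_done=0.30)]
    for alert in triage(jobs, now):
        print(alert)
    ```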

  16. Complete data preparation flow for Massively Parallel E-Beam lithography on 28nm node full-field design

    NASA Astrophysics Data System (ADS)

    Fay, Aurélien; Browning, Clyde; Brandt, Pieter; Chartoire, Jacky; Bérard-Bergery, Sébastien; Hazart, Jérôme; Chagoya, Alexandre; Postnikov, Sergei; Saib, Mohamed; Lattard, Ludovic; Schavione, Patrick

    2016-03-01

    Massively parallel mask-less electron beam lithography (MP-EBL) offers large intrinsic flexibility at a low cost of ownership in comparison to conventional optical lithography tools. This attractive direct-write technique needs a dedicated data preparation flow to correct both electronic and resist processes. Moreover, data preparation has to be completed in a short enough time to preserve the flexibility advantage of MP-EBL. While MP-EBL tools have currently entered an advanced stage of development, this paper will focus on the data preparation side of the work, specifically for the MAPPER Lithography FLX-1200 tool [1]-[4], using the ASELTA Nanographics Inscale software. The complete flow, as well as the methodology used to achieve full-field layout data preparation within an acceptable cycle time, will be presented. The layout used for the data preparation evaluation was a 28 nm technology node Metal1 chip with a field size of 26x33 mm2, compatible with typical stepper/scanner field sizes and wafer stepping plans. Proximity Effect Correction (PEC) was applied to the entire field, which was then exported as a single file in MAPPER Lithography's machine format, containing fractured shapes and dose assignments. The Soft Edge beam-to-beam stitching method was employed in the specific overlap regions defined by the machine format as well. In addition to PEC, verification of the correction was included as part of the overall data preparation cycle time. This verification step was executed on the machine file format to ensure pattern fidelity and accuracy as late in the flow as possible. Verification over the full chip, involving billions of evaluation points, is performed both at nominal conditions and at process window corners in order to ensure proper exposure and process latitude. The complete MP-EBL data preparation flow was demonstrated for a 28 nm node Metal1 layout in 37 hours. The final verification step shows that the edge placement error (EPE) is kept below 2.25 nm over an exposure dose variation of 8%.

  17. Study of the fluid flow characteristics in a porous medium for CO2 geological storage using MRI.

    PubMed

    Song, Yongchen; Jiang, Lanlan; Liu, Yu; Yang, Mingjun; Zhou, Xinhuan; Zhao, Yuechao; Dou, Binlin; Abudula, Abuliti; Xue, Ziqiu

    2014-06-01

    The objective of this study was to understand fluid flow in porous media, a process that is important for the geological storage of CO2. The high-resolution magnetic resonance imaging (MRI) technique was used to measure fluid flow in a porous medium (glass beads BZ-02). First, the permeability was obtained from velocity images. Next, CO2-water immiscible displacement experiments at different flow rates were investigated. Three stages were identified from the MR intensity plot. With increasing CO2 flow rate, a relatively uniform CO2 distribution and a uniform CO2 front were observed, and the final water saturation decreased. Using core analysis methods, the CO2 velocities were obtained during the CO2-water immiscible displacement process and were applied to evaluate the capillary dispersion rate, the viscous-dominated fractional flow, and the gravity flow function. The capillary dispersion rate, which dominated the capillary effects, was largest at water saturations of 0.5 and 0.6. The viscous-dominated fractional flow function varied with the water saturation. The gravity fractional flow reached its peak value at a water saturation of 0.6. The gravity forces played a positive role in the downward displacements because they tended to stabilize the displacement process, thereby producing increased breakthrough times and correspondingly high recoveries. Finally, the relative permeability was also reconstructed. The study provides useful data regarding the transport processes in the geological storage of CO2.

  18. Hybrid Stochastic Forecasting Model for Management of Large Open Water Reservoir with Storage Function

    NASA Astrophysics Data System (ADS)

    Kozel, Tomas; Stary, Milos

    2017-12-01

    The main advantage of stochastic forecasting is that it yields a fan of possible values, which deterministic forecasting cannot provide; the future development of a random process is described better stochastically than deterministically, and discharge at a measurement profile can be treated as a random process. This article describes the construction and application of a forecasting model for a managed large open water reservoir with a storage function. The model is based on neural networks (NS) and zone models, and it forecasts values of average monthly flow from input values of average monthly flow, the learned neural network, and random numbers. Part of the data is sorted into one moving zone, created around the last measured average monthly flow, and the correlation matrix is assembled only from data belonging to that zone. The model was compiled for forecasts of 1 to 12 months, using backward monthly flows (the NS inputs) from 2 to 11 months for model construction. The data were rid of asymmetry with the help of the Box-Cox rule (Box, Cox, 1964), with the transformation parameter found by optimization, and were then transformed to a standard normal distribution. The data have a monthly step, and the forecast is not recurring. A 90-year-long real flow series was used to compile the model: the first 75 years were used for calibration (the matrix of input-output relationships), and the last 15 years were used only for validation. The outputs of the model were compared with the real flow series. For comparison between the real flow series (a 100% successful forecast) and the forecasts, both were applied to the management of an artificial reservoir. The course of water reservoir management using a genetic algorithm (GE) plus the real flow series was compared with a fuzzy model (Fuzzy) plus the forecast made by the moving-zone model. During the evaluation process, the best zone size was sought. The results show that the highest number of inputs did not give the best results, and the ideal zone size lies in the interval from 25 to 35, where the course of management was almost the same for all sizes in the interval. The resulting course of management was compared with the course obtained using GE plus the real flow series. The comparison showed that the fuzzy model with forecasted values was able to handle the main malfunction, and the artificial disturbances introduced by the model were found to be significant once the water volumes during management were evaluated. The forecasting model in combination with the fuzzy model provides very good results in the management of a water reservoir with a storage function and can be recommended for this purpose.
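
    A minimal sketch of the preprocessing and moving-zone idea described above, substituting a plain analog-resampling step for the paper's neural network; the Box-Cox transform, standardization, and zone selection follow the abstract, but every function, parameter, and number here is hypothetical rather than the authors' code.

    ```python
    import numpy as np

    def boxcox(x, lam):
        """Box-Cox transform (Box & Cox, 1964)."""
        return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

    def zone_forecast(flows, lam, zone_size=30, n_scenarios=100, rng=None):
        """Stochastic one-month-ahead forecast from a moving zone of analogs.

        Takes the `zone_size` historical months whose flow is closest to
        the last observed value, and resamples their successors (with
        noise, in transformed space) to build a fan of possible values.
        """
        if rng is None:
            rng = np.random.default_rng(0)
        z = boxcox(np.asarray(flows, float), lam)
        z = (z - z.mean()) / z.std()             # to ~standard normal
        last = z[-1]
        # indices of the months closest to the last observation
        order = np.argsort(np.abs(z[:-1] - last))[:zone_size]
        successors = z[order + 1]                # what followed each analog
        picks = rng.choice(successors, size=n_scenarios, replace=True)
        return picks + rng.normal(0.0, successors.std(), n_scenarios)

    # Hypothetical 90 years of average monthly flows (lognormal-ish).
    rng = np.random.default_rng(42)
    series = rng.lognormal(mean=3.0, sigma=0.6, size=90 * 12)
    fan = zone_forecast(series, lam=0.2)
    print(fan.min(), np.median(fan), fan.max())  # the forecast fan
    ```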

  19. Operations planning simulation model extension study. Volume 1: Long duration exposure facility ST-01-A automated payload

    NASA Technical Reports Server (NTRS)

    Marks, D. A.; Gendiellee, R. E.; Kelly, T. M.; Giovannello, M. A.

    1974-01-01

    Ground processing and operation activities for selected automated and sortie payloads are evaluated. Functional flow activities are expanded to identify payload launch site facility and support requirements. Payload definitions are analyzed from the launch site ground processing viewpoint and then processed through the expanded functional flow activities. The requirements generated from the evaluation are compared with those contained in the data sheets. The following payloads were included in the evaluation: Long Duration Exposure Facility; Life Sciences Shuttle Laboratory; Biomedical Experiments Scientific Satellite; Dedicated Solar Sortie Mission; Magnetic Spectrometer; and Mariner Jupiter Orbiter. The expanded functional flow activities and descriptions for the automated and sortie payloads at the launch site are presented.

  20. Proper Orthogonal Decomposition on Experimental Multi-phase Flow in a Pipe

    NASA Astrophysics Data System (ADS)

    Viggiano, Bianca; Tutkun, Murat; Cal, Raúl Bayoán

    2016-11-01

    Multi-phase flow in a 10 cm diameter pipe is analyzed using proper orthogonal decomposition. The data were obtained using X-ray computed tomography in the Well Flow Loop at the Institute for Energy Technology in Kjeller, Norway. The system consists of two sources and two detectors; one camera records the vertical beams and one camera records the horizontal beams. The X-ray system allows measurement of phase holdup, cross-sectional phase distributions, and gas-liquid interface characteristics within the pipe. The mathematical framework in the context of multi-phase flows is developed. Phase fractions of a two-phase (gas-liquid) flow are analyzed and a reduced-order description of the flow is generated. Experimental data adds complexity to the analysis, since only a limited set of quantities is known for the reconstruction. Comparison between the reconstructed fields and the full data set allows observation of the important features. The mathematical description obtained from the decomposition will deepen the understanding of multi-phase flow characteristics and is applicable to fluidized beds, hydroelectric power, and nuclear processes, to name a few.
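
    Snapshot POD of this kind reduces, in essence, to a singular value decomposition of the mean-subtracted snapshot matrix. A minimal sketch on hypothetical phase-fraction snapshots (random placeholders, not the Well Flow Loop data):

    ```python
    import numpy as np

    def snapshot_pod(snapshots, n_modes):
        """Snapshot POD via singular value decomposition.

        snapshots : (n_points, n_times) array, one column per time instant
        Returns spatial modes, all singular values, temporal coefficients,
        and a rank-n_modes reconstruction of the snapshot matrix.
        """
        mean = snapshots.mean(axis=1, keepdims=True)
        X = snapshots - mean                 # fluctuations about the mean
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        recon = mean + U[:, :n_modes] @ np.diag(s[:n_modes]) @ Vt[:n_modes]
        return U[:, :n_modes], s, Vt[:n_modes], recon

    # Hypothetical phase-fraction snapshots on a 32x32 pipe cross-section.
    rng = np.random.default_rng(1)
    X = rng.random((32 * 32, 200))
    modes, sv, coeffs, Xr = snapshot_pod(X, n_modes=5)
    energy = (sv[:5] ** 2).sum() / (sv ** 2).sum()
    print(f"energy captured by 5 modes: {energy:.2%}")
    ```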

  1. Mapping lava flow textures using three-dimensional measures of surface roughness

    NASA Astrophysics Data System (ADS)

    Mallonee, H. C.; Kobs-Nawotniak, S. E.; McGregor, M.; Hughes, S. S.; Neish, C.; Downs, M.; Delparte, D.; Lim, D. S. S.; Heldmann, J. L.

    2016-12-01

    Lava flow emplacement conditions are reflected in the surface textures of a lava flow; unravelling these conditions is crucial to understanding the eruptive history and characteristics of basaltic volcanoes. Our team has begun mapping lava flow textures using visual-spectrum imagery, an inherently subjective process: the images generally lack the resolution needed for such determinations, and transitional textures such as rubbly and slabby pāhoehoe are similar in appearance and defined only qualitatively. This is particularly problematic for interpreting planetary lava flow textures, where we have more limited data. We present a tool to objectively classify lava flow textures based on quantitative measures of roughness, including the 2D Hurst exponent, RMS height, and 2D:3D surface area ratio. We collected aerial images at Craters of the Moon National Monument (COTM) using Unmanned Aerial Vehicles (UAVs) in 2015 and 2016 as part of the FINESSE (Field Investigations to Enable Solar System Science and Exploration) and BASALT (Biologic Analog Science Associated with Lava Terrains) research projects. The aerial images were stitched together to create Digital Terrain Models (DTMs) with resolutions on the order of centimeters. The DTMs were evaluated by the classification tool described above, with output compared against field assessments of the texture. Further, the DTMs were downsampled and re-evaluated to assess the efficacy of the classification tool at data resolutions similar to current datasets from other planetary bodies. This tool allows objective classification of lava flow texture, which enables more accurate interpretation of flow characteristics. This work also gives context for interpreting flows with comparatively low data resolutions, such as those on the Moon and Mars. Textural maps based on quantitative measures of roughness are a valuable asset for studies of lava flows on Earth and other planetary bodies.
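
    Two of the three roughness measures named above are simple to compute from a gridded DTM. A sketch on synthetic elevation patches follows; the Hurst exponent is omitted for brevity, and all data and thresholds here are hypothetical rather than the authors' calibrated values.

    ```python
    import numpy as np

    def rms_height(dtm):
        """RMS height: standard deviation of elevation about the mean."""
        return np.sqrt(np.mean((dtm - dtm.mean()) ** 2))

    def area_ratio_2d_3d(dtm, dx, dy):
        """Ratio of planimetric (2D) area to draped (3D) surface area.

        Approximates each cell's true area from its local slope:
        dA_3d = dx*dy*sqrt(1 + (dz/dx)^2 + (dz/dy)^2). Rougher surfaces
        give ratios farther below 1.
        """
        gy, gx = np.gradient(dtm, dy, dx)
        a3d = (dx * dy * np.sqrt(1.0 + gx**2 + gy**2)).sum()
        a2d = dx * dy * dtm.size
        return a2d / a3d

    # Hypothetical 1 cm DTM patches: smooth pahoehoe-like vs. blocky.
    rng = np.random.default_rng(7)
    smooth = 0.02 * rng.standard_normal((100, 100))
    blocky = 0.15 * rng.standard_normal((100, 100))
    for name, z in [("smooth", smooth), ("blocky", blocky)]:
        print(name, rms_height(z), area_ratio_2d_3d(z, 0.01, 0.01))
    ```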

  2. Documentation of South Dakota's ITS/CVO data architecture

    DOT National Transportation Integrated Search

    1999-09-15

    This report documents the Intelligent Transportation Systems/Commercial Vehicle Operations (ITS/CVO) data architecture for the State of South Dakota. It details the current state of affairs in terms of CVO business areas, processes, data flow linkage...

  3. Overland Flow Analysis Using Time Series of Suas-Derived Elevation Models

    NASA Astrophysics Data System (ADS)

    Jeziorska, J.; Mitasova, H.; Petrasova, A.; Petras, V.; Divakaran, D.; Zajkowski, T.

    2016-06-01

    With the advent of innovative techniques for generating high temporal- and spatial-resolution terrain models from Unmanned Aerial Systems (UAS) imagery, it has become possible to precisely map overland flow patterns. Furthermore, the process has become more affordable and efficient through the coupling of small UAS (sUAS), which are easily deployed, with Structure from Motion (SfM) algorithms that can efficiently derive 3D data from RGB imagery captured with consumer-grade cameras. We propose applying the robust overland flow algorithm based on the path sampling technique to map flow paths in arable land on a small test site in Raleigh, North Carolina. By comparing a time series of five flights in 2015 with the results of a simulation based on the most recent lidar-derived DEM (2013), we show that the sUAS-based data are suitable for overland flow predictions and have several advantages over the lidar data: they capture preferential flow along tillage and represent gullies more accurately. Furthermore, the simulated water flow patterns over the sUAS-based terrain models are consistent throughout the year. When terrain models are reconstructed only from sUAS-captured RGB imagery, however, water flow modeling is appropriate only in areas with sparse or no vegetation cover.
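
    The path sampling idea can be caricatured with a Monte Carlo random-walker sketch over a DEM grid. This toy steepest-descent walker is not the authors' actual path sampling solver, and the DEM below is synthetic; it only conveys how sampled paths accumulate into a flow-density map.

    ```python
    import numpy as np

    def flow_density(dem, n_walkers=20000, rng=None):
        """Monte Carlo path-sampling caricature of overland flow.

        Walkers are seeded uniformly; each steps to its steepest downslope
        8-neighbor until it reaches a pit or flat. The per-cell visit count
        approximates relative overland flow density.
        """
        if rng is None:
            rng = np.random.default_rng(0)
        nr, nc = dem.shape
        hits = np.zeros(dem.shape, dtype=float)
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
        for _ in range(n_walkers):
            r, c = rng.integers(nr), rng.integers(nc)
            for _ in range(nr * nc):          # hard cap on path length
                hits[r, c] += 1
                best, best_z = None, dem[r, c]
                for dr, dc in nbrs:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < nr and 0 <= cc < nc and dem[rr, cc] < best_z:
                        best, best_z = (rr, cc), dem[rr, cc]
                if best is None:              # pit or flat: walker stops
                    break
                r, c = best
        return hits / n_walkers

    # Hypothetical tilted plane with a gully carved down the middle.
    y, x = np.mgrid[0:60, 0:60]
    dem = 0.05 * y + 0.002 * (x - 30) ** 2
    density = flow_density(dem, n_walkers=5000)
    print(density.sum(), density.max())       # mean path length, peak density
    ```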

  4. 40 CFR 161.162 - Description of production process.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements...) A flow chart of the chemical equations of each intended reaction occurring at each step of the...

  5. A novel instrument for studying the flow behaviour of erythrocytes through microchannels simulating human blood capillaries.

    PubMed

    Sutton, N; Tracey, M C; Johnston, I D; Greenaway, R S; Rampling, M W

    1997-05-01

    A novel instrument has been developed to study the microrheology of erythrocytes as they flow through channels of dimensions similar to human blood capillaries. The channels are produced in silicon substrates using microengineering technology. Accurately defined, physiological driving pressures and temperatures are employed whilst precise, real-time image processing allows individual cells to be monitored continuously during their transit. The instrument characterises each cell in a sample of ca. 1000 in terms of its volume and flow velocity profile during its transit through a channel. The unique representation of the data in volume/velocity space provides new insight into the microrheological behaviour of blood. The image processing and subsequent data analysis enable the system to reject anomalous events such as multiple cell transits, thereby ensuring integrity of the resulting data. By employing an array of microfluidic flow channels we can integrate a number of different but precise and highly reproducible channel sizes and geometries within one array, thereby allowing multiple, concurrent isobaric measurements on one sample. As an illustration of the performance of the system, volume/velocity data sets recorded in a microfluidic device incorporating multiple channels of 100 microns length and individual widths ranging between 3.0 and 4.0 microns are presented.

  6. Dynamics of nonreactive solute transport in the permafrost environment

    NASA Astrophysics Data System (ADS)

    Svyatskiy, D.; Coon, E. T.; Moulton, J. D.

    2017-12-01

    As part of the DOE Office of Science Next Generation Ecosystem Experiment (NGEE-Arctic), researchers are developing process-rich models to understand and predict the evolution of water sources and hydrologic flow pathways resulting from degrading permafrost. The sources and interactions of surface and subsurface water and flow paths are complex in space and time due to the strong interplay between heterogeneous subsurface parameters, the seasonal-to-decadal evolution of the flow domain, climate-driven melting and release of permafrost ice as a liquid water source, evolving surface topography, and highly variable meteorological data. In this study, we seek to characterize the magnitude of vertical and lateral subsurface flows in a cold, wet tundra, polygonal landscape characteristic of the Barrow Peninsula, AK. To better understand the factors controlling water flux partitioning in these low-gradient landscapes, NGEE researchers developed and are applying the Advanced Terrestrial Simulator (ATS), which fully couples surface and subsurface flow and energy processes, snow distribution, and atmospheric forcing. Here we demonstrate the integration of a new solute transport model within the ATS, which enables the interpretation of applied and natural tracer experiments and observations aimed at quantifying water sources and flux partitioning. We examine the role of ice wedge polygon structure, freeze-thaw processes, and soil properties on the seasonal transport of water within and through polygon features, and compare results to tracer experiments on 2D low-centered and high-centered transects corresponding to artificial as well as realistic topographic data from sites in polygonal tundra. These simulations demonstrate significant differences in flow patterns between permafrost and non-permafrost environments due to active-layer freeze-thaw processes.

  7. A PROTOCOL FOR DETERMINING WWF SETTLING VELOCITIES FOR TREATMENT PROCESS DESIGN ENHANCEMENT

    EPA Science Inventory

    Urban wet weather flows (WWF) contain a high proportion of suspended solids (SS), which must be rapidly reduced before release to receiving waters. Site-specific, storm-event data evaluations for designing WWF-treatment facilities differ from dry-weather flow design. WWF-sett...

  8. Fluorescence lifetime measurements in flow cytometry

    NASA Astrophysics Data System (ADS)

    Beisker, Wolfgang; Klocke, Axel

    1997-05-01

    Fluorescence lifetime measurements provide insights into the dynamic and structural properties of dyes and their micro-environment. The implementation of fluorescence lifetime measurements in flow cytometric systems makes it possible to monitor large cell and particle populations with high statistical significance. In our system, a modulated laser beam is used for excitation, and the fluorescence signal, recorded with a fast computer-controlled digital oscilloscope, is processed digitally to determine the phase shift with respect to a reference beam by fast Fourier transform. Total fluorescence intensity as well as other parameters can be determined simultaneously from the same fluorescence signal. We use an epi-illumination design to allow the use of high numerical apertures, collecting as much light as possible to ensure detection of even weak fluorescence. Data storage and processing are done comparably to slit-scan flow cytometric data, using a data analysis system. The results are stored, displayed, combined with other parameters, and analyzed as normal listmode data. In our report we carefully discuss the signal-to-noise ratio for analog and digitally processed lifetime signals to evaluate the theoretical minimum fluorescence intensity for lifetime measurements. The applications presented include DNA staining, parameters of cell function, and different applications in non-mammalian cells such as algae.
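
    In the frequency domain, a single-exponential lifetime follows from the phase lag between the fluorescence and reference signals, tau = tan(delta_phi) / (2*pi*f_mod). A minimal sketch with synthetic signals follows; the modulation frequency, sampling rate, and lifetime are hypothetical, and the phase is read from the FFT bin at the modulation frequency.

    ```python
    import numpy as np

    def phase_at(freq, signal, fs):
        """Phase of `signal` at `freq`, read from the nearest FFT bin."""
        n = len(signal)
        spectrum = np.fft.rfft(signal * np.hanning(n))
        k = int(round(freq * n / fs))
        return np.angle(spectrum[k])

    def lifetime_from_phase(ref, fluor, f_mod, fs):
        """Single-exponential lifetime from the fluorescence phase lag:
        tau = tan(delta_phi) / (2*pi*f_mod)."""
        dphi = phase_at(f_mod, ref, fs) - phase_at(f_mod, fluor, fs)
        return np.tan(dphi) / (2.0 * np.pi * f_mod)

    # Hypothetical case: 10 MHz modulation, 4 ns lifetime, 1 GS/s sampling.
    fs, f_mod, tau = 1e9, 10e6, 4e-9
    t = np.arange(10000) / fs
    dphi_true = np.arctan(2 * np.pi * f_mod * tau)   # expected phase lag
    ref = 1.0 + 0.8 * np.cos(2 * np.pi * f_mod * t)
    fluor = 1.0 + 0.5 * np.cos(2 * np.pi * f_mod * t - dphi_true)
    print(lifetime_from_phase(ref, fluor, f_mod, fs))  # ~4e-9 s
    ```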

  9. Assessment of bridge abutment scour and sediment transport under various flow conditions

    NASA Astrophysics Data System (ADS)

    Gilja, Gordon; Valyrakis, Manousos; Michalis, Panagiotis; Bekić, Damir; Kuspilić, Neven; McKeogh, Eamon

    2017-04-01

    Safety of bridges over watercourses can be compromised by flow characteristics and bridge hydraulics. The scour process around bridge foundations can develop rapidly during low-recurrence-interval floods, when structural elements are exposed to increased flows. Variations in riverbed geometry, as a result of sediment removal and deposition processes, can increase flood-induced hazard at bridge sites, with catastrophic failures and destructive consequences for civil infrastructure. The quantification of flood-induced hazard to bridge safety generally involves coupled hydrodynamic and sediment transport models (i.e., 2D numerical or physical models) for a range of hydrological events covering both high and low flows. Modelled boundary conditions are usually estimated for their probability of occurrence using frequency analysis of long-term recordings at gauging stations. At smaller rivers, gauging station records are scarce, especially in the upper courses of rivers where weirs, drops, and rapids are common elements of river bathymetry. As a result, boundary conditions that accurately represent flow patterns in the modelled river reach often cannot be reliably acquired. The sediment transport process is also more difficult to describe, owing to its complexity and dependence on the local flow field, making scour hazard assessment a particularly challenging issue. This study investigates the influence of flow characteristics on the development of scour and sedimentation processes around the bridge abutments of a single-span masonry arch bridge in south Ireland. The impact of downstream weirs on bridge hydraulics, through variation of the downstream model domain type, is also considered in this study. The numerical model is established based on detailed bathymetry data surveyed along a rectangular grid of 50 cm spacing. The acquired data also consist of riverbed morphology and water level variations, which are monitored continuously at the bridge site. The obtained data are then used to compare and calibrate numerical models for several flood scenarios. The determination of the boundary conditions is followed by physical modelling to investigate the development of scour around bridge elements. The comparison of surveyed data with the obtained numerical and physical modelling results provides an insight into various flow patterns and their influence on riverbed morphology. This can deliver important information needed for the assessment of structural risk associated with flood events. Acknowledgement: The authors wish to acknowledge the financial support of the European Commission, through the Marie Curie action Industry-Academia Partnership and Pathways Network BRIDGE SMS (Intelligent Bridge Assessment Maintenance and Management System) - FP7-People-2013-IAPP-612517.

  10. Collective network routing

    DOEpatents

    Hoenicke, Dirk

    2014-12-02

    Disclosed are a unified method and apparatus to classify, route, and process data packets injected into a network so that they belong to a plurality of logical networks, each implementing a specific flow of data on top of a common physical network. The method allows local identification of collectives of packets for local processing, such as the computation of the sum, difference, maximum, minimum, or other logical operations over the identified packet collective. Packets are injected together with a class attribute and an opcode attribute. Network routers employing the described method use the packet attributes to look up the class-specific route information from a local route table, which contains the local incoming and outgoing directions as part of the specifically implemented global data flow of the particular virtual network.
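
    A toy software sketch of the idea follows: packets carry a class and an opcode, the router reduces packets of the same class locally, and a class-keyed route table supplies the outgoing direction. This is an illustration in the spirit of the disclosure, not the patented hardware; all names are hypothetical.

    ```python
    from dataclasses import dataclass

    # Opcode table: the reduction applied when packets of a class meet.
    OPS = {"SUM": lambda a, b: a + b, "MIN": min, "MAX": max}

    @dataclass
    class Packet:
        cls: int        # class attribute: which virtual network
        opcode: str     # logical operation for the packet collective
        payload: int

    class Router:
        """Toy router: class-specific route lookup plus local reduction
        of an identified packet collective."""
        def __init__(self, route_table):
            self.route_table = route_table   # cls -> outgoing direction
            self.pending = {}                # cls -> partially reduced packet

        def inject(self, pkt):
            held = self.pending.get(pkt.cls)
            if held is None:
                self.pending[pkt.cls] = pkt  # first member of the collective
            else:
                held.payload = OPS[pkt.opcode](held.payload, pkt.payload)

        def flush(self):
            """Forward each reduced packet along its class-specific route."""
            out = [(self.route_table[c], p) for c, p in self.pending.items()]
            self.pending.clear()
            return out

    r = Router(route_table={7: "east", 9: "north"})
    for v in (3, 5, 11):
        r.inject(Packet(cls=7, opcode="SUM", payload=v))
    r.inject(Packet(cls=9, opcode="MAX", payload=42))
    print(r.flush())   # class 7 reduced to payload 19, class 9 to 42
    ```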

  11. Phase I Hydrologic Data for the Groundwater Flow and Contaminant Transport Model of Corrective Action Unit 97: Yucca Flat/Climax Mine, Nevada Test Site, Nye County, Nevada, Rev. No.: 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John McCord

    2006-06-01

    The U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO) initiated the Underground Test Area (UGTA) Project to assess and evaluate the effects of the underground nuclear weapons tests on groundwater beneath the Nevada Test Site (NTS) and vicinity. The framework for this evaluation is provided in Appendix VI, Revision No. 1 (December 7, 2000) of the Federal Facility Agreement and Consent Order (FFACO, 1996). Section 3.0 of Appendix VI, "Corrective Action Strategy," of the FFACO describes the process that will be used to complete corrective actions specifically for the UGTA Project. The objective of the UGTA corrective action strategy is to define contaminant boundaries for each UGTA corrective action unit (CAU) where groundwater may have become contaminated from the underground nuclear weapons tests. The contaminant boundaries are determined based on modeling of groundwater flow and contaminant transport. A summary of the FFACO corrective action process and the UGTA corrective action strategy is provided in Section 1.5. The FFACO (1996) corrective action process for the Yucca Flat/Climax Mine CAU 97 was initiated with the Corrective Action Investigation Plan (CAIP) (DOE/NV, 2000a). The CAIP included a review of existing data on the CAU and proposed a set of data collection activities to collect additional characterization data. These recommendations were based on a value of information analysis (VOIA) (IT, 1999), which evaluated the value of different possible data collection activities, with respect to reduction in uncertainty of the contaminant boundary, through simplified transport modeling. The Yucca Flat/Climax Mine CAIP identifies a three-step model development process to evaluate the impact of underground nuclear testing on groundwater and to determine a contaminant boundary (DOE/NV, 2000a). The three steps are as follows: (1) data compilation and analysis providing the necessary modeling data, completed in two parts, the first addressing the groundwater flow model and the second the transport model; (2) development of a groundwater flow model; and (3) development of a groundwater transport model. This report presents the results of the first part of the first step, documenting the data compilation, evaluation, and analysis for the groundwater flow model. The second part, documentation of transport model data, will be the subject of a separate report. The purpose of this document is to present the compilation and evaluation of the available hydrologic data and information relevant to the development of the Yucca Flat/Climax Mine CAU groundwater flow model, which is a fundamental tool in the prediction of the extent of contaminant migration. Where appropriate, data and information documented elsewhere are summarized with reference to the complete documentation. The specific task objectives for hydrologic data documentation are as follows: (1) identify and compile available hydrologic data and supporting information required to develop and validate the groundwater flow model for the Yucca Flat/Climax Mine CAU; (2) assess the quality of the data and associated documentation, and assign qualifiers to denote levels of quality; and (3) analyze the data to derive expected values or spatial distributions and estimates of the associated uncertainty and variability.

  12. Short-term and long-term evapotranspiration rates at ecological restoration sites along a large river receiving rare flow events

    USGS Publications Warehouse

    Shanafield, Margaret; Jurado, Hugo Gutierrez; Burgueño, Jesús Eliana Rodríguez; Hernández, Jorge Ramírez; Jarchow, Christopher; Nagler, Pamela L.

    2017-01-01

    Many large rivers around the world no longer flow to their deltas, due to ever greater water withdrawals and diversions for human needs. However, the importance of riparian ecosystems is drawing increasing recognition, leading to the allocation of environmental flows to restore river processes. Accurate estimates of riparian plant evapotranspiration (ET) are needed to understand how the riverine system responds to these rare events and to achieve the goals of environmental flows. In 2014, historic environmental flows were released into the Lower Colorado River at Morelos Dam (Mexico); this once perennial but now dry reach is the final stretch to the mighty Colorado River Delta. One of the primary goals was to supply native-vegetation restoration sites along the reach with water to help seedlings establish and to boost groundwater levels to foster the planted saplings. Patterns in ET before, during, and after the flows are useful for evaluating whether this goal was met and for understanding the role that ET plays in this now ephemeral river system. Here, diurnal fluctuations in groundwater levels and MODIS data were used to compare estimates of ET at three native-vegetation restoration sites during the 2014 planned flow events, while MODIS data were used to evaluate long-term (2002–2016) ET responses to restoration efforts at these sites. Overall, ET was generally 0–10 mm d⁻¹ across sites, and although daily ET values from groundwater data were highly variable, weekly averaged estimates were highly correlated with MODIS-derived estimates at most sites. The influence of the 2014 flow events was not immediately apparent in the results, although the process of clearing vegetation and planting native vegetation at the restoration sites was clearly visible in the results.
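
    One common way to turn diurnal water-table fluctuations into an ET estimate is the classic White (1932) approach, which the sketch below implements under simple assumptions (night-time ET negligible, constant groundwater inflow). The abstract does not state which variant the authors used, and the specific-yield value and synthetic data here are hypothetical.

    ```python
    import numpy as np

    def white_method_et(levels, sy, night=(0, 4)):
        """Daily ET from diurnal water-table fluctuations, in the spirit
        of the White (1932) method:  ET = Sy * (24*r - ds)
        where r is the groundwater inflow rate (m/h) estimated from the
        night-time water-table slope (ET assumed negligible at night) and
        ds is the net water-table change over the day (m, rise positive).

        levels : 24 hourly water-table elevations (m)
        sy     : specific yield of the aquifer material
        """
        hours = np.arange(24)
        sel = (hours >= night[0]) & (hours < night[1])
        r = np.polyfit(hours[sel], levels[sel], 1)[0]   # night slope, m/h
        ds = levels[-1] - levels[0]                     # net daily change, m
        return sy * (24.0 * r - ds)

    # Hypothetical day: steady 2 mm/h inflow plus a midday ET drawdown.
    h = np.arange(24)
    drawdown = 0.004 * np.clip(np.sin((h - 6) / 12.0 * np.pi), 0, None).cumsum()
    levels = 10.0 + 0.002 * h - drawdown
    print(f"ET ~ {white_method_et(levels, sy=0.15) * 1000:.1f} mm/day")
    ```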

  13. Proposal for adaptive management to conserve biotic integrity in a regulated segment of the Tallapoosa River, Alabama, U.S.A

    USGS Publications Warehouse

    Irwin, Elise R.; Freeman, Mary C.

    2002-01-01

    Conserving river biota will require innovative approaches that foster and utilize scientific understanding of ecosystem responses to alternative river-management scenarios. We describe ecological and societal issues involved in flow management of a section of the Tallapoosa River (Alabama, U.S.A.) in which a species-rich native fauna is adversely affected by flow alteration by an upstream hydropower dam. We hypothesize that depleted low flows, flow instability and thermal alteration resulting from pulsed flow releases at the hydropower dam are most responsible for changes in the Tallapoosa River biota. However, existing data are insufficient to prescribe with certainty minimum flow levels or the frequency and duration of stable flow periods that would be necessary or sufficient to protect riverine biotic integrity. Rather than negotiate a specific change in the flow regime, we propose that stakeholders, including management agencies, the power utility, and river advocates, engage in a process of adaptive-flow management. This process would require that stakeholders (1) develop and agree to management objectives; (2) model hypothesized relations between dam operations and management objectives; (3) implement a change in dam operations; and (4) evaluate biological responses and other stakeholder benefits through an externally reviewed monitoring program. Models would be updated with monitoring data and stakeholders would agree to further modify flow regimes as necessary to achieve management objectives. A primary obstacle to adaptive management will be a perceived uncertainty of future costs for the power utility and other stakeholders. However, an adaptive, iterative approach offers the best opportunity for improving flow regimes for native biota while gaining information critical to guiding management decisions in other flow-regulated rivers.

  14. Sedimentary architecture of a sub-lacustrine debris fan: Eocene Dongying Depression, Bohai Bay Basin, east China

    NASA Astrophysics Data System (ADS)

    Liu, Jianping; Xian, Benzhong; Wang, Junhui; Ji, Youliang; Lu, Zhiyong; Liu, Saijun

    2017-12-01

    The sedimentary architectures of submarine/sublacustrine fans are controlled by sedimentary processes, geomorphology and sediment composition in sediment gravity flows. To advance understanding of sedimentary architecture of debris fans formed predominantly by debris flows in deep-water environments, a sub-lacustrine fan (Y11 fan) within a lacustrine succession has been identified and studied through the integration of core data, well logging data and 3D seismic data in the Eocene Dongying Depression, Bohai Bay Basin, east China. Six types of resedimented lithofacies can be recognized, which are further grouped into five broad lithofacies associations. Quantification of gravity flow processes on the Y11 fan is suggested by quantitative lithofacies analysis, which demonstrates that the fan is dominated by debris flows, while turbidity currents and sandy slumps are less important. The distribution, geometry and sedimentary architecture are documented using well data and 3D seismic data. A well-developed depositional lobe with a high aspect ratio is identified based on a sandstone isopach map. Canyons and/or channels are absent, which is probably due to the unsteady sediment supply from delta-front collapse. Distributary tongue-shaped debris flow deposits can be observed at different stages of fan growth, suggesting a lobe constructed by debrite tongue complexes. Within each stage of the tongue complexes, architectural elements are interpreted by wireline log motifs showing amalgamated debrite tongues, which constitute the primary fan elements. Based on lateral lithofacies distribution and vertical sequence analysis, it is proposed that lakefloor erosion, entrainment and dilution in the flow direction lead to an organized distribution of sandy debrites, muddy debrites and turbidites on individual debrite tongues. Plastic rheology of debris flows combined with fault-related topography are considered the major factors that control sediment distribution and fan architecture. An important implication of this study is that a deep-water depositional model for debrite-dominated systems was proposed, which may be applicable to other similar deep-water environments.

  15. Methodology for the Elimination of Reflection and System Vibration Effects in Particle Image Velocimetry Data Processing

    NASA Technical Reports Server (NTRS)

    Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.

    2005-01-01

    A methodology to eliminate model reflection and system vibration effects from post-processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure, as well as the subtraction of model reflection, is performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no-flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work were generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing, and validation of these algorithms are presented.
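
    The reflection-removal and fiducial-alignment steps described above lend themselves to a compact image-processing sketch. The following minimal illustration assumes numpy/scipy arrays and a single bright fiduciary mark per image; the function names and threshold are hypothetical and are not part of the commercial PIV software used in the study.

        import numpy as np
        from scipy import ndimage

        def remove_reflection(images, reflection_only):
            """Subtract a reflections-only frame so only seed particles remain."""
            return [np.clip(img - reflection_only, 0, None) for img in images]

        def fiducial_centroid(image, threshold):
            """Centroid of the largest bright region, taken as the fiduciary mark."""
            labels, n = ndimage.label(image > threshold)
            if n == 0:
                raise ValueError("no fiduciary mark found")
            sizes = ndimage.sum(image > threshold, labels, range(1, n + 1))
            return ndimage.center_of_mass(labels == (np.argmax(sizes) + 1))

        def align_to_background(image, background, threshold=200.0):
            """Shift `image` so its fiducial centroid matches the background's."""
            cy, cx = fiducial_centroid(image, threshold)
            by, bx = fiducial_centroid(background, threshold)
            return ndimage.shift(image, (by - cy, bx - cx))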

  16. Investigation of column flotation process on sulphide ore using 2-electrode capacitance sensor: The effect of air flow rate and solid percentage

    NASA Astrophysics Data System (ADS)

    Haryono, Didied; Harjanto, Sri; Wijaya, Rifky; Oediyani, Soesaptri; Nugraha, Harisma; Huda, Mahfudz Al; Taruno, Warsito Purwo

    2018-04-01

    Investigation of the column flotation process on sulphide ore using a 2-electrode capacitance sensor is presented in this paper. The effect of air flow rate and solid percentage on the column flotation process has been experimentally investigated. The purpose of this paper is to understand the capacitance signal characteristics affected by the air flow rate and the solid percentage, which can be used to determine the metallurgical performance. Experiments were performed using a laboratory column flotation cell with a diameter of 5 cm and a total height of 140 cm. A sintered ceramic sparger and wash water were installed at the bottom and top of the column, respectively. A two-electrode concave-type capacitance sensor was installed at a distance of 50 cm from the sparger. The sensor was attached to the outer wall of the column and connected to a data acquisition system, manufactured by CTECH Labs Edwar Technology, and a personal computer for further data processing. Feed consisting of ZnS and SiO2 in a 3:2 ratio was mixed with reagents to make 1 litre of slurry. The slurry was fed into the aerated column at 100 cm above the sparger at a constant rate, and the capacitance signals were captured during the process. In this paper, solid percentages of 7.5 and 10% and air flow rates of 2-4 L/min at 0.5 L/min intervals were used as independent variables. The results show that the capacitance signal characteristics of the 7.5 and 10% solid cases differ at any given air flow rate, with the 10% solid case producing higher signals than the 7.5% case. Metallurgical performance and capacitance signal exhibit a good correlation.

  17. Preliminary interpretation of thermal data from the Nevada Test Site

    USGS Publications Warehouse

    Sass, John Harvey; Lachenbruch, Arthur H.

    1982-01-01

    Analysis of data from 60 wells in and around the Nevada Test Site, including 16 in the Yucca Mountain area, indicates a thermal regime characterized by large vertical and lateral gradients in heat flow. Estimates of heat flow indicate considerable variation on both regional and local scales. The variations are attributable primarily to hydrologic processes involving interbasin flow with a vertical component of (seepage) velocity (volume flux) of a few mm/yr. Apart from indicating a general downward movement of water at a few mm/yr, the results from Yucca Mountain are as yet inconclusive.

  18. Turbofan forced mixer-nozzle internal flowfield. Volume 1: A benchmark experimental study

    NASA Technical Reports Server (NTRS)

    Paterson, R. W.

    1982-01-01

    An experimental investigation of the flow field within a model turbofan forced mixer nozzle is described. Velocity and thermodynamic state variable data are provided for use in assessing the accuracy and assisting the further development of computational procedures for predicting the flow field within mixer nozzles. Velocity and temperature data suggested that the nozzle mixing process was dominated by circulations (secondary flows) of a length scale on the order of the lobe dimensions, which were associated with strong radial velocities observed near the lobe exit plane. The 'benchmark' model mixer experiment conducted for code assessment purposes is discussed.

  19. Time Dependent Simulation of Turbopump Flows

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan; Chan, William; Williams, Robert

    2001-01-01

    The objective of this viewgraph presentation is to enhance incompressible flow simulation capability for developing aerospace vehicle components, especially unsteady flow phenomena associated with high-speed turbopumps. Unsteady computations of the Space Shuttle Main Engine (SSME) rig-1 configuration were completed for 1 1/2 rotations of the 34.3 million grid point model. The moving boundary capability is obtained by using the DCF module. MLP shared-memory parallelism has been implemented and benchmarked in INS3D. A scripting capability from CAD geometry to solution has been developed. Data compression is applied to reduce data size in post-processing, and fluid/structure coupling has been initiated.

  20. A PROPOSED CHEMICAL INFORMATION AND DATA SYSTEM. VOLUME I.

    DTIC Science & Technology

    CHEMICAL COMPOUNDS, *DATA PROCESSING, *INFORMATION RETRIEVAL, *CHEMICAL ANALYSIS, INPUT OUTPUT DEVICES, COMPUTER PROGRAMMING, CLASSIFICATION...CONFIGURATIONS, DATA STORAGE SYSTEMS, ATOMS, MOLECULES, PERFORMANCE (ENGINEERING), MAINTENANCE, SUBJECT INDEXING, MAGNETIC TAPE, AUTOMATIC, MILITARY REQUIREMENTS, TYPEWRITERS, OPTICS, TOPOLOGY, STATISTICAL ANALYSIS, FLOW CHARTING.

  1. Adaptive Conditioning of Multiple-Point Geostatistical Facies Simulation to Flow Data with Facies Probability Maps

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, M.; Jafarpour, B.

    2013-12-01

    Characterization of complex geologic patterns that create preferential flow paths in certain reservoir systems requires higher-order geostatistical modeling techniques. Multipoint statistics (MPS) provides a flexible grid-based approach for simulating such complex geologic patterns from a conceptual prior model known as a training image (TI). In this approach, a stationary TI that encodes the higher-order spatial statistics of the expected geologic patterns is used to represent the shape and connectivity of the underlying lithofacies. While MPS is quite powerful for describing complex geologic facies connectivity, the nonlinear and complex relation between the flow data and facies distribution makes flow data conditioning quite challenging. We propose an adaptive technique for conditioning facies simulation from a prior TI to nonlinear flow data. Non-adaptive strategies for conditioning facies simulation to flow data can involve many forward flow model solutions that can be computationally very demanding. To improve the conditioning efficiency, we develop an adaptive sampling approach through a data feedback mechanism based on the sampling history. In this approach, after a short burn-in period during which unconditional samples are generated and passed through an acceptance/rejection test, an ensemble of accepted samples is identified and used to generate a facies probability map. This facies probability map contains the common features of the accepted samples and provides conditioning information about facies occurrence in each grid block, which is used to guide the conditional facies simulation process. As the sampling progresses, the initial probability map is updated according to the collective information about the facies distribution in the chain of accepted samples to increase the acceptance rate and efficiency of the conditioning. This conditioning process can be viewed as an optimization approach where each new sample is proposed based on the sampling history to improve the data mismatch objective function. We extend the application of this adaptive conditioning approach to the case where multiple training images are proposed to describe the geologic scenario in a given formation. We discuss the advantages and limitations of the proposed adaptive conditioning scheme and use numerical experiments from fluvial channel formations to demonstrate its applicability and performance compared to non-adaptive conditioning techniques.
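
    The feedback loop can be summarized in a few lines of Python. In this sketch, `simulate_facies` and `flow_mismatch` are hypothetical stand-ins for the MPS simulator and the forward flow model, and the burn-in length, sample count, and acceptance tolerance are illustrative, not the authors' settings.

        import numpy as np

        def adaptive_conditioning(simulate_facies, flow_mismatch,
                                  n_burn_in=20, n_samples=200, tol=1.0):
            """Adaptive acceptance/rejection loop guided by a facies probability map."""
            accepted = []
            prob_map = None  # unconditional draws during burn-in
            for i in range(n_samples):
                model = simulate_facies(prob_map)     # binary facies realization
                if flow_mismatch(model) < tol:        # acceptance/rejection test
                    accepted.append(model)
                # After burn-in, rebuild the facies probability map from the
                # chain of accepted samples to guide subsequent draws.
                if i >= n_burn_in and accepted:
                    prob_map = np.mean(accepted, axis=0)
            return accepted, prob_map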

  2. Simulation of heat and mass transfer processes in the experimental section of the air-condensing unit of Scientific Production Company "Turbocon"

    NASA Astrophysics Data System (ADS)

    Artemov, V. I.; Minko, K. B.; Yan'kov, G. G.; Kiryukhin, A. V.

    2016-05-01

    A mathematical model was developed for numerical analysis of heat and mass transfer processes in the experimental section of the air condenser (ESAC) created by the Scientific Production Company (SPC) "Turbocon" and installed on the grounds of the All-Russia Thermal Engineering Institute. The simulations were performed using the authors' CFD code ANES. The models were verified against experimental data obtained in tests of the ESAC. The capability of the proposed models to calculate the processes in the steam-air mixture and the cooling air, and of the algorithms to account for flow maldistribution across the rows of the tube bundle, was demonstrated. Data were presented on the influence of the temperature and flow rate of the cooling air on the pressure in the upper header of the ESAC, the effective heat transfer coefficient, the steam flow distribution by tube rows, and the dimensions of the ineffectively operating zones of the tube bundle for two schemes of steam-air mixture flow (one-pass and two-pass). It was shown that the pressure behind the turbine (in the upper header) increases significantly with increasing steam flow rate, decreasing cooling air flow rate, and rising cooling air temperature, and that the maximum value of the heat transfer coefficient is determined entirely by the flow rate of the cooling air. Furthermore, the steam flow rate corresponding to the maximum heat transfer coefficient depends substantially on the ambient temperature. An analysis of the effectiveness of the considered schemes of internal coolant flow showed that the two-pass scheme is more effective because it provides lower pressure in the upper header, even though its hydraulic resistance at a fixed flow rate of the steam-air mixture is considerably higher than that of the one-pass scheme. This result follows from the fact that, in the two-pass scheme, the condensation process involves a larger internal tube surface, resulting in lower values of Δt (the temperature difference between the internal and external coolant) for a given heat load.

  3. Laser Doppler, velocimeter system for turbine stator cascade studies and analysis of statistical biasing errors

    NASA Technical Reports Server (NTRS)

    Seasholtz, R. G.

    1977-01-01

    A laser Doppler velocimeter (LDV) built for use in the Lewis Research Center's turbine stator cascade facilities is described. The signal processing and self-contained data processing are based on a computing counter. A procedure is given for mode-matching the laser to the probe volume. An analysis is presented of biasing errors that were observed in turbulent flow when the mean flow was not normal to the fringes.

  4. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    NASA Astrophysics Data System (ADS)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits defining the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
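
    As a rough illustration of the virtual-component idea (the paper's two implementations are not reproduced here), the sketch below treats a chain of data-flow stages as a single testable unit with a built-in test; all stage names and test cases are hypothetical.

        class VirtualComponent:
            """Treat several assembled data-flow components as one testable unit."""

            def __init__(self, *stages):
                self.stages = stages  # each stage is a callable component

            def process(self, data):
                for stage in self.stages:
                    data = stage(data)  # pass data along the flow
                return data

            def built_in_test(self, test_cases):
                """Run (input, expected_output) pairs through the assembled flow."""
                return all(self.process(x) == want for x, want in test_cases)

        # Hypothetical two-stage flow: parse a CSV line, then scale the values.
        parse = lambda s: [int(t) for t in s.split(",")]
        scale = lambda xs: [2 * x for x in xs]
        vc = VirtualComponent(parse, scale)
        assert vc.built_in_test([("1,2,3", [2, 4, 6])])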

  5. Convergence of the flow of a chemically reacting gaseous mixture to incompressible Euler equations in an unbounded domain

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Sam

    2017-12-01

    The flow of a chemically reacting gaseous mixture is associated with a variety of phenomena and processes. We study the combined quasineutral and inviscid limit from the flow of a chemically reacting gaseous mixture governed by the Poisson equation to the incompressible Euler equations with ill-prepared initial data in the unbounded domain R^2 × T. Furthermore, the convergence rates are obtained.

  6. Tracing Nitrate Contributions to Streams During Varying Flow Regimes at the Sleepers River Research Watershed, Vermont, USA

    NASA Astrophysics Data System (ADS)

    Sebestyen, S. D.; Shanley, J. B.; Boyer, E. W.; Ohte, N.; Doctor, D. H.; Kendall, C.

    2003-12-01

    Quantifying sources and transformations of nitrate in headwater catchments is fundamental to understanding the movement of nitrogen to streams. At the Sleepers River Research Watershed in northeastern Vermont (USA), we are using multiple chemical tracer and mixing model approaches to quantify sources and transport of nitrate to streams under varying flow regimes. We sampled streams, lysimeters, and wells at nested locations from the headwaters to the outlet of the 41 ha W-9 watershed under the entire range of flow regimes observed throughout 2002-2003, including baseflow and multiple events (stormflow and snowmelt). Our results suggest that nitrogen sources, and consequently stream nitrate concentrations, are rapidly regenerated during several weeks of baseflow and nitrogen is flushed from the watershed by stormflow events that follow baseflow periods. Both basic chemistry data (anions, cations, & dissolved organic carbon) and isotopic data (nitrate, dissolved organic carbon, and dissolved inorganic carbon) indicate that nitrogen source contributions vary depending upon the extent of saturation in the watershed, the initiation of shallow subsurface water inputs, and other hydrological processes. Stream nitrate concentrations typically peak with discharge and are higher on the falling than the rising limb of the hydrograph. Our data also indicate the importance of terrestrial and aquatic biogeochemical processes, in addition to hydrological connectivity in controlling how nitrate moves from the terrestrial landscape to streams. Our detailed sampling data from multiple flow regimes are helping to identify and quantify the "hot spots" and "hot moments" of biogeochemical and hydrological processes that control nitrogen fluxes in streams.
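
    Mixing-model source separation of the kind mentioned above commonly reduces to a two-end-member mass balance. The sketch below shows the standard calculation; the concentrations are hypothetical and are not Sleepers River data.

        def event_water_fraction(c_stream, c_baseflow, c_event):
            """Two-end-member mixing: fraction of streamflow that is event water.

            Mass balance  c_stream = f * c_event + (1 - f) * c_baseflow
            gives         f = (c_stream - c_baseflow) / (c_event - c_baseflow).
            """
            return (c_stream - c_baseflow) / (c_event - c_baseflow)

        # Hypothetical nitrate concentrations (mg/L as N) during snowmelt.
        print(event_water_fraction(c_stream=0.8, c_baseflow=0.3, c_event=1.5))  # ~0.42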

  7. Utility accommodation and conflict tracker (UACT) : user manual

    DOT National Transportation Integrated Search

    2009-02-01

    Project 0-5475 performed a comprehensive analysis of utility conflict data/information flows between utility : accommodation stakeholders in the Texas Department of Transportation project development process, : developed data models to accommodate wo...

  8. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.

  9. Technical Manual for the Geospatial Stream Flow Model (GeoSFM)

    USGS Publications Warehouse

    Asante, Kwabena O.; Artan, Guleid A.; Pervez, Md Shahriar; Bandaragoda, Christina; Verdin, James P.

    2008-01-01

    The monitoring of wide-area hydrologic events requires the use of geospatial and time series data available in near-real time. These data sets must be manipulated into information products that speak to the location and magnitude of the event. Scientists at the U.S. Geological Survey Earth Resources Observation and Science (USGS EROS) Center have implemented a hydrologic modeling system which consists of an operational data processing system and the Geospatial Stream Flow Model (GeoSFM). The data processing system generates daily forcing evapotranspiration and precipitation data from various remotely sensed and ground-based data sources. To allow for rapid implementation in data scarce environments, widely available terrain, soil, and land cover data sets are used for model setup and initial parameter estimation. GeoSFM performs geospatial preprocessing and postprocessing tasks as well as hydrologic modeling tasks within an ArcView GIS environment. The integration of GIS routines and time series processing routines is achieved seamlessly through the use of dynamically linked libraries (DLLs) embedded within Avenue scripts. GeoSFM is run operationally to identify and map wide-area streamflow anomalies. Daily model results including daily streamflow and soil water maps are disseminated through Internet map servers, flood hazard bulletins and other media.

  10. Optimizing a Laser Process for Making Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Arepalli, Sivaram; Nikolaev, Pavel; Holmes, William

    2010-01-01

    A systematic experimental study has been performed to determine the effects of each of the operating conditions in a double-pulse laser ablation process that is used to produce single-wall carbon nanotubes (SWCNTs). The comprehensive data compiled in this study have been analyzed to recommend conditions for optimizing the process and scaling up the process for mass production. The double-pulse laser ablation process for making SWCNTs was developed by Rice University researchers. Of all currently known nanotube-synthesizing processes (arc and chemical vapor deposition), this process yields the greatest proportion of SWCNTs in the product material. The aforementioned process conditions are important for optimizing the production of SWCNTs and scaling up production. Reports of previous research (mostly at Rice University) toward optimization of process conditions mention effects of oven temperature and briefly mention effects of flow conditions, but no systematic, comprehensive study of the effects of process conditions was done prior to the study described here. This was a parametric study, in which several production runs were carried out, changing one operating condition for each run. The study involved variation of a total of nine parameters: the sequence of the laser pulses, pulse-separation time, laser pulse energy density, buffer gas (helium or nitrogen instead of argon), oven temperature, pressure, flow speed, inner diameter of the flow tube, and flow-tube material.

  11. Analysis of the three-dimensional structure of a bubble wake using PIV and Galilean decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Y.A.; Schmidl, W.D.; Ortiz-Villafuerte, J.

    1999-07-01

    Bubbly flow plays a key role in a variety of natural and industrial processes. An accurate and complete description of the phase interactions in two-phase bubbly flow is not available at this time. These phase interactions are, in general, always three-dimensional and unsteady. Therefore, measurement techniques utilized to obtain qualitative and quantitative data from two-phase flow should be able to acquire transient and three-dimensional data, in order to provide information to test theoretical models and numerical simulations. Even for dilute bubble flows, in which bubble interaction is at a minimum, the turbulent motion of the liquid generated by the bubble is yet to be completely understood. For many years, the design of systems with bubbly flows was based primarily on empiricism. Dilute bubbly flows are an extension of single bubble dynamics, and therefore improvements in the description and modeling of single bubble motion, the flow field around the bubble, and the dynamical interactions between the bubble and the flow will consequently improve bubbly flow modeling. The improved understanding of the physical phenomena will have far-reaching benefits in upgrading the operation and efficiency of current processes and in supporting the development of new and innovative approaches. A stereoscopic particle image velocimetry measurement of the flow generated by the passage of a single air bubble rising in stagnant water in a circular pipe is presented. Three-dimensional velocity fields within the measurement zone were obtained. Ensemble-averaged instantaneous velocities for a specific bubble path were calculated and interpolated to obtain mean three-dimensional velocity fields. A Galilean velocity decomposition is used to study the vorticity generated in the flow.
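
    A Galilean decomposition of a 2D PIV field amounts to subtracting a constant convection velocity, for example the bubble rise velocity, so that vortices travelling with the bubble appear as closed streamline patterns; vorticity itself is unchanged by the transformation. A minimal numpy sketch, with array layout assumed for illustration, follows.

        import numpy as np

        def galilean_decomposition(u, v, u_conv, v_conv):
            """Velocity field seen in a frame moving at (u_conv, v_conv)."""
            return u - u_conv, v - v_conv

        def vorticity(u, v, dx, dy):
            """Out-of-plane vorticity dv/dx - du/dy by central differences;
            identical before and after the Galilean shift."""
            return np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)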

  12. The Overgrid Interface for Computational Simulations on Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Computational simulations using overset grids typically involve multiple steps and a variety of software modules. A graphical interface called OVERGRID has been specially designed for such purposes. Data required and created by the different steps include geometry, grids, domain connectivity information, and flow solver input parameters. The interface provides a unified environment for the visualization, processing, generation, and diagnosis of such data. General modules are available for the manipulation of structured grids and unstructured surface triangulations. Modules more specific to the overset approach include surface curve generators, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, Cartesian box grid generators, and domain connectivity pre-processing tools. An interface provides automatic selection and viewing of flow solver boundary conditions and various other flow solver inputs. For problems involving multiple components in relative motion, a module is available to build the component/grid relationships and to prescribe and animate the dynamics of the different components.

  13. Carolinas Coastal Change Processes Project data report for nearshore observations at Cape Hatteras, North Carolina

    USGS Publications Warehouse

    Armstrong, Brandy N.; Warner, John C.; Voulgaris, George; List, Jeffrey H.; Thieler, Robert; Martini, Marinna A.; Montgomery, Ellyn T.; McNinch, Jesse E.; Book, Jeffrey W.; Haas, Kevin

    2013-01-01

    An oceanographic field study conducted in February 2010 investigated processes that control nearshore flow and sediment transport dynamics at Cape Hatteras, North Carolina. This report describes the project background, field program, instrumentation setup, and locations of the sensor deployments. The data collected, and supporting meteorological and streamflow observations, are presented as time-series plots for data visualization. Additionally, the data are available as part of this report.

  14. Investigation of the jet-wake flow of a highly loaded centrifugal compressor impeller

    NASA Technical Reports Server (NTRS)

    Eckardt, D.

    1978-01-01

    Investigations aimed at developing a better understanding of the complex flow field in high-performance centrifugal compressors were performed. Newly developed measuring techniques for unsteady static and total pressures as well as flow directions, and a digital data analysis system for fluctuating signals, were thoroughly tested. The loss-affected mixing process of the distorted impeller discharge flow was investigated in detail, in both the absolute and relative frames, at impeller tip speeds up to 380 m/s. A theoretical analysis showed good agreement between the test results and the DEAN-SENOO theory, which was extended to compressible flows.

  15. Numerical and Experimental study of secondary flows in a rotating two-phase flow: the tea leaf paradox

    NASA Astrophysics Data System (ADS)

    Calderer, Antoni; Neal, Douglas; Prevost, Richard; Mayrhofer, Arno; Lawrenz, Alan; Foss, John; Sotiropoulos, Fotis

    2015-11-01

    Secondary flows in a rotating flow in a cylinder, resulting in the so-called "tea leaf paradox", are fundamental for understanding atmospheric pressure systems, developing techniques for separating red blood cells from plasma, and even separating coagulated trub in the beer brewing process. We seek to gain deeper insight into this phenomenon by integrating numerical simulations and experiments. We employ the Curvilinear Immersed Boundary (CURVIB) method of Calderer et al. (J. Comp. Physics 2014), a two-phase flow solver based on the level set method, to simulate rotating free-surface flow in a cylinder partially filled with water, as in the tea leaf paradox flow. We first demonstrate the validity of the numerical model by simulating a cylinder with a rotating base filled with a single fluid, obtaining results in excellent agreement with available experimental data. Then we present results for the cylinder case with a free surface, investigate the complex formation of secondary flow patterns, and show comparisons with new experimental data for this flow obtained by LaVision. Computational resources were provided by the Minnesota Supercomputing Institute.

  16. Mathematical Models of Continuous Flow Electrophoresis

    NASA Technical Reports Server (NTRS)

    Saville, D. A.; Snyder, R. S.

    1985-01-01

    Development of high-resolution continuous flow electrophoresis devices ultimately requires comprehensive understanding of the ways various phenomena and processes facilitate or hinder separation. A comprehensive model of the actual three-dimensional flow, temperature, and electric fields was developed to provide guidance in the design of electrophoresis chambers for specific tasks and a means of interpreting test data on a given chamber. Part of the model development process includes experimental and theoretical studies of hydrodynamic stability. This is necessary to understand the origin of the mixing flows observed in wide-gap chambers under gravitational effects. Ensuring that the model accurately reflects the flow field and particle motion requires extensive experimental work. Another part of the investigation is concerned with the behavior of concentrated sample suspensions with regard to sample stream stability and particle-particle interactions that might affect separation in an electric field, especially at high field strengths. Mathematical models will be developed and tested to establish the roles of the various interactions.

  17. Estimating Flow-Through Balance Momentum Tares with CFD

    NASA Technical Reports Server (NTRS)

    Melton, John E.; James, Kevin D.; Long, Kurtis R.; Flamm, Jeffrey D.

    2016-01-01

    This paper describes the process used for estimating flow-through balance momentum tares. The interaction of jet engine exhausts on the Boeing ERA Hybrid Wing Body (HWB) was simulated in the NFAC 40x80 wind tunnel at NASA Ames using a pair of turbine powered simulators (TPS). High-pressure air was passed through a flow-through balance and manifold before being delivered to the TPS units. The force and moment tares that result from the internal shear and pressure distribution were estimated using CFD. Validation of the CFD simulations for these complex internal flows is a challenge, given the limited experimental data due to the complications of the internal geometry. Two CFD validation efforts are documented, and comparisons with experimental data from the final model installation are provided.
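
    Conceptually, the momentum tare is the integral of static pressure and wall shear over the internal wetted surface of the balance and manifold, evaluated from the CFD solution. A discrete-facet sketch is shown below; the array layout is a hypothetical illustration, not the procedure's actual implementation.

        import numpy as np

        def momentum_tare_force(p, tau, normals, areas):
            """Net internal force from pressure and wall shear on n surface facets.

            p       : (n,) static pressure
            tau     : (n, 3) wall shear-stress vectors
            normals : (n, 3) outward unit normals
            areas   : (n,) facet areas
            The resulting (3,) force vector is subtracted from the balance
            reading before aerodynamic coefficients are formed; moments
            follow analogously from r x dF about the balance moment center.
            """
            pressure_force = -(p[:, None] * normals * areas[:, None]).sum(axis=0)
            shear_force = (tau * areas[:, None]).sum(axis=0)
            return pressure_force + shear_force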

  18. 3D Sedimentological and geophysical studies of clastic reservoir analogs: Facies architecture, reservoir properties, and flow behavior within delta front facies elements of the Cretaceous Wall Creek Member, Frontier Formation, Wyoming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christopher D. White

    2009-12-21

    Significant volumes of oil and gas occur in reservoirs that are inferred to have been formed by ancient river deltas. This geologic setting has implications for the spatial distribution of rock types (e.g., sandstones and mudstones) and the variation of transport properties (e.g., permeability and porosity) within bodies of a particular rock type. Both basin-wide processes such as sea-level change and the autocyclicity of deltaic processes commonly cause deltaic reservoirs to have large variability in rock properties; in particular, alternations between mudstones and sandstones may form baffles, and trends in rock body permeability can influence productivity and recovery efficiency. In addition, diagenetic processes such as compaction, dissolution, and cementation can alter the spatial pattern of flow properties. A better understanding of these properties, and improved methods to model the properties and their effects, will allow improved reservoir development planning and increased recovery of oil and gas from deltaic reservoirs. Surface exposures of ancient deltaic rocks provide a high-resolution, low-uncertainty view of subsurface variability. Patterns and insights gleaned from these exposures can be used to model analogous reservoirs, for which data is much sparser. This approach is particularly attractive when reservoir formations are exposed at the surface. The Frontier Formation in central Wyoming provides an opportunity for high-resolution characterization. The same rocks exposed in the vicinity of the Tisdale anticline are productive in nearby oil fields, including Salt Creek. Many kilometers of good-quality exposure are accessible, and the common bedding-plane exposures allow use of shallow-penetration, high-resolution electromagnetic methods known as ground-penetrating radar. This study combined geologic interpretations, maps, vertical sections, core data, and ground-penetrating radar to construct high-resolution geostatistical and flow models for the Wall Creek Member of the Frontier Formation. Strata-conforming grids were used to reproduce the progradational and aggradational geometries observed in outcrop and radar data. A new Bayesian method integrates outcrop-derived statistics, core observations of concretions, and radar amplitude and phase data. The proposed method consistently propagates measurement uncertainty through the model-building process and yields an ensemble of plausible models for diagenetic calcite concretions; it is simple to implement within widely available geostatistics packages. These concretions have a statistically significant effect on flow. Furthermore, neither geostatistical data from the outcrops nor geophysical data from radar alone is sufficient: models that integrate both data types have significantly different flow responses. This was demonstrated both for an exhaustive two-dimensional reference image and in three dimensions, using flow simulations. This project wholly supported one PhD student and part of the education of an additional MS and PhD student. It helped to sponsor 6 refereed articles and 8 conference or similar presentations.

  19. Background-Oriented Schlieren (BOS) for Scramjet Inlet-isolator Investigation

    NASA Astrophysics Data System (ADS)

    Che Idris, Azam; Rashdan Saad, Mohd; Hing Lo, Kin; Kontis, Konstantinos

    2018-05-01

    The background-oriented schlieren (BOS) technique is a recently developed non-intrusive flow diagnostic method whose capabilities have yet to be fully explored. In this paper, the BOS technique is applied to investigate the general flow field characteristics inside a generic scramjet inlet-isolator in Mach 5 flow. The difficulty of finding the delicate balance between measurement sensitivity and measurement-area image focusing is demonstrated, as are the differences between direct cross-correlation (DCC) and Fast Fourier Transform (FFT) raw-data processing algorithms. As an exploratory study of BOS capability, this paper finds that BOS is simple yet robust enough to visualize complex flow in a scramjet inlet in hypersonic flow. In this case, however, its quantitative data can be strongly affected by three-dimensionality, obscuring the density values with significant errors.
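
    At the heart of BOS processing is a window-by-window cross-correlation between a reference (no-flow) background image and the optically distorted (flow-on) image; the apparent background shift is proportional to the line-of-sight integrated density gradient. The sketch below shows an FFT formulation for a single interrogation window; window contents and sizes are assumed, and this is not the authors' exact algorithm.

        import numpy as np

        def fft_window_shift(ref_win, flow_win):
            """Integer-pixel displacement of one BOS interrogation window."""
            r = ref_win - ref_win.mean()
            f = flow_win - flow_win.mean()
            corr = np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(f))).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap displacements larger than half the window back to negative.
            return [p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape)]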

  20. Numerical 3D flow simulation of attached cavitation structures at ultrasonic horn tips and statistical evaluation of flow aggressiveness via load collectives

    NASA Astrophysics Data System (ADS)

    Mottyll, S.; Skoda, R.

    2015-12-01

    A compressible inviscid flow solver with a barotropic cavitation model is applied to two different ultrasonic horn set-ups and compared to hydrophone, shadowgraphy, and erosion test data. The statistical analysis of single collapse events in wall-adjacent flow regions allows the determination of the flow aggressiveness via load collectives (cumulative event rate vs. collapse pressure), which show an exponential decrease, in agreement with studies on hydrodynamic cavitation [1]. A post-processing projection of event rate and collapse pressure onto a reference grid reduces the grid dependency significantly. In order to identify the erosion-sensitive areas, a statistical analysis of transient wall loads is utilised. Predicted erosion-sensitive areas as well as the temporal pressure and vapour volume evolution are in good agreement with the experimental data.
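
    A load collective in the sense used above can be formed by thresholding the detected collapse peak pressures and normalising the exceedance counts by the record length. The sketch below is illustrative only; the synthetic exponential sample stands in for solver output.

        import numpy as np

        def load_collective(collapse_pressures, duration_s, n_thresholds=30):
            """Cumulative event rate N(>=p) per second vs. collapse pressure p."""
            p = np.asarray(collapse_pressures, dtype=float)
            thresholds = np.linspace(p.min(), p.max(), n_thresholds)
            rates = np.array([(p >= t).sum() for t in thresholds]) / duration_s
            return thresholds, rates

        # Hypothetical collapse-pressure sample (Pa) with an exponential tail.
        rng = np.random.default_rng(0)
        thresholds, rates = load_collective(1e5 * rng.exponential(1.0, 5000), 2.0)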

  1. A physically-based Distributed Hydrologic Model for Tropical Catchments

    NASA Astrophysics Data System (ADS)

    Abebe, N. A.; Ogden, F. L.

    2010-12-01

    Hydrological models are mathematical formulations intended to represent observed hydrological processes in a watershed. Simulated watersheds in turn vary in their nature based on geographic location, altitude, climatic variables, geology, and soil formation. Due to these variations, available hydrologic models vary in process formulation, spatial and temporal resolution, and data demand. Many tropical watersheds are characterized by extensive and persistent biological activity and a large amount of rain. The Agua Salud catchments located within the Panama Canal Watershed, Panama, are such catchments, identified by steep rolling topography, deep soils derived from weathered bedrock, and limited exposed bedrock. Tropical soils are strongly affected by soil cracks, decayed tree roots, and earthworm burrows forming a network of preferential flow paths that drain to a perched water table, which forms at a depth where the vertical hydraulic conductivity is significantly reduced near the bottom of the bioturbation layer. We have developed a physics-based, spatially distributed, multi-layered hydrologic model to simulate the dominant processes in these tropical watersheds. The model incorporates the major flow processes, including overland flow, channel flow, matrix and non-Richards film-flow infiltration, lateral downslope saturated matrix and non-Darcian pipe flow in the bioturbation layer, and deep saturated groundwater flow. Emphasis is given to the modeling of subsurface unsaturated-zone soil moisture dynamics and the saturated preferential lateral flow from the network of macropores. Preliminary results indicate that the model has the capability to simulate the complex hydrological processes in the catchment and will be a useful tool in the ongoing comprehensive ecohydrological studies in tropical catchments, helping improve our understanding of the hydrological effects of deforestation and afforestation.

  2. Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.

  3. Realistic Data-Driven Traffic Flow Animation Using Texture Synthesis.

    PubMed

    Chao, Qianwen; Deng, Zhigang; Ren, Jiaping; Ye, Qianqian; Jin, Xiaogang

    2018-02-01

    We present a novel data-driven approach to populate virtual road networks with realistic traffic flows. Specifically, given a limited set of vehicle trajectories as the input samples, our approach first synthesizes a large set of vehicle trajectories. By taking the spatio-temporal information of traffic flows as a 2D texture, the generation of new traffic flows can be formulated as a texture synthesis process, which is solved by minimizing a newly developed traffic texture energy. The synthesized output captures the spatio-temporal dynamics of the input traffic flows, and the vehicle interactions in it strictly follow traffic rules. After that, we position the synthesized vehicle trajectory data to virtual road networks using a cage-based registration scheme, where a few traffic-specific constraints are enforced to maintain each vehicle's original spatial location and synchronize its motion in concert with its neighboring vehicles. Our approach is intuitive to control and scalable to the complexity of virtual road networks. We validated our approach through many experiments and paired comparison user studies.

  4. Technical Parameters Modeling of a Gas Probe Foaming Using an Active Experimental Type Research

    NASA Astrophysics Data System (ADS)

    Tîtu, A. M.; Sandu, A. V.; Pop, A. B.; Ceocea, C.; Tîtu, S.

    2018-06-01

    The present paper deals with a current and complex topic, namely the solution of a technical problem regarding the modeling and subsequent optimization of technical parameters related to the natural gas extraction process. The subject of the study is to optimize gas well foaming, using experimental research methods and data processing, through regular well interventions with different foaming agents. This procedure causes the hydrostatic pressure to be reduced by the foam formed from the deposit water and the foaming agent, which can then be removed from the well by the produced gas flow. The well production data were analyzed and the so-called candidate for the research itself emerged. This extremely complex study was carried out in the field, where it was found that, due to severe depletion of the gas field, the flow of the wells decreases and the wells begin to load with deposit water. Regular foaming of the wells was required to optimize the daily production flow and to dispose of the water accumulated in the wellbore. To analyze the natural gas production process, the factorial experiment and other methods were used. The reason for this choice is that the method can offer very good research results with a small number of experimental data. Finally, through this study, the extraction process problems were identified by analyzing and optimizing the technical parameters, which led to a quality improvement of the extraction process.

  5. Making SAR Data Accessible - ASF's ALOS PALSAR Radiometric Terrain Correction Project

    NASA Astrophysics Data System (ADS)

    Meyer, F. J.; Arko, S. A.; Gens, R.

    2015-12-01

    While SAR data have proven valuable for a wide range of geophysical research questions, so far largely only the SAR-educated science communities have been able to fully exploit the information content of internationally available SAR archives. The main issues that have been preventing a more widespread utilization of SAR are related to (1) the diversity and complexity of SAR data formats, (2) the complexity of the processing flows needed to extract geophysical information from SAR, (3) the lack of standardization and automation of these processing flows, and (4) the often ignored geocoding procedures, which leave the data in image coordinate space. In order to improve upon this situation, ASF's radiometric terrain correction (RTC) project is generating uniformly formatted and easily accessible value-added products from the ASF Distributed Active Archive Center's (DAAC) five-year archive of JAXA's ALOS PALSAR sensor. Specifically, the project applies geometric and radiometric corrections to SAR data to allow for an easy and direct combination of obliquely acquired SAR data with remote sensing imagery acquired in nadir observation geometries. Finally, the value-added data are provided to the user in the broadly accepted GeoTIFF format, in order to support the easy integration of SAR data into GIS environments. The goal of ASF's RTC project is to make SAR data more accessible and more attractive to the broader SAR applications community, especially to those users that currently have limited SAR expertise. Production of RTC products commenced in October 2014 and will conclude late in 2015. As of July 2015, processing of 71% of ASF's ALOS PALSAR archive was complete. Adding to the utility of this dataset are recent changes to the data access policy that allow the full-resolution RTC products to be provided to the public without restriction. In this paper we introduce the processing flow developed for the RTC project and summarize the calibration and validation procedures implemented to determine and monitor system performance. The paper also shows the current progress of RTC processing, provides examples of generated data sets, and demonstrates the benefit of the RTC archives for applications such as land-use classification and change detection.

  6. Variational optical flow estimation based on stick tensor voting.

    PubMed

    Rashwan, Hatem A; Garcia, Miguel A; Puig, Domenec

    2013-07-01

    Variational optical flow techniques allow the estimation of flow fields from spatio-temporal derivatives. They are based on minimizing a functional that contains a data term and a regularization term. Recently, numerous approaches have been presented for improving the accuracy of the estimated flow fields. Among them, tensor voting has been shown to be particularly effective in the preservation of flow discontinuities. This paper presents an adaptation of the data term by using anisotropic stick tensor voting in order to gain robustness against noise and outliers with significantly lower computational cost than (full) tensor voting. In addition, an anisotropic complementary smoothness term depending on directional information estimated through stick tensor voting is utilized in order to preserve discontinuity capabilities of the estimated flow fields. Finally, a weighted non-local term that depends on both the estimated directional information and the occlusion state of pixels is integrated during the optimization process in order to denoise the final flow field. The proposed approach yields state-of-the-art results on the Middlebury benchmark.
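
    For concreteness, a representative functional of the kind minimised by such methods, in a Horn-Schunck-type form (shown only as a generic example, not the paper's exact energy), is:

        E(u, v) = \int_{\Omega} ( I_x u + I_y v + I_t )^2 \, d\mathbf{x}
                + \alpha \int_{\Omega} ( |\nabla u|^2 + |\nabla v|^2 ) \, d\mathbf{x}

    where the first integral is the (brightness-constancy) data term, the second is the regularization term, and α weights smoothness against data fidelity. The paper's contribution adapts the data term via anisotropic stick tensor voting and replaces the isotropic smoothness term with a direction-dependent one.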

  7. Unsaturated flow processes in structurally-variable pathways in wildfire-affected soils and ash

    NASA Astrophysics Data System (ADS)

    Ebel, B. A.

    2016-12-01

    Prediction of flash flood and debris flow generation in wildfire-affected soils and ash hinges on understanding unsaturated flow processes. Water resources issues, such as groundwater recharge, also rely on our ability to quantify subsurface flow. Soil-hydraulic property data provide insight into unsaturated flow processes and timescales. A review and synthesis of existing data from the literature for wildfire-affected soils, including ash and unburned soils, facilitated calculating metrics and timescales of hydrologic response related to infiltration and surface runoff generation. Sorptivity (S) and the Green-Ampt wetting front parameter (Ψf) were significantly lower in burned soils compared to unburned soils, while field-saturated hydraulic conductivity (Kfs) was not significantly different. The magnitude and duration of the influence of capillarity were substantially reduced in burned soils, leading to faster ponding times in response to rainfall. Ash had large values of S and Kfs compared to unburned and burned soils but intermediate values of Ψf, suggesting that ash has long ponding times in response to rainfall. The ratio S²/Kfs was nearly constant (~100 mm) for unburned soils, but was more variable in burned soils. Post-wildfire changes in this ratio suggested that unburned soils had a balance between gravity and capillarity contributions to infiltration, which may depend on soil organic matter, while burning shifted infiltration more towards gravity contributions by reducing S.
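
    The ratio S²/Kfs has units of length, and the closely related quantity (S/Kfs)² is the characteristic time at which infiltration passes from capillarity-dominated to gravity-dominated behaviour in Philip-style infiltration theory. The sketch below, with hypothetical property values rather than the study's data, illustrates the post-fire shift described above.

        def gravity_time_scale_h(S, Kfs):
            """Characteristic time t_grav = (S / Kfs)**2 in hours.

            S in mm/h**0.5, Kfs in mm/h; a smaller t_grav means capillarity
            loses influence sooner and ponding occurs faster."""
            return (S / Kfs) ** 2

        # Hypothetical values: burning reduces sorptivity S while Kfs is similar.
        print(gravity_time_scale_h(30.0, 10.0))  # unburned: 9.0 h
        print(gravity_time_scale_h(10.0, 10.0))  # burned:   1.0 h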

  8. Real-time acquisition and display of flow contrast using speckle variance optical coherence tomography in a graphics processing unit.

    PubMed

    Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V

    2014-02-01

    In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
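
    The core svOCT computation is a per-pixel variance taken across the repeated B-scan frames: static tissue decorrelates little and gives low variance, while flowing blood decorrelates the speckle and gives high variance. A minimal numpy sketch (frame count and image size hypothetical; the report's GPU implementation is not reproduced) is:

        import numpy as np

        def speckle_variance(bscans):
            """Inter-frame speckle variance for a (N, z, x) stack of B-scans
            acquired at the same location; returns a (z, x) contrast image."""
            return np.var(np.asarray(bscans, dtype=float), axis=0)

        # Hypothetical stack: 8 repeated frames of a 256 x 512 B-scan.
        sv = speckle_variance(np.random.rand(8, 256, 512))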

  9. Geochemistry and the understanding of ground-water systems

    USGS Publications Warehouse

    Glynn, Pierre D.; Plummer, Niel

    2005-01-01

    Geochemistry has contributed significantly to the understanding of ground-water systems over the last 50 years. Historic advances include development of the hydrochemical facies concept, application of equilibrium theory, investigation of redox processes, and radiocarbon dating. Other hydrochemical concepts, tools, and techniques have helped elucidate mechanisms of flow and transport in ground-water systems, and have helped unlock an archive of paleoenvironmental information. Hydrochemical and isotopic information can be used to interpret the origin and mode of ground-water recharge, refine estimates of time scales of recharge and ground-water flow, decipher reactive processes, provide paleohydrological information, and calibrate ground-water flow models. Progress needs to be made in obtaining representative samples. Improvements are needed in the interpretation of the information obtained, and in the construction and interpretation of numerical models utilizing hydrochemical data. The best approach will ensure an optimized iterative process between field data collection and analysis, interpretation, and the application of forward, inverse, and statistical modeling tools. Advances are anticipated from microbiological investigations, the characterization of natural organics, isotopic fingerprinting, applications of dissolved gas measurements, and the fields of reaction kinetics and coupled processes. A thermodynamic perspective is offered that could facilitate the comparison and understanding of the multiple physical, chemical, and biological processes affecting ground-water systems.

  10. The indication of Martian gully formation processes by slope-area analysis

    USGS Publications Warehouse

    Conway, S.J.; Balme, M.R.; Murray, J.B.; Towner, M.C.; Okubo, C.H.; Grindrod, P.M.

    2011-01-01

    The formation process of recent gullies on Mars is currently under debate. This study aims to discriminate between the proposed formation processes - pure water flow, debris flow and dry mass wasting - through the application of geomorphological indices commonly used in terrestrial geomorphology. High-resolution digital elevation models (DEMs) of Earth and Mars were used to evaluate the drainage characteristics of small slope sections. Data from Earth were used to validate the hillslope, debris-flow and alluvial process domains previously found for large fluvial catchments on Earth, and these domains were applied to gullied and ungullied slopes on Mars. In accordance with other studies, our results indicate that debris flow is one of the main processes forming the Martian gullies examined. The source of the water is predominantly distributed surface melting, not an underground aquifer. Evidence is also presented indicating that other processes may have shaped Martian crater slopes, such as ice-assisted creep and solifluction, in agreement with the proposed recent Martian glacial and periglacial climate. Our results suggest that, within impact craters, different processes are acting on differently oriented slopes, but further work is needed to investigate the potential link between these observations and changes in Martian climate. © The Geological Society of London 2011.
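
    Slope-area analysis reduces to examining how local slope S scales with contributing area A, since the hillslope, debris-flow and alluvial domains occupy different regions of a log S versus log A plot. A minimal sketch of fitting the power law S = k A^(-θ), with synthetic values standing in for DEM-derived ones:

    ```python
    import numpy as np

    # Synthetic stand-ins for DEM-derived values; real analyses extract a
    # local slope and an upslope contributing area for each cell.
    area = np.logspace(2, 6, 200)  # contributing area, m^2
    theta_true, k_true = 0.4, 5.0
    slope = k_true * area ** (-theta_true) * np.exp(0.1 * np.random.randn(area.size))

    # Fit S = k * A**(-theta) as a straight line in log-log space.
    coeffs = np.polyfit(np.log10(area), np.log10(slope), 1)
    theta_hat, k_hat = -coeffs[0], 10.0 ** coeffs[1]
    print(f"theta = {theta_hat:.2f}, k = {k_hat:.2f}")
    ```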

  11. The Characteristics of Turbulent Flows on Forested Floodplains

    NASA Astrophysics Data System (ADS)

    Darby, S. E.; Richardson, K.; Sear, D. A.

    2008-12-01

    Forested floodplain environments represent the undisturbed land cover of most river systems, but they are under threat from human activities. An understanding of forest floodplain processes therefore has relevance to ecosystem conservation and restoration, and the interpretation of pre-historic river and floodplain evolution. However, relatively little research has been undertaken within forested floodplain environments, a particular limitation being an absence of empirical data regarding the hydraulic characteristics of overbank flows, which inhibits the development of flow, sediment and solute transport models. Forest floodplain flows are strongly modified by floodplain topography and the presence of vegetation and organic debris on the woodland floor. In such instances flow blockage and diversions are common, and there is the possibility of intense turbulence generation, both by wakes and by shear. To address this gap we have undertaken a study based on a floodplain reach located in the Highland Water Research Catchment (southern England), a UK national reference site for lowland floodplain forest streams. Given the difficulties of acquiring spatially-distributed hydraulic data sets during floods, our methodological approach has been to attempt to replicate overbank flow observed at the study site within a laboratory flume. This is necessary to acquire flow velocity data at sufficiently high spatial resolution to evaluate the underlying flow mechanics and has been achieved using (i) a large (21 m) flume to achieve 1:1 hydraulic scaling and (ii) a novel method of precisely replicating the floodplain topography within the flume. Specifically, accurate replication of a representative floodplain patch was achieved by creating a 1:1 scale Physical Terrain Model (PTM) from high-density polyurethane using a computer-controlled milling process based on Digital Terrain Model (DTM) data, the latter acquired via terrestrial laser scanning (TLS) survey. The PTM was deployed within the flume immediately downstream of an 8 m long hydraulically smooth 'run-in' section with a steady discharge replicating an overbank flow observed in the field, thus achieving 1:1 hydraulic scaling. Above the PTM, 3D flow velocity time series were acquired at each node on a dense (5-10 cm horizontal spatial resolution) sampling grid using Acoustic Doppler Velocimeters (ADVs). The data were analysed by visualising the 3D structure of flow velocity and derivative statistics (turbulence intensity, turbulent kinetic energy, Reynolds stresses, etc.), combined with quadrant analysis to identify the spatial variation of each quadrant's contribution to the turbulence intensity. These analyses have been used to delineate flow regions dominated by different structures, and construct an empirical model that will be helpful in defining relevant modelling strategies in future research.
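
    A minimal sketch of the derivative statistics named above, for one ADV point record: velocity fluctuations give turbulent kinetic energy and the Reynolds shear stress, and the signs of (u', w') assign each sample to a quadrant (Q1 outward interaction, Q2 ejection, Q3 inward interaction, Q4 sweep). The record below is synthetic.

    ```python
    import numpy as np

    def turbulence_stats(u, v, w, rho=1000.0):
        """TKE, Reynolds shear stress, and quadrant fractions from an ADV record."""
        up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()
        tke = 0.5 * (np.mean(up**2) + np.mean(vp**2) + np.mean(wp**2))  # m^2/s^2
        tau = -rho * np.mean(up * wp)                                   # N/m^2
        # Quadrant analysis on (u', w'): Q2 is the ejection, Q4 the sweep.
        quads = {
            "Q1": np.mean((up > 0) & (wp > 0)),
            "Q2": np.mean((up < 0) & (wp > 0)),
            "Q3": np.mean((up < 0) & (wp < 0)),
            "Q4": np.mean((up > 0) & (wp < 0)),
        }
        return tke, tau, quads

    # Synthetic 25 Hz record standing in for one grid node of ADV data.
    n = 25 * 60
    u = 0.4 + 0.05 * np.random.randn(n)
    v = 0.02 * np.random.randn(n)
    w = 0.02 * np.random.randn(n)
    print(turbulence_stats(u, v, w))
    ```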

  12. Automation of the CFD Process on Distributed Computing Systems

    NASA Technical Reports Server (NTRS)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.

    2000-01-01

    A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.
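
    The original scripts were written in UNIX shell and Perl; as an illustration of the fallback queueing behaviour described above, here is a minimal Python sketch of a first-in-first-out job queue for hosts without queueing software. The job commands are hypothetical placeholders.

    ```python
    import subprocess
    import time
    from collections import deque

    def run_fifo(jobs, max_concurrent=1):
        """Run shell commands first-in-first-out, at most max_concurrent at a time."""
        pending, running = deque(jobs), []
        while pending or running:
            # Launch jobs while slots are free.
            while pending and len(running) < max_concurrent:
                cmd = pending.popleft()
                print("starting:", cmd)
                running.append(subprocess.Popen(cmd, shell=True))
            running = [p for p in running if p.poll() is None]  # reap finished jobs
            time.sleep(0.5)  # avoid a busy-wait loop

    # Hypothetical commands standing in for INS2D/PMARC/TIGER/GASP solver runs.
    run_fifo(["echo case1", "echo case2", "echo case3"], max_concurrent=1)
    ```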

  13. Application of the ultrasonic technique and high-speed filming for the study of the structure of air-water bubbly flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carvalho, R.D.M.; Venturini, O.J.; Tanahashi, E.I.

    2009-10-15

    Multiphase flows are very common in industry, oftentimes involving very harsh environments and fluids. Accordingly, there is a need to determine the dispersed phase holdup using noninvasive fast responding techniques; besides, knowledge of the flow structure is essential for the assessment of the transport processes involved. The ultrasonic technique fulfills these requirements and could have the capability to provide the information required. In this paper, the potential of the ultrasonic technique for application to two-phase flows was investigated by checking acoustic attenuation data against experimental data on the void fraction and flow topology of vertical, upward, air-water bubbly flows in the zero to 15% void fraction range. The ultrasonic apparatus consisted of one emitter/receiver transducer and three other receivers at different positions along the pipe circumference; simultaneous high-speed motion pictures of the flow patterns were made at 250 and 1000 fps. The attenuation data for all sensors exhibited a systematic interrelated behavior with void fraction, thereby testifying to the capability of the ultrasonic technique to measure the dispersed phase holdup. From the motion pictures, basic gas phase structures and different flows patterns were identified that corroborated several features of the acoustic attenuation data. Finally, the acoustic wave transit time was also investigated as a function of void fraction. (author)
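
    The attenuation measurement underlying such a technique can be sketched simply: comparing the received amplitude through the bubbly mixture with a single-phase (zero void fraction) reference gives an attenuation in dB, which is then correlated with void fraction. The formula and values below are a generic sketch, not the authors' calibration.

    ```python
    import math

    def attenuation_db(a_ref, a_meas):
        """Acoustic attenuation (dB) relative to a single-phase reference signal."""
        return 20.0 * math.log10(a_ref / a_meas)

    # Illustrative amplitudes: attenuation grows as bubbles scatter the beam.
    for void_fraction, amplitude in [(0.0, 1.00), (0.05, 0.55), (0.15, 0.20)]:
        print(void_fraction, round(attenuation_db(1.00, amplitude), 1))
    ```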

  14. Development of optimal models of porous media by combining static and dynamic data: the permeability and porosity distributions.

    PubMed

    Hamzehpour, Hossein; Rasaei, M Reza; Sahimi, Muhammad

    2007-05-01

    We describe a method for the development of the optimal spatial distributions of the porosity phi and permeability k of a large-scale porous medium. The optimal distributions are constrained by static and dynamic data. The static data that we utilize are limited data for phi and k, which the method honors in the optimal model and whose correlation functions it utilizes in the optimization process. The dynamic data include the first-arrival (FA) times, at a number of receivers, of seismic waves that have propagated in the porous medium, and the time-dependent production rates of a fluid that flows in the medium. The method combines the simulated-annealing method with a simulator that solves numerically the three-dimensional (3D) acoustic wave equation and computes the FA times, and a second simulator that solves the 3D governing equation for the fluid's pressure as a function of time. To our knowledge, this is the first time that an optimization method has been developed to determine simultaneously the global minima of two distinct total energy functions. As a stringent test of the method's accuracy, we solve for flow of two immiscible fluids in the same porous medium, without using any data for the two-phase flow problem in the optimization process. We show that the optimal model, in addition to honoring the data, also yields accurate spatial distributions of phi and k, as well as providing accurate quantitative predictions for the single- and two-phase flow problems. The efficiency of the computations is discussed in detail.
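
    A minimal skeleton of the simulated-annealing loop underlying such a method. The placeholder energy stands in for the paper's two total-energy functions, which require the wave-equation and pressure-equation simulators; everything here is an illustrative assumption.

    ```python
    import math
    import random

    def anneal(x0, energy, perturb, t0=1.0, cooling=0.995, steps=20000):
        """Generic simulated annealing: accept uphill moves with Boltzmann probability."""
        x, e, t = x0, energy(x0), t0
        best_x, best_e = x, e
        for _ in range(steps):
            x_new = perturb(x)
            e_new = energy(x_new)
            if e_new < e or random.random() < math.exp((e - e_new) / t):
                x, e = x_new, e_new
                if e < best_e:
                    best_x, best_e = x, e
            t *= cooling  # geometric cooling schedule
        return best_x, best_e

    # Placeholder energy: in the paper this would be the misfit between simulated
    # and observed first-arrival times plus the production-rate misfit.
    energy = lambda x: (x - 3.0) ** 2
    perturb = lambda x: x + random.uniform(-0.1, 0.1)
    print(anneal(0.0, energy, perturb))
    ```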

  15. Magma transport in sheet intrusions of the Alnö carbonatite complex, central Sweden.

    PubMed

    Andersson, Magnus; Almqvist, Bjarne S G; Burchardt, Steffi; Troll, Valentin R; Malehmir, Alireza; Snowball, Ian; Kübler, Lutz

    2016-06-10

    Magma transport through the Earth's crust occurs dominantly via sheet intrusions, such as dykes and cone-sheets, and is fundamental to crustal evolution, volcanic eruptions and geochemical element cycling. However, reliable methods to reconstruct flow direction in solidified sheet intrusions have proved elusive. Anisotropy of magnetic susceptibility (AMS) in magmatic sheets is often interpreted as primary magma flow, but magnetic fabrics can be modified by post-emplacement processes, making interpretation of AMS data ambiguous. Here we present AMS data from cone-sheets in the Alnö carbonatite complex, central Sweden. We discuss six scenarios of syn- and post-emplacement processes that can modify AMS fabrics and offer a conceptual framework for systematic interpretation of magma movements in sheet intrusions. The AMS fabrics in the Alnö cone-sheets are dominantly oblate with magnetic foliations parallel to sheet orientations. These fabrics may result from primary lateral flow or from sheet closure at the terminal stage of magma transport. As the cone-sheets are discontinuous along their strike direction, sheet closure is the most probable process to explain the observed AMS fabrics. We argue that these fabrics may be common to cone-sheets and an integrated geology, petrology and AMS approach can be used to distinguish them from primary flow fabrics.

  16. Magma transport in sheet intrusions of the Alnö carbonatite complex, central Sweden

    PubMed Central

    Andersson, Magnus; Almqvist, Bjarne S. G.; Burchardt, Steffi; Troll, Valentin R.; Malehmir, Alireza; Snowball, Ian; Kübler, Lutz

    2016-01-01

    Magma transport through the Earth’s crust occurs dominantly via sheet intrusions, such as dykes and cone-sheets, and is fundamental to crustal evolution, volcanic eruptions and geochemical element cycling. However, reliable methods to reconstruct flow direction in solidified sheet intrusions have proved elusive. Anisotropy of magnetic susceptibility (AMS) in magmatic sheets is often interpreted as primary magma flow, but magnetic fabrics can be modified by post-emplacement processes, making interpretation of AMS data ambiguous. Here we present AMS data from cone-sheets in the Alnö carbonatite complex, central Sweden. We discuss six scenarios of syn- and post-emplacement processes that can modify AMS fabrics and offer a conceptual framework for systematic interpretation of magma movements in sheet intrusions. The AMS fabrics in the Alnö cone-sheets are dominantly oblate with magnetic foliations parallel to sheet orientations. These fabrics may result from primary lateral flow or from sheet closure at the terminal stage of magma transport. As the cone-sheets are discontinuous along their strike direction, sheet closure is the most probable process to explain the observed AMS fabrics. We argue that these fabrics may be common to cone-sheets and an integrated geology, petrology and AMS approach can be used to distinguish them from primary flow fabrics. PMID:27282420

  17. Utility accommodation and conflict tracker (UACT) installation and configuration manual.

    DOT National Transportation Integrated Search

    2009-02-01

    Project 0-5475 performed a comprehensive analysis of utility conflict data/information flows between utility : accommodation stakeholders in the Texas Department of Transportation project development process, : developed data models to accommodate wo...

  18. High Reynolds Number Liquid Flow Measurements

    DTIC Science & Technology

    1988-08-01

    In Fig. 25, the dotted line represents data taken from Eckelmann's study in the thick viscous sublayer of an oil channel. Scatter in the...measurements of the fundamental physical quantities are not only an essential part in an understanding of multiphase flows but also in the measurement process...technique. One of the most widely used techniques, however, is some form of flow visualization. This includes the use of tufts, oil paint films

  19. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Device Status Data

    DTIC Science & Technology

    2015-09-01

    Contents (excerpt): List of Figures; List of Tables; 1. Introduction; 2. Device Status Data (2.1 SNMP; 2.2 NMS; 2.3 ICMP Ping); 3. Data Collection; 4. Hydra ...Configuration (4.1 Status Codes; 4.2 Request Time; 4.3 Hydra BLOb Metadata); 5. Data Processing (5.1 Hydra Data Processing Framework: 5.1.1 ...Basic Components; 5.1.2 Map Component; 5.1.3 Postmap Methods; 5.1.4 Data Flow; 5.1.5 Distributed Processing Considerations); 5.2 Specific Hydra

  20. NASA airborne Doppler lidar program: Data characteristics of 1981

    NASA Technical Reports Server (NTRS)

    Lee, R. W.

    1982-01-01

    The first flights of the NASA/Marshall airborne CO2 Doppler lidar wind measuring system were made during the summer of 1981. Successful measurements of two-dimensional flow fields were made to ranges of 15 km from the aircraft track. The characteristics of the data obtained are examined. A study of various artifacts introduced into the data set by incomplete compensation for aircraft dynamics is summarized. Most of these artifacts can be corrected by post processing, which reduces velocity errors in the reconstructed flow field to remarkably low levels.

  1. Flow velocity, water temperature, and conductivity at selected locations in Shark River Slough, Everglades National Park, Florida; July 1999 - July 2003

    USGS Publications Warehouse

    Schaffranek, Raymond W.; Riscassi, Ami L.

    2005-01-01

    Flow-velocity, water-temperature, and conductivity data were collected at five locations in Shark River Slough, Everglades National Park (ENP), Florida, from 1999 to 2003. The data were collected as part of the U.S. Geological Survey Priority Ecosystems Science Initiative in support of the Comprehensive Everglades Restoration Plan. This report contains digital files and graphical plots of the processed, quality-checked, and edited data. Information pertinent to the locations and monitoring strategy also is presented.

  2. Freeze-drying simulation framework coupling product attributes and equipment capability: toward accelerating process by equipment modifications.

    PubMed

    Ganguly, Arnab; Alexeenko, Alina A; Schultz, Steven G; Kim, Sherry G

    2013-10-01

    A physics-based model for the sublimation-transport-condensation processes occurring in pharmaceutical freeze-drying by coupling product attributes and equipment capabilities into a unified simulation framework is presented. The system-level model is used to determine the effect of operating conditions such as shelf temperature, chamber pressure, and the load size on occurrence of choking for a production-scale dryer. Several data sets corresponding to production-scale runs with a load from 120 to 485 L have been compared with simulations. A subset of data is used for calibration, whereas another data set corresponding to a load of 150 L is used for model validation. The model predictions for both the onset and extent of choking as well as for the measured product temperature agree well with the production-scale measurements. Additionally, we study the effect of resistance to vapor transport presented by the duct with a valve and a baffle in the production-scale freeze-dryer. Computational Fluid Dynamics (CFD) techniques augmented with a system-level unsteady heat and mass transfer model allow prediction of dynamic process conditions, taking the specific dryer design into consideration. CFD modeling of flow structure in the duct presented here for a production-scale freeze-dryer quantifies the benefit of reducing the obstruction to the flow through several design modifications. It is found that the use of a combined valve-baffle system can increase vapor flow rate by a factor of 2.2. Moreover, minor design changes such as moving the baffle downstream by about 10 cm can increase the flow rate by 54%. The proposed design changes can increase drying rates, improve efficiency, and reduce cycle times due to fewer obstructions in the vapor flow path. The comprehensive simulation framework combining the system-level model and the detailed CFD computations can provide a process analytical tool for more efficient and robust freeze-drying of bio-pharmaceuticals.

  3. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    NASA Astrophysics Data System (ADS)

    Vandelli, Wainer; ATLAS TDAQ Collaboration

    2010-04-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently transport event data with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities, as well as monitoring and data-accounting functionalities, are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow, in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were also able to take advantage of this extended data-taking period. In this paper we report on the usage of the DataFlow infrastructure during the ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  4. Joint parameter and state estimation algorithms for real-time traffic monitoring.

    DOT National Transportation Integrated Search

    2013-12-01

    A common approach to traffic monitoring is to combine a macroscopic traffic flow model with traffic sensor data in a process called state estimation, data fusion, or data assimilation. The main challenge of traffic state estimation is the integration...

  5. 40 CFR 75.53 - Monitoring plan.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are pre-combustion, post-combustion, or integral to the combustion process; control equipment code... fuel flow-to-load test in section 2.1.7 of appendix D to this part is used: (A) The upper and lower... and applied to the hourly flow rate data: (A) Stack or duct width at the test location, ft; (B) Stack...

  6. Migrant nurses in Brazil: demographic characteristics, migration flow and relationship with the training process

    PubMed Central

    Silva, Kênia Lara; de Sena, Roseni Rosângela; Tavares, Tatiana Silva; Belga, Stephanie Marques Moura Franco; Maas, Lucas Wan Der

    2016-01-01

    Objective: to analyze the migration of nurses in Brazil, describe the demographic characteristics of migrant nurses and the main migration flows, and establish relationships with the training process. Method: a descriptive, exploratory study, based on 2010 Census data. The data were analyzed using descriptive statistics. Results: there were 355,383 nurses in Brazil in 2010. Of these, 36,479 (10.3%) reported having moved compared to the year 2005: 18,073 (5.1%) intrastate migration, 17,525 (4.8%) interstate migration, and 871 (0.2%) international migration. Female (86.3%), Caucasian (65.2%), and unmarried (48.3%) nurses prevailed in the population, without considerable variation between groups according to migration situation. The findings indicate that the migration flows are driven by the training process toward states that concentrate a greater number of undergraduate and graduate courses and positions, and by employment opportunities in regions of economic expansion in the country. Conclusion: it is necessary to deepen the discussion on the movement of nurses in Brazil, their motivations, and international migration. PMID:27027681

  7. Evaluation of processes affecting 1,2-dibromo-3-chloropropane (DBCP) concentrations in ground water in the eastern San Joaquin Valley, California : analysis of chemical data and ground-water flow and transport simulations

    USGS Publications Warehouse

    Burow, Karen R.; Panshin, Sandra Y.; Dubrovsky, Neil H.; Vanbrocklin, David; Fogg, Graham E.

    1999-01-01

    A conceptual two-dimensional numerical flow and transport modeling approach was used to test hypotheses addressing dispersion, transformation rate, and in a relative sense, the effects of ground- water pumping and reapplication of irrigation water on DBCP concentrations in the aquifer. The flow and transport simulations, which represent hypothetical steady-state flow conditions in the aquifer, were used to refine the conceptual understanding of the aquifer system rather than to predict future concentrations of DBCP. Results indicate that dispersion reduces peak concentrations, but this process alone does not account for the apparent decrease in DBCP concentrations in ground water in the eastern San Joaquin Valley. Ground-water pumping and reapplication of irrigation water may affect DBCP concentrations to the extent that this process can be simulated indirectly using first-order decay. Transport simulation results indicate that the in situ 'effective' half-life of DBCP caused by processes other than dispersion and transformation to BAA could be on the order of 6 years.
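
    The 'effective' half-life statement maps onto the usual first-order decay relation; a minimal sketch, where the 6-year figure comes from the abstract but the initial concentration is illustrative:

    ```python
    def concentration(c0, t_years, half_life_years=6.0):
        """First-order decay: C(t) = C0 * 0.5**(t / t_half)."""
        return c0 * 0.5 ** (t_years / half_life_years)

    # Illustrative initial DBCP concentration of 1.0 ug/L.
    for t in (0, 6, 12, 24):
        print(t, round(concentration(1.0, t), 3))
    ```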

  8. Assessment of a Geothermal Doublet in the Malm Aquifer Using a Push-Pull Tracer Test

    NASA Astrophysics Data System (ADS)

    Lafogler, Mark; Somogyi, Gabriella; Nießner, Reinhard; Baumann, Thomas

    2013-04-01

    Geothermal exploration of the Malm aquifer in Bavaria is highly successful. Data about long-term operation, however, are still scarce, although detailed knowledge about the processes occurring in the aquifer is a key requirement to run geothermal facilities efficiently and economically. While there usually is a constant flow of data from the production well (temperatures, hydraulic data, hydrochemical conditions, gas composition), not even the temperatures in the immediate surroundings of the reinjection well are accessible or known. In 2011 the geothermal facility in Pullach was extended with a third geothermal well reaching into the Malm aquifer, which is now used as a reinjection well. The former reinjection well was converted to a production well after 5 years of operation. This setting offers a unique opportunity to study the processes in the vicinity of a reinjection well and provides the data base to describe the hydraulic, thermal and hydrochemical performance of the reservoir. The viscosity of the reinjected cold water is higher by 60% compared to the production well, so one would expect an increase of the reinjection pressure as the cold-water plume spreads around the reinjection well. Measurements, however, show a significant decrease of the reinjection pressure, suggesting processes in the aquifer which positively change the hydraulic properties and overcompensate the viscosity effects. Hydrochemical data and modeling indicate that dissolution of the matrix along the flow pathways is responsible for the decreasing reinjection pressures. The change of the flow direction from reinjection to production was used to conduct a push-pull tracer test. Here, a series of fluorescent dye pulses was added to the reinjected water before the former reinjection well was shut down (push phase). These tracers included a conservative tracer (Fluorescein), surface-sensitive tracers (Eosin/Sulforhodamin B), and a NAPL-sensitive tracer (Na-Naphthionate). After changing to production mode in October 2012, the pull phase was started. The different behavior of the tracers within the reservoir delivers data about dispersion, sorption properties, matrix interaction and the regional flux. First tracer breakthrough curves point to significant heterogeneity of the flow pathways and indicate that regional flow is not negligible.

  9. Perception of object trajectory: parsing retinal motion into self and object movement components.

    PubMed

    Warren, Paul A; Rushton, Simon K

    2007-08-16

    A moving observer needs to be able to estimate the trajectory of other objects moving in the scene. Without the ability to do so, it would be difficult to avoid obstacles or catch a ball. We hypothesized that neural mechanisms sensitive to the patterns of motion generated on the retina during self-movement (optic flow) play a key role in this process, "parsing" motion due to self-movement from that due to object movement. We investigated this "flow parsing" hypothesis by measuring the perceived trajectory of a moving probe placed within a flow field that was consistent with movement of the observer. In the first experiment, the flow field was consistent with an eye rotation; in the second experiment, it was consistent with a lateral translation of the eyes. We manipulated the distance of the probe in both experiments and assessed the consequences. As predicted by the flow parsing hypothesis, manipulating the distance of the probe had differing effects on the perceived trajectory of the probe in the two experiments. The results were consistent with the scene geometry and the type of simulated self-movement. In a third experiment, we explored the contribution of local and global motion processing to the results of the first two experiments. The data suggest that the parsing process involves global motion processing, not just local motion contrast. The findings of this study support a role for optic flow processing in the perception of object movement during self-movement.

  10. Sulfur flows and biosolids processing: Using Material Flux Analysis (MFA) principles at wastewater treatment plants.

    PubMed

    Fisher, R M; Alvarez-Gaitan, J P; Stuetz, R M; Moore, S J

    2017-08-01

    High flows of sulfur through wastewater treatment plants (WWTPs) may cause noxious gaseous emissions and corrosion of infrastructure, inhibit wastewater microbial communities, or contribute to acid rain if the biosolids or biogas are combusted. Yet sulfur is an important agricultural nutrient, and the direct application of biosolids to soils enables its beneficial re-use. Flows of sulfur throughout the biosolids processing of six WWTPs were investigated to identify how they were affected by biosolids processing configurations. The process of tracking sulfur flows through the sites also identified limitations in data availability and quality, highlighting future requirements for tracking substance flows. One site was investigated in more detail, showing sulfur speciation throughout the plant and tracking sulfur flows in odour-control systems in order to quantify outflows to air, land and ocean sinks. While the majority of sulfur from WWTPs is removed as sulfate in the secondary effluent, the sulfur content of biosolids is valuable as it can be directly returned to soils to combat potential sulfur deficiencies. Biosolids processing configurations which focus on maximising solids recovery, through high-efficiency separation techniques in primary sedimentation tanks, thickeners and dewatering centrifuges, retain more sulfur in the biosolids. However, variations in sulfur loads and concentrations entering the WWTPs affect sulfur recovery in the biosolids, suggesting that industrial emitters and chemical dosing of iron salts are responsible for differences in recovery between sites.
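
    The bookkeeping behind a material flux analysis is a set of mass balances over each process unit; a toy sketch follows, with flow names and values that are illustrative only, not the study's data.

    ```python
    # Toy substance-flow balance for one treatment plant: inflows must equal
    # outflows plus any change in storage.  Values in tonnes S per year.
    inflows = {"raw wastewater": 100.0, "iron-salt dosing": 5.0}
    outflows = {"secondary effluent (sulfate)": 80.0, "biosolids": 20.0,
                "odour-control off-gas": 3.0}

    balance = sum(inflows.values()) - sum(outflows.values())
    print(f"unaccounted sulfur: {balance:+.1f} t/yr")  # flags data gaps
    ```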

  11. Event-Driven Messaging for Offline Data Quality Monitoring at ATLAS

    NASA Astrophysics Data System (ADS)

    Onyisi, Peter

    2015-12-01

    During LHC Run 1, the information flow through the offline data quality monitoring in ATLAS relied heavily on chains of processes polling each other's outputs for handshaking purposes. This resulted in a fragile architecture with many possible points of failure and an inability to monitor the overall state of the distributed system. We report on the status of a project undertaken during the LHC shutdown to replace the ad hoc synchronization methods with a uniform message queue system. This enables the use of standard protocols to connect processes on multiple hosts; reliable transmission of messages between possibly unreliable programs; easy monitoring of the information flow; and the removal of inefficient polling-based communication.
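
    To illustrate the architectural shift from polling to message passing (the abstract does not name ATLAS's actual middleware, so this is a generic sketch), a producer publishes results to a queue and consumers block on it instead of polling output files:

    ```python
    import queue
    import threading

    # Polling replaced by a blocking queue: consumers wake only when work
    # arrives, and the queue itself can be inspected to monitor the flow.
    q = queue.Queue()

    def producer():
        for run in range(3):
            q.put(f"histograms-for-run-{run}")
        q.put(None)  # sentinel: no more messages

    def consumer():
        while True:
            msg = q.get()  # blocks; no polling loop
            if msg is None:
                break
            print("processing", msg)

    t1, t2 = threading.Thread(target=producer), threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    ```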

  12. Visualizing flow fields using acoustic Doppler current profilers and the Velocity Mapping Toolbox

    USGS Publications Warehouse

    Jackson, P. Ryan

    2013-01-01

    The purpose of this fact sheet is to provide examples of how the U.S. Geological Survey is using acoustic Doppler current profilers for much more than routine discharge measurements. These instruments are capable of mapping complex three-dimensional flow fields within rivers, lakes, and estuaries. Using the Velocity Mapping Toolbox to process the ADCP data allows detailed visualization of the data, providing valuable information for a range of studies and applications.

  13. Development of image processing techniques for applications in flow visualization and analysis

    NASA Technical Reports Server (NTRS)

    Disimile, Peter J.; Shoe, Bridget; Toy, Norman; Savory, Eric; Tahouri, Bahman

    1991-01-01

    A comparison between two flow visualization studies of an axi-symmetric circular jet issuing into still fluid, using two different experimental techniques, is described. In the first case laser induced fluorescence is used to visualize the flow structure, whilst smoke is utilized in the second. Quantitative information was obtained from these visualized flow regimes using two different digital imaging systems. Results are presented of the rate at which the jet expands in the downstream direction and these compare favorably with the more established data.

  14. Mean flow characteristics for the oblique impingement of an axisymmetric jet

    NASA Technical Reports Server (NTRS)

    Foss, J. F.; Kleis, S. J.

    1975-01-01

    The oblique impingement of an axisymmetric jet has been investigated. A summary of the data and the analytical interpretations of the dominant mechanisms which influence the flow are reported. The major characteristics of the shallow angle oblique jet impingement flow field are: (1) minimal dynamic spreading as revealed by the surface pressure field, (2) pronounced kinematic spreading as revealed by the jet flow velocity field, (3) a pronounced upstream shift of the stagnation point from the maximum pressure point, (4) the production of streamwise vorticity by the impingement process.

  15. A numerical simulation of the NFAC (National Full-scale Aerodynamics Complex) open-return wind tunnel inlet flow

    NASA Technical Reports Server (NTRS)

    Kaul, U. K.; Ross, J. C.; Jacocks, J. L.

    1985-01-01

    The flow into an open return wind tunnel inlet was simulated using Euler equations. An explicit predictor-corrector method was employed to solve the system. The calculation is time-accurate and was performed to achieve a steady-state solution. The predictions are in reasonable agreement with the experimental data. Wall pressures are accurately predicted except in a region of recirculating flow. Flow-field surveys agree qualitatively with laser velocimeter measurements. The method can be used in the design process for open return wind tunnels.
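
    The abstract names only the family of scheme; as a sketch of an explicit predictor-corrector method, here is a MacCormack-type step applied to 1D linear advection rather than the full Euler system (an assumption made to keep the example short):

    ```python
    import numpy as np

    def maccormack_step(u, c, dt, dx):
        """One explicit predictor-corrector (MacCormack) step for u_t + c u_x = 0."""
        # Predictor: forward difference (periodic boundaries via roll).
        up = u - c * dt / dx * (np.roll(u, -1) - u)
        # Corrector: backward difference on the predicted field, then average.
        return 0.5 * (u + up - c * dt / dx * (up - np.roll(up, 1)))

    nx, c = 200, 1.0
    dx = 1.0 / nx
    dt = 0.8 * dx / c  # CFL-limited time step
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)  # initial pulse
    for _ in range(100):
        u = maccormack_step(u, c, dt, dx)
    print(u.max())
    ```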

  16. NPTool: Towards Scalability and Reliability of Business Process Management

    NASA Astrophysics Data System (ADS)

    Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton

    Currently one important challenge in business process management is provide at the same time scalability and reliability of business process executions. This difficulty becomes more accentuated when the execution control assumes complex countless business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by Navigation Plan Definition Language (NPDL), a language for business processes specification that uses process algebra as formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of the NPTool showing how the process algebra features combined with a relational database model can be used to provide a scalable and reliable control in the execution of business processes. The next steps of NPTool include reuse of control-flow patterns and support to data flow management.

  17. Methodology Development of a Gas-Liquid Dynamic Flow Regime Transition Model

    NASA Astrophysics Data System (ADS)

    Doup, Benjamin Casey

    Current reactor safety analysis codes, such as RELAP5, TRACE, and CATHARE, use flow regime maps or flow regime transition criteria that were developed for static, fully developed two-phase flows to choose the interfacial transfer models that are necessary to solve the two-fluid model. The flow regime is therefore difficult to identify near flow regime transitions, in developing two-phase flows, and in transient two-phase flows. Interfacial area transport equations were developed to more accurately predict the dynamic nature of two-phase flows. However, other model coefficients are still flow regime dependent, so an accurate prediction of the flow regime remains important. In the current work, the methodology for the development of a dynamic flow regime transition model that uses the void fraction and interfacial area concentration obtained by solving the three-field two-fluid model and the two-group interfacial area transport equation is investigated. To develop this model, detailed local experimental data are obtained, the two-group interfacial area transport equations are revised, and a dynamic flow regime transition model is evaluated using a computational fluid dynamics model. Local experimental data are acquired for 63 different flow conditions in bubbly, cap-bubbly, slug, and churn-turbulent flow regimes. The measured parameters are the group-1 and group-2 bubble number frequency, void fraction, interfacial area concentration, and interfacial bubble velocities. The measurements are benchmarked by comparing the superficial gas velocities determined from the local measurements with those determined from volumetric flow rate measurements; the agreement is generally within ±20%. The repeatability of the four-sensor probe construction process is within ±10%. The repeatability of the measurement process is within ±7%. The symmetry of the test section is examined and the average agreement is within ±5.3% at z/D = 10 and ±3.4% at z/D = 32. Revised source and sink terms for the two-group interfacial area transport equations are derived and fit to area-averaged experimental data to determine new model coefficients. The average agreement between this model and the experimental data for the void fraction and interfacial area concentration is 10.6% and 15.7%, respectively. The revised two-group interfacial area transport equation and the three-field two-fluid model are used to solve for the group-1 and group-2 interfacial area concentration and void fraction. These values and a dynamic flow regime transition model are used to classify the flow regimes. The flow regimes determined using this model are compared with the flow regimes based on the experimental data and on a flow regime map using Mishima and Ishii's (1984) transition criteria. The dynamic flow regime transition model is shown to predict the flow regimes dynamically and improves the prediction of the flow regime over that obtained from a static flow regime map. Safety codes often employ the one-dimensional two-fluid model to model two-phase flows. The area-averaged relative velocity correlation necessary to close this model is derived from the drift-flux model. The effects of the assumptions used to derive this correlation are investigated using local measurements and are found to have a limited impact on the prediction of the area-averaged relative velocity.
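
    The drift-flux relation referred to in the closing sentences is, in its standard Zuber-Findlay form (the exact assumptions of the thesis are not given here):

    ```latex
    % Area-averaged drift-flux relation; angle brackets denote area averaging.
    % The void-weighted gas velocity is linear in the total volumetric flux <j>:
    \frac{\langle j_g \rangle}{\langle \alpha \rangle}
        = C_0 \, \langle j \rangle + \overline{v}_{gj}
    % C_0 is the distribution parameter and v_gj the drift velocity; the
    % relative velocity between the phases, v_r = v_g - v_f, follows from this
    % relation under the averaging assumptions discussed in the abstract.
    ```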

  18. Data analysis and hydrological modelling of frozen ground, shallow groundwater formation and river flow co-evolution at small watersheds of Russia in continuous, discontinuous permafrost and the zone of seasonal ground freezing

    NASA Astrophysics Data System (ADS)

    Lebedeva, Luidmila; Semenova, Olga

    2015-04-01

    Frozen ground distribution and its properties control the presence of aquifuges and aquifers. Correct representation of the interactions between infiltrating water, ground ice, the permafrost or seasonal freezing table, and river flow is challenging for hydrological modelling in cold regions. Observational data on groundwater levels and thawing depths in different landscape or topographical units, together with meteorological information of high temporal and spatial resolution, are required to analyze the seasonal and interannual evolution of groundwater in the active layer and its linkage to river flow. Such data are extremely rare in the vast and remote regions of Russia. There are a few historical datasets inherited from the former USSR that contain unique collections of long-term daily observations of water fluxes, frozen ground characteristics and groundwater levels. The data from three water-balance stations were employed in our study, with the overall goal of analyzing the co-evolution of the thawing layer, shallow groundwater and river flow through data processing and process-based modelling. The three instrumented small watersheds are situated in the continuous and discontinuous permafrost zones and in territory with seasonally frozen ground. They represent different climates, landscapes and geology. The Kolyma water-balance station is located in a mountainous region of continuous permafrost in North-Eastern Russia. The watershed area of 22 km2 is covered by bare rocks, mountain tundra, sparse larch forest and wet larch forest depending on slope aspect and inclination. The Bomnak water-balance station (22 km2) is situated in the discontinuous permafrost zone in the upper part of the Amur River basin and is characterized by unmerged permafrost. The dominant landscapes are birch forest and bogs. The Pribaltiyskaya water-balance station (40 km2), located in Latvia, is characterized by seasonally frozen ground and is covered by mixed forest and arable land. The process-based Hydrograph model was employed in the study. The model was developed specifically for cold regions. It describes all essential processes of the land hydrological cycle, including a detailed algorithm for water and heat dynamics in soil that accounts for water phase change. The model parameters relate to basin characteristics and can be assessed in the field. This allows parameter calibration to be avoided and model parameterization schemes to be transferred to ungauged basins in similar conditions. The model was applied and tested against internal states of the watersheds (snow, soil thawing/freezing, etc.) and runoff. The different roles of frozen ground in the formation of shallow groundwater and river flow in continuous, discontinuous and non-permafrost areas are highlighted by comparative analysis of observations and simulations in the three studied basins. The changes in the fractional input of surface and subsurface components to river flow during warm seasons were assessed for each watershed. We concluded that a verified hydrological model with meaningful parameters that adequately describes river flow formation, internal hydrological processes and ground freezing/thawing in the catchment can be used in scenario simulations, future predictions and the transfer of results between scales.

  19. Microtopographic evolution of lava flows at Cima volcanic field, Mojave Desert, California

    NASA Technical Reports Server (NTRS)

    Farr, Tom G.

    1992-01-01

    Microtopographic profiles were measured and power spectra calculated for dated lava flow surfaces at Cima volcanic field in the eastern Mojave Desert of California in order to quantify changes in centimeter- to meter-scale roughness as a function of age. For lava flows younger than about 0.8 m.y., roughness over all spatial scales decreases with age, with meter-scale roughness decreasing slightly more than centimeter scales. Flows older than about 0.8 m.y. show a reversal of this trend, becoming as rough as young flows at these scales. Modeling indicates that eolian deposition can explain most of the change observed in the offset, or roughness amplitude, of power spectra of flow surface profiles up to 0.8 m.y. Other processes, such as rubbing and stone pavement development, appear to have a minor effect in this age range. Changes in power spectra of surfaces older than about 0.8 m.y. are consistent with roughening due to fluvial dissection. These results agree qualitatively with a process-response model that attributes systematic changes in flow surface morphology to cyclic changes in the rates of eolian, soil formation, and fluvial processes. Identification of active surficial processes and estimation of the extent of their effects, or stage of surficial evolution, through measurement of surface roughness will help put the correlation of surficial units on a quantitative basis. This may form the basis for the use of radar remote sensing data to help in regional correlations of surficial units.
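
    A minimal sketch of the roughness-spectrum computation described here: detrend a uniformly sampled topographic profile and estimate its power spectral density, whose offset (amplitude) and slope can then be compared across flow ages. The profile below is synthetic, and the use of scipy.signal.welch is an assumption, not the paper's exact procedure.

    ```python
    import numpy as np
    from scipy.signal import welch, detrend

    dx = 0.05  # sample spacing along the profile, m
    x = np.arange(0, 100, dx)
    # Synthetic microtopographic profile standing in for a field transect.
    z = 0.05 * np.cumsum(np.random.randn(x.size)) * np.sqrt(dx)

    f, psd = welch(detrend(z), fs=1.0 / dx, nperseg=1024)  # f in cycles/m
    # Spectral slope and offset in log-log space characterize the roughness.
    mask = f > 0
    slope, offset = np.polyfit(np.log10(f[mask]), np.log10(psd[mask]), 1)
    print(f"spectral slope = {slope:.2f}, log10 offset = {offset:.2f}")
    ```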

  20. Automating Mapping Production for the Enterprise: from Contract to Delivery

    NASA Astrophysics Data System (ADS)

    Uebbing, R.; Xie, C.; Beshah, B.; Welter, J.

    2012-07-01

    The ever increasing volume and quality of geospatial data has created new challenges for mapping companies. Due to increased image resolution, fusion of different data sources and more frequent data update requirements, mapping production is forced to streamline the work flow to meet client deadlines. But data volume is not the only barrier to an efficient production work flow. Processing geospatial information traditionally uses domain- and vendor-specific applications that do not interface with each other, often leading to data duplication and therefore creating sources of error. It also creates isolation between different departments within a mapping company, resulting in additional communication barriers. North West Geomatics has designed and implemented a data-centric enterprise solution for the flight acquisition and production work flow to combat the above challenges. A central data repository has been deployed at the company, containing not only geospatial data in the strictest sense, such as images, vector layers and 3D point clouds, but also other information such as product specifications, client requirements, flight acquisition data, production resource usage and much more. As there is only one instance of the database, shared throughout the whole organization, all employees who have been granted the appropriate permission can view the current status of any project through its life cycle, from sales through flight acquisition, production and product delivery, via a graphical and table-based interface. Not only can users track the progress and status of the various work flow steps, but the system also allows users and applications to actively schedule or start specific production steps, such as data ingestion and triangulation, with many other steps (orthorectification, mosaicing, accounting, etc.) in the planning stages. While the complete system is exposed to users through a web interface, therefore also allowing outside customers to view their data, much of the design and development was focused on work flow automation, scalability and security. Ideally, users interact with the system to retrieve a specific project status and summaries, while the work flow processes are triggered automatically by modeling their dependencies. The enterprise system is built using open source technologies (PostGIS, Hibernate, OpenLayers, GWT and others) and adheres to OGC web services for data delivery (WMS/WFS/WCS) to third-party applications.

  1. Optimization of the cleaning process on a pilot filtration setup for waste water treatment accompanied by flow visualization

    NASA Astrophysics Data System (ADS)

    Bílek, Petr; Hrůza, Jakub

    2018-06-01

    This paper deals with the optimization of the cleaning process on a liquid flat-sheet filter, accompanied by visualization of the inlet side of the filter. The cleaning process has a crucial impact on the hydrodynamic properties of flat-sheet filters. Cleaning methods prevent particles from depositing on the filter surface and forming a filtration cake. Visualization significantly helps to optimize the cleaning methods because it brings a new overall view of the filtration process over time. The optical method described in the article makes it possible to see flow behaviour in a thin laser sheet on the inlet side of a tested filter during the cleaning process. Visualization is a strong tool for investigating the processes on filters in detail, and it is also possible to determine the concentration of particles after image analysis. The impact of air flow rate, inverse pressure drop and duration on the cleaning mechanism is investigated in the article. Images of the cleaning process are compared to the hydrodynamic data. The tests are carried out on a pilot filtration setup for waste water treatment.

  2. Multisensor data fusion across time and space

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.

    2014-06-01

    Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming because the sample data grids of the sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially-available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as a basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery at varying resolution scales and initializing new flow vector estimates using those from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base. Multiple sensors were included, among them a high-speed visible camera and a coarser-resolution LWIR camera.
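
    A minimal sketch of optical-flow-based temporal upsampling using OpenCV, which is an assumption standing in for the report's unnamed algorithms: estimate dense flow between two frames of the fast sensor, then resample along a fraction of the flow field to synthesize an intermediate frame. The Farneback parameters `pyr_scale`/`levels` control the multi-scale image pyramid the report describes.

    ```python
    import cv2
    import numpy as np

    def interpolate_frame(f0, f1, t=0.5):
        """Approximate a frame at fractional time t in (0, 1) between f0 and f1."""
        flow = cv2.calcOpticalFlowFarneback(
            f0, f1, None,
            pyr_scale=0.5, levels=4, winsize=15,  # multi-scale image pyramid
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        h, w = f0.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Resample f0 part-way along the flow field (a common approximation).
        map_x = (grid_x + t * flow[..., 0]).astype(np.float32)
        map_y = (grid_y + t * flow[..., 1]).astype(np.float32)
        return cv2.remap(f0, map_x, map_y, cv2.INTER_LINEAR)

    f0 = np.random.randint(0, 255, (480, 640), np.uint8)  # stand-in frames
    f1 = np.roll(f0, 5, axis=1)
    mid = interpolate_frame(f0, f1, t=0.5)
    ```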

  3. Characterizing and Optimizing the Performance of the MAESTRO 49-Core Processor

    DTIC Science & Technology

    2014-03-27

    process large volumes of data, it is necessary during testing to vary the dimensions of the inbound data matrix to determine what effect this has on the...needed that can process the extra data these systems seek to collect. However, the space environment presents a number of threats, such as ambient or...induced faults, and that also have sufficient computational power to handle the large flow of data they encounter. This research investigates one

  4. User Guide and Documentation for Five MODFLOW Ground-Water Modeling Utility Programs

    USGS Publications Warehouse

    Banta, Edward R.; Paschke, Suzanne S.; Litke, David W.

    2008-01-01

    This report documents five utility programs designed for use in conjunction with ground-water flow models developed with the U.S. Geological Survey's MODFLOW ground-water modeling program. One program extracts calculated flow values from one model for use as input to another model. The other four programs extract model input or output arrays from one model and make them available in a form that can be used to generate an ArcGIS raster data set. The resulting raster data sets may be useful for visual display of the data or for further geographic data processing. The utility program GRID2GRIDFLOW reads a MODFLOW binary output file of cell-by-cell flow terms for one (source) model grid and converts the flow values to input flow values for a different (target) model grid. The spatial and temporal discretization of the two models may differ. The four other utilities extract selected 2-dimensional data arrays in MODFLOW input and output files and write them to text files that can be imported into an ArcGIS geographic information system raster format. These four utilities require that the model cells be square and aligned with the projected coordinate system in which the model grid is defined. The four raster-conversion utilities are CBC2RASTER, which extracts selected stress-package flow data from a MODFLOW binary output file of cell-by-cell flows; DIS2RASTER, which extracts cell-elevation data from a MODFLOW Discretization file; MFBIN2RASTER, which extracts array data from a MODFLOW binary output file of head or drawdown; and MULT2RASTER, which extracts array data from a MODFLOW Multiplier file.

  5. How the provenance of electronic health record data matters for research: a case example using system mapping.

    PubMed

    Johnson, Karin E; Kamineni, Aruna; Fuller, Sharon; Olmstead, Danielle; Wernli, Karen J

    2014-01-01

    The use of electronic health records (EHRs) for research is proceeding rapidly, driven by computational power, analytical techniques, and policy. However, EHR-based research is limited by the complexity of EHR data and a lack of understanding about data provenance, meaning the context under which the data were collected. This paper presents system flow mapping as a method to help researchers more fully understand the provenance of their EHR data as it relates to local workflow. We provide two specific examples of how this method can improve data identification, documentation, and processing. EHRs store clinical and administrative data, often in unstructured fields. Each clinical system has a unique and dynamic workflow, as well as an EHR customized for local use. The EHR customization may be influenced by a broader context such as documentation required for billing. We present a case study with two examples of using system flow mapping to characterize EHR data for a local colorectal cancer screening process. System flow mapping demonstrated that information entered into the EHR during clinical practice required interpretation and transformation before it could be accurately applied to research. We illustrate how system flow mapping shaped our knowledge of the quality and completeness of data in two examples: (1) determining colonoscopy indication as recorded in the EHR, and (2) discovering a specific EHR form that captured family history. Researchers who do not consider data provenance risk compiling data that are systematically incomplete or incorrect. For example, researchers who are not familiar with the clinical workflow under which data were entered might miss or misunderstand patient information or procedure and diagnostic codes. Data provenance is a fundamental characteristic of research data from EHRs. Given the diversity of EHR platforms and system workflows, researchers need tools for evaluating and reporting data availability, quality, and transformations. Our case study illustrates how system mapping can inform researchers about the provenance of their data as it pertains to local workflows.

  6. Fluid outflows from Venus impact craters - Analysis from Magellan data

    NASA Technical Reports Server (NTRS)

    Asimow, Paul D.; Wood, John A.

    1992-01-01

    Many impact craters on Venus have unusual outflow features originating in or under the continuous ejecta blankets and continuing downhill into the surrounding terrain. These features clearly resulted from the flow of low-viscosity fluids, but the identity of those fluids is not clear. In particular, it should not be assumed a priori that the fluid is an impact melt. A number of candidate processes by which impact events might generate the observed features are considered, and predictions are made concerning the rheological character of flows produced by each mechanism. A sample of outflows was analyzed using Magellan images and a model of unconstrained Bingham plastic flow on inclined planes, leading to estimates of viscosity and yield strength for the flow materials. It is argued that at least two different mechanisms have produced outflows on Venus: an erosive, channel-forming process and a depositional process. The erosive fluid is probably an impact melt, but the depositional fluid may consist of fluidized solid debris, vaporized material, and/or melt.
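
    As a hedged illustration of the kind of estimate such a Bingham-plastic analysis yields (the numbers below are assumptions of ours, not results from the paper), the yield strength of a flow that has come to rest on a slope can be inferred from its thickness via tau_y ~ rho * g * h * sin(theta):

      import math

      # Illustrative Bingham yield-strength estimate: a sheet of Bingham
      # material stops when the basal shear stress falls to the yield
      # strength, so tau_y ~ rho * g * h * sin(theta). All inputs assumed.
      g_venus = 8.87        # m/s^2, surface gravity of Venus
      rho = 2800.0          # kg/m^3, assumed flow density
      h = 5.0               # m, assumed flow thickness
      slope_deg = 0.5       # degrees, assumed regional slope

      tau_y = rho * g_venus * h * math.sin(math.radians(slope_deg))
      print(f"inferred yield strength ~ {tau_y:.0f} Pa")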

  7. Heat flow anomalies and their interpretation

    NASA Astrophysics Data System (ADS)

    Chapman, David S.; Rybach, Ladislaus

    1985-12-01

    More than 10,000 heat flow determinations exist for the earth, and the data set is growing steadily at about 450 observations per year. If heat flow is considered as a surface expression of geothermal processes at depth, the analysis of the data set should reveal properties of those thermal processes. It does, but on a variety of scales. For this review, heat flow maps are classified by four horizontal scales of 10^n km (n = 1, 2, 3, and 4), and attention is focussed on the interpretation of anomalies which appear with characteristic dimensions of 10^(n-1) km in the respective representations. The largest scale of 10^4 km encompasses heat flow on a global scale. Global heat loss is 4 × 10^13 W, and the process of sea-floor spreading is the principal agent in delivering much of this heat to the surface. Correspondingly, active ocean ridge systems produce the most prominent heat flow anomalies at this scale, with characteristic widths of 10^3 km. Shields, with similar dimensions, exhibit negative anomalies. The scale of 10^3 km includes continent-wide displays. Heat flow patterns at this scale mimic tectonic units which have dimensions of a few times 10^2 km, although the thermal boundaries between these units are sometimes sharp. Heat flow anomalies at this scale also result from plate tectonic processes and are associated with arc volcanism, back-arc basins, hot spot traces, and continental rifting. There are major controversies about the extent to which these surface thermal provinces reflect upper mantle thermal conditions, and also about the origin and evolution of the thermal state of continental lithosphere. Beginning with map dimensions of 10^2 km, thermal anomalies of scale 10^1 km, which have a definite crustal origin, become apparent. The origin may be tectonic, geologic, or hydrologic. Ten kilometers is a common wavelength of topographic relief, which drives many groundwater flow systems producing thermal anomalies. The largest recognized continental geothermal systems have thermal anomalies 10^1 km wide and are capable of producing hundreds of megawatts of thermal energy. The smallest scale addressed in this paper is 10^1 km. Worldwide interest in exploiting geothermal systems has been responsible for a recent accumulation of heat flow data on the smallest of the scales considered here. These exploration surveys involve tens of drillholes and reveal thermal anomalies having widths of 10^0 km. These are almost certainly connected to surface and subsurface fluid discharge systems which, in spite of their restricted size, are typically delivering 10 MW of heat to the near-surface environment.

  8. NMR imaging and hydrodynamic analysis of neutrally buoyant non-Newtonian slurry flows

    NASA Astrophysics Data System (ADS)

    Bouillard, J. X.; Sinton, S. W.

    The flow of solids-loaded suspensions in cylindrical pipes has been the object of intense experimental and theoretical investigation in recent years. These types of flows are of great interest in chemical engineering because of their important use in many industrial manufacturing processes. Such flows are encountered, for example, in the manufacture of solid-rocket propellants, advanced ceramics, and reinforced polymer composites, in heterogeneous catalytic reactors, and in the pipeline transport of liquid-solids suspensions. In most cases, the suspension microstructure and the degree of solids dispersion greatly affect the final performance of the manufactured product. For example, solid propellant pellets need to be extremely well dispersed in gel matrices for use as rocket engine solid fuels. The homogeneity of pellet dispersion is critical for good uniformity of the burn rate, which in turn affects the final mechanical performance of the engine. Today's manufacturing of such fuels uses continuous flow processes rather than batch processes. Unfortunately, the hydrodynamics of such flow processes is poorly understood and is difficult to assess because it requires simultaneous measurement of liquid/solids phase velocities and volume fractions. Owing to recent developments in pulsed Fourier-transform NMR imaging, NMR imaging is becoming a powerful technique for the non-intrusive investigation of multi-phase flows. This paper presents a state-of-the-art experimental and theoretical methodology that can be used to study such flows. The hydrodynamic model developed for this study is a two-phase flow shear-thinning model with standard constitutive fluid/solids interphase drag and solids compaction stresses. This model shows good agreement with experimental data, and its limitations are discussed.
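
    A generic power-law constitutive relation of the shear-thinning type used in such two-phase models can be sketched as follows; the consistency index K and flow index n below are illustrative choices, not fitted values from this work.

      import numpy as np

      # Power-law (shear-thinning) constitutive relation:
      #   mu_eff = K * gamma_dot**(n - 1), with n < 1 so that viscosity
      # decreases as shear rate increases. K and n are illustrative.
      K, n = 10.0, 0.6                    # consistency (Pa.s^n), flow index
      gamma_dot = np.logspace(-2, 3, 6)   # shear rates, 1/s
      mu_eff = K * gamma_dot**(n - 1.0)
      for gd, mu in zip(gamma_dot, mu_eff):
          print(f"shear rate {gd:10.3f} 1/s -> effective viscosity {mu:8.3f} Pa.s")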

  9. A computer-controlled scintiscanning system and associated computer graphic techniques for study of regional distribution of blood flow.

    NASA Technical Reports Server (NTRS)

    Coulam, C. M.; Dunnette, W. H.; Wood, E. H.

    1970-01-01

    Two methods whereby a digital computer may be used to regulate a scintiscanning process are discussed from the viewpoint of computer input-output software. The computer's function, in this case, is to govern the data acquisition and storage, and to display the results to the investigator in a meaningful manner, both during and subsequent to the scanning process. Several methods (such as three-dimensional maps, contour plots, and wall-reflection maps) have been developed by means of which the computer can graphically display the data on-line, for real-time monitoring purposes, during the scanning procedure and subsequently for detailed analysis of the data obtained. A computer-governed method for converting scintiscan data recorded over the dorsal or ventral surfaces of the thorax into fractions of pulmonary blood flow traversing the right and left lungs is presented.

  10. Recycling isoelectric focusing with computer controlled data acquisition system. [for high resolution electrophoretic separation and purification of biomolecules

    NASA Technical Reports Server (NTRS)

    Egen, N. B.; Twitty, G. E.; Bier, M.

    1979-01-01

    Isoelectric focusing is a high-resolution technique for separating and purifying large peptides, proteins, and other biomolecules. The apparatus described in the present paper constitutes a new approach to fluid stabilization and increased throughput. Stabilization is achieved by flowing the process fluid uniformly through an array of closely spaced filter elements oriented parallel both to the electrodes and the direction of the flow. This seems to overcome the major difficulties of parabolic flow and electroosmosis at the walls, while limiting the convection to chamber compartments defined by adjacent spacers. Increased throughput is achieved by recirculating the process fluid through external heat exchange reservoirs, where the Joule heat is dissipated.

  11. Flow and Compaction During the Vacuum Assisted Resin Transfer Molding Process

    NASA Technical Reports Server (NTRS)

    Grimsley, Brian W.; Hubert, Pascal; Song, Xiao-Lan; Cano, Roberto J.; Loos, Alfred C.; Pipes, R. Byron

    2001-01-01

    The flow of an epoxy resin and the compaction behavior of a carbon fiber preform during vacuum-assisted resin transfer molding (VARTM) infiltration were measured using an instrumented tool. Composite panels were fabricated by the VARTM process using SAERTEX(R) multi-axial non-crimp carbon fiber fabric and the A.T.A.R.D. SI-ZG-5A epoxy resin. Resin pressure and preform thickness variation were measured during infiltration. The effects of the resin on the compaction behavior of the preform were measured. The local preform compaction during the infiltration is a combination of wetting and spring-back deformations. Flow front position computed by the 3DINFIL model was compared with the experimental data.
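
    The expected flow-front progression can be illustrated with the classic one-dimensional Darcy-law result for constant-pressure infiltration, x_f(t) = sqrt(2 K dP t / (mu phi)). The permeability, viscosity, and porosity below are assumed values, and 3DINFIL itself is a full three-dimensional model, so this is only a back-of-the-envelope sketch.

      import math

      # 1-D Darcy sketch of flow-front advance under a constant pressure
      # drop: phi * dx/dt = K * dP / (mu * x)  =>  x_f = sqrt(2*K*dP*t/(mu*phi)).
      K = 2.0e-10      # m^2, preform permeability (assumed)
      dP = 1.0e5       # Pa, vacuum-driven pressure difference (~1 atm)
      mu = 0.2         # Pa.s, resin viscosity (assumed)
      phi = 0.5        # preform porosity (assumed)

      for t in (10.0, 60.0, 300.0, 900.0):   # seconds
          x_f = math.sqrt(2.0 * K * dP * t / (mu * phi))
          print(f"t = {t:6.0f} s -> flow front at {100.0 * x_f:.1f} cm")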

  12. Internal Catchment Process Simulation in a Snow-Dominated Basin: Performance Evaluation with Spatiotemporally Variable Runoff Generation and Groundwater Dynamics

    NASA Astrophysics Data System (ADS)

    Kuras, P. K.; Weiler, M.; Alila, Y.; Spittlehouse, D.; Winkler, R.

    2006-12-01

    Hydrologic models have been increasingly used in forest hydrology to overcome the limitations of paired watershed experiments, where vegetative recovery and natural variability obscure the inferences and conclusions that can be drawn from such studies. Models, however, are also plagued by uncertainty stemming from a limited understanding of hydrological processes in forested catchments, and parameter equifinality is a common concern. This has created the necessity to improve our understanding of how hydrological systems work, through the development of hydrological measures, analyses and models that address the question: are we getting the right answers for the right reasons? Hence, physically based, spatially distributed hydrologic models should be validated with high-quality experimental data describing multiple concurrent internal catchment processes under a range of hydrologic regimes. The distributed hydrology soil vegetation model (DHSVM) frequently used in forest management applications is an example of a process-based model used to address these circumstances, and this study takes a novel approach by collectively examining the ability of a pre-calibrated model application to realistically simulate outlet flows along with the spatiotemporal variation of internal catchment processes, including continuous groundwater dynamics at 9 locations, stream and road network flow at 67 locations on six individual days throughout the freshet, and pre-melt-season snow distribution. Model efficiency was improved over prior evaluations due to continuous efforts to improve the quality of meteorological data in the watershed. Road and stream network flows were very well simulated for a range of hydrological conditions, and the spatial distribution of the pre-melt-season snowpack was in general agreement with observed values. The model was effective in simulating the spatial variability of subsurface flow generation, except at locations where strong stream-groundwater interactions existed, as the model is not capable of simulating such processes and subsurface flows always drain to the stream network. The model has proven overall to be quite capable of realistically simulating internal catchment processes in the watershed, which creates more confidence in future model applications exploring the effects of various forest management scenarios on the watershed's hydrological processes.

  13. The design of multi-core DSP parallel model based on message passing and multi-level pipeline

    NASA Astrophysics Data System (ADS)

    Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong

    2017-10-01

    Currently, the design of embedded signal processing systems is often based on a specific application, an approach that is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed; it is mainly suitable for complex algorithms composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing, and draws on the advantages of the two mainstream multi-core DSP models (the Master-Slave model and the Data Flow model), giving it better performance. This paper uses a three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparison with the Master-Slave and Data Flow models.
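
    The combination of pipelining and message passing can be sketched with ordinary operating-system processes and queues; this is a toy analogue of ours (the paper targets DSP cores with their own messaging hardware). Each stage consumes from one queue and produces into the next, so successive data items occupy different stages concurrently.

      from multiprocessing import Process, Queue

      def inc(x):
          return x + 1

      def double(x):
          return x * 2

      def stage(func, q_in, q_out):
          # One pipeline level: receive a message, process it, pass it on.
          while True:
              item = q_in.get()
              if item is None:        # sentinel: shut down and propagate
                  q_out.put(None)
                  break
              q_out.put(func(item))

      if __name__ == "__main__":
          q0, q1, q2 = Queue(), Queue(), Queue()
          stages = [Process(target=stage, args=(inc, q0, q1)),
                    Process(target=stage, args=(double, q1, q2))]
          for p in stages:
              p.start()
          for i in range(5):
              q0.put(i)               # stream of input "frames"
          q0.put(None)
          while (out := q2.get()) is not None:
              print(out)              # (0+1)*2, (1+1)*2, ...
          for p in stages:
              p.join()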

  14. Sonic environment of aircraft structure immersed in a supersonic jet flow stream

    NASA Technical Reports Server (NTRS)

    Guinn, W. A.; Balena, F. J.; Soovere, J.

    1976-01-01

    Test methods for determining the sonic environment of aircraft structure that is immersed in the flow stream of a high velocity jet or that is subjected to the noise field surrounding the jet, were investigated. Sonic environment test data measured on a SCAT 15-F model in the flow field of Mach 1.5 and 2.5 jets were processed. Narrow band, lateral cross correlation and noise contour plots are presented. Data acquisition and reduction methods are depicted. A computer program for scaling the model data is given that accounts for model size, jet velocity, transducer size, and jet density. Comparisons of scaled model data and full size aircraft data are made for the L-1011, S-3A, and a V/STOL lower surface blowing concept. Sonic environment predictions are made for an engine-over-the-wing SST configuration.

  15. Data-Informed Large-Eddy Simulation of Coastal Land-Air-Sea Interactions

    NASA Astrophysics Data System (ADS)

    Calderer, A.; Hao, X.; Fernando, H. J.; Sotiropoulos, F.; Shen, L.

    2016-12-01

    The study of atmospheric flows in coastal areas has not been fully addressed due to the complex processes emerging from land-air-sea interactions, e.g., abrupt changes in land topography, strong current shear, wave shoaling, and depth-limited wave breaking. The available computational tools that have been applied to study such littoral regions are mostly based on open-ocean assumptions, which often do not lead to reliable solutions. The goal of the present study is to better understand some of these near-shore processes, employing the advanced computational tools developed in our research group. Our computational framework combines a large-eddy simulation (LES) flow solver for atmospheric flows, a sharp-interface immersed boundary method that can deal with real complex topographies (Calderer et al., J. Comp. Physics 2014), and a phase-resolved, depth-dependent wave model (Yang and Shen, J. Comp. Physics 2011). Using real measured data taken at the FRF station in Duck, North Carolina, we validate and demonstrate the predictive capabilities of the present computational framework, which is shown to be in overall good agreement with the measured data under different wind-wave scenarios. We also analyse the effects of some of the complex processes captured by our simulation tools.

  16. Pattern database applications from design to manufacturing

    NASA Astrophysics Data System (ADS)

    Zhuang, Linda; Zhu, Annie; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh

    2017-03-01

    Pattern-based approaches are becoming more common and popular as the industry moves to advanced technology nodes. At the beginning of a new technology node, a library of process weak-point patterns for physical and electrical verification is built up and used to prevent known hotspots from recurring on new designs. The pattern set is then expanded to create test keys for process development in order to verify manufacturing capability and to pre-check new tape-out designs for any potential yield detractors. As the database grows, the adoption of pattern-based approaches expands from design flows to technology development and on to mass-production purposes. This paper will present the complete downstream working flows of a design pattern database (PDB). This pattern-based data analysis flow covers applications across different functional teams, from generating enhancement kits that improve design manufacturability, to populating new test design data based on previous learning, to generating analysis data that improve mass-production efficiency, to manufacturing equipment in-line control that checks machine status consistency across different fab sites.

  17. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    Research directed at developing a graph-theoretical model for describing data and control flow associated with the execution of large-grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM, which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
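
    The data-driven execution that such a graph model describes can be sketched in a few lines: a node fires as soon as all of its upstream results are available. The graph, node functions, and values below are hypothetical, chosen only to illustrate the firing rule.

      # Data-driven ("fire when inputs ready") execution of a task graph,
      # the general idea behind dataflow models such as ATAMM.
      graph = {            # node -> upstream nodes it consumes from
          "a": [], "b": [],
          "c": ["a", "b"],          # c needs the results of a and b
          "d": ["c"],
      }
      funcs = {
          "a": lambda: 2,
          "b": lambda: 3,
          "c": lambda a, b: a + b,
          "d": lambda c: c * 10,
      }

      results = {}
      pending = set(graph)
      while pending:
          ready = [n for n in pending if all(u in results for u in graph[n])]
          for n in ready:           # every ready node could fire in parallel
              results[n] = funcs[n](*(results[u] for u in graph[n]))
              pending.remove(n)
      print(results)                # {'a': 2, 'b': 3, 'c': 5, 'd': 50}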

  18. Preferential flow from pore to landscape scales

    NASA Astrophysics Data System (ADS)

    Koestel, J. K.; Jarvis, N.; Larsbo, M.

    2017-12-01

    In this presentation, we give a brief personal overview of some recent progress in quantifying preferential flow in the vadose zone, based on our own work and that of other researchers. One key challenge is to bridge the gap between the scales at which preferential flow occurs (i.e. pore to Darcy scales) and the scales of interest for management (i.e. fields, catchments, regions). We present results of recent studies that exemplify the potential of 3-D non-invasive imaging techniques to visualize and quantify flow processes at the pore scale. These studies should lead to a better understanding of how the topology of macropore networks controls key state variables like matric potential and thus the strength of preferential flow under variable initial and boundary conditions. Extrapolation of this process knowledge to larger scales will remain difficult, since measurement technologies to quantify macropore networks at these larger scales are lacking. Recent work suggests that the application of key concepts from percolation theory could be useful in this context. Investigation of the larger Darcy-scale heterogeneities that generate preferential flow patterns at the soil profile, hillslope and field scales has been facilitated by hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help to parameterize models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  19. An approach to quantify sources, seasonal change, and biogeochemical processes affecting metal loading in streams: Facilitating decisions for remediation of mine drainage

    USGS Publications Warehouse

    Kimball, B.A.; Runkel, R.L.; Walton-Day, K.

    2010-01-01

    Historical mining has left complex problems in catchments throughout the world. Land managers are faced with making cost-effective plans to remediate mine influences. Remediation plans are facilitated by spatial mass-loading profiles that indicate the locations of metal mass-loading, seasonal changes, and the extent of biogeochemical processes. Field-scale experiments during both low- and high-flow conditions and time-series data over diel cycles illustrate how this can be accomplished. A low-flow experiment provided spatially detailed loading profiles to indicate where loading occurred. For example, SO4^2- was principally derived from sources upstream from the study reach, but three principal locations also were important for SO4^2- loading within the reach. During high-flow conditions, Lagrangian sampling provided data to interpret seasonal changes and indicated locations where snowmelt runoff flushed metals to the stream. Comparison of metal concentrations between the low- and high-flow experiments indicated substantial increases in metal loading at high flow, but little change in metal concentrations, showing that toxicity at the most downstream sampling site was not substantially greater during snowmelt runoff. During high-flow conditions, a detailed temporal sampling at fixed sites indicated that Zn concentration more than doubled during the diel cycle. Monitoring programs must account for diel variation to provide meaningful results. Mass-loading studies during different flow conditions and detailed time-series over diel cycles provide useful scientific support for stream management decisions.
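
    The underlying arithmetic of a mass-loading profile is simple: instantaneous load is concentration times discharge, and differences in load between successive synoptic sites localize sources. A sketch with hypothetical numbers (not values from this study):

      # Instantaneous metal load = concentration * discharge; synoptic
      # mass-loading studies difference loads along the stream to locate
      # inflow sources. All numbers below are hypothetical.
      sites = [        # (distance m, Zn concentration mg/L, discharge L/s)
          (   0, 0.40,  50.0),
          ( 500, 0.55,  60.0),
          (1200, 0.52,  90.0),
      ]
      prev_load = None
      for dist, conc_mg_l, q_l_s in sites:
          load_kg_day = conc_mg_l * q_l_s * 86400.0 / 1.0e6
          gain = "" if prev_load is None else f"  (gain {load_kg_day - prev_load:+.2f})"
          print(f"{dist:5d} m: {load_kg_day:6.2f} kg/day{gain}")
          prev_load = load_kg_day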

  20. What controls channel form in steep mountain streams?

    NASA Astrophysics Data System (ADS)

    Palucis, M. C.; Lamb, M. P.

    2017-07-01

    Steep mountain streams have channel morphologies that transition from alternate bar to step-pool to cascade with increasing bed slope, which affect stream habitat, flow resistance, and sediment transport. Experimental and theoretical studies suggest that alternate bars form under large channel width-to-depth ratios, step-pools form in near supercritical flow or when channel width is narrow compared to bed grain size, and cascade morphology is related to debris flows. However, the connection between these process variables and bed slope—the apparent dominant variable for natural stream types—is unclear. Combining field data and theory, we find that certain bed slopes have unique channel morphologies because the process variables covary systematically with bed slope. Multiple stable states are predicted for other ranges in bed slope, suggesting that a competition of underlying processes leads to the emergence of the most stable channel form.

  1. Experimental Characterization of the Jet Wiping Process

    NASA Astrophysics Data System (ADS)

    Mendez, Miguel Alfonso; Enache, Adriana; Gosset, Anne; Buchlin, Jean-Marie

    2018-06-01

    This paper presents an experimental characterization of the jet wiping process, used in continuous coating applications to control the thickness of a liquid coat with an impinging gas jet. Time-Resolved Particle Image Velocimetry (TR-PIV) is used to characterize the impinging gas flow, while an automatic interface detection algorithm is developed to track the liquid interface at the impact. The study of the flow interaction is combined with time-resolved 3D thickness measurements of the liquid film remaining after wiping, via Time-Resolved Light Absorption (TR-LAbs). The simultaneous frequency analysis of the liquid and gas flows makes it possible to correlate their respective instabilities, provides an experimental data set for the validation of numerical studies, and supports a working hypothesis on the origin of the coat non-uniformity encountered in many jet wiping processes.

  2. Controlling Gas-Flow Mass Ratios

    NASA Technical Reports Server (NTRS)

    Morris, Brian G.

    1990-01-01

    Proposed system automatically controls proportions of gases flowing in supply lines. Conceived for control of oxidizer-to-fuel ratio in new gaseous-propellant rocket engines. Gas-flow control system measures temperatures and pressures at various points. From these data, it calculates control voltages for electronic pressure regulators for oxygen and hydrogen. System includes commercially available components. Applicable to control of mass ratios in such gaseous industrial processes as chemical-vapor deposition of semiconductor materials and in automotive engines operating on compressed natural gas.
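
    A hedged sketch of the kind of calculation involved (not the proposed system's actual control law): each gas's mass flow can be inferred from measured pressure and temperature with the standard choked-orifice relation, and the two flows combined into an oxidizer-to-fuel mass ratio. Orifice size, discharge coefficient, and line conditions below are assumptions.

      import math

      def choked_mdot(cd, area_m2, p0_pa, t0_k, gamma, r_gas):
          """Choked mass flow through an orifice, kg/s."""
          term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
          return cd * area_m2 * p0_pa * math.sqrt(gamma / (r_gas * t0_k)) * term

      A = math.pi * 0.002**2 / 4.0                # 2-mm orifice (assumed)
      mdot_o2 = choked_mdot(0.9, A, 2.0e6, 290.0, 1.40, 259.8)   # oxygen
      mdot_h2 = choked_mdot(0.9, A, 1.2e6, 290.0, 1.41, 4124.0)  # hydrogen
      print(f"O/F mass ratio = {mdot_o2 / mdot_h2:.2f}")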

  3. Zn isotope fractionation in the komatiitic and tholeiitic lava flows of Fred's flow and Theo's flow (Ontario, Canada)

    NASA Astrophysics Data System (ADS)

    Mattielli, N. D.; Haenecour, P.; Debaille, V.

    2010-12-01

    Komatiites are subvolcanic or volcanic ultramafic rocks characterized by a high MgO content (>18 wt%), usually, but not systematically, associated with a spinifex texture. Komatiites are nearly exclusively Archean in age and are essentially found in the greenstone belts of the oldest cratons, although some rare Proterozoic and Cretaceous examples are also known. Komatiitic flows are commonly associated with tholeiitic lavas, which have many petrological, textural and geochemical similarities with komatiites. We present new high-precision MC-ICPMS zinc isotopic data for the komatiitic lavas of Fred's flow and the associated tholeiitic lavas of Theo's flow from Munro Township in the 2.7 Ga Abitibi greenstone belt (Ontario, Canada). Zinc isotopes show a significant shift between Fred's flow (mean δ66Zn = +0.30±0.04‰ (2SD)) and Theo's flow samples (mean δ66Zn = +0.39±0.03‰ (1)). In addition, the two flows show a systematic shift in δ66Zn between the ultrabasic level at the bottom of the sequence (δ66Zn = +0.51±0.04‰ and +0.47±0.04‰ for Fred's flow and Theo's flow, respectively) and the rest of the pile (Δ = 0.21±0.01‰). According to the literature, processes of secondary alteration may cause Zn isotope fractionation. However, petrographic data indicate only a slight alteration fingerprint, while the geochemical study (whole rock and in-situ) shows no remobilization of HFSE and REE by secondary alteration (low-grade metamorphism and/or hydrothermal alteration). In addition, if similar levels of alteration affected the two lava flows, the alteration process cannot explain the difference in δ66Zn between Fred's and Theo's flows. Alternatively, this isotopic difference can be interpreted as reflecting either source effects or mineral fractionation related to spinel crystallization. The correlation between the δ66Zn values and the Cr bulk concentrations may suggest fractionation of Zn isotopes by the crystallization of spinel minerals. However, the conditions and processes leading to Zn isotopic fractionation during spinel crystallization remain little studied. Isotopic fractionation related to source effects cannot be excluded either. This latter hypothesis implies that Fred's flow and Theo's flow formed from two geochemically or mineralogically different mantle sources. While the geochemical data neither support nor invalidate this second hypothesis, two different sources may explain the difference between the two lava flows; however, they cannot account for the isotopic variations observed at the ultrabasic levels. In conclusion, the role of mineralogy, either at the source or during crystallization, should be investigated for a better understanding of Zn isotope fractionation in the mantle. (1): To validate the quality of the isotopic analyses, the reference materials BCR-1 (basalt) (δ66Zn = +0.33±0.03‰) and HRM-27 (gabbro) (+0.17±0.01‰) (n=3) were measured.

  4. Development of comprehensive numerical schemes for predicting evaporating gas-droplets flow processes of a liquid-fueled combustor

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    1990-01-01

    An existing computational fluid dynamics code for simulating complex turbulent flows inside a liquid rocket combustion chamber was validated and further developed. The Advanced Rocket Injector/Combustor Code (ARICC) was simplified and validated against benchmark flow situations for laminar and turbulent flows. The numerical method used in the ARICC code is re-examined for incompressible flow calculations. For turbulent flows, both the subgrid and the two-equation k-epsilon turbulence models are studied. Cases tested include the idealized Burgers' equation in complex geometries and boundaries, a laminar pipe flow, a high Reynolds number turbulent flow, and a confined coaxial jet with recirculation. The accuracy of the algorithm is examined by comparing the numerical results with analytical solutions as well as experimental data for different grid sizes.
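
    A benchmark of this kind is easy to reproduce. The sketch below solves the 1-D viscous Burgers' equation u_t + u*u_x = nu*u_xx with a simple explicit finite-difference scheme; the grid, viscosity, and initial condition are arbitrary choices of ours, not ARICC's test settings.

      import numpy as np

      # Explicit upwind convection + central diffusion for 1-D viscous
      # Burgers' equation on [0, 2*pi] with fixed end values.
      nx, nu, dt = 101, 0.07, 1.0e-3
      dx = 2.0 * np.pi / (nx - 1)
      x = np.linspace(0.0, 2.0 * np.pi, nx)
      u = np.sin(x) + 1.5                  # smooth initial profile

      for _ in range(2000):
          un = u.copy()
          u[1:-1] = (un[1:-1]
                     - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
                     + nu * dt / dx**2 * (un[2:] - 2.0 * un[1:-1] + un[:-2]))
      print("max/min after 2000 steps:", u.max(), u.min())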

  5. Non-linear flow law of rockglacier creep determined from geomorphological observations: A case study from the Murtèl rockglacier (Engadin, SE Switzerland)

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Amschwand, Dominik; Gärtner-Roer, Isabelle

    2016-04-01

    Rockglaciers consist of unconsolidated rock fragments (silt/sand to rock boulders) with interstitial ice; hence their creep behavior (i.e., rheology) may deviate from the simple and well-known flow laws for pure ice. Here we constrain the non-linear viscous flow law that governs rockglacier creep based on geomorphological observations. We use the Murtèl rockglacier (upper Engadin valley, SE Switzerland) as a case study, for which high-resolution digital elevation models (DEM), time-lapse borehole deformation data, and geophysical soundings exist that reveal the exterior and interior architecture and dynamics of the landform. Rockglaciers often feature a prominent furrow-and-ridge topography. For the Murtèl rockglacier, Frehner et al. (2015) reproduced the wavelength, amplitude, and distribution of the furrow-and-ridge morphology using a linear viscous (Newtonian) flow model. Arenson et al. (2002) presented borehole deformation data, which highlight the basal shear zone at about 30 m depth and a curved deformation profile above the shear zone. Similarly, the furrow-and-ridge morphology also exhibits a curved geometry in map view. Hence, the surface morphology and the borehole deformation data together describe a curved 3D geometry, which is close to, but not quite, parabolic. We use a high-resolution DEM to quantify the curved geometry of the Murtèl furrow-and-ridge morphology. We then calculate theoretical 3D flow geometries using different non-linear viscous flow laws. By comparing them to the measured curved 3D geometry (i.e., both surface morphology and borehole deformation data), we can determine the flow law that fits the natural data best. Linear viscous models result in perfectly parabolic flow geometries; non-linear creep leads to localized deformation at the sides and bottom of the rockglacier, while the deformation in the interior and at the top is less intense. In other words, non-linear creep results in non-parabolic flow geometries. Both the linear (power-law exponent n=1) and the strongly non-linear models (n=10) do not match the measured data well. However, the moderately non-linear models (n=2-3) match the data quite well, indicating that the creep of the Murtèl rockglacier is governed by a moderately non-linear viscous flow law with a power-law exponent close to that of pure ice. Our results are crucial for improving existing numerical models of rockglacier flow that currently use simplified (i.e., linear viscous) flow laws. References: Arenson L., Hoelzle M., and Springman S., 2002: Borehole deformation measurements and internal structure of some rock glaciers in Switzerland, Permafrost and Periglacial Processes 13, 117-135. Frehner M., Ling A.H.M., and Gärtner-Roer I., 2015: Furrow-and-ridge morphology on rockglaciers explained by gravity-driven buckle folding: A case study from the Murtèl rockglacier (Switzerland), Permafrost and Periglacial Processes 26, 57-66.
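
    The discriminating power of the power-law exponent can be illustrated with the normalized deformation profile of a uniform power-law slab: with z the depth below the surface and h the depth to the shear zone, u(z)/u_s = 1 - (z/h)**(n+1), so larger n concentrates shear near the base. This schematic sketch is ours; the study itself fits full 3-D geometries.

      import numpy as np

      # Normalized horizontal velocity vs. depth for power-law creep:
      # shear stress grows linearly with depth, strain rate ~ stress**n,
      # so u(z)/u_s = 1 - (z/h)**(n + 1). n = 1 gives a parabola.
      h = 30.0                               # m, depth to the shear zone
      z = np.linspace(0.0, h, 7)             # depths below surface
      for n in (1, 3, 10):                   # power-law exponents to compare
          u = 1.0 - (z / h) ** (n + 1)
          print(f"n = {n:2d}:", " ".join(f"{v:.2f}" for v in u))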

  6. Flow-gated radial phase-contrast imaging in the presence of weak flow.

    PubMed

    Peng, Hsu-Hsia; Huang, Teng-Yi; Wang, Fu-Nien; Chung, Hsiao-Wen

    2013-01-01

    To implement a flow-gating method to acquire phase-contrast (PC) images of carotid arteries without use of an electrocardiography (ECG) signal to synchronize the acquisition of imaging data with pulsatile arterial flow. The flow-gating method was realized through radial scanning and sophisticated post-processing methods including downsampling, complex difference, and correlation analysis to improve the evaluation of flow-gating times in radial phase-contrast scans. Quantitatively comparable results (R = 0.92-0.96, n = 9) of flow-related parameters, including mean velocity, mean flow rate, and flow volume, with conventional ECG-gated imaging demonstrated that the proposed method is highly feasible. The radial flow-gating PC imaging method is applicable in carotid arteries. The proposed flow-gating method can potentially avoid the setting up of ECG-related equipment for brain imaging. This technique has potential use in patients with arrhythmia or weak ECG signals.

  7. Regional-scale calculation of the LS factor using parallel processing

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With increasing data resolution and the growing application of USLE over large areas, the existing serial implementations of algorithms for computing the LS factor are becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the characteristics of each algorithm, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
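
    The decomposition idea for a "local" algorithm can be sketched with mpi4py: each rank processes a strip of DEM rows and exchanges one-row halos with its neighbours so that windowed operators remain correct at strip boundaries. This is our illustrative stand-in (a gradient-based slope in place of the full LS-factor chain), not the paper's code; run it with, e.g., mpiexec -n 4 python sketch.py.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Synthetic DEM, decomposed into equal row strips across ranks.
      rows_per_rank, ncols = 8, 16
      dem = np.arange(rows_per_rank * size * ncols, dtype=float)
      dem = dem.reshape(rows_per_rank * size, ncols)
      strip = dem[rank * rows_per_rank:(rank + 1) * rows_per_rank].copy()

      up   = rank - 1 if rank > 0 else MPI.PROC_NULL
      down = rank + 1 if rank < size - 1 else MPI.PROC_NULL
      halo_top = strip[0].copy()       # fallback: replicate edge rows
      halo_bot = strip[-1].copy()      # (kept when neighbour is PROC_NULL)
      comm.Sendrecv(strip[0],  dest=up,   recvbuf=halo_bot, source=down)
      comm.Sendrecv(strip[-1], dest=down, recvbuf=halo_top, source=up)

      # Windowed operator on the halo-padded strip; drop halo rows after.
      padded = np.vstack([halo_top, strip, halo_bot])
      dz_dy, dz_dx = np.gradient(padded)
      slope = np.hypot(dz_dx, dz_dy)[1:-1]
      print(f"rank {rank}: rows {rank*rows_per_rank}-"
            f"{(rank+1)*rows_per_rank - 1}, mean slope {slope.mean():.3f}")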

  8. JSC earth resources data analysis capabilities available to EOD revision B

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A list and summary description of all Johnson Space Center electronic laboratory and photographic laboratory capabilities available to Earth Resources Division personnel for processing earth resources data are provided. The electronic capabilities pertain to facilities and systems that produce electronic and/or photographic products as output; the photographic capabilities pertain to equipment that uses photographic images as input, and a table summarizes the processing steps. A general hardware description is presented for each of the data processing systems, and the titles of computer programs are used to identify the capabilities and data flow.

  9. Experimental study of the free surface velocity field in an asymmetrical confluence

    NASA Astrophysics Data System (ADS)

    Creelle, Stephan; Mignot, Emmanuel; Schindfessel, Laurent; De Mulder, Tom

    2017-04-01

    The hydrodynamic behavior of open channel confluences is highly complex because of the combination of different processes that interact with each other. To gain further insight into how the velocity uniformization between the upstream channels and the downstream channel proceeds, experiments are performed in a large-scale 90-degree angled concrete confluence flume with a chamfered rectangular cross-section and a width of 0.98 m. The dimensions and lay-out of the flume are representative of a prototype-scale confluence in, e.g., drainage and irrigation systems. In this type of engineered channel with sharp corners, the separation zone is very large, and thus the velocity difference between the most contracted section and the separation zone is pronounced. With the help of surface particle tracking velocimetry, the velocity field is recorded from upstream of the confluence to a significant distance downstream of it. The resulting data allow analysis of the evolution of the incoming flows (with a developed velocity profile) that interact with the stagnation zone and with each other, causing a shear layer between the two bulk flows. Close observation of the velocity field near the stagnation zone shows that there are actually two shear layers in the vicinity of the upstream corner. Furthermore, the data reveal that the shear layer observed further downstream between the two incoming flows is actually one of the two shear layers next to the stagnation zone that continues, while the other shear layer ceases to exist. The extensive measurement domain also allows study of the shear layer between the contracted section and the separation zone. The shear layers of the stagnation zone between the incoming flows and the one between the contracted flow and the separation zone are localized, and parameters such as the maximum gradient, velocity difference and width of the shear layer are calculated. Analysis of these data shows that the shear layer between the incoming flows disappears quite quickly because of the severe flow contraction, which aids the flow uniformization. This is further accelerated by a flow redistribution process that starts already upstream of the confluence, resulting in a lower than expected velocity difference over the shear layer between the bulks of the incoming flows. In contrast, the shear layer between the contracted section and the separation zone proves to be significantly stronger, with large turbulent structures appearing that are transported far downstream. In conclusion, this analysis of velocity fields with a large field of view shows that, when analyzing confluence hydrodynamics, one should take care to analyze data far enough upstream and downstream to assess all the relevant processes.

  10. Paleointensity in ignimbrites and other volcaniclastic flows

    NASA Astrophysics Data System (ADS)

    Bowles, J. A.; Gee, J. S.; Jackson, M. J.

    2011-12-01

    Ash flow tuffs (ignimbrites) are common worldwide, frequently contain fine-grained magnetite hosted in the glassy matrix, and often have high-quality 40Ar/39Ar ages. This makes them attractive candidates for paleointensity studies, potentially allowing for a substantial increase in the number of well-dated paleointensity estimates. However, the timing and nature of remanence acquisition in ignimbrites are not sufficiently understood to allow confident interpretation of paleointensity data from ash flows. The remanence acquisition may be a complex function of mineralogy and thermal history. Emplacement conditions and post-emplacement processes vary considerably between and within tuffs and may potentially affect the ability to recover ancient field intensity information. To better understand the relevant magnetic recording assemblage(s) and remanence acquisition processes, we have collected samples from two well-documented historical ignimbrites: the 1980 ash flows at Mt. St. Helens (MSH), Washington, and the 1912 flows from Mt. Katmai in the Valley of Ten Thousand Smokes (VTTS), Alaska. Data from these relatively small, poorly- to non-welded historical flows are compared to the more extensive and more densely welded 0.76 Ma Bishop Tuff. This sample set enables us to better understand the geologic processes that destroy or preserve paleointensity information, so that samples from ancient tuffs may be selected with care. Thellier-type paleointensity experiments carried out on pumice blocks sampled from the MSH flows resulted in a paleointensity of 55.8 ± 0.8 μT (1 standard error). This compares favorably with the actual value of 56.0 μT. Excluded specimens of poor technical quality were dominantly from sites that were either emplaced at low temperature (<350°C) or were subject to post-emplacement hydrothermal alteration. The VTTS experienced much more widespread low-temperature hydrothermal activity than did MSH. Pumice-bearing ash matrix samples from this locality are characterized by at least two magnetic phases, one of which appears to carry a chemical remanent magnetization. Paleointensities derived from the second phase give results that vary widely but that may be correlated with the degree of hydrothermal alteration or hydration. Preliminary data from the Bishop Tuff suggest that vapor-phase alteration at high (>600°C) temperatures does not corrupt the paleointensity signal, and additional data will be presented that explore this more fully.

  11. 3D GIS for Flood Modelling in River Valleys

    NASA Astrophysics Data System (ADS)

    Tymkow, P.; Karpina, M.; Borkowski, A.

    2016-06-01

    The objective of this study is the implementation of a system architecture for collecting and analysing data, as well as visualizing results, for hydrodynamic modelling of flood flows in river valleys using remote sensing methods, three-dimensional geometry of spatial objects and GPU multithread processing. The proposed solution includes a spatial data acquisition segment, data processing and transformation, mathematical modelling of flow phenomena, and results visualization. The data acquisition segment was based on aerial laser scanning supplemented by images in the visible range. Vector data creation was based on automatic and semiautomatic algorithms for DTM and 3D spatial feature modelling. Algorithms for modelling building and vegetation geometry were proposed or adopted from the literature. The implementation of the framework was designed as modular software using open specifications and partially reusing open-source projects. The database structure for gathering and sharing vector data, including flood modelling results, was created using PostgreSQL. For the internal structure of feature classes of spatial objects in the database, the CityGML standard was used. For the hydrodynamic modelling, a two-dimensional solution of the Navier-Stokes equations was implemented. Visualization of geospatial data and flow model results was transferred to the client-side application, giving independence from the server hardware platform. A real-world case in Poland, a part of the Widawa River valley near the city of Wroclaw, was selected to demonstrate the applicability of the proposed system.

  12. Development of the Hydroecological Integrity Assessment Process for Determining Environmental Flows for New Jersey Streams

    USGS Publications Warehouse

    Kennen, Jonathan G.; Henriksen, James A.; Nieswand, Steven P.

    2007-01-01

    The natural flow regime paradigm and parallel stream ecological concepts and theories have established the benefits of maintaining or restoring the full range of natural hydrologic variation for physiochemical processes, biodiversity, and the evolutionary potential of aquatic and riparian communities. A synthesis of recent advances in hydroecological research coupled with stream classification has resulted in a new process to determine environmental flows and assess hydrologic alteration. This process has national and international applicability. It allows classification of streams into hydrologic stream classes and identification of a set of non-redundant and ecologically relevant hydrologic indices for 10 critical sub-components of flow. Three computer programs have been developed for implementing the Hydroecological Integrity Assessment Process (HIP): (1) the Hydrologic Indices Tool (HIT), which calculates 171 ecologically relevant hydrologic indices on the basis of daily-flow and peak-flow stream-gage data; (2) the New Jersey Hydrologic Assessment Tool (NJHAT), which can be used to establish a hydrologic baseline period, provide options for setting baseline environmental-flow standards, and compare past and proposed streamflow alterations; and (3) the New Jersey Stream Classification Tool (NJSCT), designed for placing unclassified streams into pre-defined stream classes. Biological and multivariate response models including principal-component, cluster, and discriminant-function analyses aided in the development of software and implementation of the HIP for New Jersey. A pilot effort is currently underway by the New Jersey Department of Environmental Protection in which the HIP is being used to evaluate the effects of past and proposed surface-water use, ground-water extraction, and land-use changes on stream ecosystems while determining the most effective way to integrate the process into ongoing regulatory programs. Ultimately, this scientifically defensible process will help to quantify the effects of anthropogenic changes and development on hydrologic variability and help planners and resource managers balance current and future water requirements with ecological needs.
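
    Two of the simpler kinds of ecologically relevant indices that a tool like HIT derives from daily-flow records can be sketched as follows; the flow series is synthetic, and HIT itself computes 171 indices from daily-flow and peak-flow stream-gage files.

      import numpy as np

      # Synthetic one-year daily-flow record (cfs).
      rng = np.random.default_rng(0)
      daily_q = rng.lognormal(mean=3.0, sigma=0.6, size=365)

      mean_daily = daily_q.mean()                        # magnitude index
      seven_day = np.convolve(daily_q, np.ones(7) / 7.0, mode="valid")
      min_7day = seven_day.min()                         # low-flow index
      cv = daily_q.std() / mean_daily                    # variability index
      print(f"mean {mean_daily:.1f} cfs, 7-day min {min_7day:.1f} cfs, CV {cv:.2f}")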

  13. Bubble Formation from Wall Orifice in Liquid Cross-Flow Under Low Gravity

    NASA Technical Reports Server (NTRS)

    Nahra, Henry K.; Kamotani, Y.

    2000-01-01

    Two-phase flows present a wide variety of applications for spacecraft thermal control systems design. Bubble formation and detachment is an integral part of the two phase flow science. The objective of the present work is to experimentally investigate the effects of liquid cross-flow velocity, gas flow rate, and orifice diameter on bubble formation in a wall-bubble injection configuration. Data were taken mainly under reduced gravity conditions but some data were taken in normal gravity for comparison. The reduced gravity experiment was conducted aboard the NASA DC-9 Reduced Gravity Aircraft. The results show that the process of bubble formation and detachment depends on gravity, the orifice diameter, the gas flow rate, and the liquid cross-flow velocity. The data are analyzed based on a force balance, and two different detachment mechanisms are identified. When the gas momentum is large, the bubble detaches from the injection orifice as the gas momentum overcomes the attaching effects of liquid drag and inertia. The surface tension force is much reduced because a large part of the bubble pinning edge at the orifice is lost as the bubble axis is tilted by the liquid flow. When the gas momentum is small, the force balance in the liquid flow direction is important, and the bubble detaches when the bubble axis inclination exceeds a certain angle.

  14. Situational Lightning Climatologies for Central Florida: Phase V

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2011-01-01

    The AMU added three years of data to the POR from the previous work, resulting in a 22-year POR for the warm-season months of 1989-2010. In addition to the flow regime stratification, moisture and stability stratifications were added to separate more active from less active lightning days within the same flow regime. The parameters used for the moisture and stability stratifications were PWAT and TI, which were derived from sounding data at four Florida radiosonde sites. Lightning data consisted of NLDN CG lightning flashes within 30 NM of each airfield. The AMU increased the number of airfields from nine to thirty-six, including the SLF, CCAFS, PAFB and thirty-three airfields across Florida. The NWS MLB requested that the AMU calculate lightning climatologies for additional airfields that they support as a backup to NWS TBW; this was then expanded to include airfields supported by NWS JAX and NWS MFL. The updated climatologies of lightning probabilities are based on revised synoptic-scale flow regimes over the Florida peninsula (Lambert 2007) for 5-, 10-, 20- and 30-NM radius range rings around the thirty-six airfields in 1-, 3- and 6-hour increments. The lightning, flow regime, moisture and stability data were processed in S-PLUS software using scripts written by the AMU to automate much of the data processing. The S-PLUS data files were exported to Excel to allow the files to be combined in Excel workbooks for easier data handling and to create the tables and charts for the GUI. The AMU revised the GUI developed in the previous phase (Bauman 2009) with the new data and provided users with an updated HTML tool to display and manipulate the data and corresponding charts. The tool can be used with most web browsers and is independent of the computer operating system. The AMU delivered two GUIs, one with just the PWAT stratification and one with both the PWAT and TI stratifications, due to insufficient data in some of the PWAT/TI stratification combinations. This will allow forecasters to choose a moisture-only or moisture/stability stratification depending on the flow regime and available data.
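
    At its core, such a stratified climatology is a conditional relative frequency: the number of days with a CG flash inside a range ring divided by the number of days falling in the same flow-regime/moisture bin. A sketch with synthetic records (the bin labels are hypothetical):

      from collections import defaultdict

      # Synthetic daily records: (flow regime, PWAT class, CG flash within ring?)
      days = [
          ("SW-1", "moist", True), ("SW-1", "moist", True),
          ("SW-1", "moist", False), ("SW-1", "dry", False),
          ("NE-2", "moist", True), ("NE-2", "dry", False),
      ]
      counts = defaultdict(lambda: [0, 0])      # bin -> [lightning days, days]
      for regime, pwat, struck in days:
          bin_ = (regime, pwat)
          counts[bin_][1] += 1
          counts[bin_][0] += int(struck)
      for bin_, (hits, total) in sorted(counts.items()):
          print(f"{bin_}: P(lightning) = {hits}/{total} = {hits/total:.2f}")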

  15. Implementation of a state-to-state analytical framework for the calculation of expansion tube flow properties

    NASA Astrophysics Data System (ADS)

    James, C. M.; Gildfind, D. E.; Lewis, S. W.; Morgan, R. G.; Zander, F.

    2018-03-01

    Expansion tubes are an important type of test facility for the study of planetary entry flow-fields, being the only type of impulse facility capable of simulating the aerothermodynamics of superorbital planetary entry conditions from 10 to 20 km/s. However, the complex flow processes involved in expansion tube operation make it difficult to fully characterise flow conditions, with two-dimensional full facility computational fluid dynamics simulations often requiring tens or hundreds of thousands of computational hours to complete. In an attempt to simplify this problem and provide a rapid flow condition prediction tool, this paper presents a validated and comprehensive analytical framework for the simulation of an expansion tube facility. It identifies central flow processes and models them from state to state through the facility using established compressible and isentropic flow relations, and equilibrium and frozen chemistry. How the model simulates each section of an expansion tube is discussed, as well as how the model can be used to simulate situations where flow conditions diverge from ideal theory. The model is then validated against experimental data from the X2 expansion tube at the University of Queensland.
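
    One of the state-to-state building blocks that such a framework chains together is the set of isentropic relations linking stagnation and static quantities through the Mach number. A minimal sketch for a calorically perfect gas (our illustration; the paper's model switches between equilibrium and frozen chemistry as appropriate):

      # Isentropic ratios T/T0 and p/p0 as a function of Mach number for a
      # calorically perfect gas with gamma = 1.4.
      gamma = 1.4
      for mach in (1.0, 4.0, 10.0):
          f = 1.0 + 0.5 * (gamma - 1.0) * mach**2
          t_ratio = 1.0 / f                          # T / T0
          p_ratio = f ** (-gamma / (gamma - 1.0))    # p / p0
          print(f"M = {mach:5.1f}: T/T0 = {t_ratio:.4f}, p/p0 = {p_ratio:.3e}")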

  16. Wildfire impacts on the processes that generate debris flows in burned watersheds

    USGS Publications Warehouse

    Parise, M.; Cannon, S.H.

    2012-01-01

    Every year, and in many countries worldwide, wildfires cause significant damage and economic losses due to both the direct effects of the fires and the subsequent accelerated runoff, erosion, and debris flow. Wildfires can have profound effects on the hydrologic response of watersheds by changing the infiltration characteristics and erodibility of the soil, which leads to decreased rainfall infiltration, significantly increased overland flow and runoff in channels, and movement of soil. Debris-flow activity is among the most destructive consequences of these changes, often causing extensive damage to human infrastructure. Data from the Mediterranean area and Western United States of America help identify the primary processes that result in debris flows in recently burned areas. Two primary processes for the initiation of fire-related debris flows have been so far identified: (1) runoff-dominated erosion by surface overland flow; and (2) infiltration-triggered failure and mobilization of a discrete landslide mass. The first process is frequently documented immediately post-fire and leads to the generation of debris flows through progressive bulking of storm runoff with sediment eroded from the hillslopes and channels. As sediment is incorporated into water, runoff can convert to debris flow. The conversion to debris flow may be observed at a position within a drainage network that appears to be controlled by threshold values of upslope contributing area and its gradient. At these locations, sufficient eroded material has been incorporated, relative to the volume of contributing surface runoff, to generate debris flows. Debris flows have also been generated from burned basins in response to increased runoff by water cascading over a steep, bedrock cliff, and incorporating material from readily erodible colluvium or channel bed. Post-fire debris flows have also been generated by infiltration-triggered landslide failures which then mobilize into debris flows. However, only 12% of documented cases exhibited this process. When they do occur, the landslide failures range in thickness from a few tens of centimeters to more than 6 m, and generally involve the soil and colluvium-mantled hillslopes. Surficial landslide failures in burned areas most frequently occur in response to prolonged periods of storm rainfall, or prolonged rainfall in combination with rapid snowmelt or rain-on-snow events. © 2011 Springer Science+Business Media B.V.

  17. Development of a subsurface gas flow probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cutler, R.P.; Ballard, S.; Barker, G.T.

    1997-04-01

    This report describes a project to develop a flow probe to monitor gas movement in the vadose zone due to passive venting or active remediation efforts such as soil vapor extraction. 3-D and 1-D probes were designed, fabricated, tested in known flow fields under laboratory conditions, and field tested. The 3-D probes were based on technology developed for ground water flow monitoring. The probes gave excellent agreement with measured air velocities in the laboratory tests. Data processing software developed for ground water flow probes was modified for use with air flow and to accommodate various probe designs. Modifications were made to decrease the cost of the probes, including developing a downhole multiplexer. Modeling indicated problems with flow channeling due to the mode of deployment. Additional testing was conducted, and modifications were made to the probe and to the deployment methods. The probes were deployed at three test sites: a large outdoor test tank, a brief vapor extraction test at the Chemical Waste Landfill, and an active remediation site at a local gas station. The data from the field tests varied markedly from the laboratory test data. All of the major events, such as vapor extraction system turn-on and turn-off, as well as changes in the flow rate, could be seen in the data. However, there were long-term trends in the data which were much larger than the velocity signals, which made it difficult to determine accurate air velocities. These long-term trends may be due to changes in soil moisture content and seasonal ground temperature variations.

  18. A fast response miniature probe for wet steam flow field measurements

    NASA Astrophysics Data System (ADS)

    Bosdas, Ilias; Mansour, Michel; Kalfas, Anestis I.; Abhari, Reza S.

    2016-12-01

    Modern steam turbines require operational flexibility due to renewable energies’ increasing share of the electrical grid. Additionally, the continuous increase in energy demand necessitates efficient design of the steam turbines as well as power output augmentation. The long turbine rotor blades at the machines’ last stages are prone to mechanical vibrations and as a consequence time-resolved experimental data under wet steam conditions are essential for the development of large-scale low-pressure steam turbines. This paper presents a novel fast response miniature heated probe for unsteady wet steam flow field measurements. The probe has a tip diameter of 2.5 mm, and a miniature heater cartridge ensures uncontaminated pressure taps from condensed water. The probe is capable of providing the unsteady flow angles, total and static pressure as well as the flow Mach number. The operating principle and calibration procedure are described in the current work and a detailed uncertainty analysis demonstrates the capability of the new probe to perform accurate flow field measurements under wet steam conditions. In order to exclude any data possibly corrupted by droplets’ impact or evaporation from the heating process, a filtering algorithm was developed and implemented in the post-processing phase of the measured data. In the last part of this paper the probe is used in an experimental steam turbine test facility and measurements are conducted at the inlet and exit of the last stage with an average wetness mass fraction of 8.0%.

  19. Non-rigid Reconstruction of Casting Process with Temperature Feature

    NASA Astrophysics Data System (ADS)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Ying; Wang, Lu

    2017-09-01

    Off-line reconstruction of rigid scenes has made great progress in the past decade. However, on-line reconstruction of non-rigid scenes is still a very challenging task. The casting process is a non-rigid reconstruction problem: it is a highly dynamic molding process lacking geometric features. In order to reconstruct the casting process robustly, an on-line fusion strategy is proposed for dynamic reconstruction of the casting process. Firstly, the geometric and flowing features of the casting are parameterized in the form of a TSDF (truncated signed distance field), a volumetric block; the parameterized casting guarantees real-time tracking and optimal deformation of the casting process. Secondly, the data structure of the volume grid is extended to carry a temperature value, and a temperature interpolation function is built to generate the temperature of each voxel. This data structure allows dynamic tracking of the casting's temperature during the deformation stages. Then, sparse RGB features are extracted from the casting scene to search for correspondences between the geometric representation and the depth constraint. The extracted color data guarantee robust tracking of the flowing motion of the casting. Finally, the optimal deformation of the target space is transformed into a nonlinear regularized variational optimization problem. This optimization step achieves smooth and optimal deformation of the casting process. The experimental results show that the proposed method can reconstruct the casting process robustly and reduce drift in the process of non-rigid reconstruction of the casting.
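    The voxel structure the abstract describes — a TSDF grid extended with a per-voxel temperature channel and an interpolation function — can be sketched as follows. This is a minimal illustration under assumed fusion rules (simple running averages); the class name, update rule, and parameters are hypothetical, not the authors' implementation.

    ```python
    import numpy as np

    class ThermalTSDFVolume:
        """Minimal TSDF volume whose voxels also carry a temperature value.
        Hypothetical sketch: the paper's actual fusion weights are not
        specified, so a running average is assumed here."""

        def __init__(self, shape=(128, 128, 128), voxel_size=0.01, trunc=0.04):
            self.voxel_size = voxel_size
            self.trunc = trunc                              # truncation distance
            self.tsdf = np.ones(shape, dtype=np.float32)    # clamped signed distance
            self.weight = np.zeros(shape, dtype=np.float32)
            self.temp = np.zeros(shape, dtype=np.float32)   # per-voxel temperature

        def integrate(self, idx, sdf, temperature):
            """Fuse one observation (signed distance + temperature) into voxel idx."""
            d = np.clip(sdf / self.trunc, -1.0, 1.0)
            w = self.weight[idx]
            self.tsdf[idx] = (self.tsdf[idx] * w + d) / (w + 1.0)
            self.temp[idx] = (self.temp[idx] * w + temperature) / (w + 1.0)
            self.weight[idx] = w + 1.0

        def temperature_at(self, p):
            """Trilinear interpolation of temperature at a continuous point p
            (in voxel units; assumes p is at least one voxel from the border)."""
            i0 = np.floor(p).astype(int)
            f = p - i0
            t = 0.0
            for dx in (0, 1):
                for dy in (0, 1):
                    for dz in (0, 1):
                        w = (f[0] if dx else 1 - f[0]) * \
                            (f[1] if dy else 1 - f[1]) * \
                            (f[2] if dz else 1 - f[2])
                        t += w * self.temp[i0[0] + dx, i0[1] + dy, i0[2] + dz]
            return t
    ```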

  20. The Development of Point Doppler Velocimeter Data Acquisition and Processing Software

    NASA Technical Reports Server (NTRS)

    Cavone, Angelo A.

    2008-01-01

    In order to develop efficient and quiet aircraft and validate Computational Fluid Dynamics predictions, aerodynamic researchers require flow parameter measurements to characterize flow fields about wind tunnel models and jet flows. A one-component Point Doppler Velocimeter (pDv), a non-intrusive, laser-based instrument, was constructed using a design/develop/test/validate/deploy approach. A primary component of the instrument is the software required for system control/management and data collection/reduction. This software, along with evaluation algorithms, advanced pDv from a laboratory curiosity to a production-level instrument. Simultaneous pDv and pitot-probe velocity measurements obtained at the centerline of a flow exiting a two-inch jet matched within 0.4%. Flow turbulence spectra obtained with pDv and a hot-wire detected, with equal dynamic range, the primary and secondary harmonics produced by the fan driving the flow. Novel hardware and software methods were developed, tested, and incorporated into the system to eliminate and/or minimize error sources and improve system reliability.

  1. Computed Tomography 3-D Imaging of the Metal Deformation Flow Path in Friction Stir Welding

    NASA Technical Reports Server (NTRS)

    Schneider, Judy; Beshears, Ronald; Nunes, Arthur C., Jr.

    2005-01-01

    In friction stir welding (FSW), a rotating threaded pin tool is inserted into a weld seam and literally stirs the edges of the seam together. To determine optimal processing parameters for producing a defect free weld, a better understanding of the resulting metal deformation flow path is required. Marker studies are the principal method of studying the metal deformation flow path around the FSW pin tool. In our study, we have used computed tomography (CT) scans to reveal the flow pattern of a lead wire embedded in a FSW weld seam. At the welding temperature of aluminum, the lead becomes molten and is carried with the macro-flow of the weld metal. By using CT images, a 3-dimensional (3D) image of the lead flow pattern can be reconstructed. CT imaging was found to be a convenient and comprehensive way of collecting and displaying tracer data. It marks an advance over previous more tedious and ambiguous radiographic/metallographic data collection methods.

  2. Plant uprooting by flow as a fatigue mechanical process

    NASA Astrophysics Data System (ADS)

    Perona, Paolo; Edmaier, Katharina; Crouzy, Benoît

    2015-04-01

    In river corridors, plant uprooting by flow mostly occurs as a delayed process in which flow erosion first causes root exposure until residual anchoring balances the hydrodynamic forces on the part of the plant that is exposed to the stream. Because a given plant must be exposed to the action of the stream for some time before uprooting occurs (the time-to-uprooting), this uprooting mechanism has been termed Type II, in contrast to Type I, which mostly affects early-stage seedlings and is rather instantaneous. In this work, we propose a stochastic framework that describes a (deterministic) mechanical fatigue process perturbed by a (stochastic) process noise, where collapse occurs after a given exposure time. We test the model using the experimental data of Edmaier (2014) and Edmaier et al. (submitted), who investigated vegetation uprooting by flow in the limit of low plant stem-to-sediment size ratio by inducing parallel riverbed erosion within an experimental flume. We first identify the proper timescale and lengthscale for rescaling the model. Then, we show that it describes well all the empirical cumulative distribution functions (cdf) of time-to-uprooting obtained under a constant riverbed erosion rate and assuming additive Gaussian process noise. By this means, we explore the level of determinism and stochasticity affecting the time-to-uprooting for Avena sativa in relation to root anchoring and flow drag forces. We eventually ascribe the overall dynamics of the Type II uprooting mechanism to the memory of the plant-soil system that is stored in root anchoring, and discuss related implications. References: Edmaier, K., Uprooting mechanisms of juvenile vegetation by flow erosion, Ph.D. thesis, EPFL, 2014. Edmaier, K., Crouzy, B. and P. Perona. Experimental characterization of vegetation uprooting by flow. J. of Geophys. Res. - Biogeosci., submitted.
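    The modeling idea — deterministic anchoring loss under constant riverbed erosion perturbed by additive Gaussian noise, with uprooting as a first-passage event — can be illustrated with a short Monte Carlo sketch. All parameter values below are illustrative assumptions, not the calibrated values of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def time_to_uprooting(anchor0=1.0, drag=0.2, erosion_rate=0.01,
                          noise_std=0.02, dt=1.0, t_max=10_000):
        """First-passage time at which anchoring capacity drops below the
        hydrodynamic drag: deterministic decay plus additive Gaussian
        process noise (all parameter values are assumptions)."""
        anchor, t = anchor0, 0.0
        while anchor > drag and t < t_max:
            anchor -= erosion_rate * dt + noise_std * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t

    # Empirical cdf of time-to-uprooting from many realizations,
    # analogous to the measured cdfs the model is fitted to.
    samples = np.sort([time_to_uprooting() for _ in range(1000)])
    cdf = np.arange(1, len(samples) + 1) / len(samples)
    ```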

  3. Ice Flow in the North East Greenland Ice Stream

    NASA Technical Reports Server (NTRS)

    Joughin, Ian; Kwok, Ron; Fahnestock, M.; MacAyeal, Doug

    1999-01-01

    Early observations with ERS-1 SAR image data revealed a large ice stream in North East Greenland (Fahnestock 1993). The ice stream has a number of the characteristics of the more closely studied ice streams in Antarctica, including its large size and gross geometry. The onset of rapid flow close to the ice divide and the evolution of its flow pattern, however, make this ice stream unique. These features can be seen in the balance velocities for the ice stream (Joughin 1997) and its outlets. The ice stream is identifiable for more than 700 km, making it much longer than any other flow feature in Greenland. Our research goals are to gain a greater understanding of the ice flow in the northeast Greenland ice stream and its outlet glaciers in order to assess their impact on the past, present, and future mass balance of the ice sheet. We will accomplish these goals using a combination of remotely sensed data and ice sheet models. We are using satellite radar interferometry data to produce complete maps of velocity and topography over the entire ice stream. We are in the process of developing methods to use these data in conjunction with existing ice sheet models similar to those that have been used to improve understanding of the mechanics of flow in Antarctic ice streams.

  4. Using Self Potential and Multiphase Flow Modeling to Optimize Groundwater Pumping

    NASA Astrophysics Data System (ADS)

    Gasperikova, E.; Zhang, Y.; Hubbard, S.

    2008-12-01

    Numerical and field hydrological and geophysical studies have been conducted to investigate the impact of groundwater pumping on near-river hydrology for a segment of the Russian River at the Wohler Site, California, which is a riverbed filtration system managed by the Sonoma County Water Agency. Groundwater pumping near streams can cause the creation of unsaturated regions and hence reduce the pumping capacity and change the flow paths. A three-dimensional multiphase flow and transport model can be calibrated to the temperature and water levels at monitoring wells, based on known pumping rates and the river stage. Streaming (self) potential (SP) is one of the electrokinetic processes that describes the coupled behavior of hydraulic and electrical flow within a porous medium, and it is easily measured on the surface or in boreholes. Observing temporal and spatial variations in geophysical signatures provides a powerful approach for monitoring changes in natural systems due to natural or forced (pumping) system perturbations. Geophysical and hydrological data were collected before, during, and after a pumping experiment at the Wohler Site. Using this monitoring dataset, we illustrate how loose coupling between hydrogeological and geophysical (SP) processes and data can be used to calibrate the flow model and to optimize pumping schedules as needed to guide sustainable water resource development.

  5. Application of photogrammetry to transforming PIV-acquired velocity fields to a moving-body coordinate system

    NASA Astrophysics Data System (ADS)

    Nikoueeyan, Pourya; Naughton, Jonathan

    2016-11-01

    Particle Image Velocimetry is a common choice for qualitative and quantitative characterization of unsteady flows associated with moving bodies (e.g. pitching and plunging airfoils). Characterizing the separated flow behavior is of great importance in understanding the flow physics and developing predictive reduced-order models. In most studies, the model under investigation moves within a fixed camera field-of-view, and vector fields are calculated based on this fixed coordinate system. To better characterize the genesis and evolution of vortical structures in these unsteady flows, the velocity fields need to be transformed into the moving-body frame of reference. Data converted to this coordinate system allow for a more detailed analysis of the flow field using advanced statistical tools. In this work, a pitching NACA0015 airfoil has been used to demonstrate the capability of photogrammetry for such an analysis. Photogrammetry has been used first to locate the airfoil within the image and then to determine an appropriate mask for processing the PIV data. The photogrammetry results are then further used to determine the rotation matrix that transforms the velocity fields to airfoil coordinates. Examples of the important capabilities such a process enables are discussed. P. Nikoueeyan is supported by a fellowship from the University of Wyoming's Engineering Initiative.
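    The coordinate transformation described above reduces, in two dimensions, to translating by the pivot location and rotating by the instantaneous pitch angle. A minimal numpy sketch, assuming 1-D arrays of vector positions and components and a known pitch angle per PIV frame (the authors' photogrammetry pipeline, which also estimates the angle and mask from the images, is not reproduced here):

    ```python
    import numpy as np

    def to_body_frame(x, y, u, v, pivot, alpha):
        """Rotate PIV positions and velocity vectors from the fixed camera
        frame into the frame of an airfoil pitched by angle `alpha` (radians)
        about `pivot`. Illustrative 2-D sketch, not the authors' code."""
        c, s = np.cos(alpha), np.sin(alpha)
        R = np.array([[c, s],
                      [-s, c]])                       # rotation by -alpha
        pos = np.stack([x - pivot[0], y - pivot[1]])  # shape (2, N)
        vel = np.stack([u, v])
        xb, yb = R @ pos                              # body-frame coordinates
        ub, vb = R @ vel                              # body-frame velocities
        return xb, yb, ub, vb
    ```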

  6. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
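    The marked-graph execution idea that ATAMM formalizes — an operation fires when every input edge holds a token, consuming one token per input and producing one per output — can be illustrated in a few lines. The four-node graph below is hypothetical, a sketch of the execution model rather than the ATAMM simulator itself.

    ```python
    # Edge -> token count; the feedback edge ('D', 'src') closes the
    # periodic schedule so the graph executes repeatedly.
    edges = {('src', 'A'): 1, ('A', 'B'): 0, ('A', 'C'): 0,
             ('B', 'D'): 0, ('C', 'D'): 0, ('D', 'src'): 0}
    nodes = {u for u, _ in edges} | {v for _, v in edges}

    def enabled(n):
        return all(t > 0 for (u, v), t in edges.items() if v == n)

    def fire(n):
        for (u, v) in edges:
            if v == n:
                edges[(u, v)] -= 1   # consume one token from each input edge
            if u == n:
                edges[(u, v)] += 1   # produce one token on each output edge

    for step in range(4):            # a few periodic iterations
        for n in sorted(nodes):
            if enabled(n):
                fire(n)
                print(f"step {step}: fired {n}")
    ```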

  7. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  8. Validation of Multi-Dimensional Stirling Engine Design Codes: Measurements in the 90-Degree Turn Test Section

    NASA Technical Reports Server (NTRS)

    Simon, Terrence W.; Adolfson, David

    2006-01-01

    The work presented herein was motivated largely by a desire to improve the understanding of oscillatory fluid mechanics inside a Stirling engine. To this end, a CFD project was undertaken at Cleveland State University with the goal of accurately predicting the fluid dynamics within an engine or engine component. Along with the CFD efforts, a code validation project was undertaken at the University of Minnesota. The material covered herein consists of four main parts. In section 1, an experimental investigation of a small-aspect-ratio impinging jet is discussed. Included in this discussion is a description of the test facilities and instrumentation. A presentation of the collected data is given and comments are made. Next, in section 2, a parallel experimental investigation is presented in which the same geometry as that of section 1 is used, but the flow conditions are changed from steady unidirectional flow to sinusoidally oscillating flow. In section 2, collected data are presented and comments are made. In section 3, a comparison is made between the results of sections 1 and 2; namely, sinusoidally oscillating flow results are compared to steady, unidirectional flow results from the same geometry. Finally, in section 4, a comparison is made between experimentally collected data (the main subject of this work) and CFD-generated results. Furthermore, in appendix A, an introductory description of the primary measurement tool used in the experimental process, the hot-wire anemometer, is given for the unfamiliar reader. The anemometer calibration procedure is described in appendix B. A portfolio of data reduction and data processing codes is provided in appendix C, and lastly, a DVD and a roadmap of its contents are provided in appendix D. 1.0 Unidirectional Flow Investigations. 1.1 Introduction. This unidirectional experimental program was undertaken to complement an oscillatory flow investigation conducted at the University of Minnesota. The oscillatory investigation is discussed thoroughly in section 2. We defer the description of the motivation behind these experiments until the introduction of section 2. The work discussed in this thesis began (chronologically) with oscillatory flow visualization experiments. It was decided that it would be valuable and important to investigate the flow under unidirectional conditions in the same geometry as that of the oscillatory experiments. The thought was that the unidirectional case would be less complicated to model with a CFD program (a moving boundary would be replaced with a steady-state boundary condition). Thus, a series of unidirectional experiments was carried out to capture the important features of the flow within the test section. The purpose of these experiments was to provide a data set for comparison to CFD-generated velocity fields. Hot-wire anemometry data were taken and flow visualization was conducted as a standard for code validation. The flow geometry was simple, such that it could be easily gridded in a CFD program. However, the geometry provided separation and transition zones, shear layers, and recirculation zones. These characteristics made the flow complex and challenging for CFD computation. We comment that the order of experiments that produced this report is as follows: experimental flow visualization under oscillatory flow conditions was carried out; this was followed by unidirectional flow visualization and hot-wire anemometry; finally, oscillatory hot-wire anemometry was conducted.
We present the results out of chronological order for the following reason: the unidirectional results are easier

  9. Application of laser anemometry in turbine engine research

    NASA Technical Reports Server (NTRS)

    Seasholtz, R. G.

    1983-01-01

    The application of laser anemometry to the study of flow fields in turbine engine components is reviewed. Included are discussions of optical configurations, seeding requirements, electronic signal processing, and data processing. Some typical results are presented along with a discussion of ongoing work.

  10. Application of laser anemometry in turbine engine research

    NASA Technical Reports Server (NTRS)

    Seasholtz, R. G.

    1982-01-01

    The application of laser anemometry to the study of flow fields in turbine engine components is reviewed. Included are discussions of optical configurations, seeding requirements, electronic signal processing, and data processing. Some typical results are presented along with a discussion of ongoing work.

  11. Guidance on Data Quality Assessment for Life Cycle Inventory ...

    EPA Pesticide Factsheets

    Data quality within Life Cycle Assessment (LCA) is a significant issue for the future support and development of LCA as a decision support tool and its wider adoption within industry. In response to current data quality standards such as the ISO 14000 series, various entities within the LCA community have developed different methodologies to address and communicate the data quality of Life Cycle Inventory (LCI) data. Despite advances in this field, the LCA community is still plagued by the lack of reproducible data quality results and documentation. To address these issues, US EPA has created this guidance in order to further support reproducible life cycle inventory data quality results and to inform users of the proper application of the US EPA supported data quality system. The work for this report was begun in December 2014 and completed as of April 2016. The updated data quality system includes a novel approach to the pedigree matrix by addressing data quality at the flow and the process level. Flow-level indicators address source reliability, temporal correlation, geographic correlation, technological correlation, and data sampling methods. The process-level indicators address the level of review the unit process has undergone and its completeness. This guidance is designed to be updatable as part of the LCA Research Center's continuing commitment to data quality advancements. Life cycle assessment is increasingly being used as a tool to identify areas of

  12. Shale Gas Well, Hydraulic Fracturing, and Formation Data to Support Modeling of Gas and Water Flow in Shale Formations

    NASA Astrophysics Data System (ADS)

    Edwards, Ryan W. J.; Celia, Michael A.

    2018-04-01

    The potential for shale gas development and hydraulic fracturing to cause subsurface water contamination has prompted a number of modeling studies to assess the risk. A significant impediment for conducting robust modeling is the lack of comprehensive publicly available information and data about the properties of shale formations, shale wells, the process of hydraulic fracturing, and properties of the hydraulic fractures. We have collated a substantial amount of these data that are relevant for modeling multiphase flow of water and gas in shale gas formations. We summarize these data and their sources in tabulated form.

  13. An evaluation of Dynamic TOPMODEL in natural and human-impacted catchments for low flow simulation

    NASA Astrophysics Data System (ADS)

    Coxon, Gemma; Freer, Jim; Lane, Rosanna; Musuuza, Jude; Woods, Ross; Wagener, Thorsten; Howden, Nicholas

    2017-04-01

    Models of catchment hydrology are essential tools for drought risk management, often providing input to water resource system models, aiding our understanding of low flow processes within catchments, and providing low flow simulations and predictions. However, simulating low flows is challenging, as hydrological systems often demonstrate threshold effects in connectivity, non-linear groundwater contributions, and a greater influence of anthropogenic modifications such as surface and ground water abstractions during low flow periods. These processes are typically not well represented in commonly used hydrological models due to knowledge, data, and model limitations. Hence, a better understanding of the natural and human processes that occur during low flows, how these are represented within models, and how they could be improved is required to be able to provide robust and reliable predictions of future drought events. The aim of this study is to assess the skill of Dynamic TOPMODEL during low flows for both natural and human-impacted catchments. Dynamic TOPMODEL was chosen for this study as it is able to explicitly characterise connectivity and fluxes across landscapes using hydrological response units (HRUs) while still maintaining flexibility in how spatially complex the model configuration is and which specific functions (e.g. abstractions or groundwater stores) are represented. We apply Dynamic TOPMODEL across the River Thames catchment using daily time series of observed rainfall and potential evapotranspiration data for the period 1999-2014, covering two major droughts in the Thames catchment. Significantly, to assess the impact of abstractions on low flows across the Thames catchment, we incorporate into Dynamic TOPMODEL functions to characterise over 3,500 monthly surface water and ground water abstractions covering the simulation period. We evaluate Dynamic TOPMODEL at over 90 gauging stations across the Thames catchment against multiple signatures of catchment low-flow behaviour in a 'limits of acceptability' GLUE framework. We investigate differences in model performance between signatures, between different low flow periods, and between natural and human-impacted catchments to better understand the ability of Dynamic TOPMODEL to represent low flows in space and time. Finally, we discuss future developments of Dynamic TOPMODEL to improve low flow simulation and the implications of these results for modelling hydrological extremes in natural and human-impacted catchments across the UK and the world.

  14. Data management for Computer-Aided Engineering (CAE)

    NASA Technical Reports Server (NTRS)

    Bryant, W. A.; Smith, M. R.

    1984-01-01

    Analysis of data flow through the design and manufacturing processes has established specific information management requirements and identified unique problems. The application of data management technology to the engineering/manufacturing environment addresses these problems. An overview of the IPAD prototype data base management system, representing a partial solution to these problems, is presented here.

  15. High-efficient Extraction of Drainage Networks from Digital Elevation Model Data Constrained by Enhanced Flow Enforcement from Known River Map

    NASA Astrophysics Data System (ADS)

    Wu, T.; Li, T.; Li, J.; Wang, G.

    2017-12-01

    Improved drainage network extraction can be achieved by flow enforcement, whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can sometimes cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method to facilitate an accurate and efficient process of drainage network extraction. Both the topology of the mapped hydrography and the initial landscape of the DEM are well preserved and fully utilized in the proposed method. An improved stream rasterization is achieved here, yielding a continuous, unambiguous, and stream-collision-free raster equivalent of the stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow-direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin, using DEMs with various resolutions. As indicated by the visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrence of topological errors, yielding very accurate watershed partition and channel delineation; (2) it ensures scale-consistent performance for DEMs of various resolutions; and (3) the entire extraction process is well designed to achieve great computational efficiency.
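    For contrast, the conventional elevation-based stream burning the abstract improves upon amounts to lowering DEM cells along the rasterized stream network, which is what can create the topological artifacts mentioned. A minimal sketch, assuming a boolean stream mask and an arbitrary illustrative burn depth:

    ```python
    import numpy as np

    def burn_streams(dem, stream_mask, drop=10.0):
        """Classic elevation-based stream burning: lower DEM cells lying on
        the rasterized stream network by a fixed drop so that derived flow
        paths follow the mapped hydrography. Minimal sketch of the
        conventional method the abstract contrasts with; `drop` is an
        arbitrary illustrative value."""
        burned = dem.astype(float).copy()
        burned[stream_mask] -= drop
        return burned
    ```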

  16. Application of process mining to assess the data quality of routinely collected time-based performance data sourced from electronic health records by validating process conformance.

    PubMed

    Perimal-Lewis, Lua; Teubner, David; Hakendorf, Paul; Horwood, Chris

    2016-12-01

    Effective and accurate use of routinely collected health data to produce Key Performance Indicator reporting is dependent on the underlying data quality. In this research, Process Mining methodology and tools were leveraged to assess the data quality of time-based Emergency Department data sourced from electronic health records. This research was done working closely with the domain experts to validate the process models. The hospital patient journey model was used to assess flow abnormalities which resulted from incorrect timestamp data used in time-based performance metrics. The research demonstrated process mining as a feasible methodology to assess data quality of time-based hospital performance metrics. The insight gained from this research enabled appropriate corrective actions to be put in place to address the data quality issues. © The Author(s) 2015.
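    A toy version of the kind of conformance check involved — flagging patient journeys whose timestamps violate the expected order of emergency department events — might look like the following. The event names and expected order are hypothetical, not the hospital's actual process model:

    ```python
    from datetime import datetime

    # Hypothetical expected order of ED patient-journey events.
    EXPECTED_ORDER = ["arrival", "triage", "seen_by_doctor", "disposition", "departure"]

    def nonconforming(journey):
        """journey: dict mapping event name -> ISO timestamp string.
        Returns True if any recorded events occur out of expected order."""
        times = [datetime.fromisoformat(journey[e])
                 for e in EXPECTED_ORDER if e in journey]
        return any(t2 < t1 for t1, t2 in zip(times, times[1:]))

    patient = {"arrival": "2016-01-05T10:02:00",
               "triage": "2016-01-05T09:55:00",     # recorded before arrival
               "departure": "2016-01-05T14:30:00"}
    print(nonconforming(patient))   # True -> timestamp data quality issue
    ```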

  17. Spatial structure and scaling of macropores in hydrological process at small catchment scale

    NASA Astrophysics Data System (ADS)

    Silasari, Rasmiaditya; Broer, Martine; Blöschl, Günter

    2013-04-01

    During rainfall events, the formation of overland flow can occur under the circumstances of saturation excess and/or infiltration excess. These conditions are affected by the soil moisture state, which represents the soil water content in micropores and macropores. Macropores act as pathways for preferential flows and have been widely studied locally. However, very little is known about the spatial structure and conductivity of macropores and other flow characteristics at the catchment scale. This study will analyze these characteristics to better understand their importance in hydrological processes. The research will be conducted in the Petzenkirchen Hydrological Open Air Laboratory (HOAL), a 64 ha catchment located 100 km west of Vienna. The land use is divided between arable land (87%), pasture (5%), forest (6%) and paved surfaces (2%). Video cameras will be installed on an agricultural field to monitor the overland flow pattern during rainfall events. A wireless soil moisture network is also installed within the monitored area. These field data will be combined to analyze the soil moisture state and the corresponding surface runoff occurrence. The variability of the macropore spatial structure of the observed area (field scale) will then be assessed based on the topography and soil data. Soil characteristics will be supported with laboratory experiments on soil matrix flow to obtain proper definitions of the spatial structure of macropores and its variability. A coupled, physically based, distributed model of surface and subsurface flow will be used to simulate the variability of the macropore spatial structure and its effect on flow behaviour. This model will be validated by simulating the observed rainfall events. Upscaling from field scale to catchment scale will be done to understand the effect of macropore variability at larger scales by applying spatial stochastic methods. The first phase of this study is the installation and monitoring configuration of video cameras and soil moisture monitoring equipment to obtain initial data on the relationship between overland flow occurrence and soil moisture state.

  18. Undergraduate Laboratory on a Turbulent Impinging Jet

    NASA Astrophysics Data System (ADS)

    Ivanosky, Arnaud; Brezzard, Etienne; van Poppel, Bret; Benson, Michael

    2017-11-01

    An undergraduate thermal sciences laboratory exercise that includes both experimental fluid mechanics and heat transfer measurements of an impinging jet is presented. The flow field is measured using magnetic resonance velocimetry (MRV) of a water flow, while IR thermography is used in the heat transfer testing. Flow Reynolds numbers for both the heat transfer and fluid mechanics tests range from 20,000 to 50,000 based on the jet diameter for a fully turbulent flow condition, with target surface temperatures in the heat transfer test reaching a maximum of approximately 50 Kelvin. The heat transfer target surface is subject to a measured uniform Joule heat flux, a well-defined boundary condition that allows comparison to existing correlations. The MRV generates a 3-component, 3-dimensional data set, while the IR thermography provides a 2-dimensional heat transfer coefficient (or Nusselt number) map. These data sets can be post-processed and compared to existing correlations to verify data quality, and the sets can be juxtaposed to understand how flow features drive heat transfer. The laboratory setup, data acquisition, and analysis procedures are described for the laboratory experience, which can be incorporated into fluid mechanics, experimental methods, and heat transfer courses.

  19. Comparison of numerical simulation and experimental data for steam-in-place sterilization

    NASA Technical Reports Server (NTRS)

    Young, Jack H.; Lasher, William C.

    1993-01-01

    A complex problem involving convective flow of a binary mixture containing a condensable vapor and noncondensable gas in a partially enclosed chamber was modelled and results compared to transient experimental values. The finite element model successfully predicted transport processes in dead-ended tubes with inside diameters of 0.4 to 1.0 cm. When buoyancy driven convective flow was dominant, temperature and mixture compositions agreed with experimental data. Data from 0.4 cm tubes indicate diffusion to be the primary air removal method in small diameter tubes and the diffusivity value in the model to be too large.

  20. 4D-SFM Photogrammetry for Monitoring Sediment Dynamics in a Debris-Flow Catchment: Software Testing and Results Comparison

    NASA Astrophysics Data System (ADS)

    Cucchiaro, S.; Maset, E.; Fusiello, A.; Cazorzi, F.

    2018-05-01

    In recent years, the combination of Structure-from-Motion (SfM) algorithms and UAV-based aerial images has revolutionised 3D topographic surveys for natural environment monitoring, offering low-cost, fast, and high-quality data acquisition and processing. Continuous monitoring of morphological changes through multi-temporal (4D) SfM surveys makes it possible to analyse torrent dynamics even in complex topographic environments like debris-flow catchments, provided that appropriate tools and procedures are employed in the data processing steps. In this work we test two different software packages (3DF Zephyr Aerial and Agisoft Photoscan) on a dataset composed of both UAV and terrestrial images acquired on a debris-flow reach (Moscardo torrent, North-eastern Italian Alps). Unlike other papers in the literature, we evaluate the results not only on the raw point clouds generated by the Structure-from-Motion and Multi-View Stereo algorithms, but also on the Digital Terrain Models (DTMs) created after post-processing. Outcomes show differences between the DTMs that can be considered irrelevant for the geomorphological phenomena under analysis. This study confirms that SfM photogrammetry can be a valuable tool for monitoring sediment dynamics, but accurate point cloud post-processing is required to reliably localize geomorphological changes.

  1. Mathematical modeling of flow in the working part of an acousto-convective drying system

    NASA Astrophysics Data System (ADS)

    Kravchenko, A. S.; Zhilin, A. A.; Fedorova, N. N.

    2018-03-01

    The objective of this study was to numerically simulate the nonstationary processes occurring in the acoustic-convective dryer (ACD) channel. In the present work, the problem was solved numerically in a three-dimensional formulation taking into account all features of the ACD duct in real geometry. The processes occurring in the ACD duct were simulated using the ANSYS Fluent 18.0 software. The numerical experiments provided an aggregate picture of the working gas flow in the ACD duct with the features near the subsonic nozzle and the cavity. The results of the numerical calculations were compared with experimental data. The best agreement with the experimental data was obtained for the viscosity model neglecting turbulent effects.

  2. [Family Health Program implementation in municipalities in Mato Grosso State, Brazil].

    PubMed

    Canesqui, Ana Maria; Spinelli, Maria Angélica do Santos

    2008-04-01

    This article analyzes some key aspects of the implementation of the Family Health Program (FHP): results; conditions and institutional mechanisms; flow and regularity of funding; organizational structures; and human resources availability and training. The study was conducted in seven municipalities (counties) in the State of Mato Grosso, Brazil, and used secondary data as well as primary data from interviews with different stakeholders. The research design was evaluative, using a quantitative/qualitative analysis. The results showed varying stages in the implementation process, different FHP models, adaptation of organizational structures, a high level of human resources availability (except for nurse assistants), availability of financial resources with some difficulties in their flow, and other institutional factors that hinder or facilitate the micro-implementation process in the municipalities.

  3. Numerical Modeling of the Transient Chilldown Process of a Cryogenic Propellant Transfer Line

    NASA Technical Reports Server (NTRS)

    Hartwig, Jason; Vera, Jerry

    2015-01-01

    Before cryogenic fuel depots can be fully realized, efficient methods with which to chill down the spacecraft transfer line and receiver tank are required. This paper presents numerical modeling of the chilldown of a liquid hydrogen tank-to-tank propellant transfer line using the Generalized Fluid System Simulation Program (GFSSP). To compare with data from recently concluded turbulent LH2 chilldown experiments, seven different cases were run across a range of inlet liquid temperatures and mass flow rates. Both trickle and pulse chilldown methods were simulated. The GFSSP model qualitatively matches external skin-mounted temperature readings, but large differences are shown between measured and predicted internal stream temperatures. Discrepancies are attributed to the simplified model correlation used to compute two-phase flow boiling heat transfer. Flow visualization from testing shows that the initial bottoming-out of skin-mounted sensors corresponds to annular flow, but that considerable time is required for the stream sensor to achieve steady state as the system moves through annular, churn, and bubbly flow. The GFSSP model adequately tracks trends in the data, but further work is needed to refine the two-phase flow modeling to better match observed test data.

  4. Flow-Velocity, Water-Temperature and Conductivity Data Collected in Shark River Slough, Everglades National Park, During 1999-2000 and 2000-2001 Wet Seasons

    USGS Publications Warehouse

    Riscassi, Ami L.; Schaffranek, R.W.

    2002-01-01

    A project within the U.S. Geological Survey Place-Based Studies Program is focused on investigation of "Forcing Effects on Flow Structure in Vegetated Wetlands of the Everglades." Data-collection efforts conducted within this project at three locations in Shark River Slough, Everglades National Park, during the 1999-2000 and 2000-2001 wet seasons are described in this report. Techniques for collecting and processing the data and summaries of daily mean flow-velocity, water-temperature, and conductivity data are presented. The quality-checked and edited data have been compiled and stored on the USGS South Florida Information Access website.

  5. Therapy of Prostate Cancer Using a Human Antibody Targeting the Type 1 Insulin-Like Growth Factor Receptor (IGF-IR)

    DTIC Science & Technology

    2009-09-01

    euthanized, tumors harvested and portions processed for IHC, Western blot, flow cytometry, culture, and RNA analysis. If not enough tissue is available...temperature for 60 minutes. Samples were analyzed by flow cytometry using a BD FACScan. Data were analyzed with CellQuestPRO software. Evaluation of BrdUrd...were approved by the University of Washington Institutional Animal Care and Use Committee (IACUC). Flow cytometry. To measure tumor IGF-IR expression

  6. [Development of automatic urine monitoring system].

    PubMed

    Wei, Liang; Li, Yongqin; Chen, Bihua

    2014-03-01

    An automatic urine monitoring system is presented to replace manual operation. The system is composed of a flow sensor, an MSP430F149 single-chip microcomputer, a human-computer interaction module, an LCD module, a clock module, and a memory module. The urine volume signal is captured as the urine flows through the flow sensor and is then displayed on the LCD after data processing. The experimental results suggest that the design of the monitor provides high stability, accurate measurement, and good real-time performance, and meets the demands of clinical application.

  7. A pilot study of river flow prediction in urban area based on phase space reconstruction

    NASA Astrophysics Data System (ADS)

    Adenan, Nur Hamiza; Hamid, Nor Zila Abd; Mohamed, Zulkifley; Noorani, Mohd Salmi Md

    2017-08-01

    River flow prediction is significantly related to urban hydrology impacts and can provide information to solve problems such as flooding in urban areas. The daily river flow of the Klang River, Malaysia, was chosen to be forecasted in this pilot study, which is based on phase space reconstruction. The reconstruction of phase space transforms a single variable of river flow data into an m-dimensional phase space, in which the dimension (m) is based on the optimal values from Cao's method. The results from the reconstruction of phase space were then used in the forecasting process using the local linear approximation method. From our investigation, river flow at the Klang River is chaotic based on the analysis from Cao's method. The overall results provide a good value of the correlation coefficient. The value of the correlation coefficient is acceptable since the area of the case study is influenced by many factors. Therefore, this pilot study may be proposed to forecast daily river flow data with the purpose of providing information about the flow of the river system in urban areas.
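    The two steps the abstract describes — time-delay embedding of the scalar river-flow series into an m-dimensional phase space, followed by local prediction from nearby states — can be sketched as below. This is a generic local-average variant under assumed parameters, not the authors' exact local linear scheme.

    ```python
    import numpy as np

    def delay_embed(series, m, tau=1):
        """Time-delay embedding of a scalar series into m dimensions
        (in the study, m comes from Cao's method)."""
        s = np.asarray(series, dtype=float)
        n = len(s) - (m - 1) * tau
        return np.column_stack([s[i * tau: i * tau + n] for i in range(m)])

    def local_predict(series, m, k=10, tau=1):
        """One-step-ahead forecast from the average evolution of the k
        nearest neighbours of the current state -- a sketch in the spirit
        of local linear approximation, not the authors' exact scheme."""
        s = np.asarray(series, dtype=float)
        X = delay_embed(s, m, tau)
        current, history = X[-1], X[:-1]
        nn = np.argsort(np.linalg.norm(history - current, axis=1))[:k]
        # the state starting at index i evolves to the value at i + (m-1)*tau + 1
        return s[nn + (m - 1) * tau + 1].mean()
    ```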

  8. Using Selective Drainage Methods to Extract Continuous Surface Flow from 1-Meter Lidar-Derived Digital Elevation Data

    USGS Publications Warehouse

    Poppenga, Sandra K.; Worstell, Bruce B.; Stoker, Jason M.; Greenlee, Susan K.

    2010-01-01

    Digital elevation data commonly are used to extract surface flow features. One source for high-resolution elevation data is light detection and ranging (lidar). Lidar can capture a vast amount of topographic detail because of its fine-scale ability to digitally capture the surface of the earth. Because elevation is a key factor in extracting surface flow features, high-resolution lidar-derived digital elevation models (DEMs) provide the detail needed to consistently integrate hydrography with elevation, land cover, structures, and other geospatial features. The U.S. Geological Survey has developed selective drainage methods to extract continuous surface flow from high-resolution lidar-derived digital elevation data. The lidar-derived continuous surface flow network contains valuable information for water resource management involving flood hazard mapping, flood inundation, and coastal erosion. DEMs used in hydrologic applications typically are processed to remove depressions by filling them. High-resolution DEMs derived from lidar can capture much more detail of the land surface than coarser elevation data. Therefore, high-resolution DEMs contain more depressions because of obstructions such as roads, railroads, and other elevated structures. The filling of these depressions can significantly and adversely affect the DEM-derived surface flow routing and terrain characteristics. In this report, selective drainage methods that modify the elevation surface to drain a depression through an obstruction are presented. If such obstructions are not removed from the elevation data, the filling of depressions to create continuous surface flow can cause the flow to spill over an obstruction in the wrong location. Using this modified elevation surface improves the quality of derived surface flow and retains more of the true surface characteristics by correcting large filled depressions. A reliable flow surface is necessary for deriving a consistently connected drainage network, which is important in understanding surface water movement and developing applications for surface water runoff, flood inundation, and erosion. Improved methods are needed to extract continuous surface flow features from high-resolution elevation data based on lidar.
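    The standard depression-filling step whose side effects motivate the selective drainage methods is usually implemented as a priority-flood: grow inward from the DEM boundary, always expanding from the lowest frontier cell and raising any lower neighbour to that level. A minimal sketch follows (the selective breaching of obstructions described in the report is a separate step not shown here):

    ```python
    import heapq
    import numpy as np

    def priority_flood_fill(dem):
        """Standard priority-flood depression filling on a 2-D DEM array."""
        filled = dem.astype(float).copy()
        rows, cols = filled.shape
        visited = np.zeros(filled.shape, bool)
        heap = []
        # Seed the frontier with all boundary cells.
        for r in range(rows):
            for c in range(cols):
                if r in (0, rows - 1) or c in (0, cols - 1):
                    heapq.heappush(heap, (filled[r, c], r, c))
                    visited[r, c] = True
        # Always expand from the lowest frontier cell; raise lower neighbours.
        while heap:
            z, r, c = heapq.heappop(heap)
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1),
                           (-1, -1), (-1, 1), (1, -1), (1, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                    visited[nr, nc] = True
                    filled[nr, nc] = max(filled[nr, nc], z)
                    heapq.heappush(heap, (filled[nr, nc], nr, nc))
        return filled
    ```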

  9. Spatio-temporal changes in river bank mass failures in the Lockyer Valley, Queensland, Australia

    NASA Astrophysics Data System (ADS)

    Thompson, Chris; Croke, Jacky; Grove, James; Khanal, Giri

    2013-06-01

    Wet-flow river bank failure processes are poorly understood relative to the more commonly studied processes of fluvial entrainment and gravity-induced mass failures. Using high-resolution topographic data (LiDAR) and near-coincident aerial photography, this study documents the downstream distribution of river bank mass failures which occurred as a result of a catastrophic flood in the Lockyer Valley in January 2011. In addition, this distribution is compared with wet-flow mass failure features from previous large floods. The downstream analysis of these two temporal data sets indicated that the failures occur across a range of river lengths, catchment areas, bank heights, and bank angles, and do not appear to be scale-dependent or spatially restricted to certain downstream zones. The downstream trends of each bank failure distribution show limited spatial overlap, with only 17% of wet flows common to both distributions. The modification of these features during the catastrophic flood of January 2011 also indicated that such features tend to form at some 'optimum' shape and show limited evidence of subsequent enlargement, even when flow and energy conditions within the banks and channel were high. Elevation changes indicate that such features show evidence of infilling during subsequent floods. The preservation of these features in the landscape for a period of at least 150 years suggests that the seepage processes dominant in their initial formation have a limited role in their continuing enlargement over time. There is no evidence of gully extension or headwall retreat. It is estimated that at least 12 inundation events would be required to fill these failures, based on the average net elevation change recorded for the 2011 event. Existing conceptual models of downstream bank erosion process zones may need to consider a wider array of mass failure processes to accommodate wet-flow failures.

  10. A water-powered Energy Harvesting system with Bluetooth Low Energy interface

    NASA Astrophysics Data System (ADS)

    Kroener, M.; Allinger, K.; Berger, M.; Grether, E.; Wieland, F.; Heller, S.; Woias, P.

    2016-11-01

    This paper reports the design and testing of a water turbine generator system for typical flow rates in domestic applications, with integrated power management and a Bluetooth low energy (BLE) based RF data transmission interface. It is based on a commercially available low-cost hydro generator. The generator is built into a housing with optimized, reduced fluidic resistance to enable operation with flow rates as low as 6 l/min. The power management combines rectification, buffering, defined start-up, and circuit protection. An MSP430FR5949 microcontroller is used for data acquisition and processing. The data are transmitted via RF, using a Bluegiga BLE112 module in advertisement mode, to a PC where the measured flow rate is stored and displayed. The transmission rate of the wireless sensor node (WSN) is set to 1 Hz if enough power is available, which is the case for flow rates above 5.5 l/min. The electronics' power demand is calculated to be 340 μW on average, while the generator is capable of delivering more than 200 mW for flow rates above 15 l/min.
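    Taking the figures quoted above at face value, a quick back-of-envelope check shows why 1 Hz advertisement is sustainable once the generator is running:

    ```python
    # Back-of-envelope check using the figures quoted above: average
    # electronics demand of 340 uW versus >200 mW harvested above 15 l/min.
    p_demand = 340e-6    # W, average demand of the sensor node electronics
    p_harvest = 200e-3   # W, generator output for flow >= 15 l/min
    print(f"power margin: ~{p_harvest / p_demand:.0f}x")   # roughly 590x
    ```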

  11. Sedimentary processes of the lower Monterey Fan channel and channel-mouth lobe

    USGS Publications Warehouse

    Klaucke, I.; Masson, D.G.; Kenyon, Neil H.; Gardner, J.V.

    2004-01-01

    The distribution of deposits, sediment transport pathways and processes on the lower Monterey Fan channel and channel-mouth lobe (CML) are studied through the integration of GLORIA and TOBI sidescan sonar data with 7-kHz subbottom profiler records and sediment cores for ground-truthing. The lower Monterey channel is characterised by an up to 30-m-deep channel with poorly developed levees and alternating muddy and silty muddy overbank deposits. The channel is discontinuous, disappearing where gradients are less than about 1:350. Ground-truthing of the large CML shows that the entire CML is characterised by widespread deposits of generally fine sand, with coarser sand at the base of turbidites. Sand is particularly concentrated in finger-like areas of low-backscatter intensity and is interpreted as the result of non-turbulent sediment-gravity flows depositing metres thick massive, fine sand. TOBI sidescan sonar data reveal recent erosional features in the form of scours, secondary channels, large flow slides, and trains of blocks at the distal end of the CML. Erosion is probably related to increasing gradient as the CML approaches Murray Fracture zone and to differential loading of sandy submarine fan deposits onto pelagic clays. Reworking of older flow slides by sediment transport processes on the lobe produces trains of blocks that are several metres in diameter and aligned parallel to the flow direction. © 2004 Elsevier B.V. All rights reserved.

  12. Impact of flow routing on catchment area calculations, slope estimates, and numerical simulations of landscape development

    NASA Astrophysics Data System (ADS)

    Shelef, Eitan; Hilley, George E.

    2013-12-01

    Flow routing across real or modeled topography determines the modeled discharge and wetness index and thus plays a central role in predicting surface lowering rate, runoff generation, likelihood of slope failure, and the transition from hillslope to channel-forming processes. In this contribution, we compare commonly used flow-routing rules, as well as a new routing rule, to commonly used benchmarks. We also compare results for different routing rules using Airborne Laser Swath Mapping (ALSM) topography to explore the impact of different flow-routing schemes on inferring the generation of saturation overland flow and the transition between hillslope and channel-forming processes, as well as on the location of saturation overland flow. Finally, we examine the impact of flow-routing and slope-calculation rules on modeled topography produced by Geomorphic Transport Law (GTL)-based simulations. We found that different rules produce substantive differences in the structure of the modeled topography and in flow patterns over ALSM data. Our results highlight the impact of flow-routing and slope-calculation rules on modeled topography, as well as on calculated geomorphic metrics across real landscapes. As such, studies that use a variety of routing rules to analyze and simulate topography are necessary to determine those aspects that most strongly depend on the chosen routing rule.
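    As a concrete reference point, the simplest and most common of the routing rules compared in such studies is D8 single-flow-direction, in which each cell routes all of its flow to the steepest-descent neighbour. A minimal sketch (illustrative only; the study's new rule and multiple-flow-direction variants distribute flow differently):

    ```python
    import numpy as np

    # Eight neighbour offsets of the D8 scheme.
    OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]

    def d8_directions(dem, cellsize=1.0):
        """Index (0-7) of the steepest-descent neighbour of each cell;
        -1 marks pits and flat cells with no downslope neighbour."""
        rows, cols = dem.shape
        direction = -np.ones(dem.shape, dtype=int)
        for r in range(rows):
            for c in range(cols):
                best_slope, best_k = 0.0, -1
                for k, (dr, dc) in enumerate(OFFSETS):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        dist = cellsize * np.hypot(dr, dc)
                        slope = (dem[r, c] - dem[nr, nc]) / dist
                        if slope > best_slope:
                            best_slope, best_k = slope, k
                direction[r, c] = best_k
        return direction
    ```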

  13. Rayleigh Scattering Diagnostic for Simultaneous Measurements of Dynamic Density and Velocity

    NASA Technical Reports Server (NTRS)

    Seasholtz, Richard G.; Panda, J.

    2000-01-01

    A flow diagnostic technique based on the molecular Rayleigh scattering of laser light is used to obtain dynamic density and velocity data in turbulent flows. The technique is based on analyzing the Rayleigh scattered light with a Fabry-Perot interferometer and recording information about the interference pattern with a multiple-anode photomultiplier tube (PMT). An artificial neural network is used to process the signals from the PMT to recover the velocity time history, which is then used to calculate the velocity power spectrum. The technique is illustrated using simulated data. The results of an experiment to measure the velocity power spectrum in a low-speed (100 m/sec) flow are also presented.

  14. Field calibration of orifice meters for natural gas flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ting, V.C.; Shen, J.J.S.

    1989-03-01

    This paper presents the orifice calibration results for nominal 15.24, 10.16, and 5.08-cm (6, 4, 2-in.) orifice meters conducted at Chevron's Sand Hills natural gas flow measurement facility in Crane, Texas. Over 200 test runs were collected in a field environment to study the accuracy of the orifice meters. Data were obtained at beta ratios ranging from 0.12 to 0.74 at the nominal conditions of 4576 kPa and 27 °C (650 psig and 80 °F) with a 0.57-specific-gravity processed, pipeline-quality natural gas. A bank of critical flow nozzles was used as the flow rate proving device to calibrate the orifice meters. Orifice discharge coefficients were computed with the ANSI/API 2530-1985 (AGA3) and ISO 5167/ASME MFC-3M-1984 equations for every set of data points. With orifice bore Reynolds numbers ranging from 1 to 9 million, the Sand Hills calibration data bridge the gap between the Ohio State water data at low Reynolds numbers and Chevron's high-Reynolds-number test data taken at a large test facility in Venice, Louisiana. The test results also successfully demonstrate that orifice meters can be accurately proved with critical flow nozzles under realistic field conditions.

  15. Analysis of fluid fuel flow to the neutron kinetics on molten salt reactor FUJI-12

    NASA Astrophysics Data System (ADS)

    Aji, Indarta Kuncoro; Waris, Abdul; Permana, Sidik

    2015-09-01

    A molten salt reactor operates with flowing molten fuel salt, which means that its neutron kinetics are affected by the flow rate of the fuel. This research analyzes the effect of varying fuel velocity in the MSR type FUJI-12, with a fuel composition of LiF-BeF2-ThF4-233UF4 at 71.78%, 16%, 11.86%, and 0.36%, respectively. The calculations in this study are performed numerically by the SOR and finite-difference methods, implemented in the C programming language. The reactivity, neutron flux, and macroscopic fission cross-section data for the calculations are obtained from SRAC-CITATION (Standard thermal Reactor Analysis Code) and the JENDL-4.0 data library. The SRAC system was designed and developed by JAEA (Japan Atomic Energy Agency). This study aims to observe the effect of the fuel salt velocity on the power generated from neutron precursors in the fourth year of reactor operation (the last critical condition), with an effective multiplication factor of 1.0155.
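    The numerical machinery mentioned — successive over-relaxation (SOR) applied to finite-difference equations — is generic; a minimal Python sketch on a 1-D Poisson problem conveys the idea (the actual solver is written in C and couples neutron-precursor transport to the fuel flow, which is not reproduced here):

    ```python
    import numpy as np

    def sor_poisson_1d(source, h, omega=1.5, tol=1e-8, max_iter=50_000):
        """SOR for the 1-D finite-difference Poisson problem -phi'' = source
        with phi = 0 at both ends. Generic sketch of the SOR + finite-
        difference machinery, not the study's neutron-kinetics solver."""
        n = len(source)
        phi = np.zeros(n)
        for _ in range(max_iter):
            max_delta = 0.0
            for i in range(1, n - 1):
                # Gauss-Seidel value for node i, then over-relax by omega.
                gs = 0.5 * (phi[i - 1] + phi[i + 1] + h * h * source[i])
                delta = omega * (gs - phi[i])
                phi[i] += delta
                max_delta = max(max_delta, abs(delta))
            if max_delta < tol:
                break
        return phi
    ```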

  16. Laser Speckle Imaging of Cerebral Blood Flow

    NASA Astrophysics Data System (ADS)

    Luo, Qingming; Jiang, Chao; Li, Pengcheng; Cheng, Haiying; Wang, Zhen; Wang, Zheng; Tuchin, Valery V.

    Monitoring the spatio-temporal characteristics of cerebral blood flow (CBF) is crucial for studying the normal and pathophysiologic conditions of brain metabolism. By illuminating the cortex with laser light and imaging the resulting speckle pattern, relative CBF images with tens of microns spatial and millisecond temporal resolution can be obtained. In this chapter, a laser speckle imaging (LSI) method for monitoring dynamic, high-resolution CBF is introduced. To improve the spatial resolution of current LSI, a modified LSI method is proposed. To accelerate the speed of data processing, three LSI data processing frameworks based on graphics processing unit (GPU), digital signal processor (DSP), and field-programmable gate array (FPGA) are also presented. Applications for detecting the changes in local CBF induced by sensory stimulation and thermal stimulation, the influence of a chemical agent on CBF, and the influence of acute hyperglycemia following cortical spreading depression on CBF are given.
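    The core LSI computation is the spatial speckle contrast K = σ/⟨I⟩ over a small sliding window, with relative flow often taken as proportional to 1/K². A minimal CPU sketch using scipy is shown below; the chapter's GPU/DSP/FPGA frameworks accelerate exactly this kind of windowed statistic, and the window size here is an illustrative assumption.

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def speckle_contrast(image, win=7):
        """Spatial speckle contrast K = sigma / mean over a sliding
        win x win window of a raw speckle image."""
        img = image.astype(float)
        kernel = np.ones((win, win)) / (win * win)
        mean = convolve2d(img, kernel, mode='same', boundary='symm')
        mean_sq = convolve2d(img * img, kernel, mode='same', boundary='symm')
        var = np.maximum(mean_sq - mean * mean, 0.0)   # clamp numerical negatives
        return np.sqrt(var) / np.maximum(mean, 1e-12)
    ```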

  17. Control system of water flow and casting speed in continuous steel casting

    NASA Astrophysics Data System (ADS)

    Tirian, G. O.; Gheorghiu, C. A.; Hepuţ, T.; Chioncel, C.

    2017-05-01

    This paper presents the results of research based on real data taken from the installation process at ArcelorMittal Hunedoara. Using Matlab Simulink, an intelligent system is built that takes in data from the process and makes real-time adjustments to the cooling-water flow rate and the casting speed, eliminating fissures in the cast material during the secondary cooling of the steel. The Matlab Simulink simulation environment allowed qualitative analysis of various real-world situations. Thus, compared to the old approach to the problem of cracks forming in the crust of the steel during continuous casting, the method proposed and developed here brings safety and precision to this complex process, removing any doubt about the existence or non-existence of cracks and taking the necessary steps to prevent and correct them.

  18. CoreFlow: a computational platform for integration, analysis and modeling of complex biological data.

    PubMed

    Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen

    2014-04-04

    A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-source software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously to shorten the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be transferred to and from other computational environments for debugging or faster processing. This focus on 'on the fly' analysis sets CoreFlow apart from other workflow applications that require wrapping of scripts into particular formats and development of specific user interfaces. Importantly, current and future releases of data analysis scripts in CoreFlow format will be of widespread benefit to the proteomics community, not only for uptake and use in individual labs, but to enable full scrutiny of all analysis steps, thus increasing experimental reproducibility and decreasing errors. This article is part of a Special Issue entitled: Can Proteomics Fill the Gap Between Genomics and Phenotypes? Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Computational modeling of river flow using bathymetry collected with an experimental, water-penetrating, green LiDAR

    NASA Astrophysics Data System (ADS)

    Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.

    2009-12-01

    Airborne bathymetric Light Detection and Ranging (LiDAR) systems designed for coastal and marine surveys are increasingly being deployed in fluvial environments. While the adaptation of this technology to rivers and streams would appear to be straightforward, technical challenges remain in achieving high levels of vertical accuracy and precision when mapping bathymetry in shallow fluvial settings. Collectively, these mapping errors have a direct bearing on hydraulic model predictions made using these data. We compared channel surveys conducted along the Platte River, Nebraska, and the Trinity River, California, using conventional ground-based methods with those made with the hybrid topographic/bathymetric Experimental Advanced Airborne Research LiDAR (EAARL). In the turbid and braided Platte River, a bathymetric-waveform processing algorithm was shown to enhance the definition of thalweg channels over a more simplified, first-surface waveform processing algorithm. Consequently, flow simulations using data processed with the shallow-bathymetric algorithm improved the prediction of wetted area relative to the first-surface algorithm, when compared to the wetted area in concurrent aerial imagery. However, when compared to flow modeling using conventionally collected data, the inundation extent was overpredicted with the EAARL topography due to the higher bed elevations measured by the LiDAR. In the relatively clear, meandering Trinity River, bathymetric processing algorithms were capable of defining a 3-meter-deep pool. However, a similar bias in depth measurement was observed, with the LiDAR placing the elevation of the river bottom above its actual position, resulting in a predicted water surface higher than that measured by field data. This contribution addresses the challenge of making bathymetric measurements with the EAARL under the different environmental conditions encountered in fluvial settings, explores technical issues related to reliably detecting the water surface and river bottom, and illustrates the impact of using LiDAR data and current processing techniques to produce above- and below-water topographic surfaces for hydraulic modeling and habitat applications.

  20. 45 CFR 205.37 - Responsibilities of the Administration for Children and Families (ACF).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Processing and Information Retrieval System Guide. The initial advance automatic data processing planning... description of the proposed statewide management system, including the description of information flows, input..., review, assess, and inspect the planning, design, and operation of, statewide management information...

  1. 45 CFR 205.37 - Responsibilities of the Administration for Children and Families (ACF).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Application Processing and Information Retrieval System Guide. The initial advance automatic data processing... description of the proposed statewide management system, including the description of information flows, input..., review, assess, and inspect the planning, design, and operation of, statewide management information...

  2. 45 CFR 205.37 - Responsibilities of the Administration for Children and Families (ACF).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Application Processing and Information Retrieval System Guide. The initial advance automatic data processing... description of the proposed statewide management system, including the description of information flows, input..., review, assess, and inspect the planning, design, and operation of, statewide management information...

  3. Preface

    NASA Astrophysics Data System (ADS)

    Faybishenko, Boris; Witherspoon, Paul A.; Gale, John

    How to characterize fluid flow, heat, and chemical transport in geologic media remains a central challenge for geoscientists and engineers worldwide. Investigations of fluid flow and transport within rock relate to such fundamental and applied problems as environmental remediation; nonaqueous phase liquid (NAPL) transport; exploitation of oil, gas, and geothermal resources; disposal of spent nuclear fuel; and geotechnical engineering. It is widely acknowledged that fractures in unsaturated-saturated rock can play a major role in solute transport from the land surface to underlying aquifers. It is also evident that general issues concerning flow and transport predictions in subsurface fractured zones can be resolved in a practical manner by integrating investigations into the physical nature of flow in fractures, developing relevant mathematical models and modeling approaches, and collecting site characterization data. Because of the complexity of flow and transport processes in most fractured rock flow problems, it is not yet possible to develop models directly from first principles. One reason for this is the presence of episodic, preferential water seepage and solute transport, which usually proceed more rapidly than expected from volume-averaged and time-averaged models. However, the physics of these processes is still not well understood.

  4. Numerical modelling study of gully recharge and debris flows in Haida Gwaii, British Columbia

    NASA Astrophysics Data System (ADS)

    Martin, Yvonne; Johnson, Edward; Chaikina, Olga

    2015-04-01

    In high mountains, debris flows are a major process responsible for transferring sediment to downstream fluvial reaches. This sediment transfer begins on mountain hillslopes, where various mass-wasting processes move sediment from hillslopes to the uppermost reaches of the channel system (these reaches are herein referred to as gullies and experience water flow only during high-intensity precipitation events). Sediment recharge into gullies, which has received minimal attention in the scientific literature, refers to the transfer of sediment and other debris from surrounding hillslopes into gullies (Jakob and Oden, 2005). Debris flow occurrence and debris flow volumes depend on a precipitation threshold as well as on the volume of material contained in the particular gully. For example, if one debris flow has removed all of the accumulated material from a gully, any subsequent debris flow will be smaller if not enough time has passed for notable sediment recharge. Herein, we utilize the numerical landscape development model LandMod (Martin, 1998; Dadson and Church, 2005; Martin, 2007) to explore connections between hillslope processes, gully recharge rates, and the transfer of sediment to downstream channel reaches in Haida Gwaii, British Columbia. Hillslope processes in the model include shallow landsliding, bedrock failures and weathering. The updated debris flow algorithm is based on the extensive field data available for debris flows in Haida Gwaii (e.g., Rood, 1984; Oden, 1994; Jakob and Oden, 2005), as well as on theoretical considerations from debris flow studies. The most significant model extension is the calculation of gully recharge rates: for each gully, the total accumulated sediment at each time step is determined using a power-law relation for area-normalized recharge rate versus elapsed time since the last debris flow. Thus, when the stochastic driver for debris flow occurrence triggers an event, the amount of stored material is known and can be transferred and deposited along the channel system. Results show that the size distribution of debris flows and the sediment transfers from gullies to downstream reaches are modified by the inclusion of a module that accounts for sediment recharge, compared with model runs that do not consider gully recharge.
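
    A minimal sketch of this recharge bookkeeping, assuming a power-law stored volume V(t) = a * t**b per unit gully area and a simple stochastic storm trigger; the coefficients and probability below are illustrative, not the calibrated Haida Gwaii values:

      import random

      # Power-law recharge: stored sediment per unit gully area (m^3/m^2)
      # grows as V(t) = a * t**b, with t the years since the last debris flow.
      a, b = 0.01, 0.6        # illustrative coefficients
      p_trigger = 0.05        # assumed annual probability of a triggering storm
      gully_area = 2000.0     # contributing gully area, m^2

      random.seed(1)
      t_since_flow, events = 0.0, []
      for year in range(500):
          t_since_flow += 1.0
          if random.random() < p_trigger:
              volume = a * t_since_flow ** b * gully_area   # m^3 evacuated
              events.append((year, volume))
              t_since_flow = 0.0    # gully emptied; recharge restarts

      print(len(events), "debris flows; largest =",
            round(max(v for _, v in events), 1), "m^3")

    The behaviour the paper exploits falls out directly: closely spaced triggers produce small flows because the gully has not had time to recharge.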

  5. Architecture and design of a 500-MHz gallium-arsenide processing element for a parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Fouts, Douglas J.; Butner, Steven E.

    1991-01-01

    The design of the processing element of GASP, a GaAs supercomputer with a 500-MHz instruction issue rate and 1-GHz subsystem clocks, is presented. The novel, functionally modular, block data flow architecture of GASP is described. The architecture and design of a GASP processing element is then presented. The processing element (PE) is implemented in a hybrid semiconductor module with 152 custom GaAs ICs of eight different types. The effects of the implementation technology on both the system-level architecture and the PE design are discussed. SPICE simulations indicate that parts of the PE are capable of being clocked at 1 GHz, while the rest of the PE uses a 500-MHz clock. The architecture utilizes data flow techniques at a program block level, which allows efficient execution of parallel programs while maintaining reasonably good performance on sequential programs. A simulation study of the architecture indicates that an instruction execution rate of over 30,000 MIPS can be attained with 65 PEs.

  6. Managing mapping data using commercial data base management software.

    USGS Publications Warehouse

    Elassal, A.A.

    1985-01-01

    Electronic computers are involved in almost every aspect of the map making process. This involvement has become so thorough that it is practically impossible to find a recently developed process or device in the mapping field which does not employ digital processing in some form or another. This trend, which has been evolving over two decades, is accelerated by the significant improvements in capability, reliability, and cost-effectiveness of electronic devices. Computerized mapping processes and devices share a common need for machine readable data. Integrating groups of these components into automated mapping systems requires careful planning for data flow amongst them. Exploring the utility of commercial data base management software to assist in this task is the subject of this paper. -Author

  7. Autonomous sensor particle for parameter tracking in large vessels

    NASA Astrophysics Data System (ADS)

    Thiele, Sebastian; Da Silva, Marco Jose; Hampel, Uwe

    2010-08-01

    A self-powered and neutrally buoyant sensor particle has been developed for the long-term measurement of spatially distributed process parameters in the chemically harsh environments of large vessels. One intended application is the measurement of flow parameters in stirred fermentation biogas reactors. The prototype sensor particle is a robust and neutrally buoyant capsule, which allows free movement with the flow. It contains measurement devices that log the temperature, absolute pressure (immersion depth) and 3D-acceleration data. A careful calibration including an uncertainty analysis has been performed. Furthermore, autonomous operation of the developed prototype was successfully proven in a flow experiment in a stirred reactor model. It showed that the sensor particle is feasible for future application in fermentation reactors and other industrial processes.

  8. HYDJET++ for ultra-relativistic HIC’s: A hot cocktail of hydrodynamics, resonances and jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bravina, L. V.; Johansson, B. H. Brusheim; Crkovska, J.

    An ultra-relativistic heavy-ion collision at LHC energies is a mixture of soft and hard processes. For comparison with data we employ the HYDJET++ model, which combines the description of soft processes with the treatment of hard partons propagating through a hot and dense nuclear medium. The importance of the interplay of ideal hydrodynamics, final-state interactions and jets for the description of the harmonics of the anisotropic flow is discussed. Jets are found to be the main source of violation of the number-of-constituent-quark (NCQ) scaling at LHC energies. Many features of higher flow harmonics and dihadron angular correlations, including the ridge, can be described by the interference of elliptic and triangular flows.

  9. Comparison Between Predicted and Experimentally Measured Flow Fields at the Exit of the SSME HPFTP Impeller

    NASA Technical Reports Server (NTRS)

    Bache, George

    1993-01-01

    Validation of CFD codes is a critical first step in the process of developing CFD design capability. The MSFC Pump Technology Team has recognized the importance of validation and has thus funded several experimental programs designed to obtain CFD-quality validation data. The first data set to become available is for the SSME High Pressure Fuel Turbopump impeller. LDV data were taken at the impeller inlet (to obtain a reliable inlet boundary condition) and at three radial positions at the impeller discharge. Our CFD code, TASCflow, is used within the propulsion and commercial pump industries as a tool for pump design. The objective of this work, therefore, is to further validate TASCflow for application in pump design. TASCflow was used to predict flow at the impeller discharge for flow rates of 80, 100 and 115 percent of design flow. Comparison to data has been made with encouraging results.

  10. Lagrangian postprocessing of computational hemodynamics.

    PubMed

    Shadden, Shawn C; Arzani, Amirhossein

    2015-01-01

    Recent advances in imaging, modeling, and computing have rapidly expanded our capabilities to model hemodynamics in the large vessels (heart, arteries, and veins). This data encodes a wealth of information that is often under-utilized. Modeling (and measuring) blood flow in the large vessels typically amounts to solving for the time-varying velocity field in a region of interest. Flow in the heart and larger arteries is often complex, and velocity field data provides a starting point for investigating the hemodynamics. This data can be used to perform Lagrangian particle tracking, and other Lagrangian-based postprocessing. As described herein, Lagrangian methods are necessary to understand inherently transient hemodynamic conditions from the fluid mechanics perspective, and to properly understand the biomechanical factors that lead to acute and gradual changes of vascular function and health. The goal of the present paper is to review Lagrangian methods that have been used in post-processing velocity data of cardiovascular flows.
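
    A minimal sketch of the Lagrangian particle tracking step the review describes: trajectories are integrated through a time-varying velocity field with a fourth-order Runge-Kutta scheme. The unsteady double gyre below is a standard analytic test field standing in for patient-specific velocity data.

      import numpy as np

      # Double-gyre test velocity field (a common analytic benchmark for
      # Lagrangian post-processing; not derived from cardiovascular data).
      A, eps, om = 0.1, 0.25, 2 * np.pi / 10

      def velocity(t, p):
          x, y = p
          st = eps * np.sin(om * t)
          f = st * x**2 + (1 - 2 * st) * x
          dfdx = 2 * st * x + (1 - 2 * st)
          u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
          v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
          return np.array([u, v])

      def rk4_step(t, p, dt):
          k1 = velocity(t, p)
          k2 = velocity(t + dt / 2, p + dt / 2 * k1)
          k3 = velocity(t + dt / 2, p + dt / 2 * k2)
          k4 = velocity(t + dt, p + dt * k3)
          return p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

      # Advect one particle for 10 time units.
      p, dt = np.array([0.3, 0.4]), 0.05
      for n in range(200):
          p = rk4_step(n * dt, p, dt)
      print(p)

    Tracking many such particles and differentiating final positions with respect to initial ones is the basis for finite-time Lyapunov exponent fields and the other Lagrangian measures reviewed in the paper.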

  11. Measurements in the wake of an infinite swept airfoil

    NASA Technical Reports Server (NTRS)

    Novak, C. J.; Ramaprian, B. R.

    1982-01-01

    This report presents measurements in the trailing-edge region and in the developing wake behind a swept NACA 0012 airfoil at zero incidence and a sweep angle of 30 degrees. The measurements include both the mean and turbulent flow properties. The mean flow velocities, flow inclination, and static pressure are measured using a calibrated three-hole yaw probe. Measurements of all the relevant Reynolds stress components in the wake are made using a tri-axial hot-wire probe and a digital data-processing technique developed by the authors. The development of the three-dimensional near-wake into a nearly two-dimensional far-wake is discussed in the light of the experimental data. A complete set of wake data, along with data on the initial boundary layer in the trailing-edge region of the airfoil, is tabulated in an appendix to the report.

  12. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph-theoretical model for describing data and control flow associated with the execution of large-grained algorithms in a spatially distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  13. Lagrangian postprocessing of computational hemodynamics

    PubMed Central

    Shadden, Shawn C.; Arzani, Amirhossein

    2014-01-01

    Recent advances in imaging, modeling and computing have rapidly expanded our capabilities to model hemodynamics in the large vessels (heart, arteries and veins). This data encodes a wealth of information that is often under-utilized. Modeling (and measuring) blood flow in the large vessels typically amounts to solving for the time-varying velocity field in a region of interest. Flow in the heart and larger arteries is often complex, and velocity field data provides a starting point for investigating the hemodynamics. This data can be used to perform Lagrangian particle tracking, and other Lagrangian-based postprocessing. As described herein, Lagrangian methods are necessary to understand inherently transient hemodynamic conditions from the fluid mechanics perspective, and to properly understand the biomechanical factors that lead to acute and gradual changes of vascular function and health. The goal of the present paper is to review Lagrangian methods that have been used in post-processing velocity data of cardiovascular flows. PMID:25059889

  14. Research and Design on a Product Data Definition System of Semiconductor Packaging Industry

    NASA Astrophysics Data System (ADS)

    Shi, Jinfei; Ma, Qingyao; Zhou, Yifan; Chen, Ruwen

    2017-12-01

    This paper develops a product data definition (PDD) system, with independent intellectual property rights, for a semiconductor packaging and testing company. The new PDD system addresses problems such as the effective control of production plans, timely feedback from production processes, and the efficient scheduling of resources. Firstly, this paper introduces the general requirements of the PDD system and depicts its operation flow and data flow. Secondly, the overall design scheme of the PDD system is put forward. After that, the physical data model is developed using the PowerDesigner 15.0 tool, and the database system is built. Finally, the function realization and running effects of the PDD system are analysed. The successful operation of the PDD system realizes the information flow among the various production departments of the enterprise, meets the standard of enterprise manufacturing integration, and improves the efficiency of production management.

  15. Modular GIS Framework for National Scale Hydrologic and Hydraulic Modeling Support

    NASA Astrophysics Data System (ADS)

    Djokic, D.; Noman, N.; Kopp, S.

    2015-12-01

    Geographic information systems (GIS) have been extensively used for pre- and post-processing of hydrologic and hydraulic models at multiple scales. An extensible GIS-based framework was developed for characterization of drainage systems (stream networks, catchments, floodplain characteristics) and model integration. The framework is implemented as a set of free, open-source Python tools that build on core ArcGIS functionality and use geoprocessing capabilities to ensure extensibility. Utilization of COTS GIS core capabilities allows immediate use of model results in a variety of existing online applications and integration with other data sources and applications. The poster presents the use of this framework to downscale global hydrologic models to the local hydraulic scale, post-process the hydraulic modeling results, and generate floodplains at any local resolution. Flow forecasts from ECMWF or WRF-Hydro are downscaled and combined with other ancillary data as input to the RAPID flood-routing model. RAPID model results (stream flow along each reach) are ingested into a GIS-based, scale-dependent stream network database for efficient flow utilization and visualization over space and time. Once the flows are known at local reaches, the tools can be used to derive the floodplain depth and extent for each time step in the forecast at any available local resolution. If existing rating curves are available, they can be used to relate the flow to the depth of flooding; otherwise, synthetic rating curves can be derived using the tools in the toolkit and some ancillary data and assumptions, as sketched below. The results can be published as time-enabled spatial services to be consumed by web applications that use floodplain information as an input. Some of the existing online presentation templates can easily be combined with available online demographic and infrastructure data to present the impact of potential floods on the local community through simple end-user products. This framework has been successfully used both in data-rich environments and in locales with minimal available spatial and hydrographic data.
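
    A minimal sketch of such a synthetic rating curve, assuming a rectangular channel and Manning's equation; the width, slope and roughness are placeholder values, not parameters of the actual toolkit:

      # Synthetic rating curve from Manning's equation for a rectangular
      # channel: Q = (1/n) * A * R**(2/3) * sqrt(S). Width, slope and
      # roughness below are illustrative assumptions.
      def manning_q(depth, width=30.0, slope=0.001, n=0.035):
          area = width * depth
          radius = area / (width + 2 * depth)   # hydraulic radius
          return area * radius ** (2.0 / 3.0) * slope ** 0.5 / n

      def depth_for_flow(q, lo=1e-6, hi=20.0, tol=1e-6):
          # Bisection: manning_q increases monotonically with depth.
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if manning_q(mid) < q:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      for q in (10.0, 100.0, 500.0):   # forecast flows, m^3/s
          print(f"Q = {q:6.1f} m^3/s  ->  depth = {depth_for_flow(q):.2f} m")

    Applied per reach to each forecast time step, the inverted curve supplies the water depth from which inundation extent can be mapped.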

  16. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications, such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications, such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high-frame-rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard-frame-rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to run applications such as real-time optical flow estimation.
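
    A toy stand-in for the on-chip optical flow idea: block matching between two consecutive frames. At a high capture rate, inter-frame displacements are small, so the search window, and hence the memory and arithmetic per block, stays small; that is what makes on-chip processing plausible. All sizes below are illustrative, not the paper's design parameters.

      import numpy as np

      # Block matching: for each block in frame f0, find the displacement
      # within +/-search px minimizing the sum of absolute differences in f1.
      def block_flow(f0, f1, block=8, search=2):
          h, w = f0.shape
          flow = np.zeros((h // block, w // block, 2), dtype=int)
          for by in range(h // block):
              for bx in range(w // block):
                  y, x = by * block, bx * block
                  ref = f0[y:y + block, x:x + block]
                  best, best_d = np.inf, (0, 0)
                  for dy in range(-search, search + 1):
                      for dx in range(-search, search + 1):
                          yy, xx = y + dy, x + dx
                          if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                              sad = np.abs(f1[yy:yy + block, xx:xx + block] - ref).sum()
                              if sad < best:
                                  best, best_d = sad, (dy, dx)
                  flow[by, bx] = best_d
          return flow

      rng = np.random.default_rng(0)
      f0 = rng.random((32, 32))
      f1 = np.roll(f0, shift=(1, 2), axis=(0, 1))   # known shift of (1, 2) px
      print(block_flow(f0, f1)[1, 1])               # -> [1 2]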

  17. Real-time high-velocity resolution color Doppler OCT

    NASA Astrophysics Data System (ADS)

    Westphal, Volker; Yazdanfar, Siavash; Rollins, Andrew M.; Izatt, Joseph A.

    2001-05-01

    Color Doppler optical coherence tomography (CDOCT, also called optical Doppler tomography) is a noninvasive optical imaging technique which allows for micron-scale physiological flow mapping simultaneous with morphological OCT imaging. Current systems for real-time endoscopic optical coherence tomography (EOCT) would be enhanced by the capability to visualize sub-surface blood flow for applications in early cancer diagnosis and the management of bleeding ulcers. Unfortunately, previous implementations of CDOCT have either been too computationally expensive (employing Fourier or Hilbert transform techniques) to allow real-time imaging of flow, or have been restricted to imaging of excessively high flow velocities when used in real time. We have developed a novel Doppler OCT signal-processing strategy capable of imaging physiological flow rates in real time. This strategy employs cross-correlation processing of sequential A-scans in an EOCT image, as opposed to the autocorrelation processing described previously. To measure Doppler shifts in the kHz range using this technique, it was necessary to stabilize the EOCT interferometer center frequency, eliminate parasitic phase noise, and construct a digital cross-correlation unit able to correlate signals of megahertz bandwidth at a fixed lag of up to a few ms. The performance of the color Doppler OCT system was demonstrated in a flow phantom, with a minimum detectable flow velocity of ~0.8 mm/s at a data acquisition rate of 8 images/second (480 A-scans/image) using a handheld probe. Dynamic flow imaging, including freehand use of the probe, was demonstrated. Flow was also detectable in a phantom in combination with a clinically usable endoscopic probe.
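
    A simplified numerical sketch of the principle: the phase of the lag-one correlation between sequential A-scans gives the Doppler shift, which maps to axial velocity as v = dphi * lambda0 / (4 * pi * T). The synthetic fringe signals and constants below are assumptions for illustration, not the published system's parameters, and the paper's hardware correlator is replaced by an offline estimate.

      import numpy as np
      from scipy.signal import hilbert

      lam0 = 1300e-9       # assumed center wavelength (m)
      T = 1.0 / 8000       # assumed A-scan period (s), 8 kHz line rate
      fs = 5e6             # assumed sample rate within an A-scan (Hz)
      f_carrier = 500e3    # assumed interferometric carrier (Hz)

      t = np.arange(2048) / fs
      dphi_true = 0.4      # synthetic Doppler phase shift between A-scans
      a1 = np.cos(2 * np.pi * f_carrier * t)
      a2 = np.cos(2 * np.pi * f_carrier * t + dphi_true)

      z1, z2 = hilbert(a1), hilbert(a2)            # analytic signals
      dphi = np.angle(np.sum(z2 * np.conj(z1)))    # lag-one correlation phase
      v = dphi * lam0 / (4 * np.pi * T)            # axial velocity (m/s),
                                                   # refractive index omitted
      print(f"estimated phase {dphi:.3f} rad -> v = {v * 1e3:.2f} mm/s")

    With these assumed numbers the recovered velocity is of order a few tenths of a mm/s, consistent in magnitude with the ~0.8 mm/s detection floor quoted above.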

  18. Integrated Analysis of Flow, Form, and Function for River Management and Design Testing

    NASA Astrophysics Data System (ADS)

    Lane, B. A. A.; Pasternack, G. B.; Sandoval Solis, S.

    2017-12-01

    Rivers are highly complex, dynamic systems that support numerous ecosystem functions including transporting sediment, modulating biogeochemical processes, and regulating habitat availability for native species. The extent and timing of these functions is largely controlled by the interplay of hydrologic dynamics (i.e. flow) and the shape and composition of the river corridor (i.e. form). This study applies synthetic channel design to the evaluation of river flow-form-function linkages, with the aim of evaluating these interactions across a range of flows and forms to inform process-driven management efforts with limited data and financial requirements. In an application to California's Mediterranean-montane streams, the interacting roles of channel form, water year type, and hydrologic impairment were evaluated across a suite of ecosystem functions related to hydrogeomorphic processes, aquatic habitat, and riparian habitat. Channel form acted as the dominant control on hydrogeomorphic processes considered, while water year type controlled salmonid habitat functions. Streamflow alteration for hydropower increased redd dewatering risk and altered aquatic habitat availability and riparian recruitment dynamics. Study results highlight critical tradeoffs in ecosystem function performance and emphasize the significance of spatiotemporal diversity of flow and form at multiple scales for maintaining river ecosystem integrity. The approach is broadly applicable and extensible to other systems and ecosystem functions, where findings can be used to characterize complex controls on river ecosystems, assess impacts of proposed flow and form alterations, and inform river restoration strategies.

  19. Effect of spatial organisation behaviour on upscaling the overland flow formation in an arable land

    NASA Astrophysics Data System (ADS)

    Silasari, Rasmiaditya; Blöschl, Günter

    2014-05-01

    Overland flow during rainfall events on arable land is important to investigate because it affects land erosion and water quality in rivers. Overland flow may form in different ways (e.g., Hortonian overland flow, saturation-excess overland flow), influenced by surface and subsurface soil characteristics (e.g., land cover, soil infiltration rate). As soil characteristics vary throughout a catchment, they form distinct spatial patterns with organised or random behaviour. When upscaling hydrological processes from plot to catchment scale, this behaviour becomes substantial, since organised patterns result in higher spatial connectivity and thus higher conductivity. However, very few existing studies explicitly address the effect of the spatial organisation of these patterns when upscaling hydrological processes to the catchment scale. This study assesses the upscaling of overland flow formation with attention to the spatial organisation of the patterns, combining direct field observations under natural conditions using a video camera and soil moisture sensors with investigation of the underlying processes using a physically based hydrological model. The study area is the Hydrological Open Air Laboratory (HOAL) located at Petzenkirchen, Lower Austria: a 64 ha catchment with land use consisting of arable land (87%), forest (6%), pasture (5%) and paved surfaces (2%). A video camera is installed 7 m above the ground on a weather-station mast in the middle of the arable land to monitor overland flow patterns during rainfall events in a 2 m x 6 m plot. Soil moisture sensors with continuous measurement at different depths (5, 10, 20 and 50 cm) are installed at points in the field monitored by the camera. The patterns of overland flow formation and subsurface flow state at the plot scale will be generated using a coupled surface-subsurface, physically based hydrological model. The observation data will be assimilated into the model to verify the corresponding processes between surface and subsurface flow during rainfall events. The patterns of conductivity will then be analysed at the catchment scale using spatial stochastic analysis based on the classification of soil characteristics of the entire catchment. These conductivity patterns will then be applied in the model at the catchment scale to see how the organisational behaviour affects the spatial connectivity of the hydrological processes and the catchment response. Detailed modelling of the underlying processes in the physically based model will allow us to see the direct effect of spatial connectivity on the occurring surface and subsurface flow. This will improve the analysis of the effect of the spatial organisation of the patterns in upscaling hydrological processes from plot to catchment scale.

  20. Development of flow systems by direct-milling on poly(methyl methacrylate) substrates using UV-photopolymerization as sealing process.

    PubMed

    Rodrigues, Eunice R G O; Lapa, Rui A S

    2009-03-01

    An alternative process for the design and construction of fluidic devices is presented. Several sealing processes were studied, as well as the hydrodynamic characteristics of the proposed fluidic devices. Manifolds were imprinted on polymeric substrates by direct-write milling, according to Computer Assisted Design (CAD) data. Poly(methyl methacrylate) (PMMA) was used as substrate due to its physical and chemical properties. Different bonding approaches for the imprinted channels were evaluated and UV-photopolymerization of acrylic acid (AA) was selected. The hydrodynamic characteristics of the proposed flow devices were assessed and compared to those obtained in similar flow systems using PTFE reactors and micro-pumps as propulsion units (multi-pumping approach). The applicability of the imprinted reactors was evaluated in the sequential determination of calcium and magnesium in water samples. Results obtained were in good agreement with those obtained by the reference procedure.

  1. Rain events and their effect on effluent quality studied at a full scale activated sludge treatment plant.

    PubMed

    Wilén, B M; Lumley, D; Mattsson, A; Mino, T

    2006-01-01

    The effect of rain events on effluent quality dynamics was studied at a full scale activated sludge wastewater treatment plant which has a process solution incorporating pre-denitrification in activated sludge with post-nitrification in trickling filters. The incoming wastewater flow varies significantly due to a combined sewer system. Changed flow conditions have an impact on the whole treatment process since the recirculation to the trickling filters is set by the hydraulic limitations of the secondary settlers. Apart from causing different hydraulic conditions in the plant, increased flow due to rain or snow-melting, changes the properties of the incoming wastewater which affects process performance and effluent quality, especially the particle removal efficiency. A comprehensive set of on-line and laboratory data were collected and analysed to assess the impact of rain events on the plant performance.

  2. Application of process tomography in gas-solid fluidised beds in different scales and structures

    NASA Astrophysics Data System (ADS)

    Wang, H. G.; Che, H. Q.; Ye, J. M.; Tu, Q. Y.; Wu, Z. P.; Yang, W. Q.; Ocone, R.

    2018-04-01

    Gas-solid fluidised beds are commonly used in particle-related processes, e.g. for coal combustion and gasification in the power industry, and the coating and granulation process in the pharmaceutical industry. Because the operation efficiency depends on the gas-solid flow characteristics, it is necessary to investigate the flow behaviour. This paper is about the application of process tomography, including electrical capacitance tomography (ECT) and microwave tomography (MWT), in multi-scale gas-solid fluidisation processes in the pharmaceutical and power industries. This is the first time that both ECT and MWT have been applied for this purpose in multi-scale processes and complex structures. To evaluate the sensor design and image reconstruction, and to investigate the effects of sensor structure and dimension on image quality, a normalised sensitivity coefficient is introduced. In addition, computational fluid dynamics (CFD) analysis based on a computational particle fluid dynamics (CPFD) model and a two-phase fluid model (TFM) is used. Some of the CPFD and TFM simulation results are compared with, and validated against, experimental results from ECT and/or MWT. Through both simulation and experiment, the complex hydrodynamic flow behaviour at different scales is analysed. Time-series capacitance data are analysed in both the time and frequency domains to reveal the flow characteristics.

  3. A modeling approach to establish environmental flow threshold in ungauged semidiurnal tidal river

    NASA Astrophysics Data System (ADS)

    Akter, A.; Tanim, A. H.

    2018-03-01

    Due to the shortage of flow-monitoring data in ungauged semidiurnal rivers, determining the 'environmental flow' (EF) based on its key component, 'minimum low flow', is always difficult. For EF assessment this study selected a reach immediately after the Halda-Karnafuli confluence, a unique breeding ground for Indian carp fishes in Bangladesh. In an ungauged tidal river, EF threshold establishment faces the challenge of ecological paradigms that change with the periodic tides and with hydrologic alteration. This study describes a novel approach through a modeling framework comprising hydrological, hydrodynamic and habitat simulation models. The EF establishment was conceptualized according to the hydrologic processes of an ungauged semidiurnal tidal regime in four steps. Initially, a hydrologic model was coupled with a hydrodynamic model to simulate flow, accounting for land-use-change effects on streamflow, seepage loss of the channel, friction-dominated tidal decay, and the lack of long-term flow characteristics. Secondly, to define hydraulic habitat features, a statistical analysis of the derived flow data was performed to identify habitat suitability. Thirdly, to observe the ecological habitat behavior under the identified hydrologic alteration, the hydraulic habitat features were investigated. Finally, a flow alteration-ecological response relationship was established based on the combined habitat suitability index. The resulting EF provides a set of low-flow indices for the desired regime, and the discharge giving the maximum Weighted Usable Area (WUA) was defined as the EF threshold for the selected reach. A suitable EF regime was obtained within the flow range 25-30.1 m3/s, i.e., around 10-12% of the mean annual runoff of 245 m3/s, and these findings are within researchers' recommendations for minimum flow requirements. Additionally, it was observed that tidal characteristics are the dominant process in the semidiurnal regime. During the study period (2010-2015), the model validated against these observations can provide guidance for a decision support system (DSS) to maintain the EF range in an ungauged tidal river.
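
    A minimal sketch of the WUA computation behind the threshold: WUA(Q) sums cell area weighted by a combined suitability index, and the EF threshold is the discharge that maximizes it. The suitability curves and toy hydraulics below are hypothetical, not the study's calibrated relations.

      import numpy as np

      def suitability(x, lo, opt, hi):
          # Triangular index: 0 at lo and hi, 1 at opt, linear in between.
          return np.clip(np.minimum((x - lo) / (opt - lo),
                                    (hi - x) / (hi - opt)), 0.0, 1.0)

      def wua(q, cell_area=25.0, n_cells=200):
          rng = np.random.default_rng(42)   # fixed cells across discharges
          # Toy hydraulics: per-cell depth and velocity scale with discharge.
          depth = 0.3 * q**0.4 * rng.uniform(0.5, 1.5, n_cells)
          vel = 0.1 * q**0.5 * rng.uniform(0.5, 1.5, n_cells)
          csi = suitability(depth, 0.2, 0.8, 2.0) * suitability(vel, 0.1, 0.6, 1.5)
          return cell_area * csi.sum()

      flows = np.arange(5, 60, 2.5)
      best = flows[np.argmax([wua(q) for q in flows])]
      print(f"EF threshold ~ {best} m^3/s (discharge maximizing WUA)")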

  4. A wavelet-based intermittency detection technique from PIV investigations in transitional boundary layers

    NASA Astrophysics Data System (ADS)

    Simoni, Daniele; Lengani, Davide; Guida, Roberto

    2016-09-01

    The transition process of the boundary layer growing over a flat plate, with a pressure gradient simulating the suction side of a low-pressure turbine blade and an elevated free-stream turbulence intensity level, has been analyzed by means of PIV and hot-wire measurements. A detailed view of the instantaneous flow field in the wall-normal plane highlights the physics of the complex process leading to the formation of large-scale coherent structures during the breakdown of the ordered motion of the flow, thus generating randomized oscillations (i.e., turbulent spots). This analysis provides the basis for the development of a new procedure aimed at determining the intermittency function describing (statistically) the transition process. To this end, a wavelet-based method has been employed for the identification of the large-scale structures created during the transition process. Subsequently, a probability density function of these events is defined, from which an intermittency function is deduced. This function corresponds closely to the intermittency function of the transitional flow computed through a classic procedure based on hot-wire data. The agreement between the two procedures in the intermittency shape and spot production rate proves the capability of the method to provide a statistical representation of the transition process. The main advantages of the proposed procedure are its applicability to PIV data; that it does not require a threshold level to discriminate the first- and/or second-order time derivatives of hot-wire time traces (which makes the method independent of the operator); and that it provides clear evidence of the connection between the flow physics and the statistical representation of transition based on the theory of turbulent spot propagation.
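
    A loose illustration of the wavelet step, assuming a real Morlet-like wavelet and a synthetic velocity trace: large-scale wavelet energy flags the coherent events, and its smoothed, normalized form plays the role of an intermittency-like function gamma(t) in [0, 1]. This sketches the idea only, not the authors' detection and PDF procedure.

      import numpy as np

      rng = np.random.default_rng(0)
      n, fs = 4000, 1000.0
      u = 0.02 * rng.standard_normal(n)             # laminar-like background
      u[2000:] += 0.3 * rng.standard_normal(2000)   # "turbulent" second half

      s = 64                                         # wavelet scale in samples
      tau = np.arange(-4 * s, 4 * s) / s
      wav = np.exp(-tau**2 / 2) * np.cos(5 * tau)    # real Morlet-like kernel
      coef = np.convolve(u, wav / np.sqrt(s), mode="same")

      energy = coef**2                               # event indicator
      win = np.ones(200) / 200                       # smoothing window
      gamma = np.convolve(energy, win, mode="same")
      gamma /= gamma.max()                           # normalize to [0, 1]
      print(f"mean gamma: first half {gamma[:2000].mean():.2f}, "
            f"second half {gamma[2000:].mean():.2f}")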

  5. Analysis of stochastic characteristics of the Benue River flow process

    NASA Astrophysics Data System (ADS)

    Otache, Martins Y.; Bakir, Mohammad; Li, Zhijia

    2008-05-01

    Stochastic characteristics of the Benue River streamflow process are examined under conditions of data austerity. The streamflow process is investigated for trend, non-stationarity and seasonality over a time period of 26 years. Results of trend analyses with the Mann-Kendall test show that there is no trend in the annual mean discharges. Monthly flow series examined with the seasonal Kendall test indicate the presence of a positive change in trend for some months, especially August, January, and February. For the stationarity test, daily and monthly flow series appear to be stationary, whereas at the 1%, 5%, and 10% significance levels the stationarity alternative hypothesis is rejected for the annual flow series. Though monthly flow appears to be stationary by this test, because of high seasonality it could be said to exhibit periodic stationarity based on the seasonality analysis. The following conclusions are drawn: (1) There is seasonality in both the mean and variance, with unimodal distribution. (2) Days with high mean also have high variance. (3) Skewness coefficients for the months within the dry season are greater than those of the wet season, and seasonal autocorrelations for streamflow during the dry season are generally larger than those of the wet season; indeed, they are significantly different for most of the months. (4) The autocorrelation functions estimated “over time” are greater in absolute value for data that have not been deseasonalised but were initially normalised by logarithmic transformation only, while autocorrelation functions for i = 1, 2, ..., 365 estimated “over realisations” have coefficients significantly different from the other coefficients.
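
    For reference, the Mann-Kendall trend test used above is straightforward to implement. The sketch below uses the standard no-ties formulas and a synthetic 26-year series of annual means (matching the study's record length), not the Benue data.

      import math
      import random

      # Mann-Kendall: S = sum of sign(x_j - x_i) over all i < j;
      # Var(S) = n(n-1)(2n+5)/18 (no tie correction); Z with continuity
      # correction; two-sided p-value from the normal distribution.
      def mann_kendall(x):
          n = len(x)
          s = sum((x[j] > x[i]) - (x[j] < x[i])
                  for i in range(n) for j in range(i + 1, n))
          var = n * (n - 1) * (2 * n + 5) / 18.0
          if s > 0:
              z = (s - 1) / math.sqrt(var)
          elif s < 0:
              z = (s + 1) / math.sqrt(var)
          else:
              z = 0.0
          phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # normal CDF
          return s, z, 2 * (1 - phi)

      random.seed(0)
      flows = [1500 + random.gauss(0, 120) for _ in range(26)]  # annual means
      s, z, p = mann_kendall(flows)
      print(f"S = {s}, Z = {z:.2f}, p = {p:.2f}")   # p > 0.05 -> no trend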

  6. Observing hydrological processes: recent advancements in surface flow monitoring through image analysis

    NASA Astrophysics Data System (ADS)

    Tauro, Flavia; Grimaldi, Salvatore

    2017-04-01

    Recently, several efforts have been devoted to the design and development of innovative, and often unintended, approaches for the acquisition of hydrological data. Among such pioneering techniques, this presentation reports recent advancements towards the establishment of a novel, noninvasive and potentially continuous methodology based on the acquisition and analysis of images for spatially distributed observations of the kinematics of surface waters. The approach aims at enabling rapid, affordable, and accurate surface flow monitoring of natural streams. Flow monitoring is an integral part of hydrological sciences and is essential for disaster risk reduction and the comprehension of natural phenomena. However, water processes are inherently complex to observe: they are characterized by multiscale and highly heterogeneous phenomena which have traditionally demanded sophisticated and costly measurement techniques. Challenges in the implementation of such techniques have also resulted in a lack of hydrological data during extreme events, in difficult-to-access environments, and at high temporal resolution. By combining low-cost yet high-resolution images with several velocimetry algorithms, noninvasive flow monitoring has been successfully conducted at highly heterogeneous scales, spanning from rills to highly turbulent streams and medium-scale rivers, with minimal supervision by external users. Noninvasive image data acquisition has also afforded observations in high-flow conditions. The latest advances towards continuous flow monitoring at the catchment scale entail the development of a remote gauge-cam station on the Tiber River and the integration of image-based flow monitoring with unmanned aerial systems (UASs) technology. The gauge-cam station and the UAS platform both afford noninvasive image acquisition and calibration through an innovative laser-based setup. Compared to traditional point-based instrumentation, images allow for generating surface flow velocity maps which fully describe the kinematics of the velocity field in natural streams. Also, continuous observations provide a close picture of the evolving dynamics of natural water bodies. Despite such promising achievements, dealing with images also involves coping with adverse illumination, massive data handling and storage, and data-intensive computing. Most importantly, establishing a novel observational technique requires estimation of the uncertainty associated with measurements and thorough comparison to existing benchmark approaches. In this presentation, we provide answers to some of these issues and perspectives for future research.

  7. Three-dimensional flow characteristics of aluminum alloy in multi-pass equal channel angular pressing

    NASA Astrophysics Data System (ADS)

    Jin, Young-Gwan; Son, Il-Heon; Im, Yong-Taek

    2010-06-01

    Experiments with a square specimen made of commercially pure aluminum alloy (AA1050) were conducted to investigate deformation behaviour during multi-pass Equal Channel Angular Pressing (ECAP) for routes A, Bc, and C up to four passes. Three-dimensional finite element simulations of the multi-pass ECAP were carried out to evaluate the influence of processing route and number of passes on local flow behaviour, applying a simplified saturation model of flow stress under isothermal conditions. Simulation results were assessed by comparing them with the experimentally measured data in terms of load variations and microhardness distributions. Transmission electron microscopy analysis was also employed to investigate the microstructural changes. The present work clearly shows that the three-dimensional flow characteristics of the deformed specimen depended on the strain-path changes caused by the processing route and the number of passes during the multi-pass ECAP.
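
    The "simplified saturation model of flow stress" can take, for example, a Voce-type form; the sketch below assumes that form with illustrative AA1050-like parameters (not the paper's fitted values) and the standard per-pass equivalent strain for a 90-degree ECAP die.

      import math

      # Voce-type saturation law, one common simplified saturation model:
      #   sigma(eps) = sig_s - (sig_s - sig_0) * exp(-eps / eps_c)
      sig_0, sig_s, eps_c = 30.0, 160.0, 0.8   # MPa, MPa, - (illustrative)

      def flow_stress(eps):
          return sig_s - (sig_s - sig_0) * math.exp(-eps / eps_c)

      # Equivalent strain per pass for a 90-degree die with zero outer
      # corner angle (Iwahashi formula) is 2/sqrt(3) ~ 1.15.
      d_eps = 2.0 / math.sqrt(3.0)
      for n_pass in range(1, 5):
          print(f"pass {n_pass}: eps = {n_pass * d_eps:.2f}, "
                f"flow stress ~ {flow_stress(n_pass * d_eps):.0f} MPa")

    The saturating form captures why load variations level off after the first couple of passes, as the simulations and hardness data in the paper indicate.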

  8. Process dominance shift in solute chemistry as revealed by long-term high-frequency water chemistry observations of groundwater flowing through weathered argillite underlying a steep forested hillslope

    NASA Astrophysics Data System (ADS)

    Kim, Hyojin; Bishop, James K. B.; Dietrich, William E.; Fung, Inez Y.

    2014-09-01

    Significant solute flux from the weathered bedrock zone - which underlies soils and saprolite - has been suggested by many studies. However, the processes controlling the hydrochemistry dynamics in this zone are poorly understood. This work reports the first results from a four-year (2009-2012), high-frequency (1-3 day) monitoring of major solutes (Ca, Mg, Na, K and Si) in the perched, dynamic groundwater in a 4000 m2 zero-order basin located at the Angelo Coast Range Reserve, Northern California. Groundwater samples were autonomously collected at three wells (downslope, mid-slope, and upslope) aligned with the axis of the drainage. Rain and throughfall samples, profiles of well headspace pCO2, vertical profiles and time series of groundwater temperature, and contemporaneous data from an extensive hydrologic and climate sensor network provided the framework for data analysis. All runoff at this soil-mantled site occurs by vertical unsaturated flow through 5-25 m of weathered argillite and then by lateral flow to the adjacent channel as groundwater perched over fresher bedrock. Driven by strongly seasonal rainfall, over each of the four years of observations the hydrochemistry of the groundwater at each well repeats an annual cycle, which can be explained by two end-member processes. The first end-member process, which dominates during the winter high-flow season in the mid- and upslope areas, is CO2-enhanced cation exchange in the vadose zone of the shallower, more conductive weathered bedrock. This process rapidly increases the cation concentrations of the infiltrated rainwater and is responsible for the lowest cation concentrations in the groundwater. The second end-member process occurs in the deeper perched groundwater and either dominates year-round (at the downslope well) or becomes progressively dominant during the low-flow season (at the two upper-slope wells). This process is the equilibrium reaction with minerals such as calcite and clay minerals, but not with primary minerals, suggesting the critical role of the residence time of the water. Collectively, our measurements reveal that the hydrochemistry dynamics of the groundwater in the weathered bedrock zone are governed by two end-member processes whose dominance varies with critical zone structure, with the relative importance of vadose versus groundwater zone processes, and thus with the seasonal variation of the chemistry of recharge and runoff.

  9. Delineating wetland catchments and modeling hydrologic ...

    EPA Pesticide Factsheets

    In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In reality, however, many depressions in the DEM are actual wetland landscape features with seasonal to permanent inundation patterning characterized by nested hierarchical structures and dynamic filling–spilling–merging surface-water hydrological processes. Differentiating and appropriately processing such ecohydrologically meaningful features remains a major technical terrain-processing challenge, particularly as high-resolution spatial data are increasingly used to support modeling and geographic analysis needs. The objectives of this study were to delineate hierarchical wetland catchments and model their hydrologic connectivity using high-resolution lidar data and aerial imagery. The graph-theory-based contour tree method was used to delineate the hierarchical wetland catchments and characterize their geometric and topological properties. Potential hydrologic connectivity between wetlands and streams were simulated using the least-cost-path algorithm. The resulting flow network delineated potential flow paths connecting wetland depressions to each other or to the river network on scales finer than those available through the National Hydrography Dataset. The results demonstrated that
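
    A minimal sketch of the least-cost-path step, using Dijkstra's algorithm on a small cost raster; in the study the cost surface is derived from lidar terrain, whereas the grid, start and goal cells below are invented for illustration.

      import heapq

      # Least-cost path on a cost raster via Dijkstra, the kind of algorithm
      # used to simulate potential wetland-to-stream connectivity.
      cost = [
          [1, 1, 4, 4, 9, 1],
          [2, 1, 1, 5, 9, 1],
          [9, 9, 1, 1, 9, 1],
          [9, 9, 9, 1, 1, 1],
      ]
      rows, cols = len(cost), len(cost[0])

      def least_cost(start, goal):
          dist, prev = {start: 0}, {}
          pq = [(0, start)]
          while pq:
              d, (r, c) = heapq.heappop(pq)
              if (r, c) == goal:
                  break
              if d > dist[(r, c)]:
                  continue            # stale queue entry
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols:
                      nd = d + cost[nr][nc]
                      if nd < dist.get((nr, nc), float("inf")):
                          dist[(nr, nc)] = nd
                          prev[(nr, nc)] = (r, c)
                          heapq.heappush(pq, (nd, (nr, nc)))
          path, node = [goal], goal
          while node != start:
              node = prev[node]
              path.append(node)
          return path[::-1], dist[goal]

      path, total = least_cost((0, 0), (3, 5))   # wetland cell -> stream cell
      print(total, path)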

  10. A hacker's guide to catching a debris flow: Lessons learned from four years of chasing mud in Colorado and southern California

    NASA Astrophysics Data System (ADS)

    Kean, J. W.; McCoy, S. W.; Staley, D. M.; Coe, J.; Leeper, R.; Tucker, G. E.

    2012-12-01

    Direct measurements of natural debris flows provide valuable insights into debris-flow processes and hazards. Yet debris flows are difficult to "catch" because they live in rugged terrain, appear infrequently, and have an appetite for destroying monitoring equipment. We present an overview of some successful (and failed) techniques we have used over the past four years to obtain direct measurements of 40+ debris flows in Colorado and southern California. Following the "MacGyver" theme of the session, we focus on the improvised equipment and methods we use in our hunt for quality data. These include an inexpensive erosion sensor to measure rates of debris-flow entrainment, a custom load cell enclosure for measuring debris-flow normal force, tracer rocks implanted with passive integrated transponders, basic pressure transducers to measure debris-flow timing, and standard digital cameras adapted to obtain high-resolution (1936 x 1288 pixels) video footage of debris flows. These techniques are also suitable for catching data on elusive flash floods. In addition, we also share some practical solutions to the logistical problems associated with installing monitoring equipment in rugged debris-flow terrain, such as suspension of non-contact stage gages high above channels.

  11. Multivariate statistical process control of a continuous pharmaceutical twin-screw granulation and fluid bed drying process.

    PubMed

    Silva, A F; Sarraguça, M C; Fonteyne, M; Vercruysse, J; De Leersnyder, F; Vanhoorne, V; Bostijn, N; Verstraeten, M; Vervaet, C; Remon, J P; De Beer, T; Lopes, J A

    2017-08-07

    A multivariate statistical process control (MSPC) strategy was developed for the monitoring of the ConsiGma™-25 continuous tablet manufacturing line. Thirty-five logged variables encompassing three major units, namely a twin-screw high-shear granulator, a fluid bed dryer and a product control unit, were used to monitor the process. The MSPC strategy was based on principal component analysis of data acquired under normal operating conditions using a series of four process runs. Runs with imposed disturbances in the dryer air flow and temperature, in the granulator barrel temperature, speed and liquid mass flow, and in the powder dosing unit mass flow were used to evaluate the model's monitoring performance. The impact of the imposed deviations on process continuity was also evaluated using Hotelling's T² and Q-residuals control charts. The influence of the individual process variables was assessed by analyzing contribution plots at specific time points. Results show that the imposed disturbances were all detected in both control charts. Overall, the MSPC strategy was successfully developed and applied. Additionally, deviations not associated with the imposed changes were detected, mainly in the granulator barrel temperature control. Copyright © 2017 Elsevier B.V. All rights reserved.
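
    A minimal sketch of the monitoring mathematics, assuming a PCA model fit on normal-operation data: Hotelling's T² tracks variation within the model plane and Q (the squared prediction error) tracks the residuals. Synthetic data stand in for the 35 logged variables, and control limits are omitted.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)
      X_noc = rng.normal(size=(500, 35))     # normal operating conditions
      mu, sd = X_noc.mean(0), X_noc.std(0)
      Xs = (X_noc - mu) / sd                 # autoscale before PCA

      pca = PCA(n_components=5).fit(Xs)
      lam = pca.explained_variance_          # variance of each score

      def t2_and_q(x):
          xs = (x - mu) / sd
          scores = pca.transform(xs.reshape(1, -1))[0]
          t2 = np.sum(scores**2 / lam)                         # Hotelling's T^2
          resid = xs - pca.inverse_transform(scores.reshape(1, -1))[0]
          q = np.sum(resid**2)                                 # Q / SPE
          return t2, q

      x_new = rng.normal(size=35)
      x_new[10] += 8.0        # simulated disturbance, e.g. dryer air flow
      print("T2 = %.1f, Q = %.1f" % t2_and_q(x_new))

    In practice, each statistic is compared with a control limit derived from the normal-operation runs, and contribution plots apportion an alarm to individual variables, as described in the abstract.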

  12. Managing computer-controlled operations

    NASA Technical Reports Server (NTRS)

    Plowden, J. B.

    1985-01-01

    A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.

  13. Addition to the Lewis Chemical Equilibrium Program to allow computation from coal composition data

    NASA Technical Reports Server (NTRS)

    Sevigny, R.

    1980-01-01

    Changes made to the Coal Gasification Project are reported. The program was originally developed for computing equilibrium combustion in rocket engines; it can be applied directly to the entrained-flow coal gasification process. The particular problem addressed is the reduction of the coal data into a form suitable for the program, since the manual process is laborious and error-prone. A similar problem, relating the normal output of the program to parameters meaningful to the coal gasification process, is also addressed.

  14. Modeling Fluid Flow and Microbial Reactions in the Peru Accretionary Complex

    NASA Astrophysics Data System (ADS)

    Bekins, B. A.; Matmon, D.

    2002-12-01

    Accretionary complexes are sites where sediment compaction and deeper reactions drive large-scale flow systems that can affect global solute budgets. Extensive modeling and drilling studies have elucidated the origin of the fluids, pore pressures, duration of flow, and major flow paths in these settings. An important research goal is to quantify the effect of these flow systems on global chemical budgets of reactive solutes such as carbon. The Peru margin represents an end-member setting that can serve as a basis to extend the results to other margins. The sediments are relatively high in organic carbon, with an average value of 2.6%. The subduction rate, at ~9 cm/yr, and taper angle, at 14-17°, are among the largest in the world. Recent microbial studies on Ocean Drilling Program Leg 201 at the Peru accretionary margin provide many key elements needed to quantify the processes affecting organic carbon in an accretionary complex. Pore water chemistry data from Site 1230, located in the Peru accretionary prism, indicate that sulfate reduction is important in the top 8 mbsf. Below this depth, methanogenesis is the dominant process, and methane concentrations are among the highest measured at any site on Leg 201. The presence of high methane concentrations at shallow depths suggests that methane is transported upward in the prism by fluid flow. Measurements of in-situ pore pressures and temperatures also support the presence of upward fluid flow. A single in-situ pressure measurement at ~100 mbsf indicated an overpressure of 0.14 MPa. For a reasonable formation permeability of ~10^-16 m2, the measured overpressure is adequate to produce flow at a rate of ~5 mm/yr. This rate is comparable to previous model estimates of flow rates in the Peru accretionary prism. In addition, curvature in the downhole temperature profile can best be explained by upward fluid flow of 1-10 mm/yr. These data are used to constrain a two-dimensional coupled fluid flow and reactive transport model focusing on the fate of organic carbon entering the Peru accretionary complex. The proposed work is the first attempt at a quantitative estimate of the processes affecting the fate of organic carbon entering a subduction zone.
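
    The abstract's flow-rate estimate follows directly from Darcy's law, q = (k/mu) * (dP/L). The short check below uses the quoted permeability and overpressure with an assumed nominal pore-water viscosity:

      # Reproducing the order-of-magnitude Darcy flux from the abstract.
      k = 1e-16          # permeability (m^2), from the abstract
      mu = 1e-3          # pore-water viscosity (Pa s), assumed nominal value
      dP = 0.14e6        # measured overpressure (Pa) at ~100 mbsf
      L = 100.0          # depth of the measurement (m)

      q = (k / mu) * (dP / L)                    # Darcy flux (m/s)
      mm_per_yr = q * 1000 * 3600 * 24 * 365.25
      print(f"q = {q:.2e} m/s ~ {mm_per_yr:.1f} mm/yr")   # ~4-5 mm/yr

    The result, roughly 4-5 mm/yr, matches the ~5 mm/yr figure quoted above and sits within the 1-10 mm/yr range inferred from the temperature profile.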

  15. Base pressure associated with incompressible flow past wedges at high Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Warpinski, N. R.; Chow, W. L.

    1979-01-01

    A model is suggested to study the viscid-inviscid interaction associated with steady incompressible flow past wedges of arbitrary angles. It is shown from this analysis that the determination of the nearly constant pressure (base pressure) prevailing within the near wake is really the heart of the problem and this pressure can only be determined from these interactive considerations. The basic free streamline flow field is established through two discrete parameters which should adequately describe the inviscid flow around the body and the wake. The viscous flow processes such as boundary-layer buildup along the wedge surface, jet mixing, recompression, and reattachment which occurs along the region attached to the inviscid flow in the sense of the boundary-layer concept, serve to determine the aforementioned parameters needed for the establishment of the inviscid flow. It is found that the point of reattachment behaves as a saddle point singularity for the system of equations describing the viscous recompression process. Detailed results such as the base pressure, pressure distributions on the wedge surface, and the wake geometry as well as the influence of the characteristic Reynolds number are obtained. Discussion of these results and their comparison with the experimental data are reported.

  16. AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Yingssu; Stanford University, 333 Campus Drive, Mudd Building, Stanford, CA 94305-5080; McPhillips, Scott E.

    New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully demonstrated. This workflow was run once on the same 96 samples that the group had examined manually and the workflow cycled successfully through all of the samples, collected data from the same samples that were selected manually and located the same peaks of unmodeled density in the resulting difference Fourier maps.
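    The workflow pattern described here, chained pipeline stages with per-sample bookkeeping and a screening gate, can be sketched in a few lines. The sketch below is purely illustrative: every function name and the screening rule are invented, and this is not the actual RestFlow or AutoDrug API.

```python
# Illustrative sketch of the scientific-workflow pattern described above:
# each stage consumes the previous stage's output while a driver keeps
# per-sample results. All names are hypothetical.

def screen(sample):
    # stand-in screening criterion (deterministic toy rule)
    return {"sample": sample, "diffracts": sum(map(ord, sample)) % 3 != 0}

def collect(ctx):
    return {**ctx, "frames": 360}

def process_data(ctx):
    return {**ctx, "resolution_A": 1.8}

def find_fragment_density(ctx):
    return {**ctx, "bound_fragment": ctx["resolution_A"] < 2.0}

PIPELINE = [collect, process_data, find_fragment_density]

def run_workflow(samples):
    results = {}
    for sample in samples:
        ctx = screen(sample)
        if not ctx["diffracts"]:   # crystals failing screening are skipped
            results[sample] = "rejected at screening"
            continue
        for step in PIPELINE:      # chained execution, one stage at a time
            ctx = step(ctx)
        results[sample] = ctx
    return results

for sample, outcome in run_workflow(["xtal-01", "xtal-02", "xtal-03"]).items():
    print(sample, "->", outcome)
```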

  17. Environmental analysis using integrated GIS and remotely sensed data - Some research needs and priorities

    NASA Technical Reports Server (NTRS)

    Davis, Frank W.; Quattrochi, Dale A.; Ridd, Merrill K.; Lam, Nina S.-N.; Walsh, Stephen J.

    1991-01-01

    This paper discusses some basic scientific issues and research needs in the joint processing of remotely sensed and GIS data for environmental analysis. Two general topics are treated in detail: (1) scale dependence of geographic data and the analysis of multiscale remotely sensed and GIS data, and (2) data transformations and information flow during data processing. The discussion of scale dependence focuses on the theory and applications of spatial autocorrelation, geostatistics, and fractals for characterizing and modeling spatial variation. Data transformations during processing are described within the larger framework of geographical analysis, encompassing sampling, cartography, remote sensing, and GIS. Development of better user interfaces between image processing, GIS, database management, and statistical software is needed to expedite research on these and other impediments to integrated analysis of remotely sensed and GIS data.
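    Spatial autocorrelation, the first of the scale-dependence tools named above, is commonly summarized by Moran's I. Below is a minimal sketch for a gridded raster using rook (4-neighbour) contiguity weights; it is a generic illustration of the statistic, not code from the paper.

```python
import numpy as np

def morans_i(raster: np.ndarray) -> float:
    """Moran's I for a 2D raster with rook (4-neighbour) contiguity weights."""
    z = raster - raster.mean()
    num = 0.0    # sum of w_ij * z_i * z_j over all neighbour pairs
    w_sum = 0.0  # total weight W
    # pair each cell with its right and lower neighbours; counting each
    # unordered pair twice reproduces the symmetric double sum
    for dz in (z[:, 1:] * z[:, :-1], z[1:, :] * z[:-1, :]):
        num += 2.0 * dz.sum()
        w_sum += 2.0 * dz.size
    n = z.size
    return (n / w_sum) * num / (z ** 2).sum()

rng = np.random.default_rng(0)
noise = rng.normal(size=(100, 100))                      # spatially random field
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
trend = x + y                                            # autocorrelated field
print(f"Moran's I, noise: {morans_i(noise):+.3f}")       # near 0
print(f"Moran's I, trend: {morans_i(trend):+.3f}")       # near +1
```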

  18. Web-based analysis and publication of flow cytometry experiments.

    PubMed

    Kotecha, Nikesh; Krutzik, Peter O; Irish, Jonathan M

    2010-07-01

    Cytobank is a Web-based application for storage, analysis, and sharing of flow cytometry experiments. Researchers use a Web browser to log in and use a wide range of tools developed for basic and advanced flow cytometry. In addition to providing access to standard cytometry tools from any computer, Cytobank creates a platform and community for developing new analysis and publication tools. Figure layouts created on Cytobank are designed to allow transparent access to the underlying experiment annotation and data processing steps. Since all flow cytometry files and analysis data are stored on a central server, experiments and figures can be viewed or edited by anyone with the proper permission, from any computer with Internet access. Once a primary researcher has performed the initial analysis of the data, collaborators can engage in experiment analysis and make their own figure layouts using the gated, compensated experiment files. Cytobank is available to the scientific community at http://www.cytobank.org. (c) 2010 by John Wiley & Sons, Inc.

  19. Web-Based Analysis and Publication of Flow Cytometry Experiments

    PubMed Central

    Kotecha, Nikesh; Krutzik, Peter O.; Irish, Jonathan M.

    2014-01-01

    Cytobank is a web-based application for storage, analysis, and sharing of flow cytometry experiments. Researchers use a web browser to log in and use a wide range of tools developed for basic and advanced flow cytometry. In addition to providing access to standard cytometry tools from any computer, Cytobank creates a platform and community for developing new analysis and publication tools. Figure layouts created on Cytobank are designed to allow transparent access to the underlying experiment annotation and data processing steps. Since all flow cytometry files and analysis data are stored on a central server, experiments and figures can be viewed or edited by anyone with the proper permissions from any computer with Internet access. Once a primary researcher has performed the initial analysis of the data, collaborators can engage in experiment analysis and make their own figure layouts using the gated, compensated experiment files. Cytobank is available to the scientific community at www.cytobank.org PMID:20578106

  20. How predictable is the behaviour of torrential processes: two case studies of the summer 2012

    NASA Astrophysics Data System (ADS)

    Huebl, Johannes; Eisl, Julia; Janu, Stefan; Pussnig, Hanspeter

    2013-04-01

    Debris-flow hazards play an important role in the Austrian Alps, since many villages are located on alluvial fans. Most of the mitigation measures, as well as the Hazard Zone Maps, were designed by engineers of previous generations, who knew a great deal about torrential behaviour from experience; but for recurrence intervals of 100 years or more, human memory is limited. Numerical modelling, on the other hand, is a fast-growing field in dealing with natural hazards: scenarios of torrential hazards can be defined, and the corresponding deposition patterns, flow depths and velocities calculated. Errors in the input data, however, inevitably propagate into the results and can consequently threaten human life in potentially affected areas. Collecting data on exceptional events can therefore help to reproduce reality to a high degree, but unexpected events remain an issue and pose a challenge to engineers. In summer 2012 two debris-flow events occurred in Austria with quite different behaviours, from triggering mechanism and flow behaviour through to deposition: thunderstorms or long-lasting rainfall, slope failures with subsequent channel blockage and dike breaching or linear erosion, one or more debris flows, one huge debris-flow surge or a series of surges, sediments without clay or cohesive material, near-channel deposition or outspread deposits. Both debris flows were unexpected in their dimensions, although mitigation measures and hazard maps existed. Both events were documented in detail, first to understand the torrential processes that occurred, and second to identify the most suitable mitigation measures, ranging from permanent structures to temporary warning systems.

  1. Process Materialization Using Templates and Rules to Design Flexible Process Models

    NASA Astrophysics Data System (ADS)

    Kumar, Akhil; Yao, Wen

    The main idea in this paper is to show how flexible processes can be designed by combining generic process templates and business rules. We instantiate a process by applying rules to specific case data, and running a materialization algorithm. The customized process instance is then executed in an existing workflow engine. We present an architecture and also give an algorithm for process materialization. The rules are written in a logic-based language like Prolog. Our focus is on capturing deeper process knowledge and achieving a holistic approach to robust process design that encompasses control flow, resources and data, as well as makes it easier to accommodate changes to business policy.
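    As a toy illustration of the materialization idea (a generic template pruned by rules evaluated against case data), the sketch below uses Python stand-ins for the logic rules; the template, rules and case values are all invented for illustration and are not the authors' algorithm.

```python
# Hypothetical sketch of template-plus-rules process materialization:
# rules evaluated against case data prune steps from a generic template,
# yielding a customized process instance.

template = ["receive_order", "credit_check", "manager_approval",
            "fulfil_order", "express_shipping", "standard_shipping"]

# Each rule maps a predicate on case data to the steps it excludes.
rules = [
    (lambda c: c["amount"] < 1000,      {"manager_approval"}),
    (lambda c: c["customer"] == "gold", {"credit_check"}),
    (lambda c: not c["express"],        {"express_shipping"}),
    (lambda c: c["express"],            {"standard_shipping"}),
]

def materialize(template, rules, case):
    """Apply every rule whose condition holds and prune the excluded steps."""
    excluded = set()
    for condition, steps in rules:
        if condition(case):
            excluded |= steps
    return [step for step in template if step not in excluded]

case = {"amount": 250, "customer": "gold", "express": True}
print(materialize(template, rules, case))
# -> ['receive_order', 'fulfil_order', 'express_shipping']
```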

  2. Using CloudSat and the A-Train to Estimate Tropical Cyclone Intensity in the Western North Pacific

    DTIC Science & Technology

    2014-09-01

    Figure: CloudSat System Data Flow (from Cooperative Institute for Research in the Atmosphere 2008); the remainder of the indexed excerpt (fragments of the report's acronym list) is not recoverable.

  3. Framework for developing hybrid process-driven, artificial neural network and regression models for salinity prediction in river systems

    NASA Astrophysics Data System (ADS)

    Hunter, Jason M.; Maier, Holger R.; Gibbs, Matthew S.; Foale, Eloise R.; Grosvenor, Naomi A.; Harders, Nathan P.; Kikuchi-Miller, Tahali C.

    2018-05-01

    Salinity modelling in river systems is complicated by a number of processes, including in-stream salt transport and various mechanisms of saline accession that vary dynamically as a function of water level and flow, often at different temporal scales. Traditionally, salinity models in rivers have either been process- or data-driven. The primary problem with process-based models is that in many instances, not all of the underlying processes are fully understood or able to be represented mathematically. There are also often insufficient historical data to support model development. The major limitation of data-driven models, such as artificial neural networks (ANNs) in comparison, is that they provide limited system understanding and are generally not able to be used to inform management decisions targeting specific processes, as different processes are generally modelled implicitly. In order to overcome these limitations, a generic framework for developing hybrid process and data-driven models of salinity in river systems is introduced and applied in this paper. As part of the approach, the most suitable sub-models are developed for each sub-process affecting salinity at the location of interest based on consideration of model purpose, the degree of process understanding and data availability, which are then combined to form the hybrid model. The approach is applied to a 46 km reach of the Murray River in South Australia, which is affected by high levels of salinity. In this reach, the major processes affecting salinity include in-stream salt transport, accession of saline groundwater along the length of the reach and the flushing of three waterbodies in the floodplain during overbank flows of various magnitudes. Based on trade-offs between the degree of process understanding and data availability, a process-driven model is developed for in-stream salt transport, an ANN model is used to model saline groundwater accession and three linear regression models are used to account for the flushing of the different floodplain storages. The resulting hybrid model performs very well on approximately 3 years of daily validation data, with a Nash-Sutcliffe efficiency (NSE) of 0.89 and a root mean squared error (RMSE) of 12.62 mg L^-1 (over a range from approximately 50 to 250 mg L^-1). Each component of the hybrid model results in noticeable improvements in model performance corresponding to the range of flows for which they are developed. The predictive performance of the hybrid model is significantly better than that of a benchmark process-driven model (NSE = -0.14, RMSE = 41.10 mg L^-1, Gbench index = 0.90) and slightly better than that of a benchmark data-driven (ANN) model (NSE = 0.83, RMSE = 15.93 mg L^-1, Gbench index = 0.36). Apart from improved predictive performance, the hybrid model also has advantages over the ANN benchmark model in terms of increased capacity for improving system understanding and greater ability to support management decisions.
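    The two headline metrics, NSE and RMSE, are easy to restate in code. A minimal sketch follows; the salinity values are invented purely for illustration (the study's validation used roughly three years of daily data).

```python
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Root mean squared error, in the units of the data (here mg/L)."""
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# Illustrative daily salinity series (mg/L), not the study's data.
obs = np.array([60.0, 85.0, 140.0, 210.0, 180.0, 95.0])
sim = np.array([65.0, 80.0, 150.0, 200.0, 185.0, 90.0])
print(f"NSE  = {nse(obs, sim):.2f}")
print(f"RMSE = {rmse(obs, sim):.2f} mg/L")
```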

  4. Integrating Flow, Form, and Function for Improved Environmental Water Management

    NASA Astrophysics Data System (ADS)

    Albin Lane, Belize Arela

    Rivers are complex, dynamic natural systems. The performance of river ecosystem functions, such as habitat availability and sediment transport, depends on the interplay of hydrologic dynamics (flow) and geomorphic settings (form). However, most river restoration studies evaluate the role of either flow or form without regard for their dynamic interactions. Despite substantial recent interest in quantifying environmental water requirements to support integrated water management efforts, the absence of quantitative, transferable relationships between river flow, form, and ecosystem functions remains a major limitation. This research proposes a novel, process-driven methodology for evaluating river flow-form-function linkages in support of basin-scale environmental water management. This methodology utilizes publicly available geospatial and time-series data and targeted field data collection to improve basic understanding of river systems with limited data and resource requirements. First, a hydrologic classification system is developed to characterize natural hydrologic variability across a highly altered, physio-climatically diverse landscape. Next, a statistical analysis is used to characterize reach-scale geomorphic variability and to investigate the utility of topographic variability attributes (TVAs, subreach-scale undulations in channel width and depth), alongside traditional reach-averaged attributes, for distinguishing dominant geomorphic forms and processes across a hydroscape. Finally, the interacting roles of flow (hydrologic regime, water year type, and hydrologic impairment) and form (channel morphology) are quantitatively evaluated with respect to ecosystem functions related to hydrogeomorphic processes, aquatic habitat, and riparian habitat. Synthetic river corridor generation is used to evaluate and isolate the role of distinct geomorphic attributes without the need for intensive topographic surveying. This three-part methodology was successfully applied in the Sacramento Basin of California, USA, a large, heavily altered Mediterranean-montane basin. A spatially-explicit hydrologic classification of California distinguished eight natural hydrologic regimes representing distinct flow sources, hydrologic characteristics, and rainfall-runoff controls. A hydro-geomorphic sub-classification of the Sacramento Basin based on stratified random field surveys of 161 stream reaches distinguished nine channel types consisting of both previously identified and new channel types. Results indicate that TVAs provide a quantitative basis for interpreting non-uniform as well as uniform geomorphic processes to better distinguish linked channel forms and functions of ecological significance. Finally, evaluation of six ecosystem functions across alternative flow-form scenarios in the Yuba River watershed highlights critical tradeoffs in ecosystem performance and emphasizes the significance of spatiotemporal diversity of flow and form for maintaining ecosystem integrity. The methodology developed in this dissertation is broadly applicable and extensible to other river systems and ecosystem functions, where findings can be used to characterize complex controls on river ecosystems, assess impacts of proposed flow and form alterations, and inform river restoration strategies. Overall, this research improves scientific understanding of the linkages between hydrology, geomorphology, and river ecosystems to more efficiently allocate scarce water resources for human and environmental objectives across natural and built landscapes.

  5. Heat flow and thermal processes in the Jornada del Muerto, New Mexico

    NASA Technical Reports Server (NTRS)

    Reiter, M.

    1985-01-01

    Most heat flow data in rifts are uncertain largely because of hydrologic disturbances in regions of extensive fracturing. Estimates of heat flow in deep petroleum tests within a large basin of the Rio Grande rift, which has suffered little syn-rift fracturing, may begin to provide clearer insight into the relationships between high heat flow and crustal thinning processes. The Jornada del Muerto is a large basin located in the Rio Grande rift of south-central New Mexico. The region of interest within the Jornada del Muerto is centered about 30 km east of the town of Truth or Consequences, and is approximately 60 km north-south by 30 km east-west. High heat flows are estimated for the region. Values increase from about 90 mW m^-2 in the northern part of the study area to about 125 mW m^-2 in the southern part. These high heat flows are rather enigmatic because in the immediate vicinities of the sites there is little evidence of Cenozoic volcanism or syn-rift extensional tectonics. It is suggested that the geothermal anomaly in the southern Jornada del Muerto (approx. 125 to approx. 95 mW m^-2) results from some type of mass movement-heat transfer mechanism operating in the crust just below the elastic layer. This conclusion is consistent with the geologic and geophysical data, which describe a thin crust apparently devoid of features indicative of extensional tectonics in the upper part of the elastic crust.

  6. Development of unconfined conditions in multi-aquifer flow systems: a case study in the Rajshahi Barind, Bangladesh

    NASA Astrophysics Data System (ADS)

    Rushton, K. R.; Zaman, M. Asaduz

    2017-01-01

    Identifying flow processes in multi-aquifer flow systems is a considerable challenge, especially if substantial abstraction occurs. The Rajshahi Barind groundwater flow system in Bangladesh provides an example of the manner in which flow processes can change with time. At some locations there has been a decrease with time in groundwater heads and also in the magnitude of the seasonal fluctuations. This report describes the important stages in a detailed field and modelling study at a specific location in this groundwater flow system. To understand more about the changing conditions, piezometers were constructed in 2015 at different depths but the same location; water levels in these piezometers indicate the formation of an additional water table. Conceptual models are described which show how conditions have changed between the years 2000 and 2015. Following the formation of the additional water table, the aquifer system is conceptualised as two units. A pumping test is described with data collected during both the pumping and recovery phases. Pumping test data for the Lower Unit are analysed using a computational model with estimates of the aquifer parameters; the model also provided estimates of the quantity of water moving from the ground surface, through the Upper Unit, to provide an input to the Lower Unit. The reasons for the substantial changes in the groundwater heads are identified; monitoring of the recently formed additional water table provides a means of testing whether over-abstraction is occurring.

  7. A question of scale: how emplacement observations of small, individual lava flows may inform our understanding of large, compound flow fields

    NASA Astrophysics Data System (ADS)

    Applegarth, Jane; James, Mike; Pinkerton, Harry

    2010-05-01

    The early stages of effusive volcanic eruptions, during which lava flows are lengthening, are often closely monitored for hazard management. Processes involved in lengthening are therefore relatively well understood, and lava flow development during this phase can be modelled with some success [1,2]. However, activity may continue after the lavas have reached their maximum length, leading to flow inflation, breakouts and possibly further lengthening of the flow field [3,4]. These processes can be difficult to observe during activity, and may result in highly complex flow morphologies that are not easily interpreted post-eruption. The late-stage development of compound flow fields is therefore important, but is currently an understudied area. The scale of this activity may vary greatly, and probably depends in part on the eruption duration. For example, the largest flow field emplaced during the 2001 eruption of Mt. Etna, Sicily, reached its maximum length of 6 km in 8 days, then was active for a further 2 weeks only. This 'late-stage' activity involved the initiation of two new channels, a few tens of metres wide, which reached lengths of up to ~2 km. In contrast, the 2008-9 Etna eruption emplaced 6 km long flows within ~6 weeks, then activity continued for a further year. During the last few months of activity, small transient flows were extruded from ephemeral vents, several of which could be active at any given time. Observations of the late-stage activity of this flow field as a whole allowed the influence of parameters such as effusion rate and topography on the overall morphology to be studied [5]. Furthermore, the scale of the individual flow units (a few metres wide, a few hundreds of metres long) meant that additional close-range measurements of their short-term development could be carried out, and the results are discussed here. We observed the behaviour of three such flow units, which were fed by a single ephemeral vent, over a 26-hour period within the last month of the 2008-9 Etna eruption. These were monitored using a time-lapse camera, only ~50 m from the vent, that collected images every 3 minutes. From the suite of images collected we observed flow inflation, changing surface textures, overflows, the formation of surface flows and breakouts, and the switching of activity between channels. These data provide unique insights into the processes that lead to the cessation of activity of small flows, and the initiation of new flow units. This approach, whereby processes are studied on small spatial and short temporal scales, may inform our interpretation of complex morphology in larger flow fields, such as that emplaced during the 2001 Etna eruption. Although the flow units in this case were an order of magnitude larger, the sequence of events leading to the initiation of new channels may be very similar. [1] Wright R, Garbeil H, Harris AJL (2008) Using infrared satellite data to drive a thermo-rheological/stochastic lava flow emplacement model: A method for near-real-time volcanic hazard assessment. Geophys Res Lett 35: L19307 [2] Vicari A, Herault A, Del Negro C, Coltelli M, Marsella M, Proietti C (2007) Modelling of the 2001 lava flow at Etna volcano by a Cellular Automata approach. Environ Model & Softw 22(10):1465-1471 [3] Luhr JF, Simkin T (1993) Parícutin, the volcano born in a Mexican cornfield. Geoscience Press, Arizona [4] Kilburn CRJ, Guest JE (1993) ʻAʻā lavas of Mount Etna, Sicily. In: Kilburn CRJ, Luongo G (eds) Active lavas: monitoring and modelling. UCL Press, London, 73-106 [5] Pinkerton H, James MR, Applegarth LJ (2010) The importance of high resolution time-lapse imagery in unravelling complex processes during effusive volcanic eruptions. EGU Abstract 5193

  8. Groundwater flow in the transition zone between freshwater and saltwater: a field-based study and analysis of measurement errors

    NASA Astrophysics Data System (ADS)

    Post, Vincent E. A.; Banks, Eddie; Brunke, Miriam

    2018-02-01

    The quantification of groundwater flow near the freshwater-saltwater transition zone at the coast is difficult because of variable-density effects and tidal dynamics. Head measurements were collected along a transect perpendicular to the shoreline at a site south of the city of Adelaide, South Australia, to determine the transient flow pattern. This paper presents a detailed overview of the measurement procedure, data post-processing methods and uncertainty analysis in order to assess how measurement errors affect the accuracy of the inferred flow patterns. A particular difficulty encountered was that some of the piezometers were leaky, which necessitated regular measurements of the electrical conductivity and temperature of the water inside the wells to correct for density effects. Other difficulties included failure of pressure transducers, data logger clock drift and operator error. The data obtained were sufficiently accurate to show that there is net seaward horizontal flow of freshwater in the top part of the aquifer, and a net landward flow of saltwater in the lower part. The vertical flow direction alternated with the tide, but due to the large uncertainty of the head gradients and density terms, no net flow could be established with any degree of confidence. While the measurement problems were amplified under the prevailing conditions at the site, similar errors can lead to large uncertainties everywhere. The methodology outlined acknowledges the inherent uncertainty involved in measuring groundwater flow. It can also assist to establish the accuracy requirements of the experimental setup.
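    The density correction mentioned above is commonly handled by converting measured point-water heads to equivalent freshwater heads. A minimal sketch of that standard conversion follows; the paper's exact post-processing procedure may differ, and the example numbers are invented.

```python
# Standard point-water to equivalent-freshwater head conversion, of the
# kind needed when well water has variable salinity. Symbols:
#   h_p  measured (point-water) head [m]
#   z    elevation of the piezometer screen midpoint [m]
#   rho  average density of the water column in the well [kg/m^3]

RHO_FRESH = 1000.0  # reference freshwater density [kg/m^3]

def freshwater_head(h_p: float, z: float, rho: float) -> float:
    """Equivalent freshwater head: h_f = z + (rho / rho_f) * (h_p - z)."""
    return z + (rho / RHO_FRESH) * (h_p - z)

# Example: screen at z = -20 m, measured head 1.20 m, brackish water column
print(f"{freshwater_head(1.20, -20.0, 1015.0):.2f} m")  # ~1.52 m
```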

  9. Hydrodynamic modelling and global datasets: Flow connectivity and SRTM data, a Bangkok case study.

    NASA Astrophysics Data System (ADS)

    Trigg, M. A.; Bates, P. B.; Michaelides, K.

    2012-04-01

    The rise of globally interconnected manufacturing supply chains requires an understanding and consistent quantification of flood risk at a global scale. Flood risk is often better quantified (or at least more precisely defined) in regions where there has been an investment in comprehensive topographical data collection, such as LiDAR, coupled with detailed hydrodynamic modelling. Yet in regions where these data and modelling are unavailable, the implications of flooding and the knock-on effects for global industries can be dramatic, as evidenced by the recent floods in Bangkok, Thailand. There is growing momentum in global modelling initiatives to address this lack of a consistent understanding of flood risk, and they will rely heavily on the application of available global datasets relevant to hydrodynamic modelling, such as Shuttle Radar Topography Mission (SRTM) data and its derivatives. These global datasets bring opportunities to apply consistent methodologies on an automated basis in all regions, while the use of coarser-scale datasets also brings many challenges, such as sub-grid process representation and downscaled hydrology data from global climate models. There are significant opportunities for hydrological science in helping define new, realistic and physically based methodologies that can be applied globally, as well as the possibility of gaining new insights into flood risk through analysis of the many large datasets that will be derived from this work. We use Bangkok as a case study to explore some of the issues related to using these available global datasets for hydrodynamic modelling, with particular focus on using SRTM data to represent topography. Research has shown that flow connectivity on the floodplain is an important component in the dynamics of flood flows onto and off the floodplain, and indeed within different areas of the floodplain. A lack of representation of flow connectivity, often due to data resolution limitations, means that important subgrid processes are missing from hydrodynamic models, leading to poor model predictive capabilities. Specifically here, the issue of flow connectivity during flood events is explored using geostatistical techniques to quantify the change of flow connectivity on floodplains due to grid rescaling methods. We also test whether this method of assessing connectivity can be used as a new tool in the quantification of flood risk, one that moves beyond the simple flood-extent approach, encapsulating threshold changes and data limitations.
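    One simple way to see how grid rescaling degrades floodplain connectivity, in the spirit of the analysis described above though not the authors' actual geostatistical indicators, is to coarsen a binary wet/dry mask and track the fraction of wet cells in the largest connected component:

```python
import numpy as np
from scipy import ndimage

def connectivity(wet: np.ndarray) -> float:
    """Fraction of wet cells belonging to the largest connected component."""
    labels, n = ndimage.label(wet)
    if n == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]  # skip background label 0
    return sizes.max() / wet.sum()

def coarsen(wet: np.ndarray, factor: int) -> np.ndarray:
    """Block-average then re-threshold, mimicking DEM/grid rescaling."""
    n = wet.shape[0] // factor * factor
    blocks = wet[:n, :n].reshape(n // factor, factor, n // factor, factor)
    return blocks.mean(axis=(1, 3)) > 0.5

rng = np.random.default_rng(3)
field = ndimage.gaussian_filter(rng.normal(size=(512, 512)), sigma=3)
wet = field > 0.2                            # synthetic floodplain mask
for f in (1, 4, 16):
    print(f"cell size x{f:<2d}: connectivity = "
          f"{connectivity(coarsen(wet, f)):.3f}")
```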

  10. Mechanical Design of a Performance Test Rig for the Turbine Air-Flow Task (TAFT)

    NASA Technical Reports Server (NTRS)

    Xenofos, George; Forbes, John; Farrow, John; Williams, Robert; Tyler, Tom; Sargent, Scott; Moharos, Jozsef

    2003-01-01

    To support development of the Boeing-Rocketdyne RS84 rocket engine, a full-flow reaction turbine geometry was integrated into the NASA-MSFC turbine air-flow test facility. A mechanical design was generated which minimized the amount of new hardware while incorporating all test and instrumentation requirements. This paper provides details of the mechanical design for this Turbine Air-Flow Task (TAFT) test rig. The mechanical design process utilized for this task included the following basic stages: Conceptual Design; Preliminary Design; Detailed Design; Baseline of Design (including Configuration Control and Drawing Revision); Fabrication; and Assembly. During the design process, many lessons were learned that should benefit future test-rig design projects. Of primary importance are well-defined requirements early in the design process, a thorough detailed design package, and effective communication with both the customer and the fabrication contractors. The test rig provided the steady and unsteady pressure data necessary to validate the computational fluid dynamics (CFD) code. The rig also helped characterize the turbine blade loading conditions. Test and CFD analysis results are to be presented in another JANNAF paper.

  11. Characterization of Hot Deformation Behavior of a Fe-Cr-Ni-Mo-N Superaustenitic Stainless Steel Using Dynamic Materials Modeling

    NASA Astrophysics Data System (ADS)

    Pu, Enxiang; Zheng, Wenjie; Song, Zhigang; Feng, Han; Zhu, Yuliang

    2017-03-01

    Hot deformation behavior of a Fe-24Cr-22Ni-7Mo-0.5N superaustenitic stainless steel was investigated by hot compression tests over a wide temperature range of 950-1250 °C and strain rate range of 0.001-10 s^-1. The flow curves show that the flow stress decreases as the deformation temperature increases or the strain rate decreases. Processing maps developed on the basis of the dynamic materials model and the flow stress data were used to optimize the hot-working parameters. Strains higher than 0.2 were found to have no significant effect on the processing maps. The optimum processing conditions lie in the temperature range of 1125-1220 °C and strain rate range of 0.1-3 s^-1; compared with other stable domains, microstructural observations in this domain revealed complete dynamic recrystallization (DRX) with a finer and more uniform grain size. Flow instability occurred in the domain of temperatures lower than 1100 °C and strain rates higher than 0.1 s^-1.
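    Processing maps of this kind rest on the strain-rate sensitivity m = d(ln sigma)/d(ln strain rate) at fixed strain and temperature, and on the power-dissipation efficiency eta = 2m/(m+1). A minimal sketch follows; the flow-stress values are invented for illustration, not the paper's measurements.

```python
import numpy as np

# Strain-rate sensitivity and dissipation efficiency, the two quantities
# underlying dynamic-materials-model processing maps. Illustrative data.
strain_rate = np.array([0.001, 0.01, 0.1, 1.0, 10.0])       # [1/s]
flow_stress = np.array([95.0, 130.0, 175.0, 240.0, 330.0])  # [MPa], invented

# m from a numerical gradient in log-log space
m = np.gradient(np.log(flow_stress), np.log(strain_rate))
eta = 2.0 * m / (1.0 + m)  # efficiency of power dissipation

for sr, mi, ei in zip(strain_rate, m, eta):
    print(f"strain rate {sr:>6.3f} 1/s:  m = {mi:.3f},  eta = {ei:.1%}")
# Instability is commonly flagged where
# xi = d(ln(m/(m+1)))/d(ln strain rate) + m < 0.
```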

  12. Spectral kinetic energy transfer in turbulent premixed reacting flows.

    PubMed

    Towery, C A Z; Poludnenko, A Y; Urzay, J; O'Brien, J; Ihme, M; Hamlington, P E

    2016-05-01

    Spectral kinetic energy transfer by advective processes in turbulent premixed reacting flows is examined using data from a direct numerical simulation of a statistically planar turbulent premixed flame. Two-dimensional turbulence kinetic-energy spectra conditioned on the planar-averaged reactant mass fraction are computed through the flame brush and variations in the spectra are connected to terms in the spectral kinetic energy transport equation. Conditional kinetic energy spectra show that turbulent small-scale motions are suppressed in the burnt combustion products, while the energy content of the mean flow increases. An analysis of spectral kinetic energy transfer further indicates that, contrary to the net down-scale transfer of energy found in the unburnt reactants, advective processes transfer energy from small to large scales in the flame brush close to the products. Triadic interactions calculated through the flame brush show that this net up-scale transfer of energy occurs primarily at spatial scales near the laminar flame thermal width. The present results thus indicate that advective processes in premixed reacting flows contribute to energy backscatter near the scale of the flame.
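    A generic version of the underlying diagnostic, a shell-averaged two-dimensional kinetic-energy spectrum, can be written compactly. The sketch below uses a synthetic velocity field on a periodic grid; it is not the paper's conditional, flame-brush-resolved computation.

```python
import numpy as np

def energy_spectrum_2d(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Shell-averaged 2D kinetic-energy spectrum E(k) on a periodic N x N
    grid (a generic diagnostic, not the paper's conditional spectra)."""
    n = u.shape[0]
    uh, vh = np.fft.fft2(u) / u.size, np.fft.fft2(v) / v.size
    e2d = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2)  # spectral KE density
    kx = np.fft.fftfreq(n, d=1.0 / n)                # integer wavenumbers
    kmag = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    shells = np.arange(0.5, n // 2)                  # integer-k shell edges
    k_index = np.digitize(kmag.ravel(), shells)
    ek = np.bincount(k_index, weights=e2d.ravel(), minlength=len(shells) + 1)
    return ek[1:len(shells)]                         # drop the mean-flow bin

rng = np.random.default_rng(1)
u, v = rng.normal(size=(128, 128)), rng.normal(size=(128, 128))
print(energy_spectrum_2d(u, v)[:5])
```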

  13. Object-oriented Persistent Homology

    PubMed Central

    Wang, Bao; Wei, Guo-Wei

    2015-01-01

    Persistent homology provides a new approach for the topological simplification of big data via measuring the lifetime of intrinsic topological features in a filtration process and has found success in scientific and engineering applications. However, such success is essentially limited to qualitative data classification and analysis. Indeed, persistent homology has rarely been employed for quantitative modeling and prediction. Additionally, the present persistent homology is a passive tool, rather than a proactive technique, for classification and analysis. In this work, we outline a general protocol to construct object-oriented persistent homology methods. By means of differential geometry theory of surfaces, we construct an objective functional, namely, a surface free energy defined on the data of interest. The minimization of the objective functional leads to a Laplace-Beltrami operator which generates a multiscale representation of the initial data and offers an objective-oriented filtration process. The resulting differential-geometry-based object-oriented persistent homology is able to preserve desirable geometric features in the evolutionary filtration and enhances the corresponding topological persistence. The cubical complex based homology algorithm is employed in the present work to be compatible with the Cartesian representation of the Laplace-Beltrami flow. The proposed Laplace-Beltrami flow based persistent homology method is extensively validated. The consistency between Laplace-Beltrami flow based filtration and Euclidean distance based filtration is confirmed on the Vietoris-Rips complex in a large number of numerical tests. The convergence and reliability of the present Laplace-Beltrami flow based cubical complex filtration approach are analyzed over various spatial and temporal mesh sizes. The Laplace-Beltrami flow based persistent homology approach is utilized to study the intrinsic topology of proteins and fullerene molecules. Based on a quantitative model which correlates the topological persistence of the fullerene central cavity with the total curvature energy of the fullerene structure, the proposed method is used for the prediction of fullerene isomer stability. The efficiency and robustness of the present method are verified by more than 500 fullerene molecules. It is shown that the proposed persistent homology based quantitative model offers good predictions of total curvature energies for ten types of fullerene isomers. The present work offers the first example of designing object-oriented persistent homology to enhance or preserve desirable features in the original data during the filtration process and then automatically detect or extract the corresponding topological traits from the data. PMID:26705370

  14. Requirements Specification Document

    DOT National Transportation Integrated Search

    1996-04-26

    The System Definition Document identifies the top-level processes, data flows, and system controls for the Gary-Chicago-Milwaukee (GCM) Corridor Transportation Information Center (C-TIC). This Requirements Specification establishes the requirements...

  15. Introduction to Biotechnology Regulation for Pesticides

    EPA Pesticide Factsheets

    Includes data requirements for the registration of plant-incorporated protectants (PIP), gene flow assessment, ecological non-target organism risk assessment process, environmental fate, insect resistance management in Bt crops.

  16. KSC's work flow assistant

    NASA Technical Reports Server (NTRS)

    Wilkinson, John; Johnson, Earl

    1991-01-01

    The work flow assistant (WFA) is an advanced technology project under the shuttle processing data management system (SPDMS) at Kennedy Space Center (KSC). It will be utilized for short range scheduling, controlling work flow on the floor, and providing near real-time status for all major space transportation systems (STS) work centers at KSC. It will increase personnel and STS safety and improve productivity through deeper active scheduling that includes tracking and correlation of STS and ground support equipment (GSE) configuration and work. It will also provide greater accessibility to this data. WFA defines a standards concept for scheduling data which permits both commercial off-the-shelf (COTS) scheduling tools and WFA developed applications to be reused. WFA will utilize industry standard languages and workstations to achieve a scalable, adaptable, and portable architecture which may be used at other sites.

  17. Model to interpret pulsed-field-gradient NMR data including memory and superdispersion effects.

    PubMed

    Néel, Marie-Christine; Bauer, Daniela; Fleury, Marc

    2014-06-01

    We propose a versatile model specifically designed for the quantitative interpretation of NMR velocimetry data. We use the concept of mobile or immobile tracer particles applied in dispersion theory in its Lagrangian form, adding two mechanisms: (i) independent random arrests of finite average duration, representing intermittent periods spent in very-low-velocity zones in the mean flow direction, and (ii) the possibility of unexpectedly long (but rare) displacements, simulating the occurrence of very high velocities in the porous medium. Based on mathematical properties related to subordinated Lévy processes, we give analytical expressions for the signals recorded in pulsed-field-gradient NMR experiments. We illustrate how to use the model for quantifying dispersion from NMR data recorded for water flowing through a homogeneous grain-pack column under single- and two-phase flow conditions.

  18. Feature-Based Statistical Analysis of Combustion Simulation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, J; Krishnamoorthy, V; Liu, S

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion science; however, it is applicable to many other science domains.
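    A much-simplified stand-in for the per-feature statistics described above thresholds a scalar field, labels connected components as features, and records per-feature moments; the actual framework pre-computes merge trees so that every threshold is available from a single pass. All names and data below are illustrative.

```python
import numpy as np
from scipy import ndimage

# Toy version of feature-based statistics: threshold a smooth scalar field,
# treat connected components as "features", and report per-feature moments.
rng = np.random.default_rng(2)
temperature = ndimage.gaussian_filter(rng.normal(size=(256, 256)), sigma=8)

threshold = temperature.mean() + temperature.std()
labels, n_features = ndimage.label(temperature > threshold)

for fid in range(1, n_features + 1):
    vals = temperature[labels == fid]
    print(f"feature {fid}: size={vals.size:5d}  "
          f"mean={vals.mean():+.3f}  var={vals.var():.4f}")
```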

  19. Role of natural analogs in performance assessment of nuclear waste repositories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sagar, B.; Wittmeyer, G.W.

    1995-09-01

    Mathematical models of the flow of water and transport of radionuclides in porous media will be used to assess the ability of deep geologic repositories to safely contain nuclear waste. These models must, in some sense, be validated to ensure that they adequately describe the physical processes occurring within the repository and its geologic setting. Inasmuch as the spatial and temporal scales over which these models must be applied in performance assessment are very large, validation of these models against laboratory and small-scale field experiments may be considered inadequate. Natural analogs may provide validation data that are representative of physico-chemical processes that occur over spatial and temporal scales as large or larger than those relevant to repository design. The authors discuss the manner in which natural analog data may be used to increase confidence in performance assessment models and conclude that, while these data may be suitable for testing the basic laws governing flow and transport, there is insufficient control of boundary and initial conditions and forcing functions to permit quantitative validation of complex, spatially distributed flow and transport models. The authors also express their opinion that collecting adequate data from natural analogs will require far greater resources than are devoted to them at present.

  20. DataFed: A Federated Data System for Visualization and Analysis of Spatio-Temporal Air Quality Data

    NASA Astrophysics Data System (ADS)

    Husar, R. B.; Hoijarvi, K.

    2017-12-01

    DataFed is a distributed web-services-based computing environment for accessing, processing, and visualizing atmospheric data in support of air quality science and management. The flexible, adaptive environment facilitates the access and flow of atmospheric data from provider to users by enabling the creation of user-driven data processing/visualization applications. DataFed 'wrapper' components non-intrusively wrap heterogeneous, distributed datasets for access by standards-based GIS web services. The mediator components (also web services) map the heterogeneous data into a spatio-temporal data model. Chained web services provide homogeneous data views (e.g., geospatial, time views) using a global multi-dimensional data model. In addition to data access and rendering, the data processing component services can be programmed for filtering, aggregation, and fusion of multidimensional data. Complete applications are written in a custom-made data-flow language. Currently, the federated data pool consists of over 50 datasets originating from globally distributed data providers delivering surface-based air quality measurements, satellite observations, emissions data as well as regional and global-scale air quality models. The web browser-based user interface allows point-and-click navigation and browsing of the XYZT multi-dimensional data space. The key applications of DataFed are for exploring spatial patterns of pollutants; seasonal, weekly, and diurnal cycles; and frequency distributions for exploratory air quality research. Since 2008, DataFed has been used to support EPA in the implementation of the Exceptional Event Rule. The data system is also used at universities in the US, Europe and Asia.
