Single machine scheduling with slack due dates assignment
NASA Astrophysics Data System (ADS)
Liu, Weiguo; Hu, Xiangpei; Wang, Xuyin
2017-04-01
This paper considers a single machine scheduling problem in which each job is assigned an individual due date based on a common flow allowance (i.e. all jobs have slack due dates). The goal is to find a job sequence, together with a due date assignment, that minimizes a non-regular criterion comprising the total weighted absolute lateness value and the common flow allowance cost, where the weights are position-dependent. An ? time algorithm is proposed to solve this problem. Some extensions of the problem are also discussed.
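The objective above can be made concrete with a small sketch. This is a minimal illustration, not the paper's algorithm (whose running time is garbled in the record): it evaluates the position-weighted absolute lateness plus flow-allowance cost under SLK due dates, and brute-forces the best sequence for small instances. The names (`slk_cost`, `best_schedule`) and the linear flow-allowance cost term are assumptions for illustration.

```python
from itertools import permutations

def slk_cost(seq, p, w, q, flow_cost):
    """Total position-weighted absolute lateness plus flow-allowance cost.

    Under SLK (slack) due-date assignment each job j gets d_j = p_j + q,
    where q is the common flow allowance shared by all jobs.
    w[r] is the weight attached to sequence position r.
    """
    t, total = 0, 0.0
    for r, j in enumerate(seq):
        t += p[j]                   # completion time of the r-th scheduled job
        d = p[j] + q                # slack (SLK) due date of job j
        total += w[r] * abs(t - d)  # position-dependent absolute lateness
    return total + flow_cost * q

def best_schedule(p, w, q, flow_cost):
    """Brute-force search over job sequences (fine for small n only)."""
    jobs = range(len(p))
    return min(permutations(jobs), key=lambda s: slk_cost(s, p, w, q, flow_cost))
```

For example, with processing times [2, 5, 3], positional weights [3, 2, 1] and allowance q = 1, `slk_cost` totals each job's weighted deviation from its own slack due date.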
Saenz-Agudelo, P; Jones, G P; Thorrold, S R; Planes, S
2009-04-01
The application of spatially explicit models of population dynamics to fisheries management and the design of marine reserve network systems has been limited due to a lack of empirical estimates of larval dispersal. Here we compared assignment tests and parentage analysis for examining larval retention and connectivity under two different gene flow scenarios using panda clownfish (Amphiprion polymnus) in Papua New Guinea. A metapopulation of panda clownfish in Bootless Bay with little or no genetic differentiation among five spatially discrete locations separated by 2-6 km provided the high gene flow scenario. The low gene flow scenario compared the Bootless Bay metapopulation with a genetically distinct population (F(ST) = 0.1) located at Schumann Island, New Britain, 1500 km to the northeast. We used assignment tests and parentage analysis based on microsatellite DNA data to identify the natal origins of 177 juveniles in Bootless Bay and 73 juveniles at Schumann Island. At low rates of gene flow, assignment tests correctly classified juveniles to their source population. On the other hand, parentage analysis led to an overestimate of self-recruitment within the two populations due to the significant deviation from panmixia when both populations were pooled. At high gene flow (within Bootless Bay), assignment tests underestimated self-recruitment and connectivity among subpopulations, and grossly overestimated self-recruitment within the overall metapopulation. However, the assignment tests did identify immigrants from distant (genetically distinct) populations. Parentage analysis clearly provided the most accurate estimates of connectivity in situations of high gene flow.
Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling
NASA Astrophysics Data System (ADS)
Reed, Seann M.
2003-09-01
The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT), has been developed to automatically, accurately, and efficiently assign flow directions to coarse-resolution grid cells using information from a higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency to produce diagonal flow directions. Analyses of results at output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that produces minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.
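To make flow-direction assignment concrete, here is a minimal sketch of the underlying D8 (steepest-descent) step that algorithms such as COTAT build on; it is not COTAT itself, which additionally traces cell outlets on the fine grid against an area threshold. The function name and grid conventions are assumptions.

```python
import numpy as np

# The eight D8 neighbour offsets; diagonals are sqrt(2) cells away.
D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_direction(dem, r, c):
    """Return the index into D8 of the steepest downslope neighbour of
    cell (r, c), or None for a pit or flat cell (no downslope neighbour)."""
    best, best_slope = None, 0.0
    rows, cols = dem.shape
    for k, (dr, dc) in enumerate(D8):
        rr, cc = r + dr, c + dc
        if 0 <= rr < rows and 0 <= cc < cols:
            dist = (dr * dr + dc * dc) ** 0.5       # 1 or sqrt(2)
            slope = (dem[r, c] - dem[rr, cc]) / dist  # positive = downhill
            if slope > best_slope:
                best, best_slope = k, slope
    return best
```

On a plane tilted toward the upper-right, the center cell drains to its upper-right neighbour; on a flat DEM no direction is assigned.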
Adaptive protection algorithm and system
Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA
2009-04-28
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
NASA Technical Reports Server (NTRS)
Mielke, Roland V. (Inventor); Stoughton, John W. (Inventor)
1990-01-01
Computationally complex primitive operations of an algorithm are executed concurrently in a plurality of functional units under the control of an assignment manager. The algorithm is preferably defined as a computational marked graph containing data status edges (paths) corresponding to each of the data flow edges. The assignment manager assigns primitive operations to the functional units and monitors completion of the primitive operations to determine data availability using the computational marked graph of the algorithm. All data accessing of the primitive operations is performed by the functional units independently of the assignment manager.
Improvement of a 2D numerical model of lava flows
NASA Astrophysics Data System (ADS)
Ishimine, Y.
2013-12-01
I propose an improved procedure that reduces an improper dependence of lava flow directions on the orientation of the Digital Elevation Model (DEM) in two-dimensional simulations based on Ishihara et al. (in Lava Flows and Domes, Fink, JH eds., 1990). The numerical model for lava flow simulations proposed by Ishihara et al. (1990) is based on a two-dimensional shallow-water model combined with a constitutive equation for a Bingham fluid. It is simple but useful because it properly reproduces the distributions of actual lava flows. It is thus regarded as pioneering work in the numerical simulation of lava flows and is still widely used in practical hazard prediction maps for civil defense officials in Japan. However, the model includes an improper dependence of lava flow directions on the orientation of the DEM, because it separately assigns the condition for the lava flow to stop due to yield stress along each of the two orthogonal axes of the rectangular calculation grid based on the DEM. This procedure produces a diamond-shaped distribution, as shown in Fig. 1, when calculating a lava flow supplied from a point source on a virtual flat plane, although the distribution should be circular. To remedy this drawback, I propose a modified procedure that uses the absolute value of the yield stress derived from both components of the slope steepness in the two orthogonal directions to assign the condition for lava flows to stop. This produces a better result, as shown in Fig. 2. Fig. 1. (a) Contour plots calculated with the original model of Ishihara et al. (1990). (b) Contour plots calculated with the proposed model.
NASA Astrophysics Data System (ADS)
Li, Xue-yan; Li, Xue-mei; Yang, Lingrun; Li, Jing
2018-07-01
Most previous studies on dynamic traffic assignment are based on a traditional analytical framework; for instance, the idea of Dynamic User Equilibrium has been widely used to depict both route choice and departure time choice. However, some recent studies have demonstrated that dynamic traffic flow assignment largely depends on travelers' degree of rationality, travelers' heterogeneity and what traffic information the travelers have. In this paper, we develop a new self-adaptive multi-agent model to depict travelers' behavior in Dynamic Traffic Assignment. We use Cumulative Prospect Theory with heterogeneous reference points to illustrate travelers' bounded rationality. We use a reinforcement-learning model to depict travelers' route and departure time choices under the condition of imperfect information. We design the evolution rule of travelers' expected arrival time and the algorithm of traffic flow assignment. Compared with the traditional model, the self-adaptive multi-agent model proposed in this paper can effectively help travelers avoid the rush hour. Finally, we report and analyze the effect of travelers' group behavior on the transportation system, and give some insights into the relation between travelers' group behavior and the performance of the transportation system.
Bacles, C F E; Ennos, R A
2008-10-01
Paternity analysis based on microsatellite marker genotyping was used to infer contemporary genetic connectivity by pollen of three population remnants of the wind-pollinated, wind-dispersed tree Fraxinus excelsior, in a deforested Scottish landscape. By deterministically accounting for genotyping error and comparing a range of assignment methods, individual-based paternity assignments were used to derive population-level estimates of gene flow. Pollen immigration into a 300 ha landscape represents between 43 and 68% of effective pollination, depending mostly on assignment method. Individual male reproductive success is unequal, with 31 of 48 trees fertilizing one seed or more, but only three trees fertilizing more than ten seeds. Spatial analysis suggests a fat-tailed pollen dispersal curve with 85% of detected pollination occurring within 100 m, and 15% spreading between 300 and 1900 m from the source. Identification of immigrating pollen sourced from two neighbouring remnants indicates further effective dispersal at 2900 m. Pollen exchange among remnants is driven by population size rather than geographic distance, with larger remnants acting predominantly as pollen donors, and smaller remnants as pollen recipients. Enhanced wind dispersal of pollen in a barren landscape ensures that the seed produced within the catchment includes genetic material from a wide geographic area. However, gene flow estimates based on analysis of non-dispersed seeds were shown to underestimate realized gene immigration into the remnants by a factor of two, suggesting that predictive landscape conservation requires integrated estimates of post-recruitment gene flow occurring via both pollen and seed.
Flow assignment model for quantitative analysis of diverting bulk freight from road to railway
Liu, Chang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian
2017-01-01
Since railway transport possesses the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network which considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then general cost functions are formulated that embody the factors shippers pay attention to when choosing mode and path: the congestion cost on roads and the capacity constraints of railways and freight stations. Based on the general network and general cost functions, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose transportation mode and path independently. Since the model is nonlinear and challenging to solve, we adopt a method that uses tangent lines to construct an envelope curve to linearize it. Finally, a numerical example is presented to test the model and show the method of making a quantitative analysis of bulk freight modal shift between road and railway. PMID:28771536
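A user-equilibrium assignment of the kind described can be sketched with the classic method of successive averages (MSA) on a toy network of parallel routes; this is a generic illustration, not the paper's linearized multimodal model. The cost functions and names are assumptions.

```python
def msa_equilibrium(demand, cost_fns, iters=500):
    """Method of successive averages on parallel routes.

    cost_fns: one increasing function flow -> travel cost per route.
    Returns route flows approximating the user equilibrium, at which
    all used routes have equal cost (Wardrop's first principle).
    """
    n = len(cost_fns)
    flows = [demand / n] * n
    for k in range(1, iters + 1):
        costs = [f(x) for f, x in zip(cost_fns, flows)]
        best = costs.index(min(costs))  # all-or-nothing load on cheapest route
        target = [demand if i == best else 0.0 for i in range(n)]
        step = 1.0 / k                  # diminishing MSA step size
        flows = [(1 - step) * x + step * y for x, y in zip(flows, target)]
    return flows
```

With two routes costing 1 + x and 2 + x and a demand of 10, equalizing costs gives the equilibrium split (5.5, 4.5), which MSA approaches as the step size shrinks.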
Cell transmission model of dynamic assignment for urban rail transit networks.
Xu, Guangming; Zhao, Shuo; Shi, Feng; Zhang, Feilian
2017-01-01
For an urban rail transit network, the space-time flow distribution can play an important role in evaluating and optimizing space-time resource allocation. To obtain the space-time flow distribution without the restriction of schedules, a dynamic assignment problem is proposed based on the concept of continuous transmission. To solve the dynamic assignment problem, a cell transmission model is built for urban rail transit networks. The priority principle, queuing process, capacity constraints and congestion effects are considered in the cell transmission mechanism. An efficient method is then designed to solve the shortest path for an urban rail network, which decreases the computing cost of solving the cell transmission model. The instantaneous dynamic user optimal state can be reached with the method of successive averages. Many evaluation indexes of passenger flow can be generated to provide effective support for the optimization of train schedules and capacity evaluation for an urban rail transit network. Finally, the model and its potential application are demonstrated via two numerical experiments using a small-scale network and the Beijing Metro network.
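The core of any cell transmission model is the sending/receiving (demand/supply) flow update between adjacent cells. Below is a minimal one-dimensional sketch with closed boundaries, assuming a triangular fundamental diagram with unit wave speeds; the paper's rail-transit version adds priority rules, queuing and congestion effects not shown here.

```python
def ctm_update(density, q_max, rho_jam):
    """One cell-transmission step on a 1-D chain of cells.

    Assumes free-flow and backward-wave speeds of one cell per step.
    density: occupants per cell; q_max: max flow per step;
    rho_jam: jam density. Ends are closed (nothing enters or leaves).
    """
    n = len(density)
    sending = [min(rho, q_max) for rho in density]              # demand
    receiving = [min(q_max, rho_jam - rho) for rho in density]  # supply
    flows = [min(sending[i], receiving[i + 1]) for i in range(n - 1)]
    new = density[:]
    for i, y in enumerate(flows):
        new[i] -= y      # flow leaves the upstream cell...
        new[i + 1] += y  # ...and enters the downstream cell
    return new
```

Because each inter-cell flow is subtracted upstream and added downstream, total occupancy is conserved, and a congested downstream cell throttles the flow it can receive.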
Using parentage analysis to examine gene flow and spatial genetic structure.
Kane, Nolan C; King, Matthew G
2009-04-01
Numerous approaches have been developed to examine recent and historical gene flow between populations, but few studies have used empirical data sets to compare different approaches. Some methods are expected to perform better under particular scenarios, such as high or low gene flow, but this, too, has rarely been tested. In this issue of Molecular Ecology, Saenz-Agudelo et al. (2009) apply assignment tests and parentage analysis to microsatellite data from five geographically proximal (2-6 km) and one much more distant (1500 km) panda clownfish populations, showing that parentage analysis performed better in situations of high gene flow, while their assignment tests did better with low gene flow. This unusually complete data set comprises multiple exhaustively sampled populations, including nearly all adults and large numbers of juveniles, enabling the authors to ask questions that in many systems would be impossible to answer. Their results emphasize the importance of selecting the right analysis to use, based on the underlying model and how well its assumptions are met by the populations to be analysed.
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Airline fleet assignment involves the allocation of aircraft to a set of flight legs in order to meet passenger demand while satisfying a variety of constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of an agent-based integer optimization algorithm to a "cold start" fleet assignment problem. Results show that the optimizer can successfully solve such highly constrained problems (129 variables, 184 constraints).
Leadership Stability in Army Reserve Component Units
2013-01-01
or recognized, RC units could have more time because they may appear late in the force flow, particularly if AC units go earlier or the flow is...8, or deployment minus eight months), many new arrivals flowed into the unit, including many who would eventually deploy with the unit. Almost all...mobilization. Thus, those assigned are treated as 100 percent. To the right, we display the percentage (out of those assigned) who flowed into various
Take-Home Experiments in Undergraduate Fluid Mechanics Education
NASA Astrophysics Data System (ADS)
Cimbala, John
2007-11-01
Hands-on take-home experiments, assigned as homework, are useful as supplements to traditional in-class demonstrations and laboratories. Students borrow the equipment from the department's equipment room, and perform the experiment either at home or in the student lounge or student shop work area. Advantages include: (1) easy implementation, especially for large classes, (2) low cost and easy duplication of multiple units, (3) no loss of lecture time since the take-home experiment is self-contained with all necessary instructions, and (4) negligible increase in student or teaching assistant work load since the experiment is assigned as a homework problem in place of a traditional pen and paper problem. As an example, a pump flow take-home experiment was developed, implemented, and assessed in our introductory junior-level fluid mechanics course at Penn State. The experimental apparatus consists of a bucket, tape measure, submersible aquarium pump, tubing, measuring cup, and extension cord. We put together twenty sets at a total cost of less than 20 dollars per set. Students connect the tube to the pump outlet, submerge the pump in water, and measure the volume flow rate produced at various outflow elevations. They record and plot volume flow rate as a function of outlet elevation, and compare with predictions based on the manufacturer's pump performance curve (head versus volume flow rate) and flow losses. The homework assignment includes an online pre-test and post-test to assess the change in students' understanding of the principles of pump performance. The results of the assessment support a significant learning gain following the completion of the take-home experiment.
Xiong, Guanglei; Figueroa, C. Alberto; Xiao, Nan; Taylor, Charles A.
2011-01-01
Simulation of blood flow using image-based models and computational fluid dynamics has found widespread application in quantifying hemodynamic factors relevant to the initiation and progression of cardiovascular diseases and in planning interventions. Methods for creating subject-specific geometric models from medical imaging data have improved substantially in the last decade but, for many problems, still require significant user interaction. In addition, while fluid-structure interaction methods are being employed to model blood flow and vessel wall dynamics, tissue properties are often assumed to be uniform. In this paper, we propose a novel workflow for simulating blood flow using subject-specific geometry and spatially varying wall properties. The geometric model construction is based on 3D segmentation and geometric processing. Variable wall properties are assigned to the model by combining centerline-based and surface-based methods. We finally demonstrate these new methods using an idealized cylindrical model and two subject-specific vascular models with thoracic and cerebral aneurysms. PMID:21765984
Wisconsin Recertification Manual for Public Librarians.
ERIC Educational Resources Information Center
Fox, Robert; And Others
Designed to assist public librarians certified after May 1, 1979, this manual explains Wisconsin recertification requirements based on continuing education. It provides continuing education guidelines, a flow chart of the recertification process, an individual learning activity form, an annual report form, a conversion chart for assignment of…
Efficient Trajectory Options Allocation for the Collaborative Trajectory Options Program
NASA Technical Reports Server (NTRS)
Rodionova, Olga; Arneson, Heather; Sridhar, Banavar; Evans, Antony
2017-01-01
The Collaborative Trajectory Options Program (CTOP) is a Traffic Management Initiative (TMI) intended to control air traffic flow rates at multiple specified Flow Constrained Areas (FCAs), where demand exceeds capacity. CTOP allows flight operators to submit a desired Trajectory Options Set (TOS) for each affected flight, with an associated Relative Trajectory Cost (RTC) for each option. CTOP then creates a feasible schedule that complies with capacity constraints by assigning routes and departure delays to affected flights in such a way as to minimize the total cost while maintaining equity across flight operators. The current version of CTOP implements a Ration-by-Schedule (RBS) scheme, which assigns the best available options to flights based on a First-Scheduled-First-Served heuristic. In the present study, an alternative flight scheduling approach is developed based on linear optimization. Results suggest that such an approach can significantly reduce flight delays in the deterministic case, while maintaining equity as defined using a Max-Min fairness scheme.
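The contrast between the RBS heuristic and an optimization-based allocation can be illustrated on a toy instance: first-scheduled-first-served lets early flights grab cheap slots even when a global swap would lower total cost. This sketch is illustrative only; the study's actual formulation is a linear program with equity considerations. All names are assumptions.

```python
from itertools import permutations

def total_cost(assignment, cost):
    """cost[f][s]: cost to flight f of taking slot s (RTC plus delay cost)."""
    return sum(cost[f][s] for f, s in enumerate(assignment))

def fsfs(cost):
    """First-Scheduled-First-Served: flights in schedule order greedily
    take their cheapest remaining slot."""
    taken, out = set(), []
    for f in range(len(cost)):
        s = min((s for s in range(len(cost[f])) if s not in taken),
                key=lambda s: cost[f][s])
        taken.add(s)
        out.append(s)
    return out

def optimal(cost):
    """Exhaustive search over slot permutations (small instances only);
    an assignment/linear program scales the same idea up."""
    n = len(cost)
    return min(permutations(range(n)), key=lambda a: total_cost(a, cost))
```

In the two-flight example below, the greedy order costs 9 in total while the globally optimal swap costs 6, which is the kind of gap a system-wide optimization can close.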
2011-07-31
officers select their own BOLC-B dates completely divorced of their unit assignment and that unit's ARFORGEN cycle. We reschedule all FY10 cohort LTs...for BOLC-B based upon unit priority based upon number of days until LAD. Rescheduling all FY10 cohort LTs for BOLC-B based upon unit priority...with specialty branches (doctors, lawyers, nurses, chaplains, etc.) which have minimal representation in BCT-level units. DCs are not generally
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2010-09-01
The transport-phenomena-based heat transfer and fluid flow calculations in a weld pool require a number of input parameters. Arc efficiency and the effective thermal conductivity and viscosity in the weld pool are some of these parameters, whose values are rarely known and are difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real-number-based genetic algorithm. The bi-directional feature of the integrated model allows identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated against measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
Krebs, Georg; Becker, Thomas; Gastl, Martina
2017-09-01
Cereal-based beverages contain a complex mixture of various polymeric macromolecules including polysaccharides, peptides, and polyphenols. The molar mass of polymers and their degradation products affects different technological and especially sensory parameters of beverages. Asymmetrical flow field-flow fractionation (AF4) coupled with multi-angle light scattering (MALS) and refractive index detection (dRI) or UV detection (UV) is a technique for structure and molar mass distribution analysis of macromolecules commonly used for pure compound solutions. The objective of this study was to develop a systematic approach for identifying the polymer classes in an AF4/MALS/dRI/UV fractogram of the complex matrix in beer, a yeast-fermented cereal-based beverage. Assignment of fractogram fractions to polymer substance classes was achieved by targeted precipitations, enzymatic hydrolysis, and alignments with purified polymer standards. Corresponding effects on dRI and UV signals were evaluated according to the detectors' sensitivities. Using these techniques, the AF4 fractogram of beer was classified into different fractions: (1) the low molar mass fraction was assigned to proteinaceous molecules with different degrees of glycosylation, (2) the middle molar mass fraction was attributed to protein-polyphenol complexes with a coelution of non-starch polysaccharides, and (3) the high molar mass fraction was identified as a mixture of the cell wall polysaccharides (i.e., β-glucan and arabinoxylan) with a low content of polysaccharide-protein association. In addition, dextrins derived from incomplete starch hydrolysis were identified in all fractions and over the complete molar mass range. The ability to assess the components of an AF4 fractogram is beneficial for the targeted design and evaluation of polymers in fermented cereal-based beverages and for controlling and monitoring quality parameters.
Modeling regional freight flow assignment through intermodal terminals
DOT National Transportation Integrated Search
2005-03-01
An analytical model is developed to assign regional freight across a multimodal highway and railway network using geographic information systems. As part of the regional planning process, the model is an iterative procedure that assigns multimodal fr...
Zhang, Xuejun; Lei, Jiaxing
2015-01-01
Considering reducing the airspace congestion and the flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using the real traffic data from China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority to the existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm as well as other parallel evolution algorithms with different migration topology. PMID:26180840
NASA Astrophysics Data System (ADS)
Rahman, S. M. Rakibur; Roshid, S. M. Al Mamun Or; Nishan, Ishtiaque Ahmed
2017-12-01
This paper deals with the design of a drive system for the traversing mechanism used to position a pitot tube at a desired position in the jet flow field. In this system a stepper motor is driven by a 'dual H-bridge' motor driver and a programmed Arduino microcontroller. The stepper motor is made to move in precise steps to obtain the desired movement of the traversing mechanism. The jet flow is characterized in three distinct zones: the initial zone, the transition zone and the developed zone. Each zone can be divided into the required number of segments based on the variation of velocity. By assigning the number of segments, the step range and the number of steps in each segment as inputs, it is possible to collect data in all the flow zones according to the programmed schedule.
Can You Build It? Using Manipulatives to Assess Student Understanding of Food-Web Concepts
ERIC Educational Resources Information Center
Grumbine, Richard
2012-01-01
This article outlines an exercise that assesses student knowledge of food-web and energy-flow concepts. Students work in teams and use manipulatives to build food-web models based on criteria assigned by the instructor. The models are then peer reviewed according to guidelines supplied by the instructor.
Scott-Hamilton, John; Schutte, Nicola S; Brown, Rhonda F
2016-03-01
This study investigated whether mindfulness training increases athletes' mindfulness and flow experience and decreases sport-specific anxiety and sport-specific pessimism. Cyclists were assigned to an eight-week mindfulness intervention, which incorporated a mindful spin-bike training component, or a wait-list control condition. Participants completed baseline and post-test measures of mindfulness, flow, sport-anxiety, and sport-related pessimistic attributions. Analyses of covariance showed significant positive effects on mindfulness, flow, and pessimism for the 27 cyclists in the mindfulness intervention condition compared with the 20 cyclists in the control condition. Changes in mindfulness experienced by the intervention participants were positively associated with changes in flow. Results suggest that mindfulness-based interventions tailored to specific athletic pursuits can be effective in facilitating flow experiences. © 2016 The International Association of Applied Psychology.
NASA Astrophysics Data System (ADS)
Ganjeh-Ghazvini, Mostafa; Masihi, Mohsen; Ghaedi, Mojtaba
2014-07-01
Fluid flow modeling in porous media has many applications in waste treatment, hydrology and petroleum engineering. In any geological model, flow behavior is controlled by multiple properties, which must be known in advance of common flow simulations. When uncertainties are present, deterministic modeling often produces poor results. Percolation and Random Walk (RW) methods have recently been used in flow modeling. Their stochastic basis is useful in dealing with uncertainty problems and in finding the relationship between porous media descriptions and flow behavior. This paper employs a simple methodology based on random walk and percolation techniques. The method is applied to a well-defined model reservoir in which the breakthrough time distributions are estimated. The results of this method and conventional simulation are then compared. The effect of the net-to-gross ratio on the breakthrough time distribution is studied in terms of Shannon entropy. Use of the entropy plot allows one to assign the appropriate net-to-gross ratio to any porous medium.
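The random-walk ingredient can be sketched in a few lines: sample breakthrough times from a stochastic walk and summarize their spread with Shannon entropy, as the abstract does for net-to-gross ratios. This toy uses a biased 1-D walk, not the paper's percolation-based reservoir model; the names and step rule are assumptions.

```python
import random
from collections import Counter
from math import log2

def breakthrough_time(length, p_move=0.5, rng=random):
    """Biased 1-D random walk from 0: each step advances one cell
    downstream with probability p_move, else stays. Returns the number
    of steps until the walker first reaches `length` (breakthrough)."""
    pos, t = 0, 0
    while pos < length:
        if rng.random() < p_move:
            pos += 1
        t += 1
    return t

def shannon_entropy(samples):
    """Shannon entropy (bits) of the empirical sample distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())
```

A wider spread of breakthrough times yields a higher entropy, which is the quantity the abstract uses to characterize the medium.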
Improving Long-term Post-wildfire hydrologic simulations using ParFlow
NASA Astrophysics Data System (ADS)
Lopez, S. R.; Kinoshita, A. M.
2015-12-01
Wildfires alter the natural hydrologic processes within a watershed. After vegetation is burned, the combustion of organic material and debris settles into the soil, creating a hydrophobic layer beneath the soil surface with varying thickness and depth. Vegetation regrowth rates vary as a function of radiative exposure, burn severity, and precipitation patterns. Hydrologic models used by Burned Area Emergency Response (BAER) teams use input data and model calibration constraints that are generally either one-dimensional, empirically based models or two-dimensional, conceptually based models with lumped parameter distributions. These models estimate runoff at the watershed outlet; however, they do not provide a distributed hydrologic simulation at each point within the watershed. This work uses ParFlow, a three-dimensional, distributed hydrologic model, to (1) correlate burn severity with hydrophobicity, (2) evaluate the effect of vegetation recovery rate on water components, and (3) improve flood prediction for managers to help with resource allocation and management operations in burned watersheds. ParFlow is applied to Devil Canyon (43 km2) in San Bernardino, California, which was 97% burned in the 2003 Old Fire. The model set-up uses a 30 m cell size over a 6.7 km by 6.4 km lateral extent. The subsurface reaches 30 m and is assigned a variable cell thickness, which allows users to explicitly consider the degree of recovery throughout the stages of regrowth. Burn severity maps from remotely sensed imagery are used to assign initial hydrophobic layer parameters and thickness. Vegetation regrowth is represented with a satellite-based Enhanced Vegetation Index. Pre- and post-fire hydrologic response is evaluated using runoff measurements at the watershed outlet and water component (overland flow, lateral flow, baseflow) measurements.
USDA-ARS?s Scientific Manuscript database
A 4-unit dual-flow continuous culture fermentor system was used to assess the effect of increasing flax supplementation of an herbage-based diet on nutrient digestibility, bacterial N synthesis and methane output. Treatments were randomly assigned to fermentors in a 4 x 4 Latin square design with 7 ...
Passenger flow analysis of Beijing urban rail transit network using fractal approach
NASA Astrophysics Data System (ADS)
Li, Xiaohong; Chen, Peiwen; Chen, Feng; Wang, Zijia
2018-04-01
To quantify the spatiotemporal distribution of passenger flow and the characteristics of an urban rail transit network, we introduce four radius fractal dimensions and two branch fractal dimensions by combining a fractal approach with a passenger flow assignment model. These fractal dimensions can numerically describe the complexity of passenger flow in the urban rail transit network and its change characteristics. Based on this, we establish a fractal quantification method to measure the fractal characteristics of passenger flow in the rail transit network. Finally, we validate the reasonability of our proposed method using actual data from the Beijing subway network. It is shown that our proposed method can effectively measure the scale-free range of the urban rail transit network, network development and the fractal characteristics of time-varying passenger flow, which further provides a reference for network planning and the analysis of passenger flow.
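A radius-type fractal dimension of the kind introduced above can be estimated by counting network points within growing radii of a center and fitting the log-log slope. The sketch below, applied to a synthetic uniform point set (where the dimension should be close to 2), is a generic illustration of the technique, not the paper's specific definitions.

```python
import numpy as np

def radius_dimension(points, center, radii):
    """Estimate a radius-type fractal dimension: the slope of
    log N(r) versus log r, where N(r) counts points within
    distance r of the chosen center."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts - np.asarray(center, float), axis=1)
    counts = np.array([(d <= r).sum() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope
```

For a uniform planar lattice N(r) grows roughly like r squared, so the fitted slope is near 2; a sparser, branch-like station layout would give a smaller value.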
VisFlow - Web-based Visualization Framework for Tabular Data with a Subset Flow Model.
Yu, Bowen; Silva, Claudio T
2017-01-01
Data flow systems allow the user to design a flow diagram that specifies the relations between system components which process, filter or visually present the data. Visualization systems may benefit from user-defined data flows, as an analysis typically consists of rendering multiple plots on demand and performing different types of interactive queries across coordinated views. In this paper, we propose VisFlow, a web-based visualization framework for tabular data that employs a specific type of data flow model called the subset flow model. VisFlow focuses on interactive queries within the data flow, overcoming the limited interactivity of past computational data flow systems. In particular, VisFlow applies embedded visualizations and supports interactive selections, brushing and linking within a visualization-oriented data flow. The model requires all data transmitted by the flow to be a data item subset (i.e. groups of table rows) of some original input table, so that rendering properties can be assigned to the subset unambiguously for tracking and comparison. VisFlow features the analysis flexibility of a flow diagram, and at the same time reduces diagram complexity and improves usability. We demonstrate the capability of VisFlow in two case studies with domain experts on real-world datasets, showing that VisFlow is capable of accomplishing a considerable set of visualization and analysis tasks. The VisFlow system is available as open source on GitHub.
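The subset flow model can be illustrated with a minimal sketch: every node transmits a subset of the row indices of one original table, so rendering properties can be attached to rows unambiguously. The table, node functions, and property format below are hypothetical, not VisFlow's actual API:

```python
# Every node transmits a subset of row indices of one original table,
# so rendering properties attach to rows unambiguously.
# (Hypothetical table and node functions, not VisFlow's actual API.)

table = [
    {"city": "NYC", "pop": 8.4},
    {"city": "LA", "pop": 3.9},
    {"city": "Chicago", "pop": 2.7},
]

def filter_node(subset, predicate):
    """A flow node that maps a row-index subset to a smaller subset."""
    return {i for i in subset if predicate(table[i])}

def render_node(subset, props):
    """Attach rendering properties to the rows of a subset by row id,
    so selections remain traceable back to the input table."""
    return {i: dict(props) for i in subset}

all_rows = set(range(len(table)))
large = filter_node(all_rows, lambda row: row["pop"] > 3.0)
styled = render_node(large, {"color": "red"})
print(sorted(styled))  # [0, 1]
```

Because every node's output is a subset of the same row-id space, brushing in one view can be propagated to any other view by set intersection.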
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sossoe, K.S., E-mail: kwami.sossoe@irt-systemx.fr; Lebacque, J-P., E-mail: jean-patrick.lebacque@ifsttar.fr
2015-03-10
We present in this paper a model of vehicular traffic flow for a multimodal transportation road network. We introduce the notion of a class of vehicles to refer to vehicles of different transport modes. Our model describes the traffic on highways (which may contain several lanes) and network transit for public transportation. The model is drafted with Eulerian and Lagrangian coordinates and uses a Logit model to describe the traffic assignment of our multiclass vehicular flow description on shared roads. The paper also discusses traffic streams on dedicated lanes for specific classes of vehicles with event-based traffic laws. An Euler-Lagrangian remap scheme is introduced to numerically approximate the model's flow equations.
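The Logit traffic assignment mentioned above can be sketched as a multinomial logit split of flow across parallel routes, where each route's share is proportional to the exponential of its negative generalized cost. The sensitivity parameter theta and the example costs are illustrative assumptions:

```python
import math

def logit_split(route_costs, theta=1.0):
    """Multinomial logit route choice: each route's share of demand is
    proportional to exp(-theta * cost); theta is an illustrative
    sensitivity parameter."""
    weights = [math.exp(-theta * c) for c in route_costs]
    total = sum(weights)
    return [w / total for w in weights]

# Two parallel routes: the cheaper route attracts the larger share.
probs = logit_split([10.0, 12.0], theta=0.5)
print([round(p, 3) for p in probs])  # [0.731, 0.269]
```

Larger theta concentrates nearly all flow on the cheapest route; theta near zero splits flow evenly regardless of cost.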
Liu, Lei; Peng, Wei-Ren; Casellas, Ramon; Tsuritani, Takehiro; Morita, Itsuro; Martínez, Ricardo; Muñoz, Raül; Yoo, S J B
2014-01-13
Optical Orthogonal Frequency Division Multiplexing (O-OFDM), which transmits high speed optical signals using multiple spectrally overlapped lower-speed subcarriers, is a promising candidate for supporting future elastic optical networks. In contrast to previous works which focus on Coherent Optical OFDM (CO-OFDM), in this paper, we consider the direct-detection optical OFDM (DDO-OFDM) as the transport technique, which leads to simpler hardware and software realizations, potentially offering a low-cost solution for elastic optical networks, especially in metro networks, and short or medium distance core networks. Based on this network scenario, we design and deploy a software-defined networking (SDN) control plane enabled by extending OpenFlow, detailing the network architecture, the routing and spectrum assignment algorithm, OpenFlow protocol extensions and the experimental validation. To the best of our knowledge, it is the first time that an OpenFlow-based control plane is reported and its performance is quantitatively measured in an elastic optical network with DDO-OFDM transmission.
Using the red/yellow/green discharge tool to improve the timeliness of hospital discharges.
Mathews, Kusum S; Corso, Philip; Bacon, Sandra; Jenq, Grace Y
2014-06-01
As part of Yale-New Haven Hospital (Connecticut)'s Safe Patient Flow Initiative, the physician leadership developed the Red/Yellow/Green (RYG) Discharge Tool, an electronic medical record-based prompt to identify likelihood of patients' next-day discharge: green (very likely), yellow (possibly), and red (unlikely). The tool's purpose was to enhance communication with nursing/care coordination and trigger earlier discharge steps for patients identified as "green" or "yellow." Data on discharge assignments, discharge dates/ times, and team designation were collected for all adult medicine patients discharged in October-December 2009 (Study Period 1) and October-December 2011 (Study Period 2), between which the tool's placement changed from the sign-out note to the daily progress note. In Study Period 1, 75.9% of the patients had discharge assignments, compared with 90.8% in Period 2 (p < .001). The overall 11 A.M. discharge rate improved from 10.4% to 21.2% from 2007 to 2011. "Green" patients were more likely to be discharged before 11 A.M. than "yellow" or "red" patients (p < .001). Patients with RYG assignments discharged by 11 A.M. had a lower length of stay than those without assignments and did not have an associated increased risk of readmission. Discharge prediction accuracy worsened after the change in placement, decreasing from 75.1% to 59.1% for "green" patients (p < .001), and from 34.5% to 29.2% (p < .001) for "yellow" patients. In both periods, hospitalists were more accurate than house staff in discharge predictions, suggesting that education and/or experience may contribute to discharge assignment. The RYG Discharge Tool helped facilitate earlier discharges, but accuracy depends on placement in daily work flow and experience.
Traffic and Driving Simulator Based on Architecture of Interactive Motion.
Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza
2015-01-01
This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid meso-microscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination.
A proposal for an SDN-based SIEPON architecture
NASA Astrophysics Data System (ADS)
Khalili, Hamzeh; Sallent, Sebastià; Piney, José Ramón; Rincón, David
2017-11-01
Passive Optical Network (PON) elements such as Optical Line Terminal (OLT) and Optical Network Units (ONUs) are currently managed by inflexible legacy network management systems. Software-Defined Networking (SDN) is a new networking paradigm that improves the operation and management of networks. In this paper, we propose a novel architecture, based on the SDN concept, for Ethernet Passive Optical Networks (EPON) that includes the Service Interoperability standard (SIEPON). In our proposal, the OLT is partially virtualized and some of its functionalities are allocated to the core network management system, while the OLT itself is replaced by an OpenFlow (OF) switch. A new MultiPoint MAC Control (MPMC) sublayer extension based on the OpenFlow protocol is presented. This would allow the SDN controller to manage and enhance the resource utilization, flow monitoring, bandwidth assignment, quality-of-service (QoS) guarantees, and energy management of the optical network access, to name a few possibilities. The OpenFlow switch is extended with synchronous ports to retain the time-critical nature of the EPON network. OpenFlow messages are also extended with new functionalities to implement the concept of EPON Service Paths (ESPs). Our simulation-based results demonstrate the effectiveness of the new architecture, while retaining a similar (or improved) performance in terms of delay and throughput when compared to legacy PONs.
Algorithmic localisation of noise sources in the tip region of a low-speed axial flow fan
NASA Astrophysics Data System (ADS)
Tóth, Bence; Vad, János
2017-04-01
An objective and algorithmised methodology is proposed to analyse beamforming data obtained for axial fans. Its application is demonstrated in a case study regarding the tip region of a low-speed cooling fan. First, beamforming is carried out in a co-rotating frame of reference. Then, a distribution of source strength is extracted along the circumference of the rotor at the blade tip radius in each analysed third-octave band. The circumferential distributions are expanded into Fourier series, which allows the effects of perturbations to be filtered out on the basis of an objective criterion. The remaining Fourier components are then considered as base sources to determine the blade-passage-periodic flow mechanisms responsible for the broadband noise. Based on their frequency and angular location, the base sources are grouped together. This is done using the fuzzy c-means clustering method to allow for overlap of the source mechanisms. The number of clusters is determined in a validity analysis. Finally, the obtained clusters are assigned to source mechanisms based on the literature. Thus, turbulent boundary layer-trailing edge interaction noise, tip leakage flow noise, and double leakage flow noise are identified.
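The Fourier-series filtering step can be sketched as keeping only the largest-magnitude harmonics of the circumferential source-strength distribution. The keep-k rule below is a simple stand-in for the paper's objective filtering criterion:

```python
import cmath

def keep_dominant_harmonics(signal, keep):
    """Discrete Fourier expansion of a circumferential distribution;
    keep only the 'keep' largest-magnitude harmonics and zero the rest
    (a simple stand-in for an objective filtering criterion)."""
    n = len(signal)
    coeffs = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n)) / n for k in range(n)]
    kept = set(sorted(range(n), key=lambda k: -abs(coeffs[k]))[:keep])
    return [sum(coeffs[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in kept).real for t in range(n)]

# A pure first harmonic occupies two conjugate bins, so keep=2
# reconstructs it exactly; extra bins would only carry perturbations.
signal = [cmath.cos(2 * cmath.pi * t / 8).real for t in range(8)]
smooth = keep_dominant_harmonics(signal, 2)
```

For blade-passage-periodic mechanisms one would expect the retained harmonics to cluster at multiples of the blade count, which is what makes the retained components interpretable as base sources.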
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Product distribution theory is a new collective intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a cold-start fleet assignment problem with a non-linear objective. Results show that the optimizer can successfully solve such highly constrained problems (130 variables, 184 constraints).
Duymaz, Gökçen; Yağar, Seyhan; Özgök, Ayşegül
2017-01-01
Objective Numerous studies have indicated nephrotoxic effects of sevoflurane because of its two bioproducts compound A and fluoride. Cystatin C (CyC) is a more sensitive biomarker than creatinine to show early and mild changes in kidney function. We designed this prospective randomised study to compare the effects of low-flow sevoflurane anaesthesia and low-flow desflurane anaesthesia on renal functions based on CyC levels. No studies have evaluated the effects of low-flow sevoflurane anaesthesia on renal functions based on CyC levels to date. Methods Thirty American Society of Anesthesiologists (ASA) physical status I–II patients who were scheduled for urological procedures were enrolled in this study. The patients were randomly assigned to 2 groups: low-flow sevoflurane anaesthesia or low-flow desflurane anaesthesia. Serum urea, creatinine and CyC levels were measured before the operation, just before extubation and 24 h after the operation. Creatinine clearance was calculated in the first 24-h urine sample. Results There were no significant differences in serum urea, creatinine and CyC levels or 24 h creatinine clearance between the groups. Conclusion Our study demonstrates with a more sensitive biomarker, CyC, that low-flow sevoflurane anaesthesia is safe in terms of the effects on renal function. PMID:28439441
A classification scheme for turbulent flows based on their joint velocity-intermittency structure
NASA Astrophysics Data System (ADS)
Keylock, C. J.; Nishimura, K.; Peinke, J.
2011-12-01
Kolmogorov's classic theory for turbulence assumed an independence between velocity increments and the value of the velocity itself. However, this assumption is questionable, particularly in complex geophysical flows. Here we propose a framework for studying velocity-intermittency coupling that is similar in essence to the popular quadrant analysis method for studying near-wall flows. However, we study the dominant (longitudinal) velocity component along with a measure of the roughness of the signal, given mathematically by its series of Hölder exponents. Thus, we permit a possible dependence between velocity and intermittency. We compare boundary layer data obtained in a wind tunnel to turbulent jets and wake flows. These flow classes all have distinct velocity-intermittency characteristics, which allow them to be readily distinguished using our technique. Our method is much simpler and quicker to apply than approaches that condition the velocity increment statistics at some scale, r, on the increment statistics at a neighbouring, larger spatial scale, r+Δ, and the velocity itself. Classification of environmental flows is then possible based on their similarities to the idealised flow classes, and we demonstrate this using laboratory data for flow in a parallel-channel confluence, where the region of flow recirculation in the lee of the step is discriminated as a flow class distinct from boundary layer, jet and wake flows. Hence, using our method, it is possible to assign a flow classification to complex geophysical, turbulent flows depending upon which idealised flow class they most resemble.
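The joint velocity-intermittency approach can be sketched as a quadrant classification of each sample by the signs of the velocity fluctuation u' and the Hölder-exponent fluctuation α'. Centring both series on their means and this particular quadrant numbering are illustrative assumptions:

```python
def quadrant_codes(u, alpha):
    """Joint velocity-intermittency quadrant analysis (sketch): each
    sample is classified by the signs of the velocity fluctuation u'
    and the Hölder-exponent fluctuation alpha' about their means.
    Quadrants: 1 = (+,+), 2 = (-,+), 3 = (-,-), 4 = (+,-)."""
    u_mean = sum(u) / len(u)
    a_mean = sum(alpha) / len(alpha)
    codes = []
    for ui, ai in zip(u, alpha):
        up, ap = ui - u_mean >= 0, ai - a_mean >= 0
        codes.append(1 if up and ap else 2 if not up and ap
                     else 3 if not up and not ap else 4)
    return codes

# Fast, smooth sample -> quadrant 1; slow, rough sample -> quadrant 3.
print(quadrant_codes([1.0, -1.0], [0.6, 0.2]))  # [1, 3]
```

A flow class would then be characterised by the relative occupancy of the four quadrants (and the strength of the coupling within each), rather than by velocity statistics alone.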
Final report for the Multiprotocol Label Switching (MPLS) control plane security LDRD project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torgerson, Mark Dolan; Michalski, John T.; Tarman, Thomas David
2003-09-01
As rapid Internet growth continues, global communications becomes more dependent on Internet availability for information transfer. Recently, the Internet Engineering Task Force (IETF) introduced a new protocol, Multiprotocol Label Switching (MPLS), to provide high-performance data flows within the Internet. MPLS emulates two major aspects of Asynchronous Transfer Mode (ATM) technology. First, each initial IP packet is 'routed' to its destination based on previously known delay and congestion avoidance mechanisms. This allows for effective distribution of network resources and reduces the probability of congestion. Second, after route selection each subsequent packet is assigned a label at each hop, which determines the output port for the packet to reach its final destination. These labels guide the forwarding of each packet at routing nodes more efficiently and with more control than traditional IP forwarding (based on complete address information in each packet) for high-performance data flows. Label assignment is critical in the prompt and accurate delivery of user data. However, the protocols for label distribution were not adequately secured. Thus, if an adversary compromises a node by intercepting and modifying, or more simply injecting, false labels into the packet-forwarding engine, the propagation of improperly labeled data flows could create instability in the entire network. In addition, some Virtual Private Network (VPN) solutions take advantage of this 'virtual channel' configuration to eliminate the need for user data encryption to provide privacy. VPNs relying on MPLS require accurate label assignment to maintain user data protection. This research developed a working distributive trust model that demonstrated how to deploy confidentiality, authentication, and non-repudiation in the global network label switching control plane.
Simulation models and laboratory testbed implementations that demonstrated this concept were developed, and results from this research were transferred to industry via standards in the Optical Internetworking Forum (OIF).
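Label-switched forwarding, whose integrity the work above aims to protect, can be sketched with per-node label tables: the incoming label indexes an entry that yields the next hop and outgoing label, so no IP lookup is repeated along the path. The node names, label values, and table format are hypothetical:

```python
# Hypothetical sketch of MPLS-style label swapping. A corrupted table
# entry anywhere along the path would silently misdirect the flow,
# which is why label distribution must be secured.

# Per-node tables: incoming label -> (next node, outgoing label)
tables = {
    "A": {17: ("B", 22)},
    "B": {22: ("C", 9)},
    "C": {9: (None, None)},  # egress: pop the label
}

def forward(start, label):
    """Follow label swaps from an ingress node to the egress,
    returning the sequence of nodes traversed."""
    node, path = start, [start]
    while True:
        nxt, out_label = tables[node][label]
        if nxt is None:          # label popped at the egress router
            return path
        node, label = nxt, out_label
        path.append(node)

print(forward("A", 17))  # ['A', 'B', 'C']
```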
Kuhn, Stefan; Schlörer, Nils E
2015-08-01
With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while full access to nmrshiftdb2's World Wide Web database is granted. On the one hand, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, and enables download of spectra via a web interface, as well as integrated access to the prediction, search, and assignment tools of the NMR database for lab users. On the other hand, for the staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics functionality for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database are based on a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.
Adaptive hydrological flow field modeling based on water body extraction and surface information
NASA Astrophysics Data System (ADS)
Puttinaovarat, Supattra; Horkaew, Paramate; Khaimook, Kanit; Polnigongit, Weerapong
2015-01-01
Hydrological flow characteristics are among the prime indicators for assessing flood. They play a major part in determining the drainage capability of the affected basin and also in subsequent simulation and rainfall-runoff prediction. Thus far, flow directions were typically derived from terrain data, which for flat landscapes are obscured by man-made structures, undermining their practical potential. In the absence (or near-absence) of terrain slopes, water passages have a more pronounced effect on flow directions than elevations. This paper therefore presents detailed analyses and an implementation of hydrological flow modeling from satellite and topographic images. Herein, gradual assignment based on a support vector machine was applied to the modified normalized difference water index and a digital surface model, in order to ensure reliable water labeling while suppressing modality-inherited artifacts and noise. Gradient vector flow was subsequently employed to reconstruct the flow field. Experiments comparing the proposed scheme with conventional water boundary delineation and flow reconstruction are presented. Respective assessments revealed its advantage over generic stream burning. Specifically, it could extract water bodies from the studied areas with 98.70% precision, 99.83% recall, 98.76% accuracy, and 99.26% F-measure. The correlations between resultant flows and those obtained from stream burning were as high as 0.80±0.04 (p ≤ 0.01 at all resolutions).
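The modified normalized difference water index referenced above has the standard per-pixel form MNDWI = (Green - SWIR) / (Green + SWIR), exploiting the fact that water reflects strongly in green and weakly in shortwave infrared. The reflectance values below are illustrative:

```python
def mndwi(green, swir):
    """Modified Normalized Difference Water Index for one pixel:
    MNDWI = (Green - SWIR) / (Green + SWIR). Values well above 0
    typically indicate open water."""
    return (green - swir) / (green + swir)

# Water pixels: high green reflectance, low SWIR reflectance.
print(round(mndwi(0.30, 0.05), 3))   # 0.714 -> water
print(round(mndwi(0.10, 0.25), 3))   # -0.429 -> non-water
```

Rather than thresholding this index directly, the abstract's approach feeds it (with the digital surface model) into an SVM-based gradual assignment, which is what suppresses the noise a hard threshold would pass through.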
NASA Astrophysics Data System (ADS)
Schubotz, Florence; Lipp, Julius S.; Elvert, Marcus; Hinrichs, Kai-Uwe
2011-08-01
Seepage of asphalt forms the basis of a cold seep system at 3000 m water depth at the Chapopote Knoll in the southern Gulf of Mexico. Anaerobic microbial communities are stimulated in the oil-impregnated sediments, as evidenced by the presence of intact polar membrane lipids (IPLs) derived from Archaea and Bacteria at depths up to 7 m below the seafloor. Detailed investigation of the stable carbon isotope composition (δ13C) of alkyl and acyl moieties derived from a range of IPL precursors with distinct polar head groups resolved the complexity of carbon metabolisms and the utilization of diverse carbon sources by uncultured microbial communities. In surface sediments most of the polar lipid-derived fatty acids with phosphatidylethanolamine (PE), phosphatidylglycerol (PG) and diphosphatidylglycerol (DPG) head groups could be tentatively assigned to autotrophic sulfate-reducing bacteria, with a relatively small proportion involved in the anaerobic oxidation of methane. Derivatives of phosphatidyl-(N)-methylethanolamine (PME) were abundant and could be predominantly assigned to heterotrophic oil-degrading bacteria. Archaeal IPLs with phosphate-based hydroxyarchaeols and diglycosidic glyceroldibiphytanylglyceroltetraethers (GDGTs) were assigned to methanotrophic archaea of the ANME-2 and ANME-1 clusters, respectively, whereas δ13C values of phosphate-based archaeols and mixed phosphate-based and diglycosidic GDGTs point to methanogenic archaea. At a 7 m deep sulfate-methane transition zone that is linked to the upward movement of gas-laden petroleum, a distinct increase in the abundance of archaeal IPLs such as phosphate-based hydroxyarchaeols and diglycosidic archaeol and GDGTs is observed; their δ13C values are consistent with an origin from both methanotrophic and methanogenic archaea.
This study reveals previously hidden, highly complex patterns in the carbon-flow of versatile microbial communities involved in the degradation of heavy oil including hydrocarbon gases that would not have been evident from classical compound-specific isotope analyses of either bulk IPL or apolar lipid derivatives.
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, a significant attribute in a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedure for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
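A hottest-request-first RWA policy can be sketched as follows: requests are served in decreasing demand intensity, routed on a shortest path, and given the first wavelength that is free on every link of that path (first fit). The graph, requests, and first-fit rule below are illustrative assumptions, not the paper's exact algorithms:

```python
from collections import deque

def shortest_path(adj, s, t):
    """Hop-count shortest path by breadth-first search."""
    prev, queue = {s: None}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path, node = [], t
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

def rwa_hottest_first(adj, requests, n_wavelengths):
    """Serve (src, dst, demand) requests in decreasing demand
    intensity; route each on its shortest path and assign the first
    wavelength free on every link of the path (first fit)."""
    used = set()        # (link, wavelength) pairs already occupied
    plan = {}
    for s, t, demand in sorted(requests, key=lambda r: -r[2]):
        path = shortest_path(adj, s, t)
        links = [frozenset(e) for e in zip(path, path[1:])]
        for w in range(n_wavelengths):
            if all((link, w) not in used for link in links):
                used.update((link, w) for link in links)
                plan[(s, t)] = (path, w)
                break
    return plan

adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
requests = [("A", "C", 1.0), ("A", "B", 5.0)]
plan = rwa_hottest_first(adj, requests, 2)
```

Here the hotter A-B request claims wavelength 0 on link A-B first, forcing the A-C request onto wavelength 1; in a centralized controller this whole computation runs once per batch of requests, which is why its cost governs scalability.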
NASA Astrophysics Data System (ADS)
Schlueter, Kristy; Dabiri, John
2016-11-01
Coherent structure identification is important in many fluid dynamics applications, including transport phenomena in ocean flows and mixing and diffusion in turbulence. However, many of the techniques currently available for measuring such flows, including ocean drifter datasets and particle tracking velocimetry, only result in sparse velocity data. This is often insufficient for the use of current coherent structure detection algorithms based on analysis of the deformation gradient. Here, we present a frame-invariant method for detecting coherent structures from Lagrangian flow trajectories that can be sparse in number. The method, based on principles used in graph coloring algorithms, examines a measure of the kinematic dissimilarity of all pairs of flow trajectories, either measured experimentally, e.g. using particle tracking velocimetry; or numerically, by advecting fluid particles in the Eulerian velocity field. Coherence is assigned to groups of particles whose kinematics remain similar throughout the time interval for which trajectory data is available, regardless of their physical proximity to one another. Through the use of several analytical and experimental validation cases, this algorithm is shown to robustly detect coherent structures using significantly less flow data than is required by existing methods. This research was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.
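One simple kinematic-dissimilarity measure for a pair of trajectories is the standard deviation of their separation distance over the observation window: coherent pairs keep a near-constant separation, giving values near zero. This is an illustrative measure, not necessarily the paper's exact definition:

```python
import math

def kinematic_dissimilarity(traj_a, traj_b):
    """Dissimilarity between two trajectories (lists of (x, y) points
    at matching times): the standard deviation of the inter-particle
    distance over time. Pairs that move coherently keep a near-constant
    separation, giving values near zero. (Illustrative measure.)"""
    d = [math.dist(p, q) for p, q in zip(traj_a, traj_b)]
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))

# Two parallel trajectories: constant separation -> dissimilarity 0.
a = [(t, 0.0) for t in range(5)]
b = [(t, 1.0) for t in range(5)]
print(kinematic_dissimilarity(a, b))  # 0.0
```

Note that the measure ignores physical proximity entirely: widely separated but co-moving particles score as similar, which is exactly the property the abstract requires for sparse data.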
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Ho-Ling; Hargrove, Stephanie; Chin, Shih-Miao
The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive national picture of freight movements among states and major metropolitan areas by all modes of transportation. It provides a national picture of current freight flows to, from, and within the United States, assigns the flows to the transportation network, and projects freight flow patterns into the future. The FAF4 is the fourth database of its kind, FAF1 provided estimates for truck, rail, and water tonnage for calendar year 1998, FAF2 provided a more complete picture based on the 2002 Commodity Flow Survey (CFS) andmore » FAF3 made further improvements building on the 2007 CFS. Since the first FAF effort, a number of changes in both data sources and products have taken place. The FAF4 flow matrix described in this report is used as the base-year data to forecast future freight activities, projecting shipment weights and values from year 2020 to 2045 in five-year intervals. It also provides the basis for annual estimates to the FAF4 flow matrix, aiming to provide users with the timeliest data. Furthermore, FAF4 truck freight is routed on the national highway network to produce the FAF4 network database and flow assignments for truck. This report details the data sources and methodologies applied to develop the base year 2012 FAF4 database. An overview of the FAF4 components is briefly discussed in Section 2. Effects on FAF4 from the changes in the 2012 CFS are highlighted in Section 3. Section 4 provides a general discussion on the process used in filling data gaps within the domestic CFS matrix, specifically on the estimation of CFS suppressed/unpublished cells. Over a dozen CFS OOS components of FAF4 are then addressed in Section 5 through Section 11 of this report. This includes discussions of farm-based agricultural shipments in Section 5, shipments from fishery and logging sectors in Section 6. 
Shipments of municipal solid wastes and debris from construction and demolition activities are covered in Section 7. Movements involving OOS industry sectors on Retail, Services, and Household/Business Moves are addressed in Section 8. Flows of OOS commodity on crude petroleum and natural gas are presented in Sections 9 and 10, respectively. Discussions regarding shipments of foreign trade, including trade with Canada/Mexico, international airfreight, and waterborne foreign trade, are then discussed in Section 11. Several appendices are also provided at the end of this report to offer additional information.« less
The Numerical Simulation of Time Dependent Flow Structures Over a Natural Gravel Surface.
NASA Astrophysics Data System (ADS)
Hardy, R. J.; Lane, S. N.; Ferguson, R. I.; Parsons, D. R.
2004-05-01
Research undertaken over the last few years has demonstrated the importance of the structure of gravel river beds for understanding the interaction between fluid flow and sediment transport processes. This includes the observation of periodic high-speed fluid wedges interconnected by low-speed flow regions. Our understanding of these flows has been enhanced significantly through a series of laboratory experiments and supported by field observations. However, the potential of high-resolution three-dimensional Computational Fluid Dynamics (CFD) modeling has yet to be fully developed. This is largely due to the difficulty of designing numerically stable meshes for complex bed topographies and the fact that Reynolds-averaged turbulence schemes are applied. This paper develops two novel techniques for dealing with these issues. The first is the development and validation of a method for representing the complex surface topography of gravel-bed rivers in high-resolution three-dimensional computational fluid dynamic models. This is based upon a porosity treatment with a regular structured grid and the application of a porosity modification to the mass conservation equation in which: fully blocked cells are assigned a porosity of zero; fully unblocked cells are assigned a porosity of one; and partly blocked cells are assigned a porosity of between 0 and 1, according to the percentage of the cell volume that is blocked. The second is the application of Large Eddy Simulation (LES), which enables time-dependent flow structures to be numerically predicted over the complex bed topographies. The regular structured grid with the embedded porosity algorithm maintains a constant grid cell size throughout the domain, implying a constant filter scale for the LES simulation.
This enables the prediction of coherent structures, repetitive quasi-cyclic large-scale turbulent motions, over the gravel surface that are of a similar magnitude and frequency to those previously observed in both flume and field studies. These structures are formed by topographic forcing within the domain and scale with the flow depth. Finally, this provides the numerical framework for the prediction of sediment transport within a time-dependent framework. The turbulent motions make a significant contribution to the turbulent shear stress and the pressure fluctuations, which significantly affect the forces acting on the bed and potentially control sediment motion.
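The porosity treatment described above can be sketched in a few lines; the function names and the flux-scaling step are illustrative assumptions, not the authors' implementation:

```python
def cell_porosity(blocked_fraction):
    """Porosity from the blocked volume fraction of a grid cell:
    0 for fully blocked, 1 for fully open, 1 - f in between."""
    if not 0.0 <= blocked_fraction <= 1.0:
        raise ValueError("blocked fraction must lie in [0, 1]")
    return 1.0 - blocked_fraction

def apply_porosity_to_mass_flux(flux, blocked_fraction):
    """Scale a cell-face mass flux by the cell porosity, in the spirit of
    the modified mass-conservation equation (a sketch only)."""
    return cell_porosity(blocked_fraction) * flux
```

A partly blocked cell with 25% of its volume occupied by gravel would thus carry 75% of the unblocked flux through the porosity-modified conservation equation.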
Coherent Structure Detection using Persistent Homology and other Topological Tools
NASA Astrophysics Data System (ADS)
Smith, Spencer; Roberts, Eric; Sindi, Suzanne; Mitchell, Kevin
2017-11-01
For non-autonomous, aperiodic fluid flows, coherent structures help organize the dynamics, much as invariant manifolds and periodic orbits do for autonomous or periodic systems. The prevalence of such flows in nature and industry has motivated many successful techniques for defining and detecting coherent structures. However, often these approaches require very fine trajectory data to reconstruct velocity fields and compute Cauchy-Green-tensor-related quantities. We use topological techniques to help detect coherent trajectory sets in relatively sparse 2D advection problems. More specifically, we have developed a homotopy-based algorithm, the ensemble-based topological entropy calculation (E-tec), which assigns to each edge in an initial triangulation of advected points a topologically forced lower bound on its future stretching rate. The triangulation and its weighted edges allow us to analyze flows using persistent homology. This topological data analysis tool detects clusters and loops in the triangulation that are robust in the presence of noise and in this case correspond to coherent trajectory sets.
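As an illustration of the edge-weighting idea, a purely geometric stretching rate for a triangulation edge can be computed from the trajectories of its two endpoints; note this is a simple proxy, not the topologically forced lower bound that E-tec actually computes:

```python
import math

def edge_stretch_rate(traj_a, traj_b, dt):
    """Exponential stretching rate of the edge joining two advected
    points: log(L_final / L_initial) / T, where T is the total time.
    A geometric stand-in for E-tec's topological edge weight."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    L0 = dist(traj_a[0], traj_b[0])    # initial edge length
    L1 = dist(traj_a[-1], traj_b[-1])  # final edge length
    T = dt * (len(traj_a) - 1)         # elapsed time
    return math.log(L1 / L0) / T
```

Edges inside a coherent set would keep rates near zero, while edges straddling a transport barrier would acquire large weights, which is what the persistence computation then detects.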
Hamaneh, Mehdi Bagheri; Yu, Yi-Kuo
2014-01-01
Identifying similar diseases could potentially provide deeper understanding of their underlying causes, and may even hint at possible treatments. For this purpose, it is necessary to have a similarity measure that reflects the underpinning molecular interactions and biological pathways. We have thus devised a network-based measure that can partially fulfill this goal. Our method assigns weights to all proteins (and consequently their encoding genes) by using information flow from a disease to the protein interaction network and back. Similarity between two diseases is then defined as the cosine of the angle between their corresponding weight vectors. The proposed method also provides a way to suggest disease-pathway associations by using the weights assigned to the genes to perform enrichment analysis for each disease. By calculating pairwise similarities between 2534 diseases, we show that our disease similarity measure is strongly correlated with the probability of finding the diseases in the same disease family and, more importantly, sharing biological pathways. We have also compared our results to those of MimMiner, a text-mining method that assigns pairwise similarity scores to diseases. We find the results of the two methods to be complementary. It is also shown that clustering diseases based on their similarities and performing enrichment analysis for the cluster centers significantly increases the term association rate, suggesting that the cluster centers are better representatives for biological pathways than the diseases themselves. This lends support to the view that our similarity measure is a good indicator of relatedness of biological processes involved in causing the diseases. Although not needed for understanding this paper, the raw results are available for download for further study at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbpmn/DiseaseRelations/. PMID:25360770
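The similarity measure itself is the cosine of the angle between two disease weight vectors, which can be sketched directly (a minimal illustration; in the paper the weight vectors come from the information-flow procedure over the protein interaction network):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Parallel weight vectors score 1 (maximally similar diseases), orthogonal ones score 0.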
NASA Astrophysics Data System (ADS)
Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun
2018-07-01
Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network-reliability maximization by finding the optimal multi-state resource assignment, in which one resource is assigned to each arc. However, a disaster may cause correlated failures among the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments demonstrate that HGTA has better computational quality than several popular soft-computing algorithms.
Interaction Between Strategic and Local Traffic Flow Controls
NASA Technical Reports Server (NTRS)
Grabbe, Son; Sridhar, Banavar; Mukherjee, Avijit; Morando, Alexander
2010-01-01
The loosely coordinated sets of traffic flow management initiatives that are operationally implemented at the national and local levels have the potential to under-control, over-control, or inconsistently control flights. This study is designed to explore these interactions through fast-time simulations, with an emphasis on identifying inequitable situations in which flights receive multiple uncoordinated delays. Two operationally derived scenarios were considered in which flights arriving into the Dallas/Fort Worth International Airport were first controlled at the national level, either with a Ground Delay Program or a playbook reroute. These flights were subsequently controlled at the local level, where the Traffic Management Advisor assigned them arrival scheduling delays. For the Ground Delay Program scenarios, between 51% and 53% of all arrivals experienced both pre-departure delays from the Ground Delay Program and arrival scheduling delays from the Traffic Management Advisor. Of the subset of flights that received multiple delays, between 5.7% and 6.4% of the internal departures were first assigned a pre-departure delay by the Ground Delay Program, followed by a second pre-departure delay as a result of the arrival scheduling. For the playbook reroute scenario, Dallas/Fort Worth International Airport arrivals were first assigned pre-departure reroutes based on the MW_2_DALLAS playbook plan, and were subsequently assigned arrival scheduling delays by the Traffic Management Advisor. Since the airport was operating well below capacity when the playbook reroute was in effect, only 7% of the arrivals were observed to receive both rerouting and arrival scheduling delays. Findings from these initial experiments confirm field observations that Ground Delay Programs operated in conjunction with arrival scheduling can result in inequitable situations in which flights receive multiple uncoordinated delays.
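The double-delay bookkeeping reported above amounts to counting flights that received both controls; a minimal sketch, with the data layout as an assumption:

```python
def multiple_delay_stats(flights):
    """Fraction of arrivals that received both a Ground Delay Program
    pre-departure delay and a TMA arrival-scheduling delay.
    `flights` maps flight id -> (gdp_delay_min, tma_delay_min)."""
    both = [f for f, (gdp, tma) in flights.items() if gdp > 0 and tma > 0]
    return len(both) / len(flights), sorted(both)
```

Run over a simulation's delay log, this yields figures like the 51-53% reported for the Ground Delay Program scenarios.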
Heat flow, seismic cut-off depth and thermal modeling of the Fennoscandian Shield
NASA Astrophysics Data System (ADS)
Veikkolainen, Toni; Kukkonen, Ilmo T.; Tiira, Timo
2017-12-01
Being far from plate boundaries but covered with seismograph networks, the Fennoscandian Shield offers an ideal natural laboratory for studies of intraplate seismicity. For this purpose, this study uses 4190 earthquake events from the years 2000-2015, with magnitudes ranging from 0.10 to 5.22, in the Finnish and Swedish national catalogues. In addition, 223 heat flow determinations from both countries and their immediate vicinity were used to analyse the potential correlation of earthquake focal depths with the spatially interpolated heat flow field. Separate subset analyses were performed for five areas of notable seismic activity: the southern Gulf of Bothnia coast of Sweden (area 1), the northern Gulf of Bothnia coast of Sweden (area 2), the Swedish Norrbotten and western Finnish Lapland (area 3), the Kuusamo region of Finland (area 4) and the southernmost Sweden (area 5). In total, our subsets incorporated 3619 earthquake events. No obvious relation of heat flow and focal depth exists, implying that variations of heat flow are primarily caused by shallow-lying heat-producing units instead of deeper sources. This allows for construction of generic geotherms for the range of representative palaeoclimatically corrected (steady-state) surface heat flow values (40-60 mW m-2). The 1-D geotherms constructed for a three-layer crust and lithospheric upper mantle are based on mantle heat flow constrained with the aid of mantle xenolith thermobarometry (9-15 mW m-2), upper crustal heat production values (3.3-1.1 μW m-3) and the brittle-ductile transition temperature (350 °C) assigned to the cut-off depth of seismicity (28 ± 4 km). For the middle and lower crust, heat production values of 0.6 and 0.2 μW m-3 were assigned, respectively. The models suggest a Moho temperature range of 460-500 °C.
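The 1-D steady-state geotherms described here follow the standard conduction solution for a layer with uniform heat production; a sketch with illustrative values, not the paper's calibrated three-layer models:

```python
def geotherm_layer(T_top, q_top, k, A, z):
    """1-D steady-state conductive geotherm within a layer of uniform
    heat production A (W/m^3) and thermal conductivity k (W/m/K):
        T(z) = T_top + q_top*z/k - A*z^2/(2k)
        q(z) = q_top - A*z
    z is depth below the layer top (m); q_top is heat flow at the top."""
    T = T_top + q_top * z / k - A * z * z / (2.0 * k)
    q = q_top - A * z
    return T, q
```

Stacking such layers downward (surface, upper, middle, lower crust) and matching T and q at each interface reproduces the kind of geotherm used to tie the 350 °C brittle-ductile transition to the seismic cut-off depth.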
Pyne, Matthew I.; Carlisle, Daren M.; Konrad, Christopher P.; Stein, Eric D.
2017-01-01
Regional classification of streams is an early step in the Ecological Limits of Hydrologic Alteration framework. Many stream classifications are based on an inductive approach using hydrologic data from minimally disturbed basins, but this approach may underrepresent streams from heavily disturbed basins or sparsely gaged arid regions. An alternative is a deductive approach, using watershed climate, land use, and geomorphology to classify streams, but this approach may miss important hydrological characteristics of streams. We classified all stream reaches in California using both approaches. First, we used Bayesian and hierarchical clustering to classify reaches according to watershed characteristics. Streams were clustered into seven classes according to elevation, sedimentary rock, and winter precipitation. Permutation-based analysis of variance and random forest analyses were used to determine which hydrologic variables best separate streams into their respective classes. Stream typology (i.e., the class that a stream reach is assigned to) is shaped mainly by patterns of high and mean flow behavior within the stream's landscape context. Additionally, random forest was used to determine which hydrologic variables best separate minimally disturbed reference streams from non-reference streams in each of the seven classes. In contrast to stream typology, deviation from reference conditions is more difficult to detect and is largely defined by changes in low-flow variables, average daily flow, and duration of flow. Our combined deductive/inductive approach allows us to estimate flow under minimally disturbed conditions based on the deductive analysis and compare to measured flow based on the inductive analysis in order to estimate hydrologic change.
Ant colony optimization for solving university facility layout problem
NASA Astrophysics Data System (ADS)
Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin
2013-04-01
The Quadratic Assignment Problem (QAP) is classified as NP-hard. It has been used to model many problems in several areas, such as operational research, combinatorial data analysis, parallel and distributed computing, and optimization problems such as graph partitioning and the Travelling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics and metaheuristic approaches to solve the QAP. The QAP is widely applied to the facility layout problem (FLP). In this paper we use the QAP to model a university facility layout problem in which 8 facilities must be assigned to 8 locations. Hence we have modeled a QAP with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the sum of the products of flows and distances is minimized. Flow is the movement from one facility to another, whereas distance is the distance between the location of one facility and the locations of the other facilities. The objective of the QAP here is to minimize the total walking (flow) of lecturers from one destination to another (distance).
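The QAP objective named above, the sum of flow times distance over all facility pairs, can be evaluated directly for any assignment (a minimal sketch of the cost function, not the ACO solver itself):

```python
def qap_cost(perm, flow, dist):
    """Total cost of assigning facility i to location perm[i]:
    sum over all facility pairs (i, j) of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))
```

An ACO (or any metaheuristic) then searches the space of permutations `perm` for the one minimizing this cost; for n = 8 the space has 8! = 40320 assignments.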
Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru
Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer, within the memory limit. During particle tracing, the k-d tree decomposition is performed dynamically by constraining the cutting planes to the overlap range of the duplicated data. This ensures that particles are redistributed among processes as evenly as possible, while the particles newly assigned to a process always lie within its block. Results show the good load balance and high efficiency of our method.
The Role of Extra-Vestibular Inputs in Maintaining Spatial Orientation in Military Vehicles
2003-02-01
…flow contribute to spatial orientation. Disordered regulation of any of these factors can be identified in land-based tests and allows us to study pre-adaptation disorders. The sensory conflict theory of motion sickness states that motion sickness arises when one or several inputs from the body's sensory … several episodes of severe motion sickness during an operational military assignment (usually aboard ship), but demonstrate no balance disorder or ear …
Research on a dynamic workflow access control model
NASA Astrophysics Data System (ADS)
Liu, Yiliang; Deng, Jinxia
2007-12-01
In recent years, access control technology has been researched widely for workflow systems; two typical technologies are the RBAC (Role-Based Access Control) and TBAC (Task-Based Access Control) models, which have been used successfully for role authorization and assignment to a certain extent. However, as a system's structure becomes more complex, these two technologies cannot enforce least privilege and separation of duties, and they are inapplicable when users frequently request changes to the workflow's process. To avoid these weaknesses, a variable-flow dynamic role_task_view fine-grained access control model (DRTVBAC for short) is constructed on the basis of the existing models. For this model, an algorithm is constructed to meet users' application requirements and security needs under the fine-grained principles of least privilege and dynamic separation of duties. The DRTVBAC model has been implemented in a real system. The results show that the dynamic management of tasks associated with roles, and the role assignment itself, are more flexible with respect to granting and revoking authority; the principle of least privilege is met by activating a role's permissions only for the specific task being performed; authority is separated across the duties completed in the workflow; sensitive information is prevented from being disclosed through the concise, dynamic view interface; and the requirement of frequently varying task flows is satisfied.
49 CFR 173.121 - Class 3-Assignment of packing group.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) The viscosity and flash point are in accordance with the following table: Flow time t in seconds Jet... shall be performed are as follows: (i) Viscosity test. The flow time in seconds is determined at 23 °C...
A time-based concept for terminal-area traffic management
NASA Technical Reports Server (NTRS)
Erzberger, H.; Tobias, L.
1986-01-01
An automated air-traffic-management concept that has the potential to significantly increase the efficiency of traffic flows in high-density terminal areas is discussed. The concept's implementation depends on techniques for controlling the landing time of all aircraft entering the terminal area, both those equipped with on-board four-dimensional guidance systems and those that are conventionally equipped. The two major ground-based elements of the system are a scheduler, which assigns conflict-free landing times, and a profile descent advisor. Landing times provided by the scheduler are uplinked to equipped aircraft and translated into the appropriate four-dimensional trajectory by the on-board flight-management system. The controller issues descent advisories to unequipped aircraft to help them achieve the assigned landing times. Air traffic control simulations have established that the concept provides an efficient method for controlling various mixes of four-dimensional-equipped and unequipped, as well as low- and high-performance, aircraft.
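The scheduler's role of assigning conflict-free landing times can be illustrated with a simple greedy first-come-first-served sketch; the separation rule and data layout are assumptions for illustration, not the actual scheduler logic:

```python
def assign_landing_times(requested, separation):
    """Greedy conflict-free scheduler: process aircraft in order of
    requested landing time and delay each just enough to maintain the
    required separation (seconds) from the previous landing.
    `requested` maps aircraft id -> requested landing time (s)."""
    assigned = {}
    last = None
    for ac, t in sorted(requested.items(), key=lambda kv: kv[1]):
        slot = t if last is None else max(t, last + separation)
        assigned[ac] = slot
        last = slot
    return assigned
```

Each assigned slot would then be uplinked to a 4D-equipped aircraft or issued as descent advisories to an unequipped one.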
49 CFR 173.124 - Class 4, Divisions 4.1, 4.2 and 4.3-Definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... material must be determined using the testing protocol from Figure 14.2 (Flow Chart for Assigning Self... Assignments and Exceptions for Hazardous Materials Other Than Class 1 and Class 7 § 173.124 Class 4, Divisions...
Workflow-Based Software Development Environment
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
2013-01-01
The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (a collaborative web environment).
Parallel processing methods for space based power systems
NASA Technical Reports Server (NTRS)
Berry, F. C.
1993-01-01
This report presents a method for doing load-flow analysis of a power system by using a decomposition approach. The power system for the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. Preliminary results for a load-flow analysis using the decomposition principle are presented. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.
Simple Queueing Model Applied to the City of Portland
NASA Astrophysics Data System (ADS)
Simon, Patrice M.; Esser, Jörg; Nagel, Kai
We use a simple traffic micro-simulation model based on queueing dynamics, as introduced by Gawron [IJMPC, 9(3):393, 1998], in order to simulate traffic in Portland, Oregon. Links have a flow capacity, that is, they do not release more vehicles per second than is possible according to their capacity. This leads to queue build-up if demand exceeds capacity. Links also have a storage capacity, which means that once a link is full, vehicles that want to enter the link need to wait. This leads to queue spill-back through the network. The model is compatible with route-plan-based approaches such as TRANSIMS, where each vehicle attempts to follow its pre-computed path. Yet, both the data requirements and the computational requirements are considerably lower than for the full TRANSIMS microsimulation. Indeed, the model uses standard emme/2 network data, and runs about eight times faster than real time with more than 100 000 vehicles simultaneously in the simulation on a single Pentium-type CPU. We derive the model's fundamental diagrams and explain them. The model is used to simulate traffic on the emme/2 network of the Portland (Oregon) metropolitan region (20 000 links). Demand is generated by a simplified home-to-work destination assignment which generates about half a million trips for the morning peak. Route assignment is done by iterative feedback between micro-simulation and router. An iterative solution of the route assignment for the above problem can be achieved within about half a day of computing time on a desktop workstation. We compare results with field data and with results of traditional assignment runs by the Portland Metropolitan Planning Organization. Thus, with a model such as this one, it is possible to use a dynamic, activities-based approach to transportation simulation (such as in TRANSIMS) with affordable data and hardware.
This should enable systematic research about the coupling of demand generation, route assignment, and micro-simulation output.
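The link dynamics described above, a flow capacity limiting releases per time step and a storage capacity causing spill-back, can be sketched as a small queue class; a minimal illustration of the stated rules, not the simulation's own code:

```python
from collections import deque

class Link:
    """Queueing link with a flow capacity (vehicles released per time
    step) and a storage capacity (maximum queued vehicles), in the
    spirit of Gawron's queueing dynamics."""
    def __init__(self, flow_cap, storage_cap):
        self.flow_cap = flow_cap
        self.storage_cap = storage_cap
        self.queue = deque()

    def has_space(self):
        return len(self.queue) < self.storage_cap

    def enter(self, vehicle):
        """Admit a vehicle; refusal models spill-back to upstream links."""
        if not self.has_space():
            return False
        self.queue.append(vehicle)
        return True

    def release(self):
        """Release at most flow_cap vehicles this time step, in order."""
        out = []
        for _ in range(min(self.flow_cap, len(self.queue))):
            out.append(self.queue.popleft())
        return out
```

A network simulation would call `release` on every link each time step and hand the released vehicles to the next link on their pre-computed route, retrying when `enter` reports the link full.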
NASA Astrophysics Data System (ADS)
Dorko, E. A.; Glessner, J. W.; Ritchey, C. M.; Rutger, L. L.; Pow, J. J.; Brasure, L. D.; Duray, J. P.; Snyder, S. R.
1986-03-01
The chemiluminescence from electronically excited lead oxide formed during the reaction between lead vapor and either ³Σ O₂ or ¹Δ O₂ has been studied. The reactions were carried out in a flow tube reactor. A microwave discharge was used to generate ¹Δ O₂. The vibronic spectrum was analyzed and the band head assignments were used in a linear least-squares calculation to obtain the vibronic molecular constants for the X, a, b, A, B, C, C', D, and E electronic states of lead oxide. Based on these and other molecular constants, Franck-Condon factors were calculated for the transitions to the ground state and also for the A-a and D-a transitions. Evidence was presented to support a kinetic analysis of the mechanism leading to chemiluminescence under the experimental conditions encountered in the flow tube reactor. Mechanisms presented earlier were verified by the present data.
A static data flow simulation study at Ames Research Center
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Howard, Lauri S.
1987-01-01
Demands in computational power, particularly in the area of computational fluid dynamics (CFD), led NASA Ames Research Center to study advanced computer architectures. One architecture being studied is the static data flow architecture based on research done by Jack B. Dennis at MIT. To improve understanding of this architecture, a static data flow simulator, written in Pascal, has been implemented for use on a Cray X-MP/48. A matrix multiply and a two-dimensional fast Fourier transform (FFT), two algorithms used in CFD work at Ames, have been run on the simulator. Execution times can vary by a factor of more than 2 depending on the partitioning method used to assign instructions to processing elements. Service time for matching tokens has proved to be a major bottleneck. Loop control and array address calculation overhead can double the execution time. The best sustained MFLOPS rates were less than 50% of the maximum capability of the machine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloom, M. H.
1980-01-01
The aim of this program is to contribute to certain facets of the development of the MHD/coal power system, and particularly the CDIF of DOE with regard to its flow train. Consideration is given specifically to the electrical power take-off, the diagnostic and instrumentation systems, the combustor and MHD channel technology, and electrode alternatives. Within the constraints of the program, high priorities were assigned to the problems of power take-off and the related characteristics of the MHD channel, and to the establishment of a non-intrusive, laser-based diagnostic system. The next priority was given to the combustor modeling and to a significantly improved analysis of particle combustion. Separate abstracts were prepared for nine of the ten papers included. One paper was previously included in the data base. (WHK)
Simulation of hypersonic rarefied flows with the immersed-boundary method
NASA Astrophysics Data System (ADS)
Bruno, D.; De Palma, P.; de Tullio, M. D.
2011-05-01
This paper provides a validation of an immersed boundary method for computing hypersonic rarefied gas flows. The method is based on the solution of the Navier-Stokes equation and is validated versus numerical results obtained by the DSMC approach. The Navier-Stokes solver employs a flexible local grid refinement technique and is implemented on parallel machines using a domain-decomposition approach. Thanks to the efficient grid generation process, based on the ray-tracing technique, and the use of the METIS software, it is possible to obtain the partitioned grids to be assigned to each processor with a minimal effort by the user. This allows one to by-pass the expensive (in terms of time and human resources) classical generation process of a body fitted grid. First-order slip-velocity boundary conditions are employed and tested for taking into account rarefied gas effects.
Lava-flow hazard on the SE flank of Mt. Etna (Southern Italy)
NASA Astrophysics Data System (ADS)
Crisci, G. M.; Iovine, G.; Di Gregorio, S.; Lupiano, V.
2008-11-01
A method for mapping lava-flow hazard on the SE flank of Mt. Etna (Sicily, Southern Italy) by applying the Cellular Automata model SCIARA-fv is described, together with the techniques employed for calibration and validation through a parallel Genetic Algorithm. The study area is partly urbanised; it has repeatedly been affected by lava flows from flank eruptions in historical time, and shows evidence of a dominant SSE-trending fracture system. Moreover, a dormant deep-seated gravitational deformation, associated with a larger volcano-tectonic phenomenon, affects the whole south-eastern flank of the volcano. The Etnean 2001 Mt. Calcarazzi lava-flow event has been selected for model calibration, while validation has been performed by considering the 2002 Linguaglossa and the 1991-93 Valle del Bove events, for which suitable back-analysis data are available. Quantitative evaluation of the simulations, with respect to the real events, has been performed by means of two fitness functions, which consider either the areas affected by the lava flows, or both the areas and the eruption duration. Sensitivity analyses are in progress to thoroughly evaluate the influence of parameters, topographic input data, and mesh geometry on model performance, though preliminary results already suggest that the model is robust. In order to evaluate lava-flow hazard in the study area, a regular grid of 340 possible vents, uniformly covering the study area at 500 m intervals, has been hypothesised. For each vent, a statistically significant number of simulations has been planned, adopting combinations of durations, lava volumes, and effusion-rate functions selected from available volcanological data. Performed simulations have been stored in a GIS environment for subsequent analyses and map production.
Probabilities of activation, empirically based on the past behaviour of the volcano, can be assigned to each vent of the grid by considering its elevation, its location with respect to the volcanic edifice, and its proximity to the main weakness zones. Similarly, different probabilities can be assigned to the simulated event types (combinations of durations and lava volumes, and the effusion-rate functions considered). In such a way, an implicit assumption is made that the volcanic style will not change dramatically in the near future. Depending on the criteria adopted for probability evaluation, different maps of lava-flow hazard can be compiled by taking into account both the overlapping of the simulated lava flows and their assumed probabilities, and by finally ranking the computed values into a few relative classes. The adopted methodology allows changes in lava-flow hazard as a function of varying probabilities of occurrence to be explored rapidly, by simply re-processing the database of simulations stored in the GIS. For Civil Protection purposes, in case of the expected imminent opening of a vent in a given sector of the volcano, re-processing may help forecast in real time the areas likely to be affected, and thus help manage the eruptive crisis. Moreover, further simulations can be added to the GIS database whenever new event types of interest are recognised. In this paper, three examples of maps of lava-flow hazard for the SE flank of Mt.
Etna are presented: the first has been realised without assigning any probability to the performed simulations, by simply counting the frequency of lava flows affecting each site; in the second map, information on past eruptions is taken into account, and probabilities are empirically attributed to each simulation based on the location of vents and the types of eruption; in the third, a stronger role is ascribed to the main SSE-trending weakness zone, which crosses the study area between Nicolosi and Trecastagni and is associated with the right flank of the above-cited deep-seated deformation. Despite being only preliminary (being based on a subset of the overall planned simulations), the maps clearly depict the most hazardous sectors of the volcano, identified by applying the coupled modelling-GIS method described here.
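The probability-weighted overlay of stored simulations described above can be sketched in a few lines. This is a minimal illustration with NumPy; the function name, array layout, and the quantile-based scheme for ranking values into a few relative classes are our assumptions, not taken from the paper:

```python
import numpy as np

def hazard_map(flow_masks, vent_probs, event_probs, n_classes=4):
    """Combine simulated lava-flow footprints into a relative hazard map.

    flow_masks  : bool array (n_vents, n_events, rows, cols); True where
                  the flow simulated from vent i under event type j covers a cell.
    vent_probs  : probability of activation assigned to each vent.
    event_probs : probability assigned to each simulated event type.
    """
    n_vents, n_events, rows, cols = flow_masks.shape
    weights = np.outer(vent_probs, event_probs)        # (n_vents, n_events)
    # Weighted overlap of all simulations per grid cell.
    hazard = np.tensordot(weights, flow_masks, axes=2)  # (rows, cols)
    # Rank nonzero hazard values into a few relative classes via quantiles.
    nz = hazard[hazard > 0]
    if nz.size == 0:
        return np.zeros((rows, cols), dtype=int)
    edges = np.quantile(nz, np.linspace(0, 1, n_classes + 1)[1:-1])
    classes = np.digitize(hazard, edges) + 1
    classes[hazard == 0] = 0                            # class 0 = never affected
    return classes
```

Re-processing for a new probability scenario then amounts to calling `hazard_map` again with different `vent_probs` or `event_probs`, without re-running any simulation.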
Shera, E. Brooks
1988-01-01
A detection system is provided for identifying individual particles or molecules having characteristic emission in a flow train of the particles in a flow cell. A position sensitive sensor is located adjacent the flow cell in a position effective to detect the emissions from the particles within the flow cell and to assign spatial and temporal coordinates for the detected emissions. A computer is then enabled to predict spatial and temporal coordinates for the particle in the flow train as a function of a first detected emission. Comparison hardware or software then compares subsequent detected spatial and temporal coordinates with the predicted spatial and temporal coordinates to determine whether subsequently detected emissions originate from a particle in the train of particles. In one embodiment, the particles include fluorescent dyes which are excited to fluoresce a spectrum characteristic of the particular particle. Photons are emitted adjacent to at least one microchannel plate sensor to enable spatial and temporal coordinates to be assigned. The effect of comparing detected coordinates with predicted coordinates is to define a moving sample volume which effectively precludes the effects of background emissions.
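The predicted-versus-detected comparison can be illustrated with a toy one-dimensional sketch. All parameter names and values below are illustrative assumptions, not taken from the patent: given a first detected photon, later photons are accepted only if they fall inside a small spatio-temporal window around the position the flow velocity predicts, i.e., the "moving sample volume" that rejects background emissions:

```python
def same_particle(first, later, v, dx_tol, dt_tol):
    """Does a later detection (x1, t1) match the trajectory predicted from
    a first detection (x0, t0) of a particle moving with flow velocity v?"""
    x0, t0 = first
    x1, t1 = later
    dt = t1 - t0
    if dt <= 0:
        return False
    predicted_x = x0 + v * dt   # expected position at time t1
    return abs(x1 - predicted_x) <= dx_tol and dt <= dt_tol

# A particle moving at v = 2.0 units/s, first detected at (0.0, 0.0):
v = 2.0
first = (0.0, 0.0)
print(same_particle(first, (2.0, 1.0), v, dx_tol=0.1, dt_tol=5.0))  # True: on trajectory
print(same_particle(first, (9.0, 1.0), v, dx_tol=0.1, dt_tol=5.0))  # False: background
```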
Shera, E.B.
1987-10-07
A detection system is provided for identifying individual particles or molecules having characteristic emission in a flow train of the particles in a flow cell. A position sensitive sensor is located adjacent the flow cell in a position effective to detect the emissions from the particles within the flow cell and to assign spatial and temporal coordinates for the detected emissions. A computer is then enabled to predict spatial and temporal coordinates for the particle in the flow train as a function of a first detected emission. Comparison hardware or software then compares subsequent detected spatial and temporal coordinates with the predicted spatial and temporal coordinates to determine whether subsequently detected emissions originate from a particle in the train of particles. In one embodiment, the particles include fluorescent dyes which are excited to fluoresce a spectrum characteristic of the particular particle. Photons are emitted adjacent to at least one microchannel plate sensor to enable spatial and temporal coordinates to be assigned. The effect of comparing detected coordinates with predicted coordinates is to define a moving sample volume which effectively precludes the effects of background emissions. 3 figs.
2015-01-01
Objectives: This study aimed to determine the effect of mobile-based discussion versus computer-based discussion on self-directed learning readiness, academic motivation, learner-interface interaction, and flow state. Methods: This randomized controlled trial was conducted at one university. Eighty-six nursing students who were able to use a computer, had home Internet access, and used a mobile phone were recruited. Participants were randomly assigned to either the mobile phone app-based discussion group (n = 45) or a computer web-based discussion group (n = 41). The effect was measured before and after an online discussion via self-reported surveys that addressed academic motivation, self-directed learning readiness, time distortion, learner-learner interaction, learner-interface interaction, and flow state. Results: The change in extrinsic motivation on identified regulation in the academic motivation (p = 0.011) as well as independence and ability to use basic study skills (p = 0.047) and positive orientation to the future in self-directed learning readiness (p = 0.021) from pre-intervention to post-intervention was significantly more positive in the mobile phone app-based group compared to the computer web-based discussion group. Interaction between learner and interface (p = 0.002), having clear goals (p = 0.012), and giving and receiving unambiguous feedback (p = 0.049) in flow state was significantly higher in the mobile phone app-based discussion group than it was in the computer web-based discussion group at post-test. Conclusions: The mobile phone might offer more valuable learning opportunities for discussion teaching and learning methods in terms of self-directed learning readiness, academic motivation, learner-interface interaction, and the flow state of the learning process compared to the computer. PMID:25995965
Procrastination, Flow, and Academic Performance in Real Time Using the Experience Sampling Method.
Sumaya, Isabel C; Darling, Emily
2018-01-01
The authors' aim was first to provide an alternative methodology for the assessment of procrastination and flow that would not rely on retrospective or prospective self-reports. Using real-time assessment of both procrastination and flow, the authors investigated how these factors impact academic performance by using the Experience Sampling Method. They assessed flow by measuring student self-reported skill versus challenge, and procrastination by measuring the days to completion of an assignment. Procrastination and flow were measured for six days before a writing assignment due date while students (n = 14) were enrolled in a research methods course. Regardless of status of flow, both the nonflow and flow groups showed high levels of procrastination. Students who experienced flow as they worked on their paper, in real time, earned significantly higher grades (M = 3.05 ± 0.30: an average grade of B) as compared with the nonflow group (M = 1.16 ± 0.33: an average grade of D; p = .007). Additionally, students experiencing flow were more accurate in predicting their grade (difference scores, flow M = 0.12 ± 0.33 vs. nonflow M = 1.39 ± 0.29; p = .015). Students in the nonflow group were nearly a grade and a half off in their prediction of their grade on the paper. To the authors' knowledge, the study is the first to provide experimental evidence showing differences in academic performance between students experiencing flow and nonflow students.
Lee, James S; Franc, Jeffrey M
2015-08-01
A high influx of patients during a mass-casualty incident (MCI) may disrupt patient flow in an already overcrowded emergency department (ED) that is functioning beyond its operating capacity. This pilot study examined the impact of a two-step ED triage model using Simple Triage and Rapid Treatment (START) for pre-triage, followed by triage with the Canadian Triage and Acuity Scale (CTAS), on patient flow during a MCI simulation exercise. Hypothesis/Problem: It was hypothesized that there would be no difference in time intervals or patient volumes at each patient-flow milestone. Physicians and nurses participated in a computer-based tabletop disaster simulation exercise. Physicians were randomized into the intervention group using START, then CTAS, or the control group using START alone. Patient-flow milestones including time intervals and patient volumes from ED arrival to triage, ED arrival to bed assignment, ED arrival to physician assessment, and ED arrival to disposition decision were compared. Triage accuracy was compared for secondary purposes. There were no significant differences in the time interval from ED arrival to triage (mean difference 108 seconds; 95% CI, -353 to 596 seconds; P=1.0), ED arrival to bed assignment (mean difference 362 seconds; 95% CI, -1,269 to 545 seconds; P=1.0), ED arrival to physician assessment (mean difference 31 seconds; 95% CI, -1,104 to 348 seconds; P=0.92), and ED arrival to disposition decision (mean difference 175 seconds; 95% CI, -1,650 to 1,300 seconds; P=1.0) between the two groups. There were no significant differences in the volume of patients to be triaged (32% vs 34%; 95% CI for the difference -16% to 21%; P=1.0), assigned a bed (16% vs 21%; 95% CI for the difference -11% to 20%; P=1.0), assessed by a physician (20% vs 22%; 95% CI for the difference -14% to 19%; P=1.0), and with a disposition decision (20% vs 9%; 95% CI for the difference -25% to 4%; P=.34) between the two groups.
The accuracy of triage was similar in both groups (57% vs 70%; 95% CI for the difference -15% to 41%; P=.46). Experienced triage nurses were able to apply CTAS effectively during a MCI simulation exercise. A two-step ED triage model using START, then CTAS, had similar patient flow and triage accuracy when compared to START alone.
Bertlich, Mattis; Ihler, Fritz; Sharaf, Kariem; Weiss, Bernhard G; Strupp, Michael; Canis, Martin
2014-10-01
Betahistine is a histamine-like drug that is used in the treatment of Ménière's disease. It is commonly believed that betahistine increases cochlear blood flow and thus decreases the endolymphatic hydrops that is the cause of Ménière's. Despite common clinical use, there is little understanding of the kinetics or effects of its metabolites. This study investigated the effect of the betahistine metabolites aminoethylpyridine, hydroxyethylpyridine, and pyridylacetic acid on cochlear microcirculation. Guinea pigs were randomly assigned to one of five groups: placebo, betahistine, or equimolar amounts of aminoethylpyridine, hydroxyethylpyridine, or pyridylacetic acid. Cochlear blood flow and mean arterial pressure were recorded for three minutes before and 15 minutes after treatment. Thirty Dunkin-Hartley guinea pigs were assigned to one of five groups, with six guinea pigs per group. Betahistine, aminoethylpyridine, and hydroxyethylpyridine caused a significant increase in cochlear blood flow in comparison to placebo. The effect seen under aminoethylpyridine was greatest. The group treated with pyridylacetic acid showed no significant effect on cochlear blood flow. Aminoethylpyridine and hydroxyethylpyridine are, like betahistine, able to increase cochlear blood flow significantly. The effect of aminoethylpyridine was greatest. Pyridylacetic acid had no effect on cochlear microcirculation.
NASA Astrophysics Data System (ADS)
Ellwood, Robin; Abrams, Eleanor
2017-02-01
This research investigated how student social interactions within two approaches to an inquiry-based science curriculum could be related to student motivation and achievement outcomes. This qualitative case study consisted of two cases, Off-Campus and On-Campus, and used ethnographic techniques of participant observation. Research participants included eight eighth-grade girls, aged 13-14 years. Data sources included formal and informal participant interviews, participant journal reflections, curriculum artifacts including quizzes, worksheets, and student-generated research posters, digital video and audio recordings, photographs, and researcher field notes. Data were transcribed verbatim and coded, then collapsed into emergent themes using NVIVO 9. The results of this research illustrate how setting conditions that promote focused concentration and communicative interactions can be positively related to student motivation and achievement outcomes in inquiry-based science. Participants in the Off-Campus case experienced more frequent states of focused concentration and outperformed their peers in the On-Campus case on 46% of classroom assignments. Off-Campus participants also designed and implemented a more cognitively complex research project, provided more in-depth analyses of their research results, and expanded their perceptions of what it means to act like a scientist to a greater extent than participants in the On-Campus case. These results can be understood in relation to Flow Theory. Student interactions that promoted the criteria necessary for initiating flow, which included having clearly defined goals, receiving immediate feedback, and maintaining a balance between challenges and skills, fostered enhanced student motivation and achievement outcomes.
Implications for science teaching and future research include shifting the current focus in inquiry-based science from a continuum that progresses from teacher-directed to open inquiry experiences to a continuum that also deliberately includes and promotes the necessary criteria for establishing flow. Attending to Flow Theory and incorporating student experiences with flow into inquiry-based science lessons will enhance student motivation and achievement outcomes in science and bolster the success of inquiry-based science.
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This is the second unit of a 15-unit School Mathematics Study Group (SMSG) mathematics text for high school students. Topics presented in the first chapter (Informal Algorithms and Flow Charts) include: changing a flat tire; algorithms, flow charts, and computers; assignment and variables; input and output; using a variable as a counter; decisions…
Empirical model for the volume-change behavior of debris flows
Cannon, S.H.; ,
1993-01-01
The potential travel distance of debris flows depends on the volume-change behavior of flows as they travel down hillsides; movement stops where the volume of actively flowing debris becomes negligible. The average change in volume over distance for 26 recent debris flows in the Honolulu area was assumed to be a function of the slope over which the debris flow traveled, the degree of flow confinement by the channel, and an assigned value for the type of vegetation through which the debris flow traveled. Analysis of the data yielded a relation that can be incorporated into digital elevation models to characterize debris-flow travel on Oahu.
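The kind of empirical relation described can be sketched as follows. The linear form, the coefficient values, and the function names are illustrative placeholders, not Cannon's fitted model: volume change per unit distance depends on slope, confinement, and a vegetation index, and travel stops when the flowing volume becomes negligible:

```python
def volume_change_rate(slope_deg, confined, veg_index,
                       a=1.5, b=-2.0, c=-0.8, d=-1.0):
    # Positive rate = entrainment (volume growth); negative = deposition.
    # Unconfined reaches and dense vegetation promote deposition.
    return a * slope_deg + b * (0 if confined else 1) + c * veg_index + d

def travel_distance(v0, path, step=10.0):
    """March down a path of (slope_deg, confined, veg_index) segments,
    updating volume until it becomes negligible; return (distance, volume)."""
    v, x = v0, 0.0
    for slope, confined, veg in path:
        v += volume_change_rate(slope, confined, veg) * step
        x += step
        if v <= 0:
            break
    return x, max(v, 0.0)

# Steep confined reach entrains debris; flat vegetated fan deposits it:
path = [(20, True, 0)] * 3 + [(0, False, 5)] * 20
print(travel_distance(100.0, path))  # -> (170.0, 0.0): flow stops after 170 m
```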
Sawant, Onkar B.; Ramadoss, Jayanth; Hankins, Gary D.; Wu, Guoyao
2014-01-01
Little is known about the effects of gestational alcohol exposure on maternal and fetal cardiovascular adaptations. This study determined whether maternal binge alcohol exposure and L-glutamine supplementation could affect maternal-fetal hemodynamics and fetal regional brain blood flow during the brain growth spurt period. Pregnant sheep were randomly assigned to one of four groups: saline control, alcohol (1.75–2.5 g/kg body weight), glutamine (100 mg/kg body weight) or alcohol + glutamine. A chronic weekend binge drinking paradigm between gestational days (GD) 99 and 115 was utilized. Fetuses were surgically instrumented on GD 117 ± 1 and studied on GD 120 ± 1. Binge alcohol exposure caused maternal acidemia, hypercapnia, and hypoxemia. Fetuses were acidemic and hypercapnic, but not hypoxemic. Alcohol exposure increased fetal mean arterial pressure, whereas fetal heart rate was unaltered. Alcohol exposure resulted in ~40 % reduction in maternal uterine artery blood flow. Labeled microsphere analyses showed that alcohol induced >2-fold increases in fetal whole brain blood flow. The elevation in fetal brain blood flow was region-specific, particularly affecting the developing cerebellum, brain stem, and olfactory bulb. Maternal L-glutamine supplementation attenuated alcohol-induced maternal hypercapnia, fetal acidemia and increases in fetal brain blood flow. L-Glutamine supplementation did not affect uterine blood flow. Collectively, alcohol exposure alters maternal and fetal acid–base balance, decreases uterine blood flow, and alters fetal regional brain blood flow. Importantly, L-glutamine supplementation mitigates alcohol-induced acid–base imbalances and alterations in fetal regional brain blood flow. Further studies are warranted to elucidate mechanisms responsible for alcohol-induced programming of maternal uterine artery and fetal circulation adaptations in pregnancy. PMID:24810329
Sawant, Onkar B; Ramadoss, Jayanth; Hankins, Gary D; Wu, Guoyao; Washburn, Shannon E
2014-08-01
Little is known about the effects of gestational alcohol exposure on maternal and fetal cardiovascular adaptations. This study determined whether maternal binge alcohol exposure and L-glutamine supplementation could affect maternal-fetal hemodynamics and fetal regional brain blood flow during the brain growth spurt period. Pregnant sheep were randomly assigned to one of four groups: saline control, alcohol (1.75-2.5 g/kg body weight), glutamine (100 mg/kg body weight) or alcohol + glutamine. A chronic weekend binge drinking paradigm between gestational days (GD) 99 and 115 was utilized. Fetuses were surgically instrumented on GD 117 ± 1 and studied on GD 120 ± 1. Binge alcohol exposure caused maternal acidemia, hypercapnia, and hypoxemia. Fetuses were acidemic and hypercapnic, but not hypoxemic. Alcohol exposure increased fetal mean arterial pressure, whereas fetal heart rate was unaltered. Alcohol exposure resulted in ~40 % reduction in maternal uterine artery blood flow. Labeled microsphere analyses showed that alcohol induced >2-fold increases in fetal whole brain blood flow. The elevation in fetal brain blood flow was region-specific, particularly affecting the developing cerebellum, brain stem, and olfactory bulb. Maternal L-glutamine supplementation attenuated alcohol-induced maternal hypercapnia, fetal acidemia and increases in fetal brain blood flow. L-Glutamine supplementation did not affect uterine blood flow. Collectively, alcohol exposure alters maternal and fetal acid-base balance, decreases uterine blood flow, and alters fetal regional brain blood flow. Importantly, L-glutamine supplementation mitigates alcohol-induced acid-base imbalances and alterations in fetal regional brain blood flow. Further studies are warranted to elucidate mechanisms responsible for alcohol-induced programming of maternal uterine artery and fetal circulation adaptations in pregnancy.
Delay functions in trip assignment for transport planning process
NASA Astrophysics Data System (ADS)
Leong, Lee Vien
2017-10-01
In the transportation planning process, volume-delay and turn-penalty functions are needed in traffic assignment to determine travel times on road network links. The volume-delay function describes the speed-flow relationship, while the turn-penalty function describes the delay associated with making a turn at an intersection. The volume-delay function used in this study is the revised Bureau of Public Roads (BPR) function with constant parameters α and β of 0.8298 and 3.361, while the turn-penalty functions for signalized intersections were developed based on uniform, random, and overflow delay models. Parameters such as green time, cycle time and saturation flow were used in the development of the turn-penalty functions. In order to assess the accuracy of the delay functions, the road network in the areas of Nibong Tebal, Penang and Parit Buntar, Perak was developed and modelled using transportation demand forecasting software. To calibrate the models, phase times and traffic volumes at fourteen signalised intersections within the study area were collected during morning and evening peak hours. The predictions of assigned volumes using the revised BPR function and the developed turn-penalty functions show close agreement with the actual recorded traffic volumes, with accuracies ranging from 80.08% to 93.04% for the morning peak model and from 75.59% to 95.33% for the evening peak model. As for the yield left-turn lanes, the lowest accuracies obtained for the morning and evening peak models were 60.94% and 69.74% respectively, while the highest accuracy obtained for both models was 100%. It can therefore be concluded that the development and utilisation of delay functions based on local road conditions are important, as localised delay functions can produce better estimates of link travel times and hence better planning for future scenarios.
Hydrochemical analysis of groundwater using a tree-based model
NASA Astrophysics Data System (ADS)
Litaor, M. Iggy; Brielmann, H.; Reichmann, O.; Shenker, M.
2010-06-01
Hydrochemical indices are commonly used to ascertain aquifer characteristics, salinity problems, anthropogenic inputs and resource management, among others. This study was conducted to test the applicability of a binary decision tree model to aquifer evaluation using hydrochemical indices as input. The main advantage of the tree-based model compared to other commonly used statistical procedures such as cluster and factor analyses is the ability to classify groundwater samples with assigned probability and the reduction of a large data set into a few significant variables without creating new factors. We tested the model using data sets collected from headwater springs of the Jordan River, Israel. The model evaluation consisted of several levels of complexity, from simple separation between the calcium-magnesium-bicarbonate water type of karstic aquifers to the more challenging separation of calcium-sodium-bicarbonate water type flowing through perched and regional basaltic aquifers. In all cases, the model assigned measures for goodness of fit in the form of misclassification errors and singled out the most significant variable in the analysis. The model proceeded through a sequence of partitions providing insight into different possible pathways and changing lithology. The model results were extremely useful in constraining the interpretation of geological heterogeneity and constructing a conceptual flow model for a given aquifer. The tree model clearly identified the hydrochemical indices that were excluded from the analysis, thus providing information that can lead to a decrease in the number of routinely analyzed variables and a significant reduction in laboratory cost.
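A one-split ("stump") version of such a binary decision tree can be sketched in pure Python. The data are synthetic and the split criterion is plain misclassification error rather than the authors' method; the sketch only illustrates how a tree picks the single most significant variable and reports an error measure:

```python
def best_split(samples, labels):
    """Find the (feature, threshold) pair minimizing misclassification error
    for a single binary split over all features and candidate thresholds."""
    n_features = len(samples[0])
    best = (None, None, len(labels) + 1)
    for f in range(n_features):
        for s in samples:
            thr = s[f]
            pred = [1 if x[f] >= thr else 0 for x in samples]
            err = sum(p != y for p, y in zip(pred, labels))
            err = min(err, len(labels) - err)   # allow flipped class labels
            if err < best[2]:
                best = (f, thr, err)
    return best

# Feature order: [Ca/Mg ratio, Na (meq/L)]; class 0 = karstic Ca-Mg-HCO3
# water type, class 1 = basaltic Ca-Na-HCO3 water type (synthetic values).
samples = [[3.1, 0.3], [2.9, 0.4], [3.3, 0.2],   # karstic
           [1.2, 1.5], [1.0, 1.7], [1.3, 1.4]]   # basaltic
labels = [0, 0, 0, 1, 1, 1]
feature, threshold, error = best_split(samples, labels)
print(feature, threshold, error)   # -> 0 2.9 0: Ca/Mg ratio separates perfectly
```

A full tree recurses on each side of the split; the misclassification error plays the role of the goodness-of-fit measure mentioned in the abstract.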
A Gas-Kinetic Scheme for Reactive Flows
NASA Technical Reports Server (NTRS)
Lian, Yong-Sheng; Xu, Kun
1998-01-01
In this paper, the gas-kinetic BGK scheme for the compressible flow equations is extended to chemical reactive flow. The mass fraction of the unburnt gas is implemented into the gas kinetic equation by assigning a new internal degree of freedom to the particle distribution function. The new variable can also be used to describe fluid trajectories for nonreactive flows. Due to the gas-kinetic BGK model, the current scheme basically solves the Navier-Stokes chemical reactive flow equations. Numerical tests validate the accuracy and robustness of the current kinetic method.
Di Vecchi-Staraz, Manuel; Laucou, Valérie; Bruno, Gérard; Lacombe, Thierry; Gerber, Sophie; Bourse, Thibaut; Boselli, Maurizio; This, Patrice
2009-01-01
A parentage and a paternity-based approach were tested for estimation of pollen-mediated gene flow in wild grapevine (Vitis vinifera L. subsp. silvestris), a wind-pollinated species occurring in Mediterranean Europe and southwestern Asia. For this purpose, 305 seedlings collected in 2 years at 2 locations in France from 4 wild female individuals and 417 wild individuals prospected from France and Italy were analyzed using 20 highly polymorphic microsatellite loci. Their profiles were compared with a database consisting of 3203 accessions from the Institut National de la Recherche Agronomique Vassal collection including cultivars, rootstocks, interspecific hybrids, and other wild individuals. Paternity was assigned for 202 (66.2%) of the 305 seedlings, confirming the feasibility of the method. Most of the fertilizing pollen could be assigned to wild males growing nearby. Estimates of pollen immigration from the cultivated compartment (i.e., the totality of cultivars) ranged from 4.2% to 26% from nearby vineyards and from hidden pollinators such as cultivars and rootstocks that had escaped from farms. In an open landscape, the pollen flow was correlated to the distance between individuals, the main pollinator being the closest wild male (accounting for 51.4-86.2% of the pollen flow). In a closed landscape, more complex pollination occurred. Analysis of the parentage of the 417 wild individuals also revealed relationships between nearby wild individuals, but in the case of 12 individuals (3%), analysis revealed pollen immigration from vineyards, confirming the fitness of the hybrid seedlings. These pollen fluxes may have a significant effect on the evolution of wild populations: on the one hand, the low level of pollen-mediated gene flow from cultivated to wild grapevine could contribute to a risk of extinction of the wild compartment (i.e., the totality of the wild individuals). 
On the other hand, pollen dispersal within the wild populations may induce inbreeding depression of wild grapevines.
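A simplified exclusion-based paternity check conveys the idea behind such assignments. This is an illustrative sketch, not the authors' assignment method: it ignores mutation and genotyping error, and a candidate father is retained at a locus only if he carries an allele that could be the paternal allele, i.e., a seedling allele not accounted for by the known mother:

```python
def compatible_at_locus(seedling, mother, candidate):
    """Could the candidate have contributed the paternal allele at one locus?
    Genotypes are pairs of microsatellite allele sizes."""
    # Alleles the mother cannot have contributed; if both seedling alleles
    # occur in the mother, the paternal allele could be either one.
    paternal_options = set(seedling) - set(mother) or set(seedling)
    return bool(paternal_options & set(candidate))

def compatible(seedling_geno, mother_geno, candidate_geno):
    """Multi-locus check: the candidate must be compatible at every locus."""
    return all(compatible_at_locus(s, m, c)
               for s, m, c in zip(seedling_geno, mother_geno, candidate_geno))

# One locus: seedling (120, 124), mother (120, 122) -> paternal allele is 124.
print(compatible_at_locus((120, 124), (120, 122), (124, 126)))  # True
print(compatible_at_locus((120, 124), (120, 122), (118, 122)))  # False
```

In practice, likelihood-based methods over many polymorphic loci (as in the study's 20-locus panel) are used instead of bare exclusion, since they handle genotyping error and distinguish among multiple non-excluded candidates.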
Characterizing the Fundamental Intellectual Steps Required in the Solution of Conceptual Problems
NASA Astrophysics Data System (ADS)
Stewart, John
2010-02-01
At some level, the performance of a science class must depend on what is taught, the information content of the materials and assignments of the course. The introductory calculus-based electricity and magnetism class at the University of Arkansas is examined using a catalog of the basic reasoning steps involved in the solution of problems assigned in the class. This catalog was developed by sampling popular physics textbooks for conceptual problems. The solution to each conceptual problem was decomposed into its fundamental reasoning steps. These fundamental steps are then used to quantify the distribution of conceptual content within the course. Using this characterization technique, an exceptionally detailed picture of the information flow and structure of the class can be produced. The intellectual structure of published conceptual inventories is compared with the information presented in the class, and the dependence of conceptual performance on the details of coverage is extracted.
Spatial reasoning to determine stream network from LANDSAT imagery
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Wang, S.; Elliott, D. B.
1983-01-01
In LANDSAT imagery, spectral and spatial information can be used to detect the drainage network as well as the relative elevation model in mountainous terrain. To do this, mixed information of material reflectance in the original LANDSAT imagery must be separated. From the material reflectance information, big visible rivers can be detected. From the topographic modulation information, ridges and valleys can be detected and assigned relative elevations. A complete elevation model can be generated by interpolating values for non-ridge and non-valley pixels. The small streams not detectable from material reflectance information can be located in the valleys with flow direction known from the elevation model. Finally, the flow directions of big visible rivers can be inferred by solving a consistent labeling problem based on a set of spatial reasoning constraints.
Snel, G G M; Malvisi, M; Pilla, R; Piccinini, R
2014-12-05
It was hypothesized that biofilm could play an important role in the establishment of chronic Staphylococcus aureus bovine mastitis. The in vitro evaluation of biofilm formation can be performed either in closed/static or in flow-based systems. Efforts have been made to characterize the biofilm-forming ability of S. aureus mastitis isolates; however, most authors used static systems and matrices other than UHT milk. It is not clear whether such results can be extrapolated to the mammary gland environment. Therefore, the present study aimed to investigate the biofilm-forming ability of S. aureus strains from subclinical bovine mastitis using the static method and a flow-based one. One hundred and twelve strains were tested by the classic tissue culture plate assay (TCP), and 30 of them were also tested by a dynamic semi-quantitative assay using commercial UHT milk as culture medium (Milk Flow Culture, MFC) or Tryptic Soy Broth as control medium (TS Flow Culture, TSFC). Only 6 (20%) strains formed biofilm in milk under flow conditions, while 36.6% were considered biofilm-producers in TCP, and 93.3% produced biofilm in TSFC. No agreement was found between TCP, MFC and TSFC results. The association between strain genetic profile, determined by microarray, and biofilm-forming ability in milk was evaluated. Biofilm formation in MFC was significantly associated with the presence of those genes commonly found in bovine-associated strains, assigned to clonal complexes typically detected in mastitis. Based on our results, the biofilm-forming potential of bovine strains should be critically analysed and tested under conditions similar to the mammary environment.
Flight Departure Delay and Rerouting Under Uncertainty in En Route Convective Weather
NASA Technical Reports Server (NTRS)
Mukherjee, Avijit; Grabbe, Shon; Sridhar, Banavar
2011-01-01
Delays caused by uncertainty in weather forecasts can be reduced by improving traffic flow management decisions. This paper presents a methodology for traffic flow management under uncertainty in convective weather forecasts. An algorithm for assigning departure delays and reroutes to aircraft is presented. Departure delay and route assignment are executed at multiple stages, during which, updated weather forecasts and flight schedules are used. At each stage, weather forecasts up to a certain look-ahead time are treated as deterministic and flight scheduling is done to mitigate the impact of weather on four-dimensional flight trajectories. Uncertainty in weather forecasts during departure scheduling results in tactical airborne holding of flights. The amount of airborne holding depends on the accuracy of forecasts as well as the look-ahead time included in the departure scheduling. The weather forecast look-ahead time is varied systematically within the experiments performed in this paper to analyze its effect on flight delays. Based on the results, longer look-ahead times cause higher departure delays and additional flying time due to reroutes. However, the amount of airborne holding necessary to prevent weather incursions reduces when the forecast look-ahead times are higher. For the chosen day of traffic and weather, setting the look-ahead time to 90 minutes yields the lowest total delay cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Hwang, Ho-Ling; Davidson, Diane
2016-07-01
The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive picture of nationwide freight movements among states and major metropolitan areas for all modes of transportation. It provides a national picture of current freight flows to, from, and within the United States, assigns selected flows to the transportation network, and projects freight flow patterns into the future. The latest release of FAF is known as FAF4 with a base year of 2012. The FAF4 origin-destination-commodity-mode (ODCM) matrix is provided at national, state, major metropolitan areas, and major gateways with significant freight activities (e.g., El Paso, Texas). The U.S. Department of Energy (DOE) is interested in using the FAF4 database for its strategic planning and policy analysis, particularly in association with the transportation of energy commodities. However, the geographic specification that DOE requires is a county-level ODCM matrix. Unfortunately, the geographic regions in the FAF4 database were not available at the level of detail DOE desired. Due to this limitation, DOE tasked Oak Ridge National Laboratory (ORNL) to assist in generating estimates of county-level flows for selected energy commodities by mode of transportation.
A time-based concept for terminal-area traffic management
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Tobias, Leonard
1986-01-01
An automated air-traffic-management concept that has the potential for significantly increasing the efficiency of traffic flows in high-density terminal areas is discussed. The concept's implementation depends on techniques for controlling the landing time of all aircraft entering the terminal area, both those equipped with on-board four-dimensional (4D) guidance systems and those that are conventionally equipped. The two major ground-based elements of the system are a scheduler, which assigns conflict-free landing times, and a profile descent advisor. The landing time provided by the scheduler is uplinked to equipped aircraft and translated into the appropriate 4D trajectory by the on-board flight-management system. The controller issues descent advisories to unequipped aircraft to help them achieve the assigned landing times. Air traffic control simulations have established that the concept provides an efficient method for controlling various mixes of 4D-equipped and unequipped, as well as low- and high-performance, aircraft. Piloted simulations of profiles flown with the aid of advisories have verified the ability to meet specified descent times with prescribed accuracy.
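The scheduler's core function, assigning conflict-free landing times, can be sketched as a first-come-first-served allocation with a minimum separation between successive landings. This is a minimal illustration of the idea, not the actual NASA scheduler; the separation value and arrival times are assumed.

```python
# First-come-first-served landing-time scheduler sketch: each aircraft
# gets the earliest landing time at or after its estimated time of
# arrival (ETA) that keeps the required separation from the previously
# scheduled landing. Separation and ETAs are illustrative.

MIN_SEPARATION = 90.0  # seconds between successive landings (assumed)

def schedule_landings(etas):
    """Assign conflict-free landing times to ETAs, FCFS order."""
    assigned = []
    last = None
    for eta in sorted(etas):
        t = eta if last is None else max(eta, last + MIN_SEPARATION)
        assigned.append(t)
        last = t
    return assigned

print(schedule_landings([0.0, 30.0, 200.0]))  # -> [0.0, 90.0, 200.0]
```

The second aircraft is delayed 60 seconds to respect the separation; the third arrives late enough to need no delay.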
Tsibikov, V B; Ragozin, S I; Mikheeva, L V
1985-01-01
A flow chart is developed demonstrating the relation between medical and prophylactic institutions within the organizational structure of the rehabilitation system and the main types of rehabilitation procedures. To ascertain priorities in equipping rehabilitation services with adequate hardware, a special priority criterion is introduced; the highest priority is assigned to balneotherapeutic and fangotherapeutic services. Based on an operation-by-operation analysis of clinical processes related to the service and performance of balneologic procedures, a preliminary set of clinical devices designed for baths, basins, and showers in hospitals and rehabilitation departments is defined in generalized form.
USDA-ARS?s Scientific Manuscript database
The objective was to examine the effect of maternal nutrient restriction followed by realimentation during mid-gestation on uterine blood flow (BF). On Day 30 of pregnancy, lactating, multiparous Simmental beef cows were assigned randomly to treatments: control (CON; 100% National Research Council; ...
50 CFR 679.93 - Amendment 80 Program recordkeeping, permits, monitoring, and catch accounting.
Code of Federal Regulations, 2014 CFR
2014-10-01
... CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE (CONTINUED... space to accommodate a minimum of 10 observer sampling baskets. This space must be within or adjacent to... observers assigned to the vessel. (8) Belt and flow operations. The vessel operator stops the flow of fish...
76 FR 30322 - Notice of Availability of Government-Owned Inventions; Available for Licensing
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-25
... below are assigned to the United States Government as represented by the Secretary of the Navy. U.S... ``Automatic Clock Synchronization and Distribution Circuit for Counter Clock Flow Pipelined Systems'' issued... Flow and Metallic Conformal Coating of Conductive Templates'' issued on October 12, 2010; U.S. Patent...
Nelms, D.L.; Harlow, G.E.; Hayes, Donald C.
1995-01-01
Growth within the Valley and Ridge, Blue Ridge, and Piedmont Physiographic Provinces of Virginia has focused concern about allocation of surface-water flow and increased demands on the ground-water resources. The purpose of this report is to (1) describe the base-flow characteristics of streams, (2) identify regional differences in these flow characteristics, and (3) describe, if possible, the potential surface-water and ground-water yields of basins on the basis of the base-flow characteristics. Base-flow characteristics are presented for streams in the Valley and Ridge, Blue Ridge, and Piedmont Physiographic Provinces of Virginia. The provinces are separated into five regions: (1) Valley and Ridge, (2) Blue Ridge, (3) Piedmont/Blue Ridge transition, (4) Piedmont northern, and (5) Piedmont southern. Different flow statistics, which represent streamflows predominantly composed of base flow, were determined for 217 continuous-record streamflow-gaging stations from historical mean daily discharge and for 192 partial-record streamflow-gaging stations by means of correlation of discharge measurements. Variability of base flow is represented by a duration ratio developed during this investigation. Effective recharge rates were also calculated. Median values for the different flow statistics range from 0.05 cubic foot per second per square mile for the 90-percent discharge on the streamflow-duration curve to 0.61 cubic foot per second per square mile for mean base flow. The 50-percent discharge on the streamflow-duration curve is an excellent estimator of mean base flow for the Piedmont/Blue Ridge transition and Piedmont southern regions, but it tends to underestimate mean base flow for the remaining regions. The base-flow variability index ranges from 0.07 to 2.27, with a median value of 0.55. Effective recharge rates range from 0.07 to 33.07 inches per year, with a median value of 8.32 inches per year.
Differences in the base-flow characteristics exist between regions. The median discharges for the Valley and Ridge, Blue Ridge, and Piedmont/Blue Ridge transition regions are higher than those for the Piedmont regions. Results from statistical analysis indicate that the regions can be ranked in terms of base-flow characteristics from highest to lowest as follows: (1) Piedmont/Blue Ridge transition, (2) Valley and Ridge and Blue Ridge, (3) Piedmont southern, and (4) Piedmont northern. The flow statistics are consistently higher and the values for base-flow variability are lower for basins within the Piedmont/Blue Ridge transition region relative to those from the other regions, whereas the basins within the Piedmont northern region show the opposite pattern. The group rankings of the base-flow characteristics were used to designate the potential surface-water yield for the regions. In addition, an approach developed for this investigation assigns a rank for potential surface-water yield to a basin according to the quartiles in which the values for the base-flow characteristics are located. Both procedures indicate that the Valley and Ridge, Blue Ridge, and Piedmont/Blue Ridge transition regions have moderate-to-high potential surface-water yield and the Piedmont regions have low-to-moderate potential surface-water yield. In order to indicate potential ground-water yield from base-flow characteristics, aquifer properties for 51 streamflow-gaging stations with continuous records of streamflow data were determined by methods that use streamflow records and basin characteristics. Areal diffusivity ranges from 17,100 to 88,400 feet squared per day, with a median value of 38,400 feet squared per day. Areal transmissivity ranges from 63 to 830 feet squared per day, with a median value of 270 feet squared per day.
Storage coefficients, which were estimated by dividing areal transmissivity by areal diffusivity, range from approximately 0.001 to 0.019 (dimensionless), with a median value of 0.007. The median value for areal diffus
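The storage-coefficient estimate described above is a simple ratio of the two aquifer properties; a quick check with the reported median values reproduces the median storage coefficient of about 0.007:

```python
# Storage coefficient = areal transmissivity / areal diffusivity,
# checked against the median values reported in the abstract.

median_transmissivity = 270.0  # ft^2/day (median, from the abstract)
median_diffusivity = 38400.0   # ft^2/day (median, from the abstract)

storage_coefficient = median_transmissivity / median_diffusivity
print(round(storage_coefficient, 3))  # -> 0.007
```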
A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling.
Li, Bin-Bin; Wang, Ling
2007-06-01
This paper proposes a hybrid quantum-inspired genetic algorithm (HQGA) for the multiobjective flow shop scheduling problem (FSSP), a typical NP-hard combinatorial optimization problem with a strong engineering background. On the one hand, a quantum-inspired GA (QGA) based on Q-bit representation is applied for exploration in the discrete 0-1 hyperspace, using the updating operator of the quantum gate and genetic operators of the Q-bit. Moreover, a random-key representation is used to convert the Q-bit representation to a job permutation for evaluating the objective values of the schedule solution. On the other hand, a permutation-based GA (PGA) is applied both to perform exploration in the permutation-based scheduling space and to stress exploitation of good schedule solutions. To evaluate solutions in the multiobjective sense, a randomly weighted linear-sum function is used in the QGA, and a nondominated sorting technique including classification of Pareto fronts and fitness assignment is applied in the PGA with regard to both proximity and diversity of solutions. To maintain the diversity of the population, two population-trimming techniques are proposed. The proposed HQGA is tested on a set of multiobjective FSSPs. Simulation results and comparisons based on several performance metrics demonstrate the effectiveness of the proposed HQGA.
Hoffman, Robert A; Wang, Lili; Bigos, Martin; Nolan, John P
2012-09-01
Results from a standardization study cosponsored by the International Society for Advancement of Cytometry (ISAC) and the US National Institute of Standards and Technology (NIST) are reported. The study evaluated the variability of assigning intensity values to fluorophore standard beads by bead manufacturers and the variability of cross-calibrating the standard beads to stained polymer beads (hard-dyed beads) using different flow cytometers. Hard-dyed beads are generally not spectrally matched to the fluorophores used to stain cells, and spectral response varies among flow cytometers. Thus, if hard-dyed beads are used as fluorescence calibrators, one expects calibration for specific fluorophores (e.g., FITC or PE) to vary among different instruments. Using standard beads surface-stained with specific fluorophores (FITC, PE, APC, and Pacific Blue™), the study compared the measured intensity of fluorophore standard beads to that of hard-dyed beads through cross-calibration on 133 different flow cytometers. Using robust CV as a measure of variability, the variation of cross-calibrated values was typically 20% or more for a particular hard-dyed bead in a specific detection channel. The variation across different instrument models was often greater than the variation within a particular instrument model. As a separate part of the study, NIST and four bead manufacturers used a NIST-supplied protocol and calibrated fluorophore solution standards to assign intensity values to the fluorophore beads. Values assigned to the reference beads by different groups varied by orders of magnitude in most cases, reflecting differences in the instrumentation used to perform the calibration. The study concluded that the use of any spectrally unmatched hard-dyed bead as a general fluorescence calibrator must be verified and characterized for every particular instrument model.
Close interaction between bead manufacturers and NIST is recommended to have reliable and uniformly assigned fluorescence standard beads. Copyright © 2012 International Society for Advancement of Cytometry.
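The robust CV used as the variability measure above is a percentile-based statistic designed to resist outliers; one widely used flow-cytometry form is rCV = 100 × (P84 − P16) / (2 × median). The exact definition used in the study is not given in the abstract, so this sketch assumes that form and uses a simple nearest-rank percentile:

```python
# Robust CV sketch under the assumed definition
# rCV = 100 * (P84 - P16) / (2 * median).

def robust_cv(values):
    s = sorted(values)
    def pct(p):  # nearest-rank percentile (sketch only)
        return s[min(len(s) - 1, int(round(p / 100 * (len(s) - 1))))]
    median, p16, p84 = pct(50), pct(16), pct(84)
    return 100.0 * (p84 - p16) / (2.0 * median)

print(robust_cv([5.0] * 10))  # identical intensities -> 0.0
```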
47 CFR 32.6532 - Network administration expense.
Code of Federal Regulations, 2013 CFR
2013-10-01
... includes such activities as controlling traffic flow, administering traffic measuring and monitoring devices, assigning equipment and load balancing, collecting and summarizing traffic data, administering...
47 CFR 32.6532 - Network administration expense.
Code of Federal Regulations, 2012 CFR
2012-10-01
... includes such activities as controlling traffic flow, administering traffic measuring and monitoring devices, assigning equipment and load balancing, collecting and summarizing traffic data, administering...
47 CFR 32.6532 - Network administration expense.
Code of Federal Regulations, 2014 CFR
2014-10-01
... includes such activities as controlling traffic flow, administering traffic measuring and monitoring devices, assigning equipment and load balancing, collecting and summarizing traffic data, administering...
A mixed-mode traffic assignment model with new time-flow impedance function
NASA Astrophysics Data System (ADS)
Lin, Gui-Hua; Hu, Yu; Zou, Yuan-Yang
2018-01-01
Recently, with the wide adoption of electric vehicles, transportation networks have taken on different characteristics and been further developed. In this paper, we present a new time-flow impedance function, which may be more realistic than existing time-flow impedance functions. Based on this new impedance function, we present an optimization model for a mixed-mode traffic network in which battery electric vehicles (BEVs) and gasoline vehicles (GVs) are chosen. We suggest two approaches to handle the model: one uses the interior point (IP) algorithm and the other employs the sequential quadratic programming (SQP) algorithm. Three numerical examples are presented to illustrate the efficiency of these approaches. In particular, our numerical results show that more travelers prefer to choose BEVs when the distance limit of BEVs is long enough and the unit operating cost of GVs is higher than that of BEVs, and that the SQP algorithm is faster than the IP algorithm.
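The abstract does not reproduce the new impedance function itself; for context, the classical baseline that time-flow impedance functions typically generalize is the BPR (Bureau of Public Roads) link-performance function, sketched here with its conventional default parameters (not values from the paper):

```python
# Classical BPR time-flow impedance: t = t0 * (1 + alpha * (v/c)^beta),
# where t0 is free-flow travel time, v is link flow, c is capacity.
# alpha = 0.15 and beta = 4 are the conventional defaults.

def bpr_travel_time(free_flow_time, flow, capacity, alpha=0.15, beta=4.0):
    """Link travel time as an increasing function of the flow/capacity ratio."""
    return free_flow_time * (1.0 + alpha * (flow / capacity) ** beta)

print(bpr_travel_time(10.0, 1000.0, 1000.0))  # at capacity -> 11.5
```

At zero flow the travel time equals the free-flow time; at capacity it is 15% higher under the default parameters.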
Interaction of Airspace Partitions and Traffic Flow Management Delay with Weather
NASA Technical Reports Server (NTRS)
Lee, Hak-Tae; Chatterji, Gano B.; Palopo, Kee
2011-01-01
The interaction of partitioning the airspace and delaying flights in the presence of convective weather is explored to study how re-partitioning the airspace can help reduce congestion and delay. Three approaches with varying complexities are employed to compute the ground delays. In the first approach, an airspace partition of 335 high-altitude sectors that is based on clear-weather-day traffic is used. Routes are then created to avoid regions of convective weather. With traffic flow management, this approach establishes the baseline with a per-flight delay of 8.4 minutes. In the second approach, traffic flow management is used to select routes and assign departure delays such that only the airport capacity constraints are met. This results in 6.7 minutes of average departure delay. The airspace is then partitioned with a specific capacity. It is shown that airspace-capacity-induced delay can be reduced to zero at a cost of 20 percent more sectors for the examined scenario.
A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes.
Lee, I; Sikora, R; Shaw, M J
1997-01-01
Genetic algorithms (GAs) have been used widely for such combinatorial optimization problems as the traveling salesman problem (TSP), the quadratic assignment problem (QAP), and job shop scheduling. In all of these problems there is usually a well-defined representation which GAs use to solve the problem. We present a novel approach for solving two related problems, lot sizing and sequencing, concurrently using GAs. The essence of our approach lies in using a unified representation for the information about both the lot sizes and the sequence, and in enabling GAs to evolve the chromosome by replacing primitive genes with good building blocks. In addition, a simulated annealing procedure is incorporated to further improve the performance. We evaluate the performance of applying the above approach to flexible flow-line scheduling with variable lot sizes for an actual manufacturing facility, comparing it to such alternative approaches as pairwise exchange improvement, tabu search, and simulated annealing procedures. The results show the efficacy of this approach for flexible flow-line scheduling.
Gardner-Santana, L C; Norris, D E; Fornadel, C M; Hinson, E R; Klein, S L; Glass, G E
2009-07-01
Movement of individuals promotes colonization of new areas and gene flow among local populations, and has implications for the spread of infectious agents and the control of pest species. Wild Norway rats (Rattus norvegicus) are common in highly urbanized areas, but surprisingly little is known of their population structure. We sampled individuals from 11 locations within Baltimore, Maryland, to characterize the genetic structure and extent of gene flow between areas within the city. Clustering methods and a neighbour-joining tree based on pairwise genetic distances supported an east-west division in the inner city, and a third cluster comprised of historically more recent sites. Most individuals (approximately 95%) were assigned to their area of capture, indicating strong site fidelity. Moreover, the axial dispersal distance of rats (62 m) fell within typical alley length. Several rats were assigned to areas 2-11.5 km away, indicating some, albeit infrequent, long-distance movement within the city. Although individual movement appears to be limited (30-150 m), locations up to 1.7 km apart are comprised of relatives. Moderate F(ST), differentiation between identified clusters, and high allelic diversity indicate that regular gene flow, either via recruitment or migration, has prevented isolation. Therefore, the ecology of commensal rodents in urban areas and the life-history characteristics of Norway rats likely counteract many expected effects of isolation or founder events. An understanding of the levels of connectivity of rat populations inhabiting urban areas provides information about the spatial scale at which populations of rats may spread disease, invade new areas, or be eradicated from an existing area without reinvasion.
NASA Technical Reports Server (NTRS)
Mccanna, R. W.; Sims, W. H.
1972-01-01
Results are presented for an experimental space shuttle stage-separation plume-impingement program conducted in the NASA-Marshall Space Flight Center's impulse base flow facility (IBFF). Major objectives of the investigation were to: (1) determine the degree of dual-engine exhaust plume simulation obtained using the equivalent engine; (2) determine the applicability of the analytical techniques; and (3) obtain data applicable for use in full-scale studies. The IBFF tests determined the orbiter rocket motor plume impingement loads, both pressure and heating, on a 3 percent General Dynamics B-15B booster configuration in a quiescent environment simulating a nominal staging altitude of 73.2 km (240,000 ft). The data included plume surveys of two 3 percent scale orbiter nozzles and a 4.242 percent scale equivalent nozzle - equivalent in the sense that it was designed to have the same nozzle-throat-to-area ratio as the two 3 percent nozzles, and, within the tolerances assigned for machining the hardware, this was accomplished.
NASA Astrophysics Data System (ADS)
Assari, Amin; Mohammadi, Zargham
2017-09-01
Karst systems show high spatial variability of hydraulic parameters over small distances, and this makes their modeling a difficult task with several uncertainties. Interconnections of fractures have a major role in the transport of groundwater, but many of the stochastic methods in use do not have the capability to reproduce these complex structures. A methodology is presented for the quantification of tortuosity using the single normal equation simulation (SNESIM) algorithm and a groundwater flow model. A training image was produced based on the statistical parameters of fractures and then used in the simulation process. The SNESIM algorithm was used to generate 75 realizations of the four classes of fractures in a karst aquifer in Iran. The results from six dye tracing tests were used to assign hydraulic conductivity values to each class of fractures. In the next step, the MODFLOW-CFP and MODPATH codes were consecutively implemented to compute the groundwater flow paths. The 9,000 flow paths obtained from the MODPATH code were further analyzed to calculate the tortuosity factor. Finally, the hydraulic conductivity values calculated from the dye tracing experiments were refined using the actual flow paths of groundwater. The key outcomes of this research are: (1) a methodology for the quantification of tortuosity; (2) hydraulic conductivities that are incorrectly estimated (biased low) with empirical equations that assume Darcian (laminar) flow with parallel rather than tortuous streamlines; and (3) an understanding of the scale-dependence and non-normal distributions of tortuosity.
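Tortuosity of a flow path is commonly quantified as the ratio of the path length actually travelled to the straight-line distance between its end points. A sketch of that computation for a MODPATH-style polyline of (x, y) points (the coordinates are illustrative, not from the study):

```python
# Tortuosity = path length / straight-line distance between endpoints.

import math

def tortuosity(path):
    """Tortuosity factor of a polyline given as a list of (x, y) points."""
    length = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))
    straight = math.dist(path[0], path[-1])
    return length / straight

# A dog-leg path of length 10 between endpoints 6 apart:
print(round(tortuosity([(0, 0), (3, 4), (6, 0)]), 3))  # -> 1.667
```

A perfectly straight path gives a factor of 1; more tortuous conduit paths give larger values.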
High Energy Boundary Conditions for a Cartesian Mesh Euler Solver
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Murman, Scott; Aftosmis, Michael
2003-01-01
Inlets and exhaust nozzles are commonplace in the world of flight, yet many aerodynamic simulation packages do not provide a method of modelling such high-energy boundaries in the flow field. For the purposes of aerodynamic simulation, inlets and exhausts are often faired over, and it is assumed that the flow differences resulting from this assumption are minimal. While this is an adequate assumption for the prediction of lift, the lack of a plume behind the aircraft creates an evacuated base region, thus affecting both drag and pitching-moment values. In addition, the flow in the base region is often mis-predicted, resulting in incorrect base drag. In order to accurately predict these quantities, a method for specifying inlet and exhaust conditions needs to be available in aerodynamic simulation packages. A method for a first approximation of a plume, without accounting for chemical reactions, is added to the Cartesian-mesh-based aerodynamic simulation package CART3D. The method consists of three steps. In the first step, a components approach is used in which each triangle is assigned a component number; a method for marking the inlet or exhaust plane triangles as separate components is discussed. In step two, the flow solver is modified to accept a reference state for the components marked inlet or exhaust. In the third step, the flow solver uses these separated components and the reference state to compute the correct flow condition at each such triangle. CART3D consists of a set of tools for generating a Cartesian volume mesh from a set of component triangulations; the Euler equations are solved on the resulting unstructured Cartesian mesh. The present method is implemented in this package, and its usefulness is demonstrated with two validation cases. A generic missile body is also presented to show the usefulness of the method on a real-world geometry.
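The three-step approach above amounts to tagging boundary triangles by component and having the solver substitute a prescribed reference state for inlet/exhaust components. A minimal sketch of that lookup; the tags, state fields, and values are illustrative, not CART3D internals:

```python
# Component-tagged boundary condition lookup: triangles whose component
# is marked "inlet" or "exhaust" get a prescribed reference state;
# all other triangles fall back to the ordinary wall condition.
# State values are illustrative (nondimensional), not CART3D data.

reference_states = {
    "exhaust": {"pressure": 2.5, "temperature": 3.0, "mach": 1.0},
    "inlet": {"pressure": 0.8, "temperature": 1.0, "mach": 0.5},
}

def boundary_state(component_tag, wall_state):
    """Reference state for inlet/exhaust components, else the wall state."""
    return reference_states.get(component_tag, wall_state)

print(boundary_state("exhaust", None)["mach"])  # -> 1.0
```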
Delay Banking for Managing Air Traffic
NASA Technical Reports Server (NTRS)
Green, Steve
2008-01-01
Delay banking has been invented to enhance air-traffic management in a way that would increase the degree of fairness in assigning arrival, departure, and en-route delays and trajectory deviations to aircraft impacted by congestion in the national airspace system. In delay banking, an aircraft operator (airline, military, general aviation, etc.) would be assigned a numerical credit when any of its flights are delayed because of an air-traffic flow restriction. The operator could subsequently bid against other operators competing for access to congested airspace to utilize part or all of its accumulated credit. Operators could utilize credits to obtain higher priority for the same flight or for other flights operating at the same time or later, in the same airspace or elsewhere. Operators could also trade delay credits, according to market rules that would be determined by stakeholders in the national airspace system. Delay banking would be administered by an independent third party who would use delay-banking automation to continually monitor flights, allocate delay credits, maintain accounts of delay credits for participating airlines, mediate bidding and the consumption of credits of winning bidders, analyze potential transfers of credits within and between operators, implement accepted transfers, and ensure fair treatment of all participating operators. A flow restriction can manifest itself in the form of a delay in assigned takeoff time, a reduction in assigned airspeed, a change in the position of the aircraft in a queue of all aircraft in a common stream of traffic (e.g., a similar route), a change in the planned altitude profile for an aircraft, or a change in the planned route for the aircraft. Flow restrictions are typically imposed to mitigate traffic congestion at an airport or in a region of airspace, particularly congestion due to inclement weather or the unavailability of a runway or region of airspace.
A delay credit would be allocated to an operator of a flight that has accepted, or upon which was imposed, a flow restriction. The amount of the credit would increase with the amount of delay caused by the flow restriction, the exact amount depending on which of several candidate formulas is eventually chosen. For example, according to one formula, there would be no credit for a delay smaller than some threshold value (e.g., 30 seconds) and the amount of the credit for a longer delay would be set at the amount of the delay minus the threshold value. Optionally, the value of a delay credit could be made to decay with time according to a suitable formula (e.g., an exponential decay). Also, optionally, a transaction charge could be assessed against the value of a delay credit that an operator used on a flight different from the one for which the delay originated or that was traded with a different operator. The delay credits accumulated by a given airline could be utilized in various ways. For example, an operator could enter a bid for priority handling in a new flow restriction that impacts one or more of the operator s flights; if the bid were unsuccessful, all or a portion of the credit would be returned to the bidder. If the bid pertained to a single aircraft that was in a queue, delay credits could be consumed in moving the aircraft to an earlier position within the queue. In the case of a flow restriction involving a choice of alternate routes, planned altitude profile, aircraft spacing, or other non-queue flow restrictions, delay credits could be used to bid for an alternative assignment.
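The threshold formula described above, together with the optional exponential decay, can be sketched directly. The 30-second threshold is the example value from the text; the decay rate is an assumed illustration:

```python
# Delay credit per the example formula: no credit below a threshold,
# otherwise credit = delay - threshold. Optional exponential decay of
# unused credit over time (decay rate assumed for illustration).

import math

THRESHOLD = 30.0  # seconds; example threshold from the text

def delay_credit(delay_seconds):
    """Credit earned for a flow-restriction delay."""
    return max(0.0, delay_seconds - THRESHOLD)

def decayed_credit(credit, elapsed_hours, decay_rate=0.1):
    """Value of an unused credit after elapsed_hours (rate assumed)."""
    return credit * math.exp(-decay_rate * elapsed_hours)

print(delay_credit(90.0))  # -> 60.0
print(delay_credit(20.0))  # -> 0.0 (below threshold)
```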
Study on store-space assignment based on logistic AGV in e-commerce goods to person picking pattern
NASA Astrophysics Data System (ADS)
Xu, Lijuan; Zhu, Jie
2017-10-01
This paper studies store-space assignment based on logistic AGVs in the e-commerce goods-to-person picking pattern. A store-space assignment model based on the lowest picking cost is established, and a store-space assignment algorithm is designed following a cluster analysis based on similarity coefficients. An example analysis then compares the picking cost of the designed store-space assignment algorithm against storage by item number and storage by ABC classification, verifying the effectiveness of the designed algorithm.
Evaluation of the impacts of traffic states on crash risks on freeways.
Xu, Chengcheng; Liu, Pan; Wang, Wei; Li, Zhibin
2012-07-01
The primary objective of this study is to divide freeway traffic flow into different states, and to evaluate the safety performance associated with each state. Using traffic flow data and crash data collected from a northbound segment of the I-880 freeway in the state of California, United States, K-means clustering analysis was conducted to classify traffic flow into five different states. Conditional logistic regression models using case-controlled data were then developed to study the relationship between crash risks and traffic states. Traffic flow characteristics in each traffic state were compared to identify the underlying phenomena that made certain traffic states more hazardous than others. Crash risk models were also developed for different traffic states to identify how traffic flow characteristics such as speed and speed variance affected crash risks in different traffic states. The findings of this study demonstrate that the operations of freeway traffic can be divided into different states using traffic occupancy measured from nearby loop detector stations, and each traffic state can be assigned with a certain safety level. The impacts of traffic flow parameters on crash risks are different across different traffic flow states. A method based on discriminant analysis was further developed to identify traffic states given real-time freeway traffic flow data. Validation results showed that the method was of reasonably high accuracy for identifying freeway traffic states. Copyright © 2012 Elsevier Ltd. All rights reserved.
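The classification step above assigns each occupancy reading to one of five traffic states via K-means. A minimal one-dimensional K-means sketch on occupancy values; the data, initialization, and iteration count are illustrative, not the study's (which used the I-880 loop-detector data):

```python
# 1-D K-means sketch: cluster occupancy readings into k traffic states,
# then classify a new reading by its nearest cluster center.
# Occupancy values (percent) are illustrative.

def kmeans_1d(values, k=5, iters=20):
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * j / (k - 1) for j in range(k)]  # spread init
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[i].append(v)
        centers = [sum(c) / len(c) if c else centers[i]  # keep empty center
                   for i, c in enumerate(clusters)]
    return centers

occupancy = [2, 3, 4, 5, 10, 11, 12, 14, 22, 23, 25, 40, 45, 70, 75]
centers = sorted(kmeans_1d(occupancy, k=5))

def traffic_state(occ):
    """Index of the nearest cluster center (0 = lowest occupancy)."""
    return min(range(len(centers)), key=lambda c: abs(occ - centers[c]))

print(traffic_state(72))  # highest-occupancy (most congested) state -> 4
```

The study's real-time identification step used discriminant analysis rather than nearest-center lookup; this sketch only illustrates the state-partitioning idea.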
Athrey, Giridhar; Lance, Richard F.; Leberg, Paul L.
2015-01-01
Dispersal is a key demographic process, ultimately responsible for genetic connectivity among populations. Despite its importance, quantifying dispersal within and between populations has proven difficult for many taxa. Even in passerines, which are among the most intensely studied, individual movement and its relation to gene flow remains poorly understood. In this study we used two parallel genetic approaches to quantify natal dispersal distances in a Neotropical migratory passerine, the black-capped vireo. First, we employed a strategy of sampling evenly across the landscape coupled with parentage assignment to map the genealogical relationships of individuals across the landscape, and estimate dispersal distances; next, we calculated Wright’s neighborhood size to estimate gene dispersal distances. We found that a high percentage of captured individuals were assigned at short distances within the natal population, and males were assigned to the natal population more often than females, confirming sex-biased dispersal. Parentage-based dispersal estimates averaged 2400m, whereas gene dispersal estimates indicated dispersal distances ranging from 1600–4200 m. Our study was successful in quantifying natal dispersal distances, linking individual movement to gene dispersal distances, while also providing a detailed look into the dispersal biology of Neotropical passerines. The high-resolution information was obtained with much reduced effort (sampling only 20% of breeding population) compared to mark-resight approaches, demonstrating the potential applicability of parentage-based approaches for quantifying dispersal in other vagile passerine species. PMID:26461257
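Wright's neighborhood size, used above to estimate gene dispersal, relates dispersal to density as Nb = 4πσ²D, so the axial gene dispersal distance is σ = √(Nb / 4πD). A sketch of inverting that relation; the neighborhood size and density values below are illustrative, not the study's estimates:

```python
# Gene dispersal distance from Wright's neighborhood size:
# Nb = 4 * pi * sigma^2 * D  =>  sigma = sqrt(Nb / (4 * pi * D)).
# Example Nb and effective density are assumed for illustration.

import math

def gene_dispersal_sigma(neighborhood_size, density):
    """Axial gene dispersal (sigma) from Wright's neighborhood size."""
    return math.sqrt(neighborhood_size / (4.0 * math.pi * density))

# e.g. Nb = 120 individuals, effective density = 1.5e-6 birds per m^2
print(round(gene_dispersal_sigma(120, 1.5e-6)))  # -> 2523 (metres)
```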
Using MODFLOW drains to simulate groundwater flow in a karst environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, J.; Tomasko, D.; Glennon, M.A.
1998-07-01
Modeling groundwater flow in a karst environment is both numerically challenging and highly uncertain because of potentially complex flowpaths and a lack of site-specific information. This study presents the results of MODFLOW numerical modeling in which drain cells in a finite-difference model are used as analogs for preferential flowpaths or conduits in karst environments. In this study, conduits in mixed-flow systems are simulated by assigning connected pathways of drain cells from the locations of tracer releases, sinkholes, or other karst features to outlet springs along inferred flowpaths. These paths are determined by the locations of losing stream segments, ephemeral stream beds, geophysical surveys, fracture lineaments, or other surficial characteristics, combined with the results of dye traces. The elevations of the drains at the discharge ends of the inferred flowpaths are estimated from field data and are adjusted when necessary during model calibration. To simulate flow in a free-flowing conduit, a high conductance is assigned to each drain to eliminate the need for drain-specific information that would be very difficult to obtain. Calculations were performed for a site near Hohenfels, Germany. The potentiometric surface produced by the simulations agreed well with field data. The head contours in the vicinity of the karst features behaved in a manner consistent with a flow system having both diffuse and conduit components, and the sum of the volumetric flow out of the drain cells agreed closely with spring discharges and stream flows. Because of the success of this approach, it is recommended for regional studies in which little site-specific information (e.g., location, number, size, and conductivity of fractures and conduits) is available and general flow characteristics are desired.
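The MODFLOW drain package removes water at a rate proportional to the head above the drain elevation, Q = C·(h − d) when h > d and zero otherwise, which is why assigning a high conductance C makes a chain of drain cells behave like a free-flowing conduit. A sketch of that relation (the values are illustrative):

```python
# MODFLOW drain-cell discharge: Q = C * (h - d) when head h exceeds the
# drain elevation d, else 0. High conductance C approximates a
# free-flowing conduit. Values are illustrative.

def drain_flow(head, drain_elevation, conductance):
    """Discharge out of a drain cell (0 when head is below the drain)."""
    return conductance * max(0.0, head - drain_elevation)

print(drain_flow(105.0, 100.0, 1e4))  # -> 50000.0
print(drain_flow(98.0, 100.0, 1e4))   # -> 0.0 (drain inactive)
```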
NASA Astrophysics Data System (ADS)
Zhao, Ke-Yu; Jiang, Xiao-Wei; Wang, Xu-Sheng; Wan, Li; Wang, Jun-Zhi; Wang, Heng; Li, Hailong
2018-01-01
The classical understanding of basin-scale groundwater flow patterns is based on Tóth's findings of a single flow system in a unit basin (Tóth, 1962) and nested flow systems in a complex basin (Tóth, 1963), both of which were based on steady state models. Vandenberg (1980) extended Tóth (1962) by deriving a transient solution under a periodically changing water table in a unit basin and examined the flow field distortion under different values of the dimensionless response time, τ∗. Following Vandenberg's (1980) approach, we extended Tóth (1963) by deriving the transient solution under a periodically changing water table in a complex basin and examined the transient behavior of nested flow systems. Due to the effect of specific storage, the flow field is asymmetric with respect to the midline, and the trajectory of internal stagnation points constitutes a non-enclosed loop, whose width decreases as τ∗ decreases. The distribution of the relative magnitude of hydraulic head fluctuation, Δh∗, depends on the horizontal distance from a divide and the depth below the land surface. In the shallow part, Δh∗ decreases from 1 at the divide to 0 at its neighboring valley under all τ∗, while in the deep part, Δh∗ reaches a threshold whose value decreases as τ∗ increases. The zones with flowing wells are also found to change periodically. As the water table falls, the area of zones with flowing wells generally shrinks, and this shrinkage lags the declining water table under a large τ∗. Although fluxes have not been assigned in our model, the recharge/discharge flux across the top boundary can be obtained. This study is critical to understanding a series of periodically changing hydrogeological phenomena in large-scale basins.
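The depth-dependence of Δh∗ can be conveyed with a deliberately simplified one-dimensional caricature of the transient model: under periodic forcing of a diffusion-type groundwater flow equation, the fluctuation amplitude decays exponentially with depth over a damping scale set by the hydraulic diffusivity K/Ss and the forcing frequency. This is only an analogy to the full 2-D solution of the paper, and every parameter value below is assumed for illustration:

```python
# Simplified 1-D sketch (assumed values): amplitude of a periodic
# water-table signal at depth z decays as exp(-z / d), with damping
# depth d = sqrt(2K / (Ss * omega)) for a diffusion-type equation.
import math

def relative_amplitude(depth, hydraulic_conductivity, specific_storage, period):
    """Relative head fluctuation magnitude (dimensionless, 1 at the surface)."""
    omega = 2.0 * math.pi / period
    damping_depth = math.sqrt(2.0 * hydraulic_conductivity /
                              (specific_storage * omega))
    return math.exp(-depth / damping_depth)

# Larger specific storage (slower response, larger tau*) damps the
# signal faster at depth, consistent with the threshold behavior above.
a_small_ss = relative_amplitude(100.0, 1e-5, 1e-6, period=3.15e7)  # ~1 yr
a_large_ss = relative_amplitude(100.0, 1e-5, 1e-4, period=3.15e7)
print(a_small_ss > a_large_ss)  # → True
```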
Development of high-accuracy convection schemes for sequential solvers
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei
1993-01-01
The applicability of high-resolution schemes such as TVD to resolving sharp flow gradients is explored using a sequential solution approach borrowed from pressure-based algorithms. It is shown that when these high-resolution shock-capturing schemes are extended to a sequential solver that treats the equations as a collection of scalar conservation equations, the speed of signal propagation in the solution has to be coordinated by assigning the local convection speed as the characteristic speed for the entire system. A higher amount of dissipation is therefore needed to eliminate oscillations near discontinuities.
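The idea of treating each equation as a scalar conservation law with the local convection speed as its characteristic speed can be sketched for a single 1-D advected scalar (a minimal sketch under assumed parameters; a minmod-limited upwind update stands in for the TVD schemes discussed, and positive velocity is assumed):

```python
# Sketch: one explicit update of a scalar conservation law in a
# segregated (sequential) solver. The local Courant number is built
# from the local convection speed u[i], i.e. the characteristic speed
# assigned to the whole system at that point. Assumes u > 0 throughout.

def minmod(a, b):
    """Slope limiter: zero across extrema, smaller-magnitude slope otherwise."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_upwind_step(phi, u, dx, dt):
    """Advance phi one step for d(phi)/dt + u d(phi)/dx = 0."""
    n = len(phi)
    new = phi[:]
    for i in range(2, n - 1):
        # Limited slopes suppress oscillations near sharp gradients.
        s_i = minmod(phi[i] - phi[i - 1], phi[i + 1] - phi[i])
        s_im = minmod(phi[i - 1] - phi[i - 2], phi[i] - phi[i - 1])
        c = u[i] * dt / dx  # local Courant number from the local speed
        flux_hi = phi[i] + 0.5 * (1.0 - c) * s_i
        flux_lo = phi[i - 1] + 0.5 * (1.0 - c) * s_im
        new[i] = phi[i] - c * (flux_hi - flux_lo)
    return new

phi = [1.0] * 5 + [0.0] * 5          # step profile (sharp gradient)
u = [1.0] * 10                       # local convection speeds
result = tvd_upwind_step(phi, u, dx=1.0, dt=0.5)
print(result)
```

The limiter clips the reconstruction to first order exactly at the discontinuity, which is the extra dissipation the abstract refers to.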
Stochastic Rotation Dynamics simulations of wetting multi-phase flows
NASA Astrophysics Data System (ADS)
Hiller, Thomas; Sanchez de La Lama, Marta; Brinkmann, Martin
2016-06-01
Multi-color Stochastic Rotation Dynamics (SRDmc) has been introduced by Inoue et al. [1,2] as a particle based simulation method to study the flow of emulsion droplets in non-wetting microchannels. In this work, we extend the multi-color method to also account for different wetting conditions. This is achieved by assigning the color information not only to fluid particles but also to virtual wall particles that are required to enforce proper no-slip boundary conditions. To extend the scope of the original SRDmc algorithm to e.g. immiscible two-phase flow with viscosity contrast we implement an angular momentum conserving scheme (SRD+mc). We perform extensive benchmark simulations to show that a mono-phase SRDmc fluid exhibits bulk properties identical to a standard SRD fluid and that SRDmc fluids are applicable to a wide range of immiscible two-phase flows. To quantify the adhesion of a SRD+mc fluid in contact to the walls we measure the apparent contact angle from sessile droplets in mechanical equilibrium. For a further verification of our wettability implementation we compare the dewetting of a liquid film from a wetting stripe to experimental and numerical studies of interfacial morphologies on chemically structured surfaces.
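The SRD collision step underlying these simulations can be sketched in 2-D (a minimal sketch assuming a single collision cell and the standard ±α rotation rule; the multi-color and angular-momentum-conserving extensions of the paper are omitted):

```python
# Sketch of the SRD collision step in 2-D (single cell, assumed values).
# Each particle's velocity relative to the cell mean is rotated by
# +alpha or -alpha, which conserves linear momentum by construction.
import math
import random

def srd_collision(velocities, alpha=math.pi / 2, rng=random):
    n = len(velocities)
    mean_vx = sum(v[0] for v in velocities) / n
    mean_vy = sum(v[1] for v in velocities) / n
    angle = alpha if rng.random() < 0.5 else -alpha   # stochastic sign
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for vx, vy in velocities:
        dvx, dvy = vx - mean_vx, vy - mean_vy         # relative velocity
        out.append((mean_vx + c * dvx - s * dvy,
                    mean_vy + s * dvx + c * dvy))
    return out

random.seed(0)
vels = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.5)]
rotated = srd_collision(vels)
# The cell's total linear momentum is unchanged by the rotation.
print(sum(v[0] for v in rotated), sum(v[1] for v in rotated))
```

In the multi-color variant, each particle would additionally carry a color label, and the wall treatment described above would assign colors to virtual wall particles as well.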
NASA Astrophysics Data System (ADS)
Youn, Joo-Sang; Seok, Seung-Joon; Kang, Chul-Hee
This paper presents a new QoS model for end-to-end service provisioning in multi-hop wireless networks. In legacy IEEE 802.11e based multi-hop wireless networks, the fixed assignment of service classes according to a flow's priority at every node causes a priority inversion problem when performing end-to-end service differentiation. Thus, this paper proposes a new QoS provisioning model called Dynamic Hop Service Differentiation (DHSD) to alleviate the problem and support effective service differentiation between end-to-end nodes. Many previous works on QoS models with 802.11e based service differentiation focus on packet scheduling over several service queues with different service rates and priorities. Our model, however, centers on a dynamic class selection scheme, called Per Hop Class Assignment (PHCA), in the node's MAC layer, which selects a proper service class for each packet, in accordance with queue states and service requirements, at every node along the end-to-end route of the packet. The proposed QoS solution is evaluated using the OPNET simulator. The simulation results show that the proposed model outperforms both best-effort and 802.11e based strict priority service models in mobile ad hoc environments.
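The per-hop class selection idea can be sketched with a toy policy (an assumption on our part: the actual PHCA decision combines queue state and service requirements in a way the abstract does not fully specify; here it is reduced to a least-loaded rule among eligible classes):

```python
# Sketch of per-hop dynamic class selection (assumed, simplified policy).
# Instead of a fixed class per flow, each hop picks a service class for
# the packet from the classes that meet its requirement, preferring the
# least-loaded queue to avoid the priority inversion described above.

def phca_select(queue_lengths, required_class):
    """Among classes at or above the requirement, pick the least-loaded one."""
    eligible = range(required_class, len(queue_lengths))
    return min(eligible, key=lambda c: queue_lengths[c])

# Four 802.11e-style access categories at one hop; index 3 is highest
# priority. Queue lengths are assumed for illustration.
queues = [5, 2, 7, 1]
chosen = phca_select(queues, required_class=1)
print(chosen)  # → 3 (the shortest eligible queue)
```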
Towards Automated Structure-Based NMR Resonance Assignment
NASA Astrophysics Data System (ADS)
Jang, Richard; Gao, Xin; Li, Ming
We propose a general framework for solving the structure-based NMR backbone resonance assignment problem. The core is a novel 0-1 integer programming model that can start from a complete or partial assignment, generate multiple assignments, and model not only the assignment of spins to residues, but also pairwise dependencies consisting of pairs of spins to pairs of residues. It is still a challenge for automated resonance assignment systems to perform the assignment directly from spectra without any manual intervention. To test the feasibility of this for structure-based assignment, we integrated our system with our automated peak picking and sequence-based resonance assignment system to obtain an assignment for the protein TM1112 with 91% recall and 99% precision without manual intervention. Since using a known structure has the potential to allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data, we work towards the goal of automated structure-based assignment using only such labeled data. Our system reduced the assignment error of Xiong-Pandurangan-Bailey-Kellogg's contact replacement (CR) method, which to our knowledge is the most error-tolerant method for this problem, fivefold on average. By using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for Ubiquitin, where the type prediction accuracy is 83%, we achieved 91% assignment accuracy, compared to the 59% accuracy that was obtained without correcting for typing errors.
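The structure of the 0-1 model, unary spin-to-residue costs plus pairwise dependencies between pairs of spins and pairs of residues, can be illustrated on a toy instance (all costs are invented; the real system solves an integer program, whereas this sketch simply enumerates permutations):

```python
# Toy sketch of the assignment objective (assumed, made-up costs).
# cost[s][r]: penalty for assigning spin s to residue r (lower = better
# structural fit); pair_cost penalizes mapping sequential spins to
# non-adjacent residues, standing in for the pairwise dependencies.
from itertools import permutations

cost = [
    [0.1, 0.9, 0.8],
    [0.7, 0.2, 0.9],
    [0.8, 0.6, 0.1],
]

def pair_cost(r1, r2):
    return 0.0 if r2 == r1 + 1 else 1.0

def best_assignment(cost):
    """Brute-force the 0-1 model on a tiny instance: perm[s] = residue."""
    n = len(cost)
    best, best_score = None, float("inf")
    for perm in permutations(range(n)):
        score = sum(cost[s][perm[s]] for s in range(n))
        score += sum(pair_cost(perm[s], perm[s + 1]) for s in range(n - 1))
        if score < best_score:
            best, best_score = perm, score
    return best, best_score

assignment, score = best_assignment(cost)
print(assignment, score)
```

In the integer-programming formulation, each candidate pairing would instead be a binary variable with linear constraints enforcing a one-to-one mapping, which is what allows partial starts and multiple alternative assignments.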
Moving beyond "Bookish Knowledge": Using Film-Based Assignments to Promote Deep Learning
ERIC Educational Resources Information Center
Olson, Joann S.; Autry, Linda; Moe, Jeffry
2016-01-01
This article investigates the effectiveness of a film-based assignment given to adult learners in a graduate-level group counseling class. Semi-structured interviews were conducted with four students; data analysis suggested film-based assignments may promote deep approaches to learning (DALs). Participants indicated the assignment helped them…
A new mutually reinforcing network node and link ranking algorithm
Wang, Zhenghua; Dueñas-Osorio, Leonardo; Padgett, Jamie E.
2015-01-01
This study proposes a novel Normalized Wide network Ranking algorithm (NWRank) that has the advantage of ranking the nodes and links of a network simultaneously. The algorithm combines the mutual reinforcement feature of Hypertext Induced Topic Selection (HITS) and the weight normalization feature of PageRank. Relative weights are assigned to links based on the degree of the adjacent neighbors and the Betweenness Centrality, instead of assigning the same weight to every link as assumed in PageRank. Numerical experiment results show that NWRank performs consistently better than HITS, PageRank, eigenvector centrality, and edge betweenness from the perspective of network connectivity and approximate network flow, which is also supported by comparisons with the expensive N-1 benchmark removal criteria based on network efficiency. Furthermore, it can avoid some problems, such as the Tightly Knit Community effect, which exists in HITS. NWRank provides a new inexpensive way to rank the nodes and links of a network, which has practical applications, particularly to prioritize resource allocation for upgrade of hierarchical and distributed networks, as well as to support decision making in the design of networks, where node and link importance depend on a balance of local and global integrity. PMID:26492958
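A drastically simplified sketch of the mutual-reinforcement idea follows (assumptions: an undirected toy graph and degree-based link weights only; the Betweenness Centrality term and other details of NWRank are omitted, so this is the flavor of the algorithm, not the algorithm itself):

```python
# Simplified node-and-link ranking sketch (assumed toy setup).
# Nodes reinforce the links they touch and links reinforce their end
# nodes (HITS-style mutual reinforcement); scores are L1-normalized
# each iteration (PageRank-style weight normalization).

def nwrank_sketch(edges, n_nodes, iters=50):
    degree = [0] * n_nodes
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    # Non-uniform link weights from the degrees of the adjacent nodes.
    w = [degree[a] + degree[b] for a, b in edges]
    node = [1.0 / n_nodes] * n_nodes
    link = [1.0 / len(edges)] * len(edges)
    for _ in range(iters):
        link = [w[i] * (node[a] + node[b]) for i, (a, b) in enumerate(edges)]
        link = [x / sum(link) for x in link]
        node = [0.0] * n_nodes
        for i, (a, b) in enumerate(edges):
            node[a] += link[i]
            node[b] += link[i]
        node = [x / sum(node) for x in node]
    return node, link

# Star graph: node 0 is the hub and should rank highest.
nodes, links = nwrank_sketch([(0, 1), (0, 2), (0, 3)], 4)
print(max(range(4), key=lambda i: nodes[i]))  # → 0
```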
Forment, Josep V.; Jackson, Stephen P.
2016-01-01
Protein accumulation on chromatin has traditionally been studied using immunofluorescence microscopy or biochemical cellular fractionation followed by western immunoblot analysis. As a way to improve the reproducibility of this kind of analysis, make it easier to quantify, and allow a streamlined application in high-throughput screens, we recently combined a classical immunofluorescence microscopy detection technique with flow cytometry1. In addition to the features described above, and by combining it with detection of both DNA content and DNA replication, this method allows unequivocal and direct assignment of cell-cycle distribution of protein association to chromatin without the need for cell culture synchronization. Furthermore, it is relatively quick (no more than a working day from sample collection to quantification), requires less starting material compared to standard biochemical fractionation methods, and overcomes the need for the flat, adherent cell types that are required for immunofluorescence microscopy. PMID:26226461
Fernandez Castelao, Ezequiel; Russo, Sebastian G; Cremer, Stephan; Strack, Micha; Kaminski, Lea; Eich, Christoph; Timmermann, Arnd; Boos, Margarete
2011-10-01
To evaluate the impact of video-based interactive crisis resource management (CRM) training on no-flow time (NFT) and on proportions of team member verbalisations (TMV) during simulated cardiopulmonary resuscitation (CPR). Further, to investigate the link between team leader verbalisation accuracy and NFT. The randomised controlled study was embedded in the obligatory advanced life support (ALS) course for final-year medical students. Students (176; 25.35±1.03 years, 63% female) were alphabetically assigned to 44 four-person teams that were then randomly (computer-generated) assigned to either CRM intervention (n=26), receiving interactive video-based CRM-training, or to control intervention (n=18), receiving an additional ALS-training. Primary outcomes were NFT and proportions of TMV, which were subdivided into eight categories: four team leader verbalisations (TLV) with different accuracy levels and four follower verbalisation categories (FV). Measurements were made of all groups administering simulated adult CPR. NFT rates were significantly lower in the CRM-training group (31.4±6.1% vs. 36.3±6.6%, p=0.014). Proportions of all TLV categories were higher in the CRM-training group (p<0.001). Differences in FV were only found for one category (unsolicited information) (p=0.012). The highest correlation with NFT was found for high accuracy TLV (direct orders) (p=0.06). The inclusion of CRM training in undergraduate medical education reduces NFT in simulated CPR and improves TLV proportions during simulated CPR. Further research will test how these results translate into clinical performance and patient outcome. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Vanden Broeck, An; Van Landuyt, Wouter; Cox, Karen; De Bruyn, Luc; Gyselings, Ralf; Oostermeijer, Gerard; Valentin, Bertille; Bozic, Gregor; Dolinar, Branko; Illyés, Zoltán; Mergeay, Joachim
2014-07-07
Gene flow and adaptive divergence are key aspects of metapopulation dynamics and ecological speciation. Long-distance dispersal is hard to detect and few studies estimate dispersal in combination with adaptive divergence. The aim of this study was to investigate effective long-distance dispersal and adaptive divergence in the fen orchid (Liparis loeselii (L.) Rich.). We used amplified fragment length polymorphism (AFLP)-based assignment tests to quantify effective long-distance dispersal at two different regions in Northwest Europe. In addition, genomic divergence between fen orchid populations occupying two distinguishable habitats, wet dune slacks and alkaline fens, was investigated by a genome scan approach at different spatial scales (continental, landscape and regional) and based on 451 AFLP loci. We expected that different habitats would contribute to strong divergence and restricted gene flow resulting in isolation-by-adaptation. Instead, we found remarkably high levels of effective long-distance seed dispersal and low levels of adaptive divergence. At least 15% of the assigned individuals likely originated from among-population dispersal events with dispersal distances up to 220 km. Six (1.3%) 'outlier' loci, potentially reflecting local adaptation to habitat-type, were identified with high statistical support. Of these, only one (0.22%) was a replicated outlier in multiple independent dune-fen population comparisons and thus possibly reflecting truly parallel divergence. Signals of adaptation in response to habitat type were most evident at the scale of individual populations. The findings of this study suggest that the homogenizing effect of effective long-distance seed dispersal may overwhelm divergent selection associated to habitat type in fen orchids in Northwest Europe.
Creating Data and Modeling Enabled Hydrology Instruction Using Collaborative Approach
NASA Astrophysics Data System (ADS)
Merwade, V.; Rajib, A.; Ruddell, B. L.; Fox, S.
2017-12-01
Hydrology instruction typically involves teaching of the hydrologic cycle and the processes associated with it such as precipitation, evapotranspiration, infiltration, runoff generation and hydrograph analysis. With the availability of observed and remotely sensed data related to many hydrologic fluxes, there is an opportunity to use these data for place based learning in hydrology classrooms. However, it is not always easy and possible for an instructor to complement an existing hydrology course with new material that requires both the time and technical expertise, which the instructor may not have. The work presented here describes an effort where students create the data and modeling driven instruction material as a part of their class assignment for a hydrology course at Purdue University. The data driven hydrology education project within Science Education Resources Center (SERC) is used as a platform to publish and share the instruction material so it can be used by future students in the same course or any other course anywhere in the world. Students in the class were divided into groups, and each group was assigned a topic such as precipitation, evapotranspiration, streamflow, flow duration curve and frequency analysis. Each student in the group was then asked to get data and do some analysis for an area with specific landuse characteristic such as urban, rural and agricultural. The student contribution were then organized into learning units such that someone can do a flow duration curve analysis or flood frequency analysis to see how it changes for rural area versus urban area. The hydrology education project within SERC cyberinfrastructure enables any other instructor to adopt this material as is or through modification to suit his/her place based instruction needs.
Genetic structure of cougar populations across the Wyoming basin: Metapopulation or megapopulation
Anderson, C.R.; Lindzey, F.G.; McDonald, D.B.
2004-01-01
We examined the genetic structure of 5 Wyoming cougar (Puma concolor) populations surrounding the Wyoming Basin, as well as a population from southwestern Colorado. Using 9 microsatellite DNA loci, we found that observed heterozygosity was similar among populations (HO = 0.49-0.59) and intermediate to that of other large carnivores. Estimates of genetic structure (FST = 0.028, RST = 0.029) and number of migrants per generation (Nm) suggested high gene flow. Nm was lowest between distant populations and highest among adjacent populations. Examination of these data, plus Mantel test results of genetic versus geographic distance (P ≤ 0.01), suggested both isolation by distance and an effect of habitat matrix. Bayesian assignment to population based on individual genotypes showed that cougars in this region were best described as a single panmictic population. Total effective population size for cougars in this region ranged from 1,797 to 4,532, depending on the mutation model and analytical method used. Based on measures of gene flow, extinction risk in the near future appears low. We found no support for the existence of metapopulation structure among cougars in this region.
Oddou-Muratorio, S; Houot, M-L; Demesure-Musch, B; Austerlitz, F
2003-12-01
The joint development of polymorphic molecular markers and paternity analysis methods provides new approaches to investigate ongoing patterns of pollen flow in natural plant populations. However, paternity studies are hindered by false paternity assignment and the nondetection of true fathers. To gauge the risk of these two types of errors, we performed a simulation study to investigate the impact on paternity analysis of: (i) the assumed values for the size of the breeding male population (NBMP), and (ii) the rate of scoring error in genotype assessment. Our simulations were based on microsatellite data obtained from a natural population of the entomophilous wild service tree, Sorbus torminalis (L.) Crantz. We show that an accurate estimate of NBMP is required to minimize both types of errors, and we assess the reliability of a technique used to estimate NBMP based on parent-offspring genetic data. We then show that scoring errors in genotype assessment only slightly affect the assessment of paternity relationships, and conclude that it is generally better to neglect the scoring error rate in paternity analyses within a nonisolated population.
NASA Astrophysics Data System (ADS)
Kincaid, T. R.; Meyer, B. A.
2009-12-01
In groundwater flow modeling, aquifer permeability is typically defined through model calibration. Since the pattern and size of conduits are part of a karstic permeability framework, those parameters should be constrainable through the same process given a sufficient density of measured conditions. H2H Associates has completed a dual-permeability steady-state model of groundwater flow through the western Santa Fe River Basin, Florida from which a 380.9 km network of saturated conduits was delineated through model calibration to heads and spring discharges. Two calibration datasets were compiled describing average high-water and average low-water conditions based on heads at 145 wells and discharge from 18 springs for the high-water scenario and heads at 188 wells and discharge from 9 springs for the low-water scenario. An initial conduit network was defined by assigning paths along mapped conduits and inferring paths along potentiometric troughs between springs and swallets that had been connected by groundwater tracing. These initial conduit assignments accounted for only 13.75 and 34.1 km of the final conduit network respectively. The model was setup using FEFLOW™ where conduits were described as discrete features embedded in a porous matrix. Flow in the conduits was described by the Manning-Strickler equation where variables for conduit area and roughness were used to adjust the volume and velocity of spring flows. Matrix flow was described by Darcy’s law where hydraulic conductivity variations were limited to three geologically defined internally homogeneous zones that ranged from ~2E-6 m/s to ~4E-3 m/s. Recharge for both the high-water and low-water periods was determined through a water budget analysis where variations were restricted to nine zones defined by land-use. All remaining variations in observed head were then assumed to be due to conduits. 
The model was iteratively calibrated to the high-water and low-water datasets wherein the location, size and roughness of the conduits were assigned as needed to accurately simulate observed heads and spring discharges while bounding simulated velocities by the tracer test results. Conduit diameters were adjusted to support high-water spring discharges but the locations were best determined by calibration to the low-water head field. The final model calibrated to within 5% of the total head change across the model region at 143 of the 145 wells in the high-water scenario and at 176 of the 188 wells in the low-water scenario. Simulated spring discharges fell within 13% of the observed range under high-water conditions and within 100% of the observed range under low-water conditions. Simulated velocities ranged from as low as 10-4 m/day in the matrix to as high as 10+3 m/day in the largest conduits. The significance of these results that we emphasize here is two-fold. First, plausible karstic groundwater flow conditions can be reasonably simulated if adequate efforts are made to include springs, swallets, caves, and traced flow paths. And second, detailed saturated conduit networks can be delineated from careful evaluation of hydraulic head data, particularly when dense datasets can be constructed by correlating values obtained from different wells under similar hydraulic periods.
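The Manning-Strickler relation used for the conduit elements can be illustrated as follows (a sketch with made-up dimensions, assuming a circular conduit flowing full; k_s denotes the Strickler coefficient, the inverse of Manning's n):

```python
# Sketch: Manning-Strickler velocity and discharge for a full circular
# conduit (all dimensions assumed for illustration).
#   v = k_s * R^(2/3) * sqrt(S),  Q = v * A
# where R is the hydraulic radius (D/4 for a full circular pipe) and
# S is the energy slope.
import math

def manning_strickler(diameter, slope, k_strickler):
    """Return (velocity m/s, discharge m^3/s) for a full circular conduit."""
    area = math.pi * diameter ** 2 / 4.0
    hydraulic_radius = diameter / 4.0   # area / wetted perimeter, full flow
    velocity = k_strickler * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)
    return velocity, velocity * area

v, q = manning_strickler(diameter=2.0, slope=1e-3, k_strickler=30.0)
print(v, q)
```

During calibration, adjusting conduit area and roughness in this relation is what tunes the volume and velocity of the simulated spring flows.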
The PAH Emission Characteristics of the Reflection Nebula NGC 2023
NASA Astrophysics Data System (ADS)
Peeters, Els; Bauschlicher, Charles W., Jr.; Allamandola, Louis J.; Tielens, Alexander G. G. M.; Ricca, Alessandra; Wolfire, Mark G.
2017-02-01
We present 5-20 μm spectral maps of the reflection nebula NGC 2023 obtained with the Infrared Spectrograph SL and SH modes on board the Spitzer Space Telescope, which reveal emission from polycyclic aromatic hydrocarbons (PAHs), C60, and H2 superposed on a dust continuum. We show that several PAH emission bands correlate with each other and exhibit distinct spatial distributions that reveal a spatial sequence with distance from the illuminating star. We explore the distinct morphology of the 6.2, 7.7, and 8.6 μm PAH bands and find that at least two spatially distinct components contribute to the 7-9 μm PAH emission in NGC 2023. We report that the PAH features behave independently of the underlying plateaus. We present spectra of compact, oval PAHs ranging in size from C66 to C210, determined computationally using density functional theory, and we investigate trends in the band positions and relative intensities as a function of PAH size, charge, and geometry. Based on the NASA Ames PAH database, we discuss the 7-9 μm components in terms of band assignments and relative intensities. We assign the plateau emission to very small grains with possible contributions from PAH clusters and identify components in the 7-9 μm emission that likely originate in these structures. Based on the assignments and the observed spatial sequence, we discuss the photochemical evolution of the interstellar PAH family as the PAHs are more and more exposed to the radiation field of the central star in the evaporative flows associated with the Photo-Dissociation Regions in NGC 2023.
Jakob, Sabine S.; Rödder, Dennis; Engler, Jan O.; Shaaf, Salar; Özkan, Hakan; Blattner, Frank R.; Kilian, Benjamin
2014-01-01
Studies of Hordeum vulgare subsp. spontaneum, the wild progenitor of cultivated barley, have mostly relied on materials collected decades ago and maintained since then ex situ in germplasm repositories. We analyzed spatial genetic variation in wild barley populations collected rather recently, exploring sequence variations at seven single-copy nuclear loci, and inferred the relationships among these populations and to the gene pool of the crop. The wild barley collection covers the whole natural distribution area from the Mediterranean to Middle Asia. In contrast to earlier studies, Bayesian assignment analyses revealed three population clusters, in the Levant, Turkey, and east of Turkey, respectively. Genetic diversity was exceptionally high in the Levant, while eastern populations were depleted of private alleles. Species distribution modeling based on climate parameters and extant occurrence points of the taxon inferred suitable habitat conditions during the ice-age, particularly in the Levant and Turkey. Together with the ecologically wide range of habitats, they might contribute to structured but long-term stable populations in this region and their high genetic diversity. For recently collected individuals, Bayesian assignment to geographic clusters was generally unambiguous, but materials from genebanks often showed accessions that were not placed according to their assumed geographic origin or showed traces of introgression from cultivated barley. We attribute this to gene flow among accessions during ex situ maintenance. Evolutionary studies based on such materials might therefore result in wrong conclusions regarding the history of the species or the origin and mode of domestication of the crop, depending on the accessions included. PMID:24586028
NASA Astrophysics Data System (ADS)
Agee, E.; Ivanov, V. Y.; Oliveira, R. S.; Brum, M., Jr.; Saleska, S. R.; Bisht, G.; Prohaska, N.; Taylor, T.; Oliveira Junior, R. C.; Restrepo-Coupe, N.
2017-12-01
The increased intensity and severity of droughts within the Amazon Basin region has emphasized the question of vulnerability and resilience of tropical forests to water limitation. During the recent 2015-2016 drought caused by the anomalous El Nino episode, we monitored a large, diverse sample of trees within the Tapajos National Forest, Brazil, in the footprint of the K67 eddy covariance tower. The observed trees exhibited differential responses in terms of stem water potential and sap flow among species: their regulation of ecophysiological strategies varied from very conservative (`isohydric') behavior, to much less restrained, atmosphere-controlled (`anisohydric') type of response. While much attention has been paid to forest canopies, it remains unclear how the regulation of individual tree root system and root spatial interactions contribute to the emergent individual behavior and the ecosystem-scale characterization of drought resilience. Given the inherent difficulty in monitoring below-ground phenomena, physically-based models are valuable for examining different strategies and properties to reduce the uncertainty of characterization. We use a modified version of the highly parallel DOE PFLOTRAN model to simulate the three-dimensional variably saturated flows and root water uptake for over one thousand individuals within a two-hectare area. Root morphology and intrinsic hydraulic properties are assigned based on statistical distributions developed for tropical trees, which account for the broad spectrum of hydraulic strategies in biodiverse environments. The results demonstrate the dynamic nature of active zone of root water uptake based on local soil water potential gradients. The degree of the corresponding shifts in uptake and root collar potential depend not only on assigned hydraulic properties but also on spatial orientation and size relative to community members. 
This response highlights the importance of not only tree individual hydraulic traits, but also dynamic spatial interactions in assessing forest drought resilience.
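The potential-gradient-driven root water uptake described above can be sketched for a single tree (a minimal sketch with assumed per-layer conductances and water potentials; the actual PFLOTRAN simulations solve 3-D variably saturated flow for the whole stand):

```python
# Sketch: root water uptake from local soil-to-root potential gradients
# (all values assumed). Uptake from each soil layer is the layer's root
# conductance times the potential difference, so the "active zone" of
# uptake shifts toward wetter layers as the surface dries.

def layer_uptake(soil_psi, root_psi, conductance):
    """Per-layer uptake; layers drier than the root contribute nothing."""
    return [g * max(psi - root_psi, 0.0)
            for psi, g in zip(soil_psi, conductance)]

wet_season = [-0.1, -0.2, -0.3]     # soil water potential by depth (MPa)
dry_season = [-1.5, -0.8, -0.4]     # surface layers dry out first
g = [1.0, 0.5, 0.25]                # per-layer root conductance (assumed)
root_collar = -1.0                  # root collar water potential (MPa)

wet_uptake = layer_uptake(wet_season, root_collar, g)
dry_uptake = layer_uptake(dry_season, root_collar, g)
print(wet_uptake)
print(dry_uptake)  # uptake shifts to the deeper, wetter layers
```

In the full model, the root collar potential itself responds to the tree's hydraulic strategy (isohydric versus anisohydric regulation) and to competition with neighboring root systems.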
A network architecture for International Business Satellite communications
NASA Astrophysics Data System (ADS)
Takahata, Fumio; Nohara, Mitsuo; Takeuchi, Yoshio
Demand Assignment (DA) control is expected to be introduced in the International Business Satellite communications (IBS) network in order to cope with growing international business traffic. The paper discusses the DA/IBS network from the viewpoints of network configuration, satellite channel configuration and DA control. The network configuration proposed here consists of one Central Station with network management functions and several Network Coordination Stations with user management functions. A satellite channel configuration is also presented along with a tradeoff study on transmission bit rate, high power amplifier output power requirement, and service quality. The DA control flow and protocol based on CCITT Signalling System No. 7 are also proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khaleel, Mohammad A.; Lin, Zijing; Singh, Prabhakar
2004-05-03
A 3D simulation tool for modeling solid oxide fuel cells is described. The tool combines the versatility and efficiency of a commercial finite element analysis code, MARC®, with an in-house developed robust and flexible electrochemical (EC) module. Based upon characteristic parameters obtained experimentally and assigned by the user, the EC module calculates the current density distribution, heat generation, and fuel and oxidant species concentration, taking the temperature profile provided by MARC® and operating conditions such as the fuel and oxidant flow rate and the total stack output voltage or current as the input. MARC® performs flow and thermal analyses based on the initial and boundary thermal and flow conditions and the heat generation calculated by the EC module. The main coupling between MARC® and EC is for MARC® to supply the temperature field to EC and for EC to give the heat generation profile to MARC®. The loosely coupled, iterative scheme is advantageous in terms of memory requirement, numerical stability and computational efficiency. The coupling is iterated to self-consistency for a steady-state solution. Sample results for steady states as well as the startup process for stacks with different flow designs are presented to illustrate the modeling capability and numerical performance characteristics of the simulation tool.
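The loosely coupled iteration to self-consistency can be sketched in zero dimensions (an assumption-heavy caricature: scalars stand in for the temperature and heat-generation fields that the real MARC® and EC modules exchange, and both toy modules are invented linear models):

```python
# Sketch of the loosely coupled thermal-electrochemical iteration
# (assumed toy physics; scalars replace the exchanged field profiles).

def ec_module(temperature):
    """Toy EC module: heat generation rises mildly with temperature (W)."""
    return 50.0 + 0.1 * (temperature - 1000.0)

def thermal_module(heat_generation, t_ambient=950.0, conductance=2.0):
    """Toy thermal solve: steady temperature from a lumped heat balance (K)."""
    return t_ambient + heat_generation / conductance

def coupled_solve(t0=1000.0, tol=1e-8, max_iter=100):
    t = t0
    for _ in range(max_iter):
        q = ec_module(t)              # EC receives the temperature field
        t_new = thermal_module(q)     # thermal solver receives heat generation
        if abs(t_new - t) < tol:      # iterate to self-consistency
            return t_new, q
        t = t_new
    raise RuntimeError("coupling did not converge")

t, q = coupled_solve()
print(t, q)
```

Because each pass only exchanges boundary fields rather than assembling one monolithic system, the scheme keeps memory use low, which is the advantage the abstract cites; convergence here relies on the feedback loop being a contraction.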
Marsh-Tootle, Wendy L; Funkhouser, Ellen; Frazier, Marcela G; Crenshaw, Katie; Wall, Terry C
2010-02-01
To evaluate knowledge, attitudes, and environment of primary care providers, and to develop a conceptual framework showing their impact on self-reported preschool vision screening (PVS) behaviors. Eligible primary care providers were individuals who filed claims with Medicaid agencies in Alabama, South Carolina, or Illinois for at least eight well-child checks for children aged 3 or 4 years during 1 year. Responses were obtained online from providers who enrolled in the intervention arm of a randomized trial to improve PVS. We calculated a summary score per provider per facet: (1) for behavior and knowledge, each correct answer was assigned a value of +1; and (2) for attitudes and environment, responses indicating support for PVS were assigned a value of +1, and other responses were assigned -1. Responses were available from 53 participants (43 of 49 enrolled pediatricians, 8 of 14 enrolled family physicians, one general physician, and one nurse practitioner). Recognizing that amblyopia often presents without outward signs was positively related to good PVS (odds ratio [OR] = 3.9; p = 0.06). Reporting that "preschool VS interrupts patient flow" posed a significant barrier (OR = 0.2; p = 0.05). Providers with high summed scores on attitudes (OR = 6.0; p = 0.03), or on knowledge and attitudes (OR = 11.4; p < 0.001), were significantly more likely to report good PVS behavior. There was a significant trend between the number of "good" scores on knowledge, attitudes, or environment and "good" PVS behavior (p = 0.04). PVS is influenced by positive attitudes, especially when combined with knowledge about amblyopia. Interventions to improve PVS should target multiple facets, emphasizing that (1) asymptomatic children are at risk for amblyopia, (2) specific evidence-based tests have high testability and sensitivity for amblyopia in preschool children, and (3) new tests minimize interruptions to patient flow.
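The per-provider, per-facet summary scores described above reduce to simple tallies. A minimal sketch, with illustrative response encodings (the survey's actual items are not reproduced here):

```python
# Illustrative scoring sketch; the response encodings are assumptions.

def knowledge_score(answers_correct):
    # Behavior/knowledge facets: each correct answer contributes +1.
    return sum(1 for ok in answers_correct if ok)

def attitude_score(responses_support_pvs):
    # Attitude/environment facets: +1 if the response supports PVS, else -1.
    return sum(1 if supports else -1 for supports in responses_support_pvs)

print(knowledge_score([True, False, True]))  # 2
print(attitude_score([True, True, False]))   # 1
```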
SDN architecture for optical packet and circuit integrated networks
NASA Astrophysics Data System (ADS)
Furukawa, Hideaki; Miyazawa, Takaya
2016-02-01
We have been developing an optical packet and circuit integrated (OPCI) network, which realizes dynamic optical paths, high-density packet multiplexing, and flexible wavelength resource allocation. In OPCI networks, a best-effort service and a QoS-guaranteed service are provided by employing optical packet switching (OPS) and optical circuit switching (OCS), respectively, and users can select between these services. Different wavelength resources are assigned to the OPS and OCS links, and the amounts of these wavelength resources are dynamically changed in accordance with service usage conditions. To apply OPCI networks to wide-area (core/metro) networks, we have developed an OPCI node with a distributed control mechanism. Moreover, our OPCI node works with a centralized control mechanism as well as a distributed one. It is therefore possible to realize SDN-based OPCI networks, in which resource requests and centralized configuration are carried out. In this paper, we show our SDN architecture for an OPS system that configures mapping tables between IP addresses and optical packet addresses, as well as switching tables, according to requests from multiple users via a web interface. OpenFlow-based centralized control protocols are coming into widespread use, especially for single-administrative, small-area (LAN/data-center) networks. We therefore also show an interworking mechanism between OpenFlow-based networks (OFNs) and the OPCI network for constructing a wide-area network, and a control method of wavelength resource selection to automatically transfer diversified flows from OFNs to the OPCI network.
Using SERC for creating and publishing student generated hydrology instruction materials
NASA Astrophysics Data System (ADS)
Merwade, V.; Rajib, A.; Ruddell, B.; Fox, S.
2016-12-01
Hydrology instruction typically involves teaching the hydrologic cycle and the processes associated with it, such as precipitation, evapotranspiration, infiltration, runoff generation, and hydrograph analysis. With the availability of observed and remotely sensed data in the public domain, there is an opportunity to incorporate place-based learning in hydrology classrooms. However, it is not always easy or even possible for an instructor to complement an existing hydrology course with new material that requires both time and technical expertise, which the instructor may not have. The work presented here describes an effort in which students created data- and modeling-driven instruction materials as part of their class assignment for a hydrology course at Purdue University. Students in the class were divided into groups, and each group was assigned a topic such as precipitation, evapotranspiration, streamflow, flow duration curves, or flood frequency analysis. Each student group was then instructed to produce instruction material showing ways to extract and process relevant data and perform some analysis for an area with specific land use characteristics. The student contributions were then organized into learning units so that someone can do a flow duration curve analysis or flood frequency analysis and see how it changes for a rural versus an urban area. The Science Education Resource Center (SERC) is used as a platform to publish and share these instruction materials so that they can be used as-is, or with modification, by any instructor or student in relevant coursework anywhere in the world.
Scheduling for Emergency Tasks in Industrial Wireless Sensor Networks
Xia, Changqing; Kong, Linghe; Zeng, Peng
2017-01-01
Wireless sensor networks (WSNs) are widely applied in industrial manufacturing systems. By means of centralized control, WSNs can provide the real-time performance and reliability required in industrial production. Furthermore, many approaches reserve resources for situations in which the controller cannot perform centralized resource allocation. The controller assigns these resources as it becomes aware of when and where accidents have occurred. However, the reserved resources are limited, and such incidents are low-probability events. In addition, resource reservation may not be effective, since the controller does not know when and where accidents will actually occur. To address this issue, we improve the reliability of scheduling for emergency tasks by proposing a method based on a stealing mechanism. In our method, an emergency task is transmitted by stealing resources allocated to regular flows. The challenges addressed in our work are as follows: (1) emergencies occur only occasionally, but the industrial system must deliver the corresponding flows within their deadlines when they do occur; (2) we wish to minimize the impact of emergency flows by reducing the number of stolen flows. The contributions of this work are two-fold: (1) we define intersections and blocking as new characteristics of flows; and (2) we propose a series of distributed routing algorithms to improve schedulability and to reduce the impact of emergency flows. Extensive simulations demonstrate that our scheduling algorithm and analysis approach outperform existing ones. PMID:28726738
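The stealing idea can be illustrated with a toy greedy routine. This is an assumption-laden sketch, not the paper's algorithm: it supposes a slotted schedule where each slot carries one regular flow, and steals the slots an emergency flow needs before its deadline while touching as few distinct regular flows as possible (mirroring the goal of reducing the number of stolen flows).

```python
# Toy slot-stealing sketch (assumed slotted model, not the paper's algorithm).
from collections import Counter

def steal_slots(schedule, k, deadline):
    """Steal k slots in schedule[:deadline] from as few regular flows as
    possible; returns (stolen slot indices, victim flows) or None."""
    window = schedule[:deadline]
    counts = Counter(window)
    stolen, victims = [], []
    # Prefer flows holding many slots in the window, minimizing victims.
    for flow, _ in counts.most_common():
        for i, f in enumerate(window):
            if f == flow and len(stolen) < k:
                stolen.append(i)
        victims.append(flow)
        if len(stolen) >= k:
            return stolen, victims
    return None  # not enough slots before the deadline

print(steal_slots(["A", "B", "A", "C", "B", "A"], 3, 5))
```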
Evaluating post-wildfire hydrologic recovery using ParFlow in southern California
NASA Astrophysics Data System (ADS)
Lopez, S. R.; Kinoshita, A. M.; Atchley, A. L.
2016-12-01
Wildfires are naturally occurring hazards that can have catastrophic impacts. They can alter the natural processes within a watershed, such as surface runoff and subsurface water storage. Generally, post-fire hydrologic models are either one-dimensional, empirically based models or two-dimensional, conceptually based models with lumped parameter distributions. These models are useful in providing runoff estimates at the watershed outlet; however, they do not provide distributed hydrologic simulation at each point within the watershed. This research demonstrates how ParFlow, a three-dimensional, distributed hydrologic model, can simulate post-fire hydrologic processes by representing soil burn severity (via hydrophobicity) and vegetation recovery as they vary both spatially and temporally. Using this approach, we are able to evaluate the change in post-fire water components (surface flow, lateral flow, baseflow, and evapotranspiration). This model is initially developed for a hillslope in Devil Canyon, burned in 2003 by the Old Fire in southern California (USA). The domain uses a 2-m cell size over a 25 m by 25 m lateral extent. The subsurface reaches 2 m and is assigned a variable cell thickness, allowing an explicit consideration of the soil burn severity throughout the stages of recovery and vegetation regrowth. Vegetation regrowth is represented by satellite-based Enhanced Vegetation Index (EVI) products. The pre- and post-fire surface runoff, subsurface storage, and surface storage interactions are evaluated and will be used as a basis for developing a watershed-scale model. Long-term continuous simulations will advance our understanding of post-fire hydrological partitioning between water balance components and the spatial variability of watershed processes, providing improved guidance for post-fire watershed management.
Sikdar, Siddhartha; Shah, Jay P.; Gebreab, Tadesse; Yen, Ru-Huey; Gilliams, Elizabeth; Danoff, Jerome; Gerber, Lynn H.
2009-01-01
Objective Apply ultrasound (US) imaging techniques to better describe the characteristics of myofascial trigger points (MTrPs) and the immediately adjacent soft tissue. Design Descriptive (exploratory) study. Setting Biomedical research center. Participants 9 subjects meeting Travell and Simons’s criteria for MTrPs in a taut band in the upper trapezius. Interventions (None) Main Outcome Measures MTrPs were evaluated by 1) physical examination, 2) pressure algometry, and 3) three types of ultrasound imaging including grayscale (2D US), vibration sonoelastography (VSE), and Doppler. Methods Four sites in each patient were labeled based on physical examination as either active MTrP (spontaneously-painful, A-MTrP), latent MTrP (non-painful, L-MTrP), or normal myofascial tissue. US examination was performed on each subject by a team blinded to the physical findings. A 12-5 MHz US transducer was used. VSE was performed by color Doppler variance imaging while simultaneously inducing vibrations (~92Hz) with a handheld massage vibrator. Each site was assigned a tissue imaging score (TIS) as follows: 0 = uniform echogenicity and stiffness; 1 = focal hypoechoic region with stiff nodule; 2 = multiple hypoechoic regions with stiff nodules. Blood flow in the neighborhood of MTrPs was assessed using Doppler imaging. Each site was assigned a blood flow waveform score (BFS) as follows: 0 = normal arterial flow in muscle; 1 = elevated diastolic flow; 2 = high-resistance flow waveform with retrograde diastolic flow. Results MTrPs appeared as focal, hypoechoic regions on 2D US, indicating local changes in tissue echogenicity, and as focal regions of reduced vibration amplitude on VSE, indicating a localized stiff nodule. MTrPs were elliptical in shape, with a size of 0.16 ± 0.11 cm2. There were no significant differences in size between A-MTrPs and L-MTrPs. Sites containing MTrPs were more likely to have higher TIS compared to normal myofascial tissue (p<0.002). 
Small arteries (or enlarged arterioles) near A-MTrPs showed retrograde flow in diastole indicating a highly resistive vascular bed. A-MTrP sites were more likely to have higher BFS compared to L-MTrPs (p<0.021). Conclusions Preliminary findings show that, under the conditions of this investigation, US imaging techniques can be used to distinguish myofascial tissue containing MTrPs from normal myofascial tissue (lacking trigger points). Ultrasound enables visualization and some characterization of MTrPs and adjacent soft tissue. PMID:19887205
Cervical microleakage in Class II cavities restored with the Sonicsys approx system.
Rominu, Mihai; Florita, Zeno; Lakatos, Sorin; Rominu, Roxana Otilia
2009-04-01
To investigate cervical microleakage in Class II cavities restored with Sonicsys approx ceramic inserts and four resin-based materials. Forty noncarious and crack-free mandibular third molars were used. These teeth were randomly assigned to four groups, each containing 10 teeth. No control group was created. On each tooth, one mesial boxlike cavity was prepared using the Sonicsys approx no. 3 active head. The cervical margin of each cavity was in enamel, about 1 mm coronal to the cementoenamel junction. According to the manufacturers' instructions, the prepared cavities were restored using a Sonicsys approx ceramic insert no. 3 and four resin-based materials as follows: group 1, Tetric Flow; group 2, Admira Flow; group 3, Nexus 2; group 4, X-Flow. After finishing and polishing, all specimens were stored in distilled water for 7 days at 37 degrees C, thermocycled for 1,000 cycles between 5 degrees and 55 degrees C, and stored for 24 hours in basic fuchsine 2%. All specimens were then embedded in clear acrylic resin and sectioned along a mesial-distal plane through the middle of the cervical margin. The cervical areas of the resulting sections were examined using an optical microscope to assess dye penetration. The registered scores were analyzed using Kruskal-Wallis and Mann-Whitney U tests. Microleakage was detected in each experimental group. The Kruskal-Wallis test revealed statistically significant differences among groups (P = .009, alpha = .01). The Mann-Whitney U test showed significant differences between the Admira Flow group and Tetric Flow (P = .011, alpha = .05), Nexus 2 (P = .001, alpha = .01), and X-Flow (P = .004, alpha = .01), respectively. Within the limitations of this study, the extent of microleakage in the cervical area (enamel) of Class II cavities restored with Sonicsys approx ceramic inserts depends on the material used for luting. The highest leakage occurred when Admira Flow was used.
Retrieval-travel-time model for free-fall-flow-rack automated storage and retrieval system
NASA Astrophysics Data System (ADS)
Metahri, Dhiyaeddine; Hachemi, Khalid
2018-03-01
Automated storage and retrieval systems (AS/RSs) are material handling systems that are frequently used in manufacturing and distribution centers. Modelling the retrieval-travel time of an AS/RS (expected product delivery time) is practically important because it allows us to evaluate and improve the system throughput. The free-fall-flow-rack AS/RS has emerged as a new technology for drug distribution. This system is a new variation of the flow-rack AS/RS that uses an operator or a single machine for storage operations, and uses a combination of free-fall movement and a transport conveyor for retrieval operations. The main contribution of this paper is to develop an analytical model of the expected retrieval-travel time for the free-fall flow-rack under a dedicated storage assignment policy. The proposed model, which is based on a continuous approach, is compared for accuracy, via simulation, with a discrete model. The obtained results show that the maximum deviation between the continuous model and the simulation is less than 5%, confirming the accuracy of our model in estimating the retrieval time. The analytical model is useful for optimising the dimensions of the rack, assessing the system throughput, and evaluating different storage policies.
Araujo, Reno R; Ginther, O J
2009-01-01
To assess the vascular effects of detomidine and xylazine in pony mares and heifers, respectively, as determined in a major artery and by extent of vascular perfusion of reproductive organs. 10 pony mares and 10 Holstein heifers. Pony mares were assigned to receive physiologic saline (0.9% NaCl) solution (n = 5) or detomidine (3.0 mg/mare, IV; 5). Heifers were assigned to receive saline solution (5) or xylazine (14 mg/heifer, IM; 5). Color Doppler ultrasonographic examinations were performed immediately before and 10 minutes after administration of saline solution or sedative. In spectral Doppler mode, a spectral graph of blood flow velocities during a cardiac cycle was obtained at the internal iliac artery and at the ovarian pedicle. In color-flow mode, color signals of blood flow in vessels of the corpus luteum and endometrium were assessed. Systemic effects of sedation in the 2 species were evident as a decrease in heart rate; increase in duration of systole, diastole, or both; decrease in volume of blood flow; and decrease in velocity of blood flow within the internal iliac artery. However, an effect of sedatives on local vascular perfusion in the ovaries and endometrium was not detected. Sedation with detomidine in pony mares and xylazine in heifers did not affect vascular perfusion in reproductive organs. These sedatives can be used in experimental and clinical color Doppler evaluations of vascular perfusion of the corpus luteum and endometrium.
Semantic image segmentation with fused CNN features
NASA Astrophysics Data System (ADS)
Geng, Hui-qiang; Zhang, Hua; Xue, Yan-bing; Zhou, Mian; Xu, Guang-ping; Gao, Zan
2017-09-01
Semantic image segmentation is the task of predicting a category label for every image pixel. Its key challenge is designing a strong feature representation. In this paper, we fuse hierarchical convolutional neural network (CNN) features and region-based features into a single feature representation. The hierarchical features contain more global information, while the region-based features contain more local information; combining the two kinds of features significantly enhances the representation. The fused features are then used to train a softmax classifier that produces a per-pixel label assignment probability, and a fully connected conditional random field (CRF) is applied as a post-processing step to improve labeling consistency. We conduct experiments on the SIFT Flow dataset. The pixel accuracy and class accuracy are 84.4% and 34.86%, respectively.
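The fusion step can be sketched as follows. This is a minimal illustration, not the paper's architecture: the feature dimensions are arbitrary, and a random linear softmax layer stands in for the trained classifier.

```python
# Minimal feature-fusion sketch (arbitrary shapes; untrained random weights).
import numpy as np

rng = np.random.default_rng(0)
n_pixels, d_cnn, d_region, n_classes = 4, 8, 5, 3

cnn_feat = rng.standard_normal((n_pixels, d_cnn))        # hierarchical/global
region_feat = rng.standard_normal((n_pixels, d_region))  # region-based/local
fused = np.concatenate([cnn_feat, region_feat], axis=1)  # fused representation

W = rng.standard_normal((d_cnn + d_region, n_classes))   # softmax weights
logits = fused @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)                # per-pixel softmax
labels = probs.argmax(axis=1)                            # per-pixel label
```

In the full system the `probs` map would feed the fully connected CRF, which smooths labels across pixels with similar appearance.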
Chen, G; Wu, F Y; Liu, Z C; Yang, K; Cui, F
2015-08-01
Subject-specific finite element (FE) models can be generated from computed tomography (CT) datasets of a bone. A key step is assigning material properties automatically onto finite element models, which remains a great challenge. This paper proposes a node-based assignment approach and compares it with the element-based approach in the literature. Both approaches were implemented using ABAQUS. The assignment procedure is divided into two steps: generating a data file of the image intensity of a bone in a MATLAB program, and reading the data file into ABAQUS via user subroutines. The node-based approach assigns the material properties to each node of the finite element mesh, while the element-based approach assigns the material properties directly to each integration point of an element. Both approaches are independent of the element type. A number of FE meshes were tested and both give accurate solutions; comparatively, the node-based approach involves less programming effort. The node-based approach is also independent of the type of analysis; it has been tested on the nonlinear analysis of a Sawbone femur. The node-based approach substantially improves the level of automation of the assignment procedure for bone material properties. It is the simplest and most powerful approach and is applicable to many types of analyses and elements. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
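A node-based assignment can be sketched as below. The power-law intensity-to-modulus calibration and its coefficients are illustrative assumptions, not values from the paper, and `ct_intensity_at` stands in for interpolation of the CT volume at a node's coordinates.

```python
# Node-based property assignment sketch; the calibration E = a * HU**b and
# its coefficients are illustrative assumptions.
import numpy as np

def intensity_to_modulus(hu, a=0.004, b=2.0):
    # Assumed power-law mapping from image intensity (HU) to modulus (MPa).
    return a * np.maximum(hu, 0.0) ** b

def assign_node_properties(node_coords, ct_intensity_at):
    # ct_intensity_at: callable giving image intensity at a coordinate,
    # standing in for interpolation of the CT volume at each mesh node.
    return {i: float(intensity_to_modulus(ct_intensity_at(xyz)))
            for i, xyz in enumerate(node_coords)}

nodes = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
E = assign_node_properties(nodes, lambda xyz: 500.0 + 100.0 * xyz[0])
print(E)
```

In the ABAQUS workflow described above, the per-node values produced this way would be read in through a user subroutine rather than computed in Python.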
Petri net modeling of encrypted information flow in federated cloud
NASA Astrophysics Data System (ADS)
Khushk, Abdul Rauf; Li, Xiaozhong
2017-08-01
Cost-effective cloud solutions typically combine secure private clouds with less secure public clouds. The need to place applications in different clouds poses a security risk to the information flow of the entire system. This study addresses the risk by assigning security levels from a given lattice to the entities of a federated cloud system. A dynamic, flow-sensitive security model featuring Bell-LaPadula procedures is explored that tracks and authenticates secure information flow in federated clouds. Additionally, a Petri net model is considered as a case study to represent the proposed system and further validate its performance.
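The lattice checks at the heart of a Bell-LaPadula style model can be sketched minimally. The level names below are illustrative; a real federated-cloud deployment would define its own lattice.

```python
# Minimal Bell-LaPadula style lattice checks; level names are illustrative.
LEVELS = {"public": 0, "internal": 1, "private": 2}

def can_read(subject, obj):
    # Simple-security property: no read up.
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject, obj):
    # *-property: no write down.
    return LEVELS[subject] <= LEVELS[obj]
```

A flow-sensitive model extends these static checks by tracking the level of data as it moves between entities, which is what the Petri net case study represents.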
Freight Transportation Energy Use : Volume 2. Methodology and Program Documentation.
DOT National Transportation Integrated Search
1978-07-01
The structure and logic of the transportation network model component of the TSC Freight Energy Model are presented. The model assigns given origin-destination commodity flows to specific transport modes and routes, thereby determining the traffic lo...
Propagation of Disturbances in Traffic Flow
DOT National Transportation Integrated Search
1977-09-01
The system-optimized static traffic-assignment problem in a freeway corridor network is the problem of choosing a distribution of vehicles in the network to minimize average travel time. It is of interest to know how sensitive the optimal steady-stat...
NASA Astrophysics Data System (ADS)
Sana, Ajaz; Hussain, Shahab; Ali, Mohammed A.; Ahmed, Samir
2007-09-01
In this paper we propose a novel Passive Optical Network (PON) based broadband wireless access network architecture to provide multimedia services (video telephony, video streaming, mobile TV, mobile email, etc.) to mobile users. In conventional wireless access networks, the base stations (Node B) and Radio Network Controllers (RNC) are connected by point-to-point T1/E1 lines (the Iub interface). The T1/E1 lines are expensive and add to operating costs. Also, the resources (transceivers and T1/E1 lines) are designed for peak-hour traffic, so most of the time the dedicated resources are idle and wasted. Furthermore, the T1/E1 lines are not capable of supporting the bandwidth (BW) required by next-generation wireless multimedia services proposed by High Speed Packet Access (HSPA, Rel. 5) for the Universal Mobile Telecommunications System (UMTS) and Evolution-Data Only (EV-DO) for Code Division Multiple Access 2000 (CDMA2000). The proposed PON-based backhaul can provide gigabit data rates, and the Iub interface can be dynamically shared by Node Bs. The BW is dynamically allocated, and unused BW from lightly loaded Node Bs is assigned to heavily loaded Node Bs. We also propose a novel algorithm to provide end-to-end Quality of Service (QoS) between the RNC and the user equipment. The algorithm provides QoS bounds in the wired domain as well as in the wireless domain, with compensation for wireless link errors. Because of the air interface, there are times when the user equipment (UE) is unable to communicate with the Node B (usually referred to as a link error), and these link errors are bursty and location dependent. In the proposed approach, the scheduler at the Node B maps QoS priorities and weights into the wireless MAC. Compensation for errored links is provided by swapping services between active users, and the user data is divided into flows, with flows allowed to lag or lead.
The algorithm guarantees (1) delay and throughput for error-free flows, (2) short-term fairness among error-free flows, (3) long-term fairness among errored and error-free flows, and (4) graceful degradation for leading flows and graceful compensation for lagging flows.
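The lag/lead bookkeeping implied by guarantees (3) and (4) can be sketched as a toy per-round scheduler. This is an illustrative assumption-based sketch, not the paper's algorithm: one slot per flow per round, and an errored flow's slot is swapped to the most-leading error-free flow.

```python
# Toy lag/lead bookkeeping (illustrative model: one slot per flow per round;
# an errored flow's slot is given to the most-leading error-free flow).
def schedule_round(flows, in_error, lag):
    served = []
    for f in flows:
        if f not in in_error:
            served.append(f)  # flow transmits in its own slot
            continue
        # Swap the slot to the flow that is furthest ahead (smallest lag).
        donor = min((g for g in flows if g not in in_error),
                    key=lambda g: lag[g], default=None)
        if donor is not None:
            served.append(donor)
            lag[f] += 1       # errored flow falls behind (lags)
            lag[donor] -= 1   # donor moves further ahead (leads)
    return served, lag

served, lag = schedule_round(["a", "b"], {"b"}, {"a": 0, "b": 0})
```

Once flow "b" recovers, its positive lag entitles it to extra service, while leading flow "a" gives slots back gradually; that is the graceful compensation/degradation pairing.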
Characteristics and Impact of Imperviousness From a GIS-based Hydrological Perspective
NASA Astrophysics Data System (ADS)
Moglen, G. E.; Kim, S.
2005-12-01
With the concern that imperviousness can be quantified differently depending on data sources and methods, this study assessed imperviousness estimates using two different data sources: land use and land cover. Year 2000 land use developed by the Maryland Department of Planning was used to estimate imperviousness by assigning imperviousness coefficients to unique land use categories. These estimates were compared with imperviousness estimates based on satellite-derived land cover from the 2001 National Land Cover Dataset. Our study developed the relationships between these two estimates in the form of regression equations to convert imperviousness derived from one data source to the other. The regression equations are considered reliable, based on goodness-of-fit measures. Furthermore, this study examined how quantitatively different imperviousness estimates affect the prediction of hydrological response both in the flow regime and in the thermal regime. We assessed the relationships between indicators of hydrological response and imperviousness descriptors. As indicators of flow variability, the coefficient of variation, lag-one autocorrelation, and mean daily flow change were calculated based on measured mean daily stream flow from water years 1997 to 2003. For thermal variability, indicators such as percent-days of surge, degree-days, and mean daily temperature difference were calculated based on measured stream temperature over several basins in Maryland. To describe imperviousness through the hydrological process, GIS-based spatially distributed hydrological models were developed based on a water-balance method and the SCS-CN method. Imperviousness estimates from land use and land cover were used as predictors in these models to examine the effect of imperviousness from different data sources on the prediction of hydrological response. Indicators of hydrological response were also regressed on aggregate imperviousness.
This allowed us to identify whether hydrological response is more sensitive to spatially distributed imperviousness or to aggregate (lumped) imperviousness. The regressions between indicators of hydrological response and imperviousness descriptors were evaluated by examining goodness-of-fit measures such as explained variance and relative standard error. The results show that imperviousness estimates using land use are better predictors of flow variability and thermal variability than imperviousness estimates using land cover. Also, this study reveals that flow variability is more sensitive to spatially distributed models than to lumped models, while thermal variability is equally responsive to both. The findings from this study can be further examined from a policy perspective with regard to policies that are based on a threshold concept for imperviousness impacts on the ecological and hydrological system.
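Converting imperviousness estimates from one data source to the other with a fitted regression equation can be sketched as follows (synthetic data; the study's actual coefficients are not reproduced here):

```python
# Regression-based conversion sketch (synthetic data; the study's fitted
# coefficients are not reproduced here).
import numpy as np

# Paired imperviousness estimates (%): land-use-based vs land-cover-based.
land_use = np.array([5.0, 12.0, 20.0, 35.0, 50.0])
land_cover = 0.9 * land_use + 2.0  # assumed relation for this demo

slope, intercept = np.polyfit(land_use, land_cover, 1)

def to_land_cover(imperv_lu):
    # Convert a land-use-based estimate to the land-cover scale.
    return slope * imperv_lu + intercept
```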
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seo, Jooyeok; Lee, Chulyeon; Han, Hyemi
We report a tactile touch sensor based on a planar liquid crystal-gated-organic field-effect transistor (LC-g-OFET) structure. The LC-g-OFET touch sensors were fabricated by forming a 10 μm thick LC layer (4-cyano-4′-pentylbiphenyl, 5CB) on top of a 50 nm thick channel layer (poly(3-hexylthiophene), P3HT) coated on in-plane aligned drain/source/gate electrodes (indium-tin oxide, ITO). As an external physical stimulus to examine the tactile touch performance, a weak nitrogen flow (83.3 μl/s) was employed to stimulate the LC layer of the touch device. The LC-g-OFET device exhibited p-type transistor characteristics with a hole mobility of 1.5 cm²/Vs, but no sensing current from the nitrogen flow touch was measured at sufficiently high drain (V_D) and gate (V_G) voltages. However, a clear sensing current signal was detected at lower voltages, which was quite sensitive to the combination of V_D and V_G. The best voltage combination was V_D = −0.2 V and V_G = −1 V for the highest ratio of signal current to base current (i.e., signal-to-noise ratio). The change in LC alignment upon the nitrogen flow touch was assigned as the sensing mechanism of the present LC-g-OFET touch sensors.
NASA Astrophysics Data System (ADS)
Burns, D. A.; Lawrence, G. B.; Driscoll, C. T.; Sullivan, T. J.; Shao, S.; McDonnell, T. C.
2017-12-01
Episodic acidification occurs when surface water pH and ANC decrease temporarily during rain events and snowmelt. The principal drivers of episodic acidification are increases in sulfuric acid, nitric acid, and organic acids, and dilution of base cations. In regions where surface waters are sensitive to acid deposition, ANC values may approach or decline below 0 µeq/L during high flows, which may result in deleterious effects on sensitive aquatic biota. The Adirondack Mountains of New York have abundant streams and lakes, many of which are highly sensitive to the effects of acid deposition. Long-term monitoring data indicate that pH and ANC in regional surface waters are increasing in response to decreases in the acidity of atmospheric deposition that result from decreasing SO2 and NOx emissions as the Clean Air Act and its ancillary rules and amendments have been implemented. Most surface-water monitoring focuses on low-flow and broad seasonal patterns, and less is known about how episodic acidification has responded to emissions decreases. Here, we report on spatial and temporal patterns in episodic acidification through analysis of C-Q relations from surveys that target varying flow conditions, as well as data from a few long-term, intensively sampled stream monitoring sites. Each stream sample was assigned a Q percentile value based on an on-site or nearby gage, and a statistical relation between ANC values and Q percentile was developed. The magnitude of episodic decreases in ANC increases as low-flow ANC increases, a pattern that likely results from an increasing influence of dilution, especially evident when low-flow ANC values exceed 100 µeq/L. Chronically acidic streams with low-flow ANC near 0 µeq/L show little episodic acidification, whereas streams with low-flow ANC values of about 50 µeq/L generally show ANC decreases to less than 0 µeq/L at high flow.
Preliminary analysis of a 24-yr data set (1991-2014) at Buck Creek indicates that increases in high-flow ANC are more than twice those of low-flow ANC. These ANC values generally no longer decline below 0 µeq/L at the highest flows, which typically occur during spring snowmelt. Further analyses will explore how the drivers of episodic acidification vary across the region with low-flow ANC and whether clear trends in these drivers are evident across the region.
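Assigning each stream sample a Q percentile from a gage record, as in the survey analysis above, can be sketched simply (synthetic flows; the real analysis draws on a full daily gage record):

```python
# Q-percentile assignment sketch (synthetic gage record).
import numpy as np

def q_percentile(sample_q, gage_record):
    # Percent of the gage record at or below the sampled flow.
    gage = np.sort(np.asarray(gage_record, dtype=float))
    return 100.0 * np.searchsorted(gage, sample_q, side="right") / gage.size

record = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # daily mean flows, cfs
print(q_percentile(7, record))  # 70.0
```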
Congestion patterns of electric vehicles with limited battery capacity.
Jing, Wentao; Ramezani, Mohsen; An, Kun; Kim, Inhi
2018-01-01
The path choice behavior of battery electric vehicle (BEV) drivers is influenced by the lack of public charging stations, limited battery capacity, range anxiety and long battery charging time. This paper investigates the congestion/flow pattern captured by stochastic user equilibrium (SUE) traffic assignment problem in transportation networks with BEVs, where the BEV paths are restricted by their battery capacities. The BEV energy consumption is assumed to be a linear function of path length and path travel time, which addresses both path distance limit problem and road congestion effect. A mathematical programming model is proposed for the path-based SUE traffic assignment where the path cost is the sum of the corresponding link costs and a path specific out-of-energy penalty. We then apply the convergent Lagrangian dual method to transform the original problem into a concave maximization problem and develop a customized gradient projection algorithm to solve it. A column generation procedure is incorporated to generate the path set. Finally, two numerical examples are presented to demonstrate the applicability of the proposed model and the solution algorithm.
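The linear path-energy model and the path-specific out-of-energy penalty described above can be sketched as follows; the coefficients and penalty value are illustrative assumptions, not the paper's calibrated values.

```python
# Linear path-energy sketch; coefficients and penalty are illustrative.
def path_energy(length_km, time_h, a=0.15, b=1.0):
    # Assumed linear model: energy (kWh) = a*length + b*time.
    return a * length_km + b * time_h

def path_cost(link_costs, length_km, time_h, capacity_kwh, penalty=1e6):
    cost = sum(link_costs)  # path cost = sum of link costs
    if path_energy(length_km, time_h) > capacity_kwh:
        cost += penalty     # path-specific out-of-energy penalty
    return cost

print(path_cost([1.0, 2.0, 3.0], 100.0, 2.0, capacity_kwh=20.0))  # 6.0
print(path_cost([1.0, 2.0, 3.0], 100.0, 2.0, capacity_kwh=10.0))
```

In the SUE formulation, such penalized path costs would feed the stochastic loading step, steering flow away from paths a BEV cannot complete.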
Sloto, Ronald A.
2008-01-01
The Pocono Creek watershed drains 46.5 square miles in eastern Monroe County, Pa. Between 2000 and 2020, the population of Monroe County is expected to increase by 70 percent, which will result in substantial changes in land-use patterns. An evaluation of the effect of reduced recharge from land-use changes and additional ground-water withdrawals on stream base flow was done by the U.S. Geological Survey (USGS) in cooperation with the U.S. Environmental Protection Agency (USEPA) and the Delaware River Basin Commission as part of the USEPA's Framework for Sustainable Watershed Management Initiative. Two models were used. A Soil and Water Assessment Tool (SWAT) model developed by the USEPA provided areal recharge values for 2000 land use and projected full buildout land use. The USGS MODFLOW-2000 ground-water-flow model was used to estimate the effect of reduced recharge from changes in land use and additional ground-water withdrawals on stream base flow. This report describes the ground-water-flow-model simulations. The Pocono Creek watershed is underlain by sedimentary rock of Devonian age, which is overlain by a veneer of glacial deposits. All water-supply wells are cased into and derive water from the bedrock. In the ground-water-flow model, the surficial geologic units were grouped into six categories: (1) moraine deposits, (2) stratified drift, (3) lake deposits, (4) outwash, (5) swamp deposits, and (6) undifferentiated deposits. The unconsolidated surficial deposits are not used as a source of water. The ground-water and surface-water systems are well connected in the Pocono Creek watershed. Base flow measured on October 13, 2004, at 27 sites for model calibration showed that streams gained water between all sites measured except in the lower reach of Pocono Creek. The ground-water-flow model included the entire Pocono Creek watershed. Horizontally, the modeled area was divided into a 53 by 155 cell grid with 6,060 active cells. 
Vertically, the modeled area was discretized into four layers. Layers 1 and 2 represented the unconsolidated surficial deposits where they are present and bedrock where the surficial deposits are absent. Layer 3 represented shallow bedrock and was 200 ft (feet) thick. Layer 4 represented deep bedrock and was 300 ft thick. A total of 873 cells representing streams were assigned to layer 1. Recharge rates for model calibration were provided by the USEPA SWAT model for 2000 land-use conditions. Recharge rates for 2000 for the 29 subwatersheds in the SWAT model ranged from 6.11 to 22.66 inches per year. Because the ground-water-flow model was calibrated to base-flow data collected on October 13, 2004, the 2000 recharge rates were multiplied by 1.18 so the volume of recharge was equal to the volume of streamflow measured at the mouth of Pocono Creek. During model calibration, adjustments were made to aquifer hydraulic conductivity and streambed conductance. Simulated base flows and hydraulic heads were compared to measured base flows and hydraulic heads using the root mean squared error (RMSE) between measured and simulated values. The RMSE of the calibrated model for base flow was 4.7 cubic feet per second for 27 locations, and the RMSE for hydraulic heads for 15 locations was 35 ft. The USEPA SWAT model was used to provide areal recharge values for 2000 and full buildout land-use conditions. The change in recharge ranged from an increase of 37.8 percent to a decrease of 60.8 percent. The ground-water-flow model was used to simulate base flow for 2000 and full buildout land-use conditions using steady-state simulations. The decrease in simulated base flow ranged from 3.8 to 63 percent at the streamflow-measurement sites. Simulated base flow at streamflow-gaging station Pocono Creek above Wigwam Run near Stroudsburg, Pa. (01441495), decreased 25 percent. 
This is in general agreement with the SWAT model, which estimated a 30.6-percent loss in base flow at the streamflow-gaging station.
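The calibration statistic used above (RMSE between measured and simulated base flows and heads) and the recharge rescaling step (the factor of 1.18 that matched total recharge to the streamflow volume at the mouth) are simple to express. The sketch below is a generic illustration with hypothetical function names, not the USGS model code.

```python
import math

def rmse(measured, simulated):
    """Root mean squared error between measured and simulated values,
    the calibration statistic used for base flows and heads."""
    return math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated))
                     / len(measured))

def recharge_scale(measured_outflow_volume, recharge_volume):
    """Scale factor applied to the 2000 recharge rates so that total
    recharge equals the streamflow volume measured at the watershed
    mouth (reported as 1.18 for the October 13, 2004 calibration)."""
    return measured_outflow_volume / recharge_volume
```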
Saver, J L; Jahan, R; Levy, E I; Jovin, T G; Baxter, B; Nogueira, R; Clark, W; Budzik, R; Zaidat, O O
2014-07-01
Self-expanding stent retrievers are a promising new device class designed for rapid flow restoration in acute cerebral ischaemia. The SOLITAIRE™ Flow Restoration device (SOLITAIRE) has shown high rates of recanalization in preclinical models and in uncontrolled clinical series. (1) To demonstrate non-inferiority of SOLITAIRE compared with a legally marketed device, the MERCI Retrieval System®; (2) To demonstrate safety, feasibility, and efficacy of SOLITAIRE in subjects requiring mechanical thrombectomy diagnosed with acute ischaemic stroke. DESIGN: Multicenter, randomized, prospective, controlled trial with blinded primary end-point ascertainment. Key entry criteria include: age 22-85; National Institute of Health Stroke Scale (NIHSS) ≥8 and <30; clinical and imaging findings consistent with acute ischaemic stroke; patient ineligible or failed intravenous tissue plasminogen activator; accessible occlusion in M1 or M2 middle cerebral artery, internal carotid artery, basilar artery, or vertebral artery; and patient able to be treated within 8 h of onset. Sites first participate in a roll-in phase, treating two patients with the SOLITAIRE device, before proceeding to the randomized phase. In patients unresponsive to the initially assigned therapy, after the angiographic component of the primary end-point is ascertained (reperfusion with the initial assigned device), rescue therapy with other reperfusion techniques is permitted. The primary efficacy end-point is successful recanalization with the assigned study device (no use of rescue therapy) and with no symptomatic intracranial haemorrhage. Successful recanalization is defined as achieving Thrombolysis In Myocardial Ischemia 2 or 3 flow in all treatable vessels. The primary safety end-point is the incidence of device-related and procedure-related serious adverse events. A major secondary efficacy end-point is time to achieve initial recanalization. 
Additional secondary end-points include clinical outcomes at 90 days and radiologic haemorrhagic transformation. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
Yoon, Hyun Jin; Cheon, Sang Myung; Jeong, Young Jin; Kang, Do-Young
2012-02-01
We assigned anatomical names to functional activation regions in the brain, based on the probabilistic cyto-architectonic atlas of Anatomy 1.7, from an SPM8 analysis of correlations between regional cerebral blood flow (rCBF) and clinical parameters in non-demented Parkinson's disease (PD) patients. We evaluated the Anatomy 1.7 SPM toolbox against the 'Talairach Daemon' (TD) Client 2.4.2 software. One hundred and thirty-six patients (mean age 60.0 ± 9.09 years; 73 women and 63 men) with non-demented PD were selected. Tc-99m-HMPAO brain single-photon emission computed tomography (SPECT) scans were performed on the patients using a two-head gamma-camera. We analyzed the brain images of the PD patients with SPM8 and identified the anatomical names of regions whose rCBF perfusion correlated with the clinical parameters, using TD Client 2.4.2 and Anatomy 1.7. SPM8 provided a correlation coefficient between clinical parameters and cerebral hypoperfusion by a simple regression method. The clinical parameters were age, duration of disease, education period, Hoehn and Yahr (H&Y) stage, and Korean mini-mental state examination (K-MMSE) score. Age was correlated with cerebral perfusion in Brodmann area (BA) 6 and BA 3b as assigned by Anatomy 1.7, and in BA 6 and the pyramis in gray matter by TD Client 2.4.2, at p < 0.001 uncorrected. Significantly correlated regions were also found by Anatomy 1.7 in the left and right lobules VI (Hem) with duration of disease, in the left and right lobules VIIa crus I (Hem) with education, in the left insula (Ig2) and left and right lobules VI (Hem) with H&Y stage, and in BA 4a and 6 with K-MMSE score, at p < 0.05 uncorrected. Most correlated areas overlapped between the two anatomical labeling methods, but some were given different names. Age was the clinical parameter most significantly correlated with rCBF. 
TD Client determined the anatomical name from the peak-intensity position of the cluster, while the Anatomy 1.7 SPM8 toolbox, using the cyto-architectonic probability maps, assigned the anatomical name by the percentage value of the probability.
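The per-parameter analysis described above reduces, for each voxel or region, to an ordinary least-squares simple regression between a clinical covariate (e.g. age) and rCBF. A generic sketch (not SPM8 code; the function name is illustrative):

```python
def simple_regression(x, y):
    """Ordinary least-squares simple regression of y on x.
    Returns slope, intercept, and the Pearson correlation coefficient,
    the kind of statistic SPM reports per voxel for a covariate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5
```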
49 CFR 173.121 - Class 3-Assignment of packing group.
Code of Federal Regulations, 2013 CFR
2013-10-01
... than 100 L (26.3 gallons); and (iv) The viscosity and flash point are in accordance with the following... paragraph (b)(1) of this section shall be performed are as follows: (i) Viscosity test. The flow time in...
49 CFR 173.121 - Class 3-Assignment of packing group.
Code of Federal Regulations, 2014 CFR
2014-10-01
... than 100 L (26.3 gallons); and (iv) The viscosity and flash point are in accordance with the following... paragraph (b)(1) of this section shall be performed are as follows: (i) Viscosity test. The flow time in...
A framework for the nationwide multimode transportation demand analysis.
DOT National Transportation Integrated Search
2010-09-01
This study attempts to analyze the impact of traffic on the US highway system considering both passenger vehicles and : trucks. For the analysis, a pseudo-dynamic traffic assignment model is proposed to estimate the time-dependent link flow : from th...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-16
... in the patent referred to below to Flow Applications, Inc., having a place of business in Okawville, Illinois. The patent rights in these inventions have been assigned to the government of the United States...
A Fully Magnetically Levitated Circulatory Pump for Advanced Heart Failure.
Mehra, Mandeep R; Naka, Yoshifumi; Uriel, Nir; Goldstein, Daniel J; Cleveland, Joseph C; Colombo, Paolo C; Walsh, Mary N; Milano, Carmelo A; Patel, Chetan B; Jorde, Ulrich P; Pagani, Francis D; Aaronson, Keith D; Dean, David A; McCants, Kelly; Itoh, Akinobu; Ewald, Gregory A; Horstmanshof, Douglas; Long, James W; Salerno, Christopher
2017-02-02
Continuous-flow left ventricular assist systems increase the rate of survival among patients with advanced heart failure but are associated with the development of pump thrombosis. We investigated the effects of a new magnetically levitated centrifugal continuous-flow pump that was engineered to avert thrombosis. We randomly assigned patients with advanced heart failure to receive either the new centrifugal continuous-flow pump or a commercially available axial continuous-flow pump. Patients could be enrolled irrespective of the intended goal of pump support (bridge to transplantation or destination therapy). The primary end point was a composite of survival free of disabling stroke (with disabling stroke indicated by a modified Rankin score >3; scores range from 0 to 6, with higher scores indicating more severe disability) or survival free of reoperation to replace or remove the device at 6 months after implantation. The trial was powered for noninferiority testing of the primary end point (noninferiority margin, -10 percentage points). Of 294 patients, 152 were assigned to the centrifugal-flow pump group and 142 to the axial-flow pump group. In the intention-to-treat population, the primary end point occurred in 131 patients (86.2%) in the centrifugal-flow pump group and in 109 (76.8%) in the axial-flow pump group (absolute difference, 9.4 percentage points; 95% lower confidence boundary, -2.1 [P<0.001 for noninferiority]; hazard ratio, 0.55; 95% confidence interval [CI], 0.32 to 0.95 [two-tailed P=0.04 for superiority]). There were no significant between-group differences in the rates of death or disabling stroke, but reoperation for pump malfunction was less frequent in the centrifugal-flow pump group than in the axial-flow pump group (1 [0.7%] vs. 11 [7.7%]; hazard ratio, 0.08; 95% CI, 0.01 to 0.60; P=0.002). 
Suspected or confirmed pump thrombosis occurred in no patients in the centrifugal-flow pump group and in 14 patients (10.1%) in the axial-flow pump group. Among patients with advanced heart failure, implantation of a fully magnetically levitated centrifugal-flow pump was associated with better outcomes at 6 months than was implantation of an axial-flow pump, primarily because of the lower rate of reoperation for pump malfunction. (Funded by St. Jude Medical; MOMENTUM 3 ClinicalTrials.gov number, NCT02224755 .).
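The noninferiority comparison reported above (absolute difference in event proportions checked against a -10-percentage-point margin) can be sketched with a normal-approximation lower confidence bound. The trial's actual statistical analysis may have used a different method, so the function and its z value below are illustrative only.

```python
def noninferiority(success_new, n_new, success_ref, n_ref,
                   margin=-0.10, z=1.645):
    """One-sided noninferiority check on a difference of proportions,
    using a Wald normal approximation for the lower confidence bound.
    Returns (difference, lower bound, noninferior?)."""
    p_new, p_ref = success_new / n_new, success_ref / n_ref
    diff = p_new - p_ref
    se = (p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref) ** 0.5
    lower = diff - z * se
    return diff, lower, lower > margin

# trial counts from the abstract: 131/152 centrifugal vs. 109/142 axial
diff, lower, ok = noninferiority(131, 152, 109, 142)
# diff is about 0.094, matching the reported 9.4-percentage-point difference
```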
41 CFR 102-79.20 - What standard must Executive agencies promote when assigning space?
Code of Federal Regulations, 2013 CFR
2013-07-01
... quality workspace that is delivered and occupied in a timely manner, and assign space based on mission... Executive agencies promote when assigning space? 102-79.20 Section 102-79.20 Public Contracts and Property... PROPERTY 79-ASSIGNMENT AND UTILIZATION OF SPACE Assignment and Utilization of Space Assignment of Space...
41 CFR 102-79.20 - What standard must Executive agencies promote when assigning space?
Code of Federal Regulations, 2012 CFR
2012-01-01
... quality workspace that is delivered and occupied in a timely manner, and assign space based on mission... Executive agencies promote when assigning space? 102-79.20 Section 102-79.20 Public Contracts and Property... PROPERTY 79-ASSIGNMENT AND UTILIZATION OF SPACE Assignment and Utilization of Space Assignment of Space...
41 CFR 102-79.20 - What standard must Executive agencies promote when assigning space?
Code of Federal Regulations, 2010 CFR
2010-07-01
... quality workspace that is delivered and occupied in a timely manner, and assign space based on mission... Executive agencies promote when assigning space? 102-79.20 Section 102-79.20 Public Contracts and Property... PROPERTY 79-ASSIGNMENT AND UTILIZATION OF SPACE Assignment and Utilization of Space Assignment of Space...
41 CFR 102-79.20 - What standard must Executive agencies promote when assigning space?
Code of Federal Regulations, 2011 CFR
2011-01-01
... quality workspace that is delivered and occupied in a timely manner, and assign space based on mission... Executive agencies promote when assigning space? 102-79.20 Section 102-79.20 Public Contracts and Property... PROPERTY 79-ASSIGNMENT AND UTILIZATION OF SPACE Assignment and Utilization of Space Assignment of Space...
41 CFR 102-79.20 - What standard must Executive agencies promote when assigning space?
Code of Federal Regulations, 2014 CFR
2014-01-01
... quality workspace that is delivered and occupied in a timely manner, and assign space based on mission... Executive agencies promote when assigning space? 102-79.20 Section 102-79.20 Public Contracts and Property... PROPERTY 79-ASSIGNMENT AND UTILIZATION OF SPACE Assignment and Utilization of Space Assignment of Space...
The PAH Emission Characteristics of the Reflection Nebula NGC 2023
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeters, Els; Bauschlicher, Charles W. Jr.; Allamandola, Louis J.
We present 5–20 μm spectral maps of the reflection nebula NGC 2023 obtained with the Infrared Spectrograph SL and SH modes on board the Spitzer Space Telescope, which reveal emission from polycyclic aromatic hydrocarbons (PAHs), C60, and H2 superposed on a dust continuum. We show that several PAH emission bands correlate with each other and exhibit distinct spatial distributions that reveal a spatial sequence with distance from the illuminating star. We explore the distinct morphology of the 6.2, 7.7, and 8.6 μm PAH bands and find that at least two spatially distinct components contribute to the 7–9 μm PAH emission in NGC 2023. We report that the PAH features behave independently of the underlying plateaus. We present spectra of compact, oval PAHs ranging in size from C66 to C210, determined computationally using density functional theory, and we investigate trends in the band positions and relative intensities as a function of PAH size, charge, and geometry. Based on the NASA Ames PAH database, we discuss the 7–9 μm components in terms of band assignments and relative intensities. We assign the plateau emission to very small grains with possible contributions from PAH clusters and identify components in the 7–9 μm emission that likely originate in these structures. Based on the assignments and the observed spatial sequence, we discuss the photochemical evolution of the interstellar PAH family as the PAHs are more and more exposed to the radiation field of the central star in the evaporative flows associated with the Photo-Dissociation Regions in NGC 2023.
System for ranking relative threats of U.S. volcanoes
Ewert, J.W.
2007-01-01
A methodology to systematically rank volcanic threat was developed as the basis for prioritizing volcanoes for long-term hazards evaluations, monitoring, and mitigation activities. A ranking of 169 volcanoes in the United States and the Commonwealth of the Northern Mariana Islands (U.S. volcanoes) is presented based on scores assigned for various hazard and exposure factors. Fifteen factors define the hazard: Volcano type, maximum known eruptive explosivity, magnitude of recent explosivity within the past 500 and 5,000 years, average eruption-recurrence interval, presence or potential for a suite of hazardous phenomena (pyroclastic flows, lahars, lava flows, tsunami, flank collapse, hydrothermal explosion, primary lahar), and deformation, seismic, or degassing unrest. Nine factors define exposure: a measure of ground-based human population in hazard zones, past fatalities and evacuations, a measure of airport exposure, a measure of human population on aircraft, the presence of power, transportation, and developed infrastructure, and whether or not the volcano forms a significant part of a populated island. The hazard score and exposure score for each volcano are multiplied to give its overall threat score. Once scored, the ordered list of volcanoes is divided into five overall threat categories from very high to very low. ?? 2007 ASCE.
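The scoring scheme above (a hazard score from 15 factors multiplied by an exposure score from 9 factors, then binned into five threat categories) can be sketched as follows. The individual factor scoring rules and the category thresholds below are placeholders, not the published cut points.

```python
def threat_score(hazard_factors, exposure_factors):
    """Overall threat score as in the ranking scheme: the summed 15
    hazard factor scores times the summed 9 exposure factor scores."""
    return sum(hazard_factors) * sum(exposure_factors)

def threat_category(score, thresholds=(1, 30, 60, 120)):
    """Bin an overall score into five categories, very low .. very high.
    The thresholds here are illustrative, not the published values."""
    labels = ["very low", "low", "moderate", "high", "very high"]
    for label, t in zip(labels, thresholds):
        if score < t:
            return label
    return labels[-1]
```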
Bremerhaven's groundwater under climate change - A FREEWAT case study
NASA Astrophysics Data System (ADS)
Panteleit, Björn; Jensen, Sven; Seiter, Katherina; Siebert, Yvonne
2018-01-01
A 3D structural model was created for the state of Bremen based on an extensive borehole database. Parameters were assigned to the model by interpretation and interpolation of the borehole descriptions. This structural model was transferred into a flow model via the FREEWAT platform, an open-source plug-in of the free QGIS software, with connection to the MODFLOW code. This groundwater management tool is intended for long-term use. As a case study for the FREEWAT Project, possible effects of climate change on groundwater levels in the Bremerhaven area have been simulated. In addition to the calibration year 2010, scenarios with a sea-level rise and decreasing groundwater recharge were simulated for the years 2040, 2070 and 2100. In addition to seawater intrusion in the coastal area, declining groundwater levels are also a concern. Possibilities for future groundwater management already include active control of the water level of a lake and the harbor basin. With the help of a focused groundwater monitoring program based on the model results, the planned flow model can become an important forecasting tool for groundwater management within the framework of the planned continuous model management and for representing the effects of changing climatic conditions and mitigation measures.
Axial Seamount Relative Eruption Timing Constraints Based on Paleointensity Data
NASA Astrophysics Data System (ADS)
Bowles, J. A.; Dreyer, B. M.; Clague, D. A.
2013-12-01
Axial Seamount, located on the Juan de Fuca Ridge in the northeast Pacific, is one of the most extensively studied seamounts in the world. High-resolution mapping and camera imagery by remotely operated vehicle (ROV) have allowed for the creation of a geologic map of the caldera. Individual flow fields have been identified, and relative ages have been assigned based on ROV observations. Some constraints on absolute age have been obtained by 14C dating of the overlying sediments, and flows with inadequate sediment to sample are assumed to be less than 300 years old. To refine relative age relationships between flow fields, geomagnetic paleointensity recorded in basaltic glass is compared with models of field behavior over the past ~1,000 years. Thellier-type paleointensity experiments were carried out on samples from within Axial caldera. Paleointensity results from the 2011 Axial eruption give a paleofield value of 46.0 ± 4.5 μT compared to the IGRF value of 52.1 μT. This suggests that the geodynamo-produced field is being locally distorted by the pre-existing magnetic topography of Axial seamount. Long-wavelength distortion may arise from the large seamount edifice itself, or short-wavelength distortion may arise from small scale (meters to 10s of meters) roughness in the surface flows. The dominance of long-wavelength distortion is implied by an analysis of samples from other flows within the Axial caldera. Within each flow, the paleointensity values are relatively tightly clustered compared to the overall scatter in the data, suggesting that short-wavelength distortion is minimized. These flows are thought to be less than a few hundred years old, and over this time period, the strength of the geomagnetic field should be monotonically decreasing. Such a decreasing trend is recovered in paleointensity results from flows in the north, south, and east caldera regions, supporting the relative age interpretations made from ROV observations. 
However, all paleointensity values are lower than expected. This is broadly consistent with sea-surface observations of a magnetic anomaly low over the Axial summit. A regional negative anomaly in the caldera will be further tested by analysis of near-bottom magnetometer data.
Fast Laplace solver approach to pore-scale permeability
NASA Astrophysics Data System (ADS)
Arns, C. H.; Adler, P. M.
2018-02-01
We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale, using a Poiseuille-equation-based approximation to compute permeability to fluid flow with a Laplace solver. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities and lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
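A minimal version of the idea, a distance map of the pore phase used to assign local conductivities, then a Laplace solve for pressure with a flux readout proportional to permeability, might look like the following. This is a sketch of the approach (damped Jacobi on a tiny 2D grid, brute-force distance transform), not the authors' solver.

```python
import numpy as np

def distance_map(pore):
    """Brute-force Euclidean distance from each pore voxel to the
    nearest solid voxel (fine for tiny demo grids)."""
    solid = np.argwhere(~pore)
    d = np.zeros(pore.shape)
    for idx in np.argwhere(pore):
        d[tuple(idx)] = np.sqrt(((solid - idx) ** 2).sum(axis=1)).min()
    return d

def laplace_flux(cond, iters=2000):
    """Solve a Laplace problem on a 2D conductivity grid with a unit
    pressure drop left->right and no-flow solid cells (cond == 0),
    using Jacobi iteration with harmonic-mean face conductivities.
    Returns the net flux, which is proportional to permeability."""
    ny, nx = cond.shape
    p = np.tile(np.linspace(1.0, 0.0, nx), (ny, 1))
    open_ = cond > 0
    for _ in range(iters):
        num = np.zeros_like(p)
        den = np.zeros_like(p)
        for axis, shift in [(1, 1), (1, -1), (0, 1), (0, -1)]:
            cn = np.roll(cond, shift, axis=axis)
            pn = np.roll(p, shift, axis=axis)
            # harmonic-mean face conductivity; zero against solid cells
            g = np.where(cond + cn > 0,
                         2 * cond * cn / np.maximum(cond + cn, 1e-30), 0.0)
            num += g * pn
            den += g
        new = np.where(den > 0, num / np.maximum(den, 1e-30), p)
        new[:, 0], new[:, -1] = 1.0, 0.0   # fixed-pressure boundaries
        p = np.where(open_, new, p)
    # flux through the faces between the first two columns
    g = 2 * cond[:, 0] * cond[:, 1] / np.maximum(cond[:, 0] + cond[:, 1], 1e-30)
    return float((g * (p[:, 0] - p[:, 1])).sum())

# demo: a straight channel two cells wide between solid walls
pore = np.zeros((4, 5), dtype=bool)
pore[1:3, :] = True
flux = laplace_flux(distance_map(pore) ** 2)
```

For this uniform channel the pressure field is linear and the flux can be checked against the analytic value by hand.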
Comparing and Contrasting Siblings: Defining the Self.
ERIC Educational Resources Information Center
Schachter, Frances Fuchs; Stone, Richard K.
1987-01-01
Deidentification is the phenomenon whereby siblings are defined as different or contrasting. In pathological deidentification, the natural flow of sibling conflict and reconciliation seems obstructed as one sibling is assigned the fixed identity of "devil," who constantly harasses the other, "angel," sibling. A clinical…
Merritt, M.L.
1995-01-01
A digital model of the flow system in the highly permeable surficial aquifer of southern Dade County, Florida, was constructed for the purposes of better understanding processes that influence the flow system and of supporting the construction of a subregional model of the transport of brackish water from a flowing artesian well. Problems that needed resolution in this endeavor included the development of methods to represent the influence of flowing surface water in seasonally inundated wetlands and the influence of a network of controlled canals developed in stages during the simulation time period (water years 1945-89). An additional problem was the general lack of natural aquifer boundaries near the boundaries of the study area. The model construction was based on a conceptual description of the Biscayne aquifer developed from the results of previous U.S. Geological Survey investigations. Modifications were made to an existing three- dimensional finite-difference simulator of ground- water flow to enable an upper layer of the grid to represent seasonally occurring overland sheetflow in a series of transient simulations of water levels from 1945 to 1989. A rewetting procedure was developed for the simulator that permitted resaturation of cells in this layer when the wet season recurred. An "equivalent hydraulic conductivity" coefficient was assigned to the overland flow layer that was analogous, subject to various approximations, to the use of the Manning equation. The surficial semiconfining peat and marl layers, levees, canals, and control structures were also represented as part of the model grid with the appropriate choices of hydraulic coefficient values. For most of the Biscayne aquifer grid cells, the value assigned to hydraulic conductivity for model calibration was 30,000 feet per day and the value assigned to porosity was 20 percent. 
Boundary conditions were specified near data sites having long-term records of surface-water stages or water-table altitudes, and modifications to the simulator permitted the specification of time- varying pressures at boundary grid cells. Rainfall data from a station in Homestead generally were used as an areally uniform rainfall specification throughout the modeled region. Maximum evapotranspiration rates ranged seasonally from a minimum of 0.08 inch per day in January to a maximum of 0.21 inch per day between June and October. Shallow-root and deep-root zone depths for the evapotranspiration calculation were 3 and 20 feet in the coastal ridge and were 0.10 and 5 feet in the glades regions where peat and marl covers occurred. Results of sensitivity analyses indicated that the simulations of stages and water levels were relatively unresponsive to 50 percent changes in aquifer hydraulic conductivity, porosity, and the equivalent hydraulic conductivity of overland flow. However, 20 percent changes in rainfall and maximum evapotranspiration rates produced significantly different water levels, as did interchange of coastal ridge and glades deep-root zone (extinction) depths. Water levels were simulated very well at most measurement sites. Sensitivity analyses illustrated the significant influence of the uncontrolled agricultural drainage canals on pre- 1968 regional water levels and the further influence of Black Creek Canal in draining a region of high water after 1961. Other analyses indicated that the flood-control system of 1968-82 lowered peak water levels in the affected region by as much as 1.5 feet in the wet summers of 1968, 1969, and 1981, and that Levee 67 Extended channeled flows from the S-12 spillway structures and raised overland flow stages in Shark River Slough. 
Hypothetical scenarios of well-field pumping in the vicinity of Levee 31N indicated that the pumping induced a significant amount of recharge from the adjacent borrow canal, the degree of which depended on the distance between the canal and the well field. The computed ratio of evapotranspiration to ra
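The "equivalent hydraulic conductivity" assigned to the overland-flow layer, analogous to the Manning equation, can be derived by requiring the Darcy flux K·S to reproduce the Manning discharge per unit width. The sketch below shows one common form of that derivation; the report's exact formulation may differ in detail.

```python
def manning_velocity(n, depth, slope):
    """Manning's equation for wide sheet flow (hydraulic radius ~ depth),
    SI units: velocity in m/s."""
    return (1.0 / n) * depth ** (2.0 / 3.0) * slope ** 0.5

def equivalent_k(n, depth, slope):
    """Equivalent hydraulic conductivity for an overland-flow model layer,
    chosen so the Darcy flux K*S matches the Manning discharge per unit
    width q = V*d, giving K = d^(5/3) / (n * sqrt(S))."""
    return manning_velocity(n, depth, slope) * depth / slope
```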
NASA Astrophysics Data System (ADS)
Saffer, D. M.; McKiernan, A. W.; Skarbek, R. M.
2008-12-01
Characterizing dewatering pathways and chemical fluxes near and outboard of subduction trenches is important toward understanding early sediment dewatering and devolatilization. Quantifying fluid flow rates also constrains the hydraulic gradients driving flow, and thus ultimately holds implications for pore pressure distribution and fault mechanical strength. We focus on the well-studied Nankai Trough offshore SW Japan, where drilling has sampled the sedimentary section at several boreholes from ~11 km outboard of the trench to 3 km landward. At these drillsites, δ37Cl data and correlation of distinct extrema in downhole chloride profiles have been interpreted to reflect substantial horizontal fluid flow to >10 km outboard of the trench within the ~400 m-thick, homogeneous Lower Shikoku Basin (LSB) facies mudstone. The estimated horizontal velocities are 13 ± 5 cm yr-1; the flow is presumably driven by loading during subduction, and mediated by either permeable conduits or strong anisotropy in permeability. However, the pressure gradients and sediment permeabilities necessary for such flow have not been quantified. Here, we address this problem by combining (1) laboratory measurement of horizontal and vertical sediment permeability from a combination of constant rate of strain (CRS) consolidation tests and flow-through measurements on core samples; and (2) numerical models of fluid flow within a cross section perpendicular to the trench. In our models, we assign hydrostatic pressure at the top and seaward edges, a no-flow condition at the base of the sediments, and pore pressures ranging from 40%-100% of lithostatic at the arcward model boundary. We assign sediment permeability on the basis of our laboratory measurements, and evaluate the possible role of thin permeable conduits as well as strong anisotropy in the incoming section. Our laboratory results define a systematic log-linear relationship between sediment permeability and porosity within the LSB mudstones. 
The overall variation in permeability for our suite of samples is ~1 order of magnitude. Notably, horizontal permeabilities fall within the range of measured vertical permeabilities, and indicate no significant anisotropy. Using laboratory-derived permeability values, simulated horizontal flow rates range from 10-4 to 10-1 cm yr-1, and decrease dramatically with distance seaward of the trench. With permeability anisotropy of 1000x (i.e. kh = 1000kv), simulated flow rates peak at 3 cm yr-1 at the trench, and decrease to 3x10-1 cm yr-1 by 10 km seaward. These flow rates are substantially lower than those inferred from the geochemical data and also lower than the plate convergence rate of 4 cm yr-1, such that net transport of fluids out of the subduction zone is not likely. If discrete conduits are included in our models, permeabilities of ~10-114m2 are required to sustain the inferred flow rates. However, no potential conduits in the LSB were observed by coring or logging-while-drilling. In contrast, net egress of fluids - and associated chemical transport and pressure translation - are plausible at margins where continuous permeable strata are subducting. Overall, our results highlight a major discrepancy between constraints on fluid flow derived from physical hydrogeology and inferences from geochemical data. In this case, we suggest that the chemical signals may be affected by other processes such as in situ clay dehydration and down-section chemical variations.
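The log-linear permeability-porosity relation and the resulting Darcy flow rates above can be sketched as follows. The coefficients a and b, and the sample pressure gradient, are placeholders for illustration, not the fitted laboratory values.

```python
def permeability(phi, a=-20.0, b=6.0):
    """Log-linear permeability-porosity form reported for the LSB
    mudstones: log10(k [m^2]) = a + b*phi. a and b are illustrative."""
    return 10.0 ** (a + b * phi)

def darcy_velocity(k, dP_dx, mu=1.0e-3):
    """Horizontal Darcy flux q = (k/mu)*dP/dx for SI inputs
    (k in m^2, dP/dx in Pa/m, mu in Pa*s), converted to cm/yr for
    comparison with the geochemically inferred rates."""
    q = (k / mu) * dP_dx                      # m/s
    return q * 100.0 * 365.25 * 24 * 3600     # cm/yr
```

Even with a generous pressure gradient, laboratory-scale mudstone permeabilities yield flow rates far below the 13 ± 5 cm/yr inferred geochemically, which is the discrepancy the abstract highlights.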
NASA Astrophysics Data System (ADS)
Ellwood, Robin B.
This research investigated how student social interactions within two approaches to an inquiry-based science curriculum could be related to student motivation and achievement outcomes. This qualitative case study consisted of two cases, Off-Campus and On-Campus, and used ethnographic techniques of participant observation. Research participants included eight eighth grade girls, aged thirteen to fourteen years. Data sources included formal and informal participant interviews, participant journal reflections, curriculum artifacts including quizzes, worksheets, and student-generated research posters, digital video and audio recordings, photographs, and researcher field notes. Data were transcribed verbatim and coded, then collapsed into emergent themes using NVIVO 9. The results of this research illustrate how setting conditions that promote focused concentration and communicative interactions can be positively related to student motivation and achievement outcomes in inquiry-based science. Participants in the Off-Campus case experienced more frequent states of focused concentration and outperformed their peers in the On-Campus case on forty-six percent of classroom assignments. Off-Campus participants also designed and implemented a more cognitively complex research project, provided more in-depth analyses of their research results, and expanded their perceptions of what it means to act like a scientist to a greater extent than participants in the On-Campus case. These results can be understood in relation to Flow Theory. Student interactions that promoted the criteria necessary for initiating flow, which included having clearly defined goals, receiving immediate feedback, and maintaining a balance between challenges and skills, fostered enhanced student motivation and achievement outcomes. 
This research also illustrates the positive gains in motivation and achievement outcomes that emerge from student experiences with extended time in isolated areas referred to as "hot spots." Implications for science teaching and future research include shifting the current focus in inquiry-based science from a continuum that progresses from teacher-directed to open inquiry experiences to a continuum that also deliberately includes and promotes the necessary criteria for establishing flow. Attending to Flow Theory and incorporating student experiences with flow into inquiry-based science lessons will enhance student motivation and achievement outcomes in science and bolster the success of inquiry-based science.
Stormflow generation: a meta-analysis of field studies and research catchments
NASA Astrophysics Data System (ADS)
Barthold, Frauke; Elsenbeer, Helmut
2014-05-01
Runoff characteristics are expressions of runoff generation mechanisms. In this study, we test the hypothesis that storm hydrographs of catchments with prevailing near-surface flow paths are dominated by new water, using published data from the scientific literature. We developed a classification system based on three runoff characteristics: (1) hydrograph response (HR: slowly or quickly), (2) the temporal source of the water that dominates the hydrograph (TS: pre-event vs. event water) and (3) the flow paths that the water takes until it is released to the stream (FP: subsurface vs. surface flow paths). We then performed a literature survey to collect information on these runoff characteristics for small, forested headwater catchments that served as study areas in runoff generation studies, and assigned each study catchment to one of the eight resulting classes. For this purpose, we designed a procedure to objectively diagnose the predominant conceptual model of stormflow generation in each catchment and to assess its temporal and spatial relevance for the catchment. Finally, we performed an explorative analysis of the classified research catchments and summarized the field evidence. Our literature survey yielded a sample of 22 research catchments that met our criteria (small, naturally forested catchments that served as study areas in stormflow generation studies). We applied our classification procedure to all of these catchments. For 14 of them, the meta-analysis yielded a complete set of stormflow characteristics corresponding to one of the eight model concepts, allowing assignment within our classification scheme. Of the 14 classified research catchments, 10 were dominated by subsurface flow paths while 4 were dominated by overland flow. The data also indicate that the spatial and temporal relevance of the dominant model is high for catchments with subsurface flow paths, but often weak for catchments dominated by surface flow paths. 
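The three binary runoff characteristics (HR, TS, FP) described above combine into 2 × 2 × 2 = 8 classes; the enumeration can be sketched as follows (labels abbreviated from the text):

```python
# Sketch of the three-characteristic classification (HR, TS, FP): each
# catchment maps to one of 8 conceptual models of stormflow generation.

from itertools import product

HR = ("slowly", "quickly")      # hydrograph response
TS = ("pre-event", "event")     # temporal source of stormflow water
FP = ("subsurface", "surface")  # dominant flow paths

CLASSES = list(product(HR, TS, FP))  # 2 x 2 x 2 = 8 model concepts

def classify(hr, ts, fp):
    """Return the class index (0-7) for one catchment."""
    return CLASSES.index((hr, ts, fp))

print(len(CLASSES))                            # -> 8
print(classify("quickly", "event", "surface")) # -> 7
```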
The catalogue of catchments supports our hypothesis; however, it is afflicted with a relatively high degree of uncertainty. Two theories may explain the imbalance between surface- and subsurface-dominated catchments: (1) the selection of research sites for stormflow generation studies was guided by the leading research question in hydrology, i.e. to address the "old water paradox", and (2) catchments with prevailing subsurface flow paths are much more common in nature. In a next step, the proposed catalogue of research catchments allows correlation of environmental characteristics with runoff characteristics to address questions of catchment organization and similarity. However, the successful application and relevance of such an approach depends on the range of conceptual models for which field support exists. Our results prompt us to highlight future research needs: (1) in order to cover a broader range of combinations of runoff characteristics, a careful selection of research sites is necessary, and (2) guidelines for field studies should be proposed in order to achieve higher comparability of the resulting conceptual models of research sites and to increase the spatial and temporal relevance of the dominant conceptual model.
Wise, Robert A; Bartlett, Susan J; Brown, Ellen D; Castro, Mario; Cohen, Rubin; Holbrook, Janet T; Irvin, Charles G; Rand, Cynthia S; Sockrider, Marianna M; Sugar, Elizabeth A
2009-09-01
Information that enhances expectations about drug effectiveness improves the response to placebos for pain. Although asthma symptoms often improve with placebo, it is not known whether the response to placebo or active treatment can be augmented by increasing expectation of benefit. The study objective was to determine whether response to placebo or a leukotriene antagonist (montelukast) can be augmented by messages that increase expectation of benefit. A randomized, 20-center, controlled trial enrolled 601 asthmatic patients with poor symptom control, who were assigned to one of 5 study groups: 4 treatment groups in a factorial design (placebo with enhanced messages, placebo with neutral messages, montelukast with enhanced messages, or montelukast with neutral messages) or usual care. Assignment to study drug was double masked, assignment to message content was single masked, and usual care was not masked. The enhanced message aimed to increase expectation of benefit from the drug. The primary outcome was mean change in daily peak flow over 4 weeks. Secondary outcomes included lung function and asthma symptom control. Peak flow and other lung function measures were not improved in participants assigned to the enhanced message groups versus the neutral message groups for either montelukast or placebo; no differences were noted between the neutral placebo and usual care groups. Placebo-treated participants had improved asthma control with the enhanced message, but montelukast-treated participants did not; the neutral placebo group did have improved asthma control compared with the usual care group after adjusting for baseline differences. Headaches were more common in participants given messages that mentioned headache as a montelukast side effect. Optimistic drug presentation augments the placebo effect for patient-reported outcomes (asthma control) but not lung function. 
However, the effect of montelukast was not enhanced by optimistic messages regarding treatment effectiveness.
NASA Astrophysics Data System (ADS)
Gelhausen, Elmar; Hinz, Klaus-Peter; Schmidt, Andres; Spengler, Bernhard
2011-10-01
A single particle mass spectrometer, LAMPAS 2 (Laser Mass Analyzer for Particles in the Airborne State), was combined with an ultrasonic anemometer to provide a measurement system for monitoring environmental substance exchange caused by emission/deposition of aerosol particles. For this study, 681 mass spectra of detected particles were sorted into groups of similarity by a clustering algorithm, leading to five classes of different particle types. Each single mass spectrum was correlated with corresponding anemometer data (vertical wind vector and wind speed) in a time-resolved analysis. Due to sampling constraints, time resolution was limited to 36 s as a result of transition-time distributions through the sampling tube. Vertical particle flow (emission/deposition) was determined for all particles based on these data, acquired during a measuring campaign in Giessen, Germany. To validate the developed approach, a detailed analysis of upward and downward flow was performed for a selected particle class. Particle flow of that class was dominated by an emission trend, as expected. The presented combination of single-particle mass spectrometry and ultrasonic anemometry makes it possible to correlate chemical particle data and wind data in a distinct assignment for the description of turbulent particle behavior near the Earth's surface. The results demonstrate that the method can be applied to real micrometeorological systems if sampling issues are properly considered for the intended time resolution.
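The emission/deposition determination pairs time-resolved particle detections with vertical wind data. In the spirit of eddy covariance, a per-class vertical flux estimate might be sketched as follows (synthetic data; not necessarily the authors' exact algorithm):

```python
# Hedged sketch: estimate vertical particle flux for one particle class as
# the mean product of vertical-wind and particle-count fluctuations, in the
# spirit of eddy covariance. All data below are synthetic.

def mean(xs):
    return sum(xs) / len(xs)

def vertical_flux(w, c):
    """w: vertical wind speeds (m/s), c: particle counts per interval.
    Positive flux = net upward transport (emission); negative = deposition."""
    wm, cm = mean(w), mean(c)
    return mean([(wi - wm) * (ci - cm) for wi, ci in zip(w, c)])

# Synthetic example: counts rise when the wind points upward -> emission.
w = [0.2, -0.1, 0.3, -0.2, 0.1]
c = [5, 2, 6, 1, 4]
print(vertical_flux(w, c) > 0)  # -> True
```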
Towards catchment classification in data-scarce regions
Auerbach, Daniel A.; Buchanan, Brian P.; Alexiades, Alex V.; ...
2016-01-29
Assessing spatial variation in hydrologic processes can help to inform freshwater management and advance ecological understanding, yet many areas lack sufficient flow records on which to base classifications. Seeking to address this challenge, we apply concepts developed in data-rich settings to public, global data in order to demonstrate a broadly replicable approach to characterizing hydrologic variation. The proposed approach groups the basins associated with reaches in a river network according to key environmental drivers of hydrologic conditions. This initial study examines Colorado (USA), where long-term streamflow records permit comparison to previously distinguished flow regime types, and the Republic of Ecuador, where data limitations preclude such analysis. The flow regime types assigned to gages in Colorado corresponded reasonably well to the classes distinguished from environmental features. The divisions in Ecuador reflected major known biophysical gradients while also providing a higher resolution supplement to an existing depiction of freshwater ecoregions. Although freshwater policy and management decisions occur amidst uncertainty and imperfect knowledge, this classification framework offers a rigorous and transferable means to distinguish catchments in data-scarce regions. The maps and attributes of the resulting ecohydrologic classes offer a departure point for additional study and data collection programs, such as the placement of stations in under-monitored classes, and the divisions may serve as a preliminary template with which to structure conservation efforts such as environmental flow assessments.
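Grouping basins by environmental drivers, as described above, amounts to an unsupervised classification. A minimal sketch with a toy two-cluster k-means and made-up feature values (not the study's actual features or algorithm) might look like:

```python
# Illustrative sketch: group basins by environmental drivers using a tiny
# two-cluster k-means. The two features (e.g. standardized elevation and
# precipitation) and all values are hypothetical.

def kmeans2(points, iters=20):
    """Tiny 2-cluster k-means; initial centers are the first and last point."""
    centers = [points[0], points[-1]]
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)  # assign to nearest center
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else c
                   for c, g in zip(centers, groups)]  # recompute centroids
    return groups

# Six hypothetical basins forming two obvious groups in feature space:
basins = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15), (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
groups = kmeans2(basins)
print([len(g) for g in groups])  # -> [3, 3]
```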
DOT National Transportation Integrated Search
2010-02-01
This project developed a methodology to couple a new pollutant dispersion model with a traffic assignment process to contain air pollution while maximizing mobility. The overall objective of the air quality modeling part of the project is to deve...
Random Assignment: Practical Considerations from Field Experiments.
ERIC Educational Resources Information Center
Dunford, Franklyn W.
1990-01-01
Seven qualitative issues associated with randomization that have the potential to weaken or destroy otherwise sound experimental designs are reviewed and illustrated via actual field experiments. Issue areas include ethics and legality, liability risks, manipulation of randomized outcomes, hidden bias, design intrusiveness, case flow, and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nettesheim, D.G.; Klevit, R.E.; Drobny, G.
1989-02-21
The authors report the sequential assignment of resonances to specific residues in the proton nuclear magnetic resonance spectrum of the variant-3 neurotoxin from the scorpion Centruroides sculpturatus Ewing (range southwestern U.S.A.). A combination of two-dimensional NMR experiments such as 2D-COSY, 2D-NOESY, and single- and double-RELAY coherence transfer spectroscopy has been employed on samples of the protein dissolved in D₂O and in H₂O for assignment purposes. These studies provide a basis for the determination of the solution-phase conformation of this protein and for undertaking detailed structure-function studies of these neurotoxins that modulate the flow of sodium current by binding to the sodium channels of excitable membranes.
Gray, Joel; Kerfoot, Karlene
2016-01-01
Finding the balance of equitable assignments continues to be a challenge for health care organizations seeking to leverage evidence-based leadership practices. Ratios and subjective acuity strategies for nurse-patient staffing continue to be the dominant approach in health care organizations. In addition to ratio-based assignments and acuity-based assignment models driven by financial targets, more emphasis on using evidence-based leadership strategies to manage and create science for effective staffing is needed. In particular, nurse leaders are challenged to increase the sophistication of management of patient turnover (admissions, discharges, and transfers) and integrate tools from Lean methodologies and quality management strategies to determine the effectiveness of nurse-patient staffing.
Comparing Looping Teacher-Assigned and Traditional Teacher-Assigned Student Achievement Scores
ERIC Educational Resources Information Center
Lloyd, Melissa C.
2014-01-01
A problem in many elementary schools is determining which teacher assignment strategy best promotes the academic progress of students. To find and implement educational practices that address the academic needs of all learners, schools need research-based data focusing on the 2 teacher assignment strategies: looping assignment (LA) and traditional…
Independent Orbiter Assessment (IOA): Analysis of the purge, vent and drain subsystem
NASA Technical Reports Server (NTRS)
Bynum, M. C., III
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter PV and D (Purge, Vent and Drain) Subsystem hardware. The PV and D Subsystem controls the environment of unpressurized compartments and window cavities, senses hazardous gases, and purges Orbiter/ET Disconnect. The subsystem is divided into six systems: Purge System (controls the environment of unpressurized structural compartments); Vent System (controls the pressure of unpressurized compartments); Drain System (removes water from unpressurized compartments); Hazardous Gas Detection System (HGDS) (monitors hazardous gas concentrations); Window Cavity Conditioning System (WCCS) (maintains clear windows and provides pressure control of the window cavities); and External Tank/Orbiter Disconnect Purge System (prevents cryo-pumping/icing of disconnect hardware). Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Four of the sixty-two failure modes analyzed were determined as single failures which could result in the loss of crew or vehicle. A possible loss of mission could result if any of twelve single failures occurred. Two of the criticality 1/1 failures are in the Window Cavity Conditioning System (WCCS) outer window cavity, where leakage and/or restricted flow will cause failure to depressurize/repressurize the window cavity. 
Two criticality 1/1 failures represent leakage and/or restricted flow in the Orbiter/ET disconnect purge network, which prevents cryo-pumping/icing of disconnect hardware.
Ductal carcinoma of breast: nuclear grade as a predictor of S-phase fraction.
Dabbs, D J
1993-06-01
Nuclear grade (NG) and S-phase fraction (SPF) are established independent prognostic variables for ductal breast carcinomas. Nuclear grade can be assigned by a pathologist in a simple fashion during histopathologic evaluation of the tumor, while SPF requires flow cytometric evaluation of tumor samples. This prospective study was undertaken to determine whether elevated SPF could be predicted from NG alone and how NG and SPF correlate with c-erbB-2 expression. Eighty-two breast carcinomas of ductal type were assigned an NG of low (grade 1 or grade 2) or high (grade 3). S-phase fraction was recorded initially from fresh-frozen tissue samples and was designated as either low SPF (below the value designated as the cutoff for elevated SPF) or high SPF (a value at or greater than the cutoff value). On fresh tissue the NG predicted the range of SPF (low or high) in 89% of cases. Four percent of the cases that did not correlate could be definitively attributed to sample error. The remaining 7% that did not correlate could have been due to sample error, specimen quality, or tumor heterogeneity, as demonstrated by reversal of SPF range as performed on paraffin blocks of tumor. Eighty-eight percent of the tumors positive for c-erbB-2 were NG 3 and 12% were NG 2. All c-erbB-2-positive tumors were aneuploid. This study demonstrates the importance of carefully assigning NGs on tissue and indicates the importance of reviewing flow cytometric data side by side with histopathologic parameters to detect discrepancies between these two modalities. Careful nuclear grading assignment can accurately predict the range of SPF.
76 FR 34658 - The Internet Assigned Numbers Authority (IANA) Functions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-14
... raised concerns that short-term contracts create instability in the IANA functions process and would... political sustainability of an Internet that supports the free flow of information, goods, and services... account security and stability issues. Commenters were divided on whether the IANA functions should be...
A scheduling model for the aerial relay system
NASA Technical Reports Server (NTRS)
Ausrotas, R. A.; Liu, E. W.
1980-01-01
The ability of the Aerial Relay System to handle the U.S. transcontinental large hub passenger flow was analyzed with a flexible, interactive computer model. The model incorporated city pair time of day demand and a demand allocation function which assigned passengers to their preferred flights.
Development of a dynamic traffic assignment model to evaluate lane-reversal plans for I-65.
DOT National Transportation Integrated Search
2010-05-01
This report presents the methodology and results from a project that studied contra-flow operations in support of : hurricane evacuations in the state of Alabama. As part of this effort, a simulation model was developed using the : VISTA platform for...
Code of Federal Regulations, 2010 CFR
2010-10-01
... organizational level (e.g., designations and delegations of authority, assignments of responsibilities, work-flow....) as implemented in 5 CFR part 1320 (see 1.105) and the Regulatory Flexibility Act (5 U.S.C. 601, et seq.). Normally, when a law requires publication of a proposed regulation, the Regulatory Flexibility...
Design Document. EKG Interpretation Program.
ERIC Educational Resources Information Center
Webb, Sandra M.
This teaching plan is designed to assist nursing instructors assigned to advanced medical surgical nursing courses in acquainting students with the basic skills needed to perform electrocardiographic (ECG or EKG) interpretations. The first part of the teaching plan contains a statement of purpose; audience recommendations; a flow chart detailing…
Laboratory investigation on effects of flood intermittency on river delta dynamics
NASA Astrophysics Data System (ADS)
Miller, K. L.; Kim, W.
2015-12-01
In order to simplify the complex hydrological variability of flow conditions, experiments modeling delta evolution are often conducted using a representative "channel-forming" flood flow, with results related to field settings using an intermittency factor, defined as the fraction of total time at flood conditions. Although this intermittency factor makes it easier to investigate how variables, such as relative base level and/or sediment supply, affect delta dynamics, little is known about how this generalization to a single flow condition affects delta processes. We conducted a set of laboratory experiments with periodic flow conditions to determine the effects of intermittent discharges on delta evolution. During the experiments, a flood with a set water discharge and sediment supply alternates with periods of normal flow, during which the water flux is halved and the sediment discharge is turned off. For each run, the magnitude of the flood is held constant, but its duration is varied, giving intermittency factors between 1 and 0.2. We find that as the intermittency factor decreases (i.e. the duration of each flood period decreases), the delta topset has a larger, more elongated area with a shallower slope as a result of reworking of the delta topset during normal flow conditions. During periods of normal flow, the system adjusts towards a new equilibrium state that in turn acts as the initial condition for the subsequent flood period. Furthermore, the natural delta avulsion cycle becomes obscured by the flood cycles as the flood duration becomes shorter than the timescale of the autogenic behavior. These results suggest that the adjustment timescale for differing flow conditions is a factor in determining the overall shape of the delta and the behavior of the fluviodeltaic channels. We conclude that periods of normal flow, when topset sediment is reworked, may be just as important to delta dynamics as periods of flood, when sediment is supplied to the system.
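The intermittency factor defined above implies a simple time scaling: a duration t_flood spent at flood conditions corresponds to a total elapsed (flood plus normal flow) duration of t_flood divided by the intermittency factor. A minimal sketch with illustrative numbers:

```python
# Minimal sketch of intermittency-factor time scaling: if a fraction I_f of
# total time is spent at flood conditions, then t_total = t_flood / I_f.
# All durations below are illustrative.

def total_duration(t_flood, intermittency):
    """Total elapsed time (flood + normal flow) implied by t_flood."""
    if not 0.0 < intermittency <= 1.0:
        raise ValueError("intermittency factor must be in (0, 1]")
    return t_flood / intermittency

print(total_duration(10.0, 1.0))   # -> 10.0 (continuous flood)
print(total_duration(10.0, 0.25))  # -> 40.0 (flood a quarter of the time)
```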
Rong, Jun; Xu, Shuhua; Meirmans, Patrick G.; Vrieling, Klaas
2013-01-01
Background and Aims Transgene introgression from crops into wild relatives may increase the resistance of wild plants to herbicides, insects, etc. The chance of transgene introgression depends not only on the rate of hybridization and the establishment of hybrids in local wild populations, but also on the metapopulation dynamics of the wild relative. The aim of the study was to estimate gene flow in a metapopulation for assessing and managing the risks of transgene introgression. Methods Wild carrots (Daucus carota) were sampled from 12 patches in a metapopulation. Eleven microsatellites were used to genotype wild carrots. Genetic structure was estimated based on the FST statistic. Contemporary (over the last several generations) and historical (over many generations) gene flow was estimated with assignment and coalescent methods, respectively. Key Results The genetic structure in the wild carrot metapopulation was moderate (FST = 0·082) and most of the genetic variation resided within patches. A pattern of isolation by distance was detected, suggesting that most of the gene flow occurred between neighbouring patches (≤1 km). The mean contemporary gene flow was 5 times higher than the historical estimate, and the correlation between them was very low. Moreover, the contemporary gene flow in roadsides was twice that in a nature reserve, and the correlation between contemporary and historical estimates was much higher in the nature reserve. Mowing of roadsides may contribute to the increase in contemporary gene flow. Simulations demonstrated that the higher contemporary gene flow could accelerate the process of transgene introgression in the metapopulation. Conclusions Human disturbance such as mowing may alter gene flow patterns in wild populations, affecting the metapopulation dynamics of wild plants and the processes of transgene introgression in the metapopulation. 
The risk assessment and management of transgene introgression and the control of weeds need to take metapopulation dynamics into consideration. PMID:24052560
Streamflow gain/loss in the Republican River basin, Nebraska, March 1989
Johnson, Michaela R.; Stanton, Jennifer S.; Cornwall, James F.; Landon, Matthew K.
2002-01-01
This arc and point data set contains streamflow measurement sites and reaches indicating streamflow gain or loss under base-flow conditions along the Republican River and tributaries in Nebraska during March 21 to 22, 1989 (Boohar and others, 1990). These measurements were made to obtain data on ground-water/surface-water interaction. Flow was visually observed to be zero, was measured, or was estimated at 136 sites. The measurements were made on the main stem of the Republican River and all flowing tributaries that enter the Republican River above Swanson Reservoir and parts of the Frenchman, Red Willow, and Medicine Creek drainages in the Nebraska part of the Republican River Basin. Tributaries were followed upstream until the first road crossing where zero flow was encountered. For selected streams, points of zero flow upstream of the first zero flow site were also checked. Streamflow gain or loss for each stream reach was calculated by subtracting the streamflow values measured at the upstream end of the reach and values for contributing tributaries from the downstream value. The data obtained reflected base-flow conditions suitable for estimating streamflow gains and losses for stream reaches between sites. This digital data set was created by manually plotting locations of streamflow measurements. These points were used to designate stream-reach segments to calculate gain/loss per river mile. Reach segments were created by manually splitting the lines from a 1:250,000 hydrography data set (Soenksen and others, 1999) at every location where the streams were measured. Each stream-reach segment between streamflow-measurement sites was assigned a unique reach number. All other lines in the hydrography data set without reach numbers were omitted. 
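The reach gain/loss arithmetic described above (downstream measurement minus upstream measurement minus contributing tributary inflows) can be sketched as follows; the discharge values are hypothetical:

```python
# Sketch of the reach gain/loss calculation: gain equals the downstream
# streamflow minus the upstream streamflow minus the sum of contributing
# tributary inflows. Discharge values are hypothetical.

def reach_gain(downstream, upstream, tributaries=()):
    """Positive result = gaining reach; negative = losing reach."""
    return downstream - upstream - sum(tributaries)

print(reach_gain(120.0, 100.0, (5.0, 3.0)))  # -> 12.0 (gaining reach)
print(reach_gain(90.0, 100.0))               # -> -10.0 (losing reach)
```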
This data set was created to archive the calculated streamflow gains and losses of selected streams in part of the Republican River Basin, Nebraska in March 1989, and make the data available for use with geographic information systems (GIS). If measurement sites are used separately from reaches, the maximum scale of 1:100,000 should not be exceeded. When used in conjunction with the reach segments, the maximum scale should not exceed 1:250,000.
A Service-Based Program Evaluation Platform for Enhancing Student Engagement in Assignments
ERIC Educational Resources Information Center
Wu, Ye-Chi; Ma, Lee Wei; Jiau, Hewijin Christine
2013-01-01
Programming assignments are commonly used in computer science education to encourage students to practice target concepts and evaluate their learning status. Ensuring students are engaged in such assignments is critical in attracting and retaining students. To this end, WebHat, a service-based program evaluation platform, is introduced in this…
Understanding Test-Type Assignment: Why Do Special Educators Make Unexpected Test-Type Assignments?
ERIC Educational Resources Information Center
Cho, Hyun-Jeong; Kingston, Neal
2014-01-01
We interviewed special educators (a) whose students with disabilities (SWDs) were proficient on the 2008 general education assessment but were assigned to the 2009 alternate assessment based on modified achievement standards (AA-MAS), and (b) whose students with mild disabilities took the 2008 alternate assessment based on alternate achievement…
ERIC Educational Resources Information Center
Bifulco, Robert
2012-01-01
The ability of nonexperimental estimators to match impact estimates derived from random assignment is examined using data from the evaluation of two interdistrict magnet schools. As in previous within-study comparisons, nonexperimental estimates differ from estimates based on random assignment when nonexperimental estimators are implemented…
Russ, Daniel E.; Ho, Kwan-Yuet; Colt, Joanne S.; Armenti, Karla R.; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P.; Karagas, Margaret R.; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T.; Johnson, Calvin A.; Friesen, Melissa C.
2016-01-01
Background Mapping job titles to standardized occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiologic studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Methods Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14,983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in two occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. Results For 11,991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6- and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (kappa: 0.6–0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Conclusions Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiologic studies. PMID:27102331
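The scoring step described above can be sketched schematically: each candidate SOC code receives component scores from the job-title, task, and industry classifiers, which are combined with logistic weights, and the highest-scoring code is assigned. The weights, intercept, component scores, and SOC codes below are hypothetical, not SOCcer's actual fitted model:

```python
# Schematic sketch of combining classifier scores with logistic weights
# and assigning the highest-scoring SOC code. All numbers and codes are
# hypothetical illustrations, not SOCcer's fitted parameters.

import math

WEIGHTS = {"title": 2.0, "task": 1.5, "industry": 0.5}  # illustrative
INTERCEPT = -3.0

def soc_score(components):
    """components: dict of classifier scores in [0, 1] for one SOC code."""
    z = INTERCEPT + sum(WEIGHTS[k] * components[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic transform

def assign_soc(candidates):
    """candidates: {soc_code: {"title": s1, "task": s2, "industry": s3}}."""
    return max(candidates, key=lambda code: soc_score(candidates[code]))

job = {
    "47-2061": {"title": 0.9, "task": 0.8, "industry": 0.7},  # strong match
    "53-7062": {"title": 0.3, "task": 0.4, "industry": 0.6},  # weak match
}
print(assign_soc(job))  # -> 47-2061
```

As in SOCcer, a score threshold could then flag low-confidence assignments for manual review.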
Liedtke, Theresa L.; Zimmerman, Mara S.; Tomka, Ryan G.; Holt, Curt; Jennings, Lyle
2016-09-14
Recent interest in flood control and restoration strategies in the Chehalis River Basin has increased the need to understand the current status and ecology of spring Chinook salmon. Based on the extended period between freshwater entry and spawn timing, spring Chinook salmon have the longest exposure of all adult Chinook salmon life histories to the low-flow and high water temperature conditions that typically occur during summer. About 100 adult spring Chinook salmon were found dead in the Chehalis River in July and August 2009. Adult Chinook salmon are known to hold in cool-water refugia during warm summer months, but the extent to which spring Chinook salmon might use thermal refugia in the Chehalis River is unknown. The movements and temperature exposures of adult spring Chinook salmon following their return to the Chehalis River were investigated using radiotelemetry and transmitters equipped with temperature sensors, combined with water temperature monitoring throughout the basin. A total of 23 spring Chinook salmon were radio-tagged between April and early July 2015; 11 were captured and released in the main-stem Chehalis River, and 12 were captured and released in the South Fork Newaukum River. Tagged fish were monitored with a combination of fixed-site monitoring locations and regular mobile tracking, from freshwater entry through the spawning period. Water temperature and flow conditions in the main-stem Chehalis River during 2015 were atypical compared to historical averages. Mean monthly water temperatures between March and July 2015 were higher than in any decade since 1960, and mean daily flows were 30–70 percent of the flows in previous years. Overall, 96 percent of the tagged fish were detected, with a mean of 62 d in the detection history of tagged fish. Of the 11 fish released in the main-stem Chehalis River, six fish (55 percent) moved upstream, either shortly after release (2–7 d, 50 percent), or following a short delay (12–18 d, 50 percent). 
One fish released in the main-stem Chehalis River remained near the release location for 64 d before moving upstream. The final fates for the seven fish that moved upstream in the main-stem Chehalis River included two fish with unknown fates, two fish with a fate of pre-spawn mortality, and three fish that were assigned a fate of spawner. Four (36 percent) of the radio-tagged Chinook salmon released in the main-stem Chehalis River showed limited movement from their release sites, and were assigned fates of unknown (one fish), pre-spawn mortality (one fish), and spit/mortality (two fish). The 12 spring Chinook salmon released in the South Fork Newaukum River remained in the South Fork Newaukum River throughout the study period. Five (42 percent) of these fish were actively moving through the spawning period and were assigned a fate of spawner. Seven (58 percent) of these fish were detected for a period following release, but their detection histories ended prior to the spawning period. The fates assigned to these seven fish included two fish with spit/mortality fates and five fish with fates of pre-spawn mortality. Tagged fish in both the Chehalis River and the South Fork Newaukum River showed limited movements during the peak water temperatures in July and August, and were not frequently detected at sites where water temperatures were greater than 21 °C. Pre-spawn mortality due to predation or harvest may be an important factor in the Chehalis River Basin, as it was the assigned fate for 27 percent of the fish released in the main-stem Chehalis River and 42 percent of the fish released in the South Fork Newaukum River. This study represents a substantial contribution to the understanding of spring Chinook salmon in the Chehalis River Basin. 
The water temperatures and flow conditions during the 2015 study period were not typical of the historical conditions in the basin, and the number of tagged fish monitored was relatively low, so the results should be interpreted with those cautions in mind.
MinION™ nanopore sequencing of environmental metagenomes: a synthetic approach
Brown, Bonnie L.; Watson, Mick; Minot, Samuel S.; Rivera, Maria C.; Franklin, Rima B.
2017-01-01
Abstract Background: Environmental metagenomic analysis is typically accomplished by assigning taxonomy and/or function from whole genome sequencing or 16S amplicon sequences. Both of these approaches are limited, however, by read length, among other technical and biological factors. A nanopore-based sequencing platform, MinION™, produces reads that are ≥1 × 10^4 bp in length, potentially providing for more precise assignment, thereby alleviating some of the limitations inherent in determining metagenome composition from short reads. We tested the ability of sequence data produced by MinION (R7.3 flow cells) to correctly assign taxonomy in single bacterial species runs and in three types of low-complexity synthetic communities: a mixture of DNA using equal mass from four species, a community with one relatively rare (1%) and three abundant (33% each) components, and a mixture of genomic DNA from 20 bacterial strains of staggered representation. Taxonomic composition of the low-complexity communities was assessed by analyzing the MinION sequence data with three different bioinformatic approaches: Kraken, MG-RAST, and One Codex. Results: Long read sequences generated from libraries prepared from single strains using the version 5 kit and chemistry, run on the original MinION device, yielded as few as 224 to as many as 3497 bidirectional high-quality (2D) reads with an average length of 6000 bp across the study. For the single-strain analyses, assignment of reads to the correct genus by different methods ranged from 53.1% to 99.5%, assignment to the correct species ranged from 23.9% to 99.5%, and the majority of misassigned reads were to closely related organisms. A synthetic metagenome sequenced with the same setup yielded 714 high quality 2D reads of approximately 5500 bp that were up to 98% correctly assigned to the species level. 
Synthetic metagenome MinION libraries generated using version 6 kit and chemistry yielded from 899 to 3497 2D reads with lengths averaging 5700 bp with up to 98% assignment accuracy at the species level. The observed community proportions for “equal” and “rare” synthetic libraries were close to the known proportions, deviating from 0.1% to 10% across all tests. For a 20-species mock community with staggered contributions, a sequencing run detected all but 3 species (each included at <0.05% of DNA in the total mixture), 91% of reads were assigned to the correct species, 93% of reads were assigned to the correct genus, and >99% of reads were assigned to the correct family. Conclusions: At the current level of output and sequence quality (just under 4 × 10^3 2D reads for a synthetic metagenome), MinION sequencing followed by Kraken or One Codex analysis has the potential to provide rapid and accurate metagenomic analysis where the consortium is composed of a limited number of taxa. Important considerations noted in this study included: high sensitivity of the MinION platform to the quality of input DNA, high variability of sequencing results across libraries and flow cells, and relatively small numbers of 2D reads per analysis. Together, these factors limited detection of very rare components of the microbial consortia, and would likely limit the utility of MinION for the sequencing of high-complexity metagenomic communities where thousands of taxa are expected. Furthermore, the limitations of the currently available data analysis tools suggest there is considerable room for improvement in the analytical approaches for the characterization of microbial communities using long reads. 
Nevertheless, the facts that taxonomic assignment accuracy for high-quality MinION reads approaches 99.5% and that, in most cases, the inferred community structure mirrors the known proportions of a synthetic mixture warrant further exploration of practical applications to environmental metagenomics as the platform continues to develop and improve. With further improvement in sequence throughput and error rate reduction, this platform shows great promise for precise real-time analysis of the composition and structure of more complex microbial communities. PMID:28327976
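The study's two headline numbers, per-read assignment accuracy and deviation of observed community proportions from the known mixture, reduce to simple counting. A minimal sketch, using made-up species labels and a hypothetical error pattern standing in for Kraken/One Codex calls on 2D reads:

```python
from collections import Counter

# Hypothetical per-read taxonomy calls for a synthetic 50/30/20 community;
# the labels and the misassignment pattern are illustrative only.
truth = ["E_coli"] * 50 + ["B_subtilis"] * 30 + ["P_aeruginosa"] * 20
predicted = (["E_coli"] * 48 + ["B_subtilis"] * 2      # 2 E. coli reads misassigned
             + ["B_subtilis"] * 28 + ["P_aeruginosa"] * 2
             + ["P_aeruginosa"] * 20)

def assignment_accuracy(truth, predicted):
    """Fraction of reads assigned to the correct taxon."""
    return sum(t == p for t, p in zip(truth, predicted)) / len(truth)

def proportion_deviation(predicted, known_fractions):
    """Absolute deviation of observed community proportions from the known mix."""
    counts = Counter(predicted)
    total = len(predicted)
    return {sp: abs(counts[sp] / total - f) for sp, f in known_fractions.items()}

known = {"E_coli": 0.50, "B_subtilis": 0.30, "P_aeruginosa": 0.20}
acc = assignment_accuracy(truth, predicted)      # per-read accuracy
dev = proportion_deviation(predicted, known)     # per-species proportion error
```

Note that accuracy and proportion deviation can disagree: reciprocal misassignments between two taxa lower accuracy while leaving the inferred proportions nearly unchanged.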
MinION™ nanopore sequencing of environmental metagenomes: a synthetic approach.
Brown, Bonnie L; Watson, Mick; Minot, Samuel S; Rivera, Maria C; Franklin, Rima B
2017-03-01
Environmental metagenomic analysis is typically accomplished by assigning taxonomy and/or function from whole genome sequencing or 16S amplicon sequences. Both of these approaches are limited, however, by read length, among other technical and biological factors. A nanopore-based sequencing platform, MinION™, produces reads that are ≥1 × 10^4 bp in length, potentially providing for more precise assignment, thereby alleviating some of the limitations inherent in determining metagenome composition from short reads. We tested the ability of sequence data produced by MinION (R7.3 flow cells) to correctly assign taxonomy in single bacterial species runs and in three types of low-complexity synthetic communities: a mixture of DNA using equal mass from four species, a community with one relatively rare (1%) and three abundant (33% each) components, and a mixture of genomic DNA from 20 bacterial strains of staggered representation. Taxonomic composition of the low-complexity communities was assessed by analyzing the MinION sequence data with three different bioinformatic approaches: Kraken, MG-RAST, and One Codex. Results: Long read sequences generated from libraries prepared from single strains using the version 5 kit and chemistry, run on the original MinION device, yielded as few as 224 to as many as 3497 bidirectional high-quality (2D) reads with an average length of 6000 bp across the study. For the single-strain analyses, assignment of reads to the correct genus by different methods ranged from 53.1% to 99.5%, assignment to the correct species ranged from 23.9% to 99.5%, and the majority of misassigned reads were to closely related organisms. A synthetic metagenome sequenced with the same setup yielded 714 high quality 2D reads of approximately 5500 bp that were up to 98% correctly assigned to the species level. 
Synthetic metagenome MinION libraries generated using version 6 kit and chemistry yielded from 899 to 3497 2D reads with lengths averaging 5700 bp with up to 98% assignment accuracy at the species level. The observed community proportions for “equal” and “rare” synthetic libraries were close to the known proportions, deviating from 0.1% to 10% across all tests. For a 20-species mock community with staggered contributions, a sequencing run detected all but 3 species (each included at <0.05% of DNA in the total mixture), 91% of reads were assigned to the correct species, 93% of reads were assigned to the correct genus, and >99% of reads were assigned to the correct family. Conclusions: At the current level of output and sequence quality (just under 4 × 10^3 2D reads for a synthetic metagenome), MinION sequencing followed by Kraken or One Codex analysis has the potential to provide rapid and accurate metagenomic analysis where the consortium is composed of a limited number of taxa. Important considerations noted in this study included: high sensitivity of the MinION platform to the quality of input DNA, high variability of sequencing results across libraries and flow cells, and relatively small numbers of 2D reads per analysis. Together, these factors limited detection of very rare components of the microbial consortia, and would likely limit the utility of MinION for the sequencing of high-complexity metagenomic communities where thousands of taxa are expected. Furthermore, the limitations of the currently available data analysis tools suggest there is considerable room for improvement in the analytical approaches for the characterization of microbial communities using long reads. 
Nevertheless, the facts that taxonomic assignment accuracy for high-quality MinION reads approaches 99.5% and that, in most cases, the inferred community structure mirrors the known proportions of a synthetic mixture warrant further exploration of practical applications to environmental metagenomics as the platform continues to develop and improve. With further improvement in sequence throughput and error rate reduction, this platform shows great promise for precise real-time analysis of the composition and structure of more complex microbial communities. © The Author 2017. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Jankovic, I.; Barnes, R. J.; Soule, R.
2001-12-01
The analytic element method is used to model local three-dimensional flow in the vicinity of partially penetrating wells. The flow domain is bounded by an impermeable horizontal base, a phreatic surface with recharge, and a cylindrical lateral boundary. The analytic element solution for this problem contains (1) a fictitious source technique to satisfy the head and the discharge conditions along the phreatic surface, (2) a fictitious source technique to satisfy specified head conditions along the cylindrical boundary, (3) a method of imaging to satisfy the no-flow condition across the impermeable base, (4) the classical analytic solution for a well, and (5) spheroidal harmonics to account for the influence of the inhomogeneities in hydraulic conductivity. Temporal variations of the flow system due to time-dependent recharge and pumping are represented by combining the analytic element method with a finite difference method: the analytic element method is used to represent spatial changes in head and discharge, while the finite difference method represents temporal variations. The solution provides a very detailed description of local groundwater flow with an arbitrary number of wells of any orientation and an arbitrary number of ellipsoidal inhomogeneities of any size and conductivity. These inhomogeneities may be used to model local hydrogeologic features (such as gravel packs and clay lenses) that significantly influence the flow in the vicinity of partially penetrating wells. Several options for specifying head values along the lateral domain boundary are available. These options allow for inclusion of the model into steady and transient regional groundwater models. The head values along the lateral domain boundary may be specified directly (as time series). The head values along the lateral boundary may also be assigned by specifying the water-table gradient and a head value at a single point (as time series). 
A case study is included to demonstrate the application of the model in local modeling of the groundwater flow. Transient three-dimensional capture zones are delineated for a site on Prairie Island, MN. Prairie Island is located on the Mississippi River 40 miles south of the Twin Cities metropolitan area. The case study focuses on a well that has been known to contain viral DNA. The objective of the study was to assess the potential for pathogen migration toward the well.
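The core idea of the analytic element method, superposing closed-form solutions for individual flow features, can be sketched in two dimensions. This is a minimal illustration, not the paper's 3-D formulation: the well rates, locations, and regional discharge below are invented, and the fictitious sources, image wells, and spheroidal harmonics of the full solution are omitted.

```python
import math

# 2-D steady-flow sketch in the spirit of the analytic element method:
# the discharge potential at a point is the superposition of a uniform
# regional flow element and point-sink (well) elements.

def well_potential(x, y, xw, yw, Q):
    """Discharge potential of a fully penetrating well pumping at rate Q."""
    r = math.hypot(x - xw, y - yw)
    return Q / (2.0 * math.pi) * math.log(r)

def uniform_flow(x, y, qx):
    """Potential of uniform regional flow with discharge qx per unit width."""
    return -qx * x

# Illustrative well elements: (xw, yw, Q); units arbitrary.
wells = [(0.0, 0.0, 500.0), (200.0, 100.0, 250.0)]

def total_potential(x, y, qx=1.0):
    """Superpose all elements; gradients of this field give the discharge."""
    phi = uniform_flow(x, y, qx)
    for xw, yw, Q in wells:
        phi += well_potential(x, y, xw, yw, Q)
    return phi
```

Because the governing equation is linear, the superposed field satisfies it wherever each element does; the boundary conditions of the paper's domain are what the additional fictitious-source and image elements would enforce.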
Sloto, R.A.; Cecil, L.D.; Senior, L.A.
1991-01-01
The Little Lehigh Creek basin is underlain mainly by a complex assemblage of highly deformed Cambrian and Ordovician carbonate rocks. The Leithsville Formation, Allentown Dolomite, Beekmantown Group, and Jacksonburg Limestone act as a single hydrologic unit. Ground water moves through fractures and other secondary openings and generally is under water-table conditions. Median annual ground-water discharge (base flow) to Little Lehigh Creek near Allentown (station 01451500) during 1946-86 was 12.97 inches, or 82 percent of streamflow. Average annual recharge for 1975-83 was 21.75 inches. Ground-water and surface-water divides do not coincide in the basin. Ground-water underflow from the Little Lehigh Creek basin to the Cedar Creek basin in 1987 was 4 inches per year. A double-mass curve analysis of the relation of cumulative precipitation at Allentown to the flow of Schantz Spring for 1956-84 showed that cessation of quarry pumping and development of ground water for public supply in the Schantz Spring basin did not affect the flow of Schantz Spring. Ground-water flow in the Little Lehigh Creek basin was simulated using a finite-difference, two-dimensional computer model. The geologic units in the modeled area were simulated as a single water-table aquifer. The 134-square-mile area of carbonate rocks between the Lehigh River and Sacony Creek was modeled to include the natural hydrologic boundaries of the ground-water-flow system. The ground-water-flow model was calibrated under steady-state conditions using 1975-83 average recharge, evapotranspiration, and pumping rates. Each geologic unit was assigned a different hydraulic conductivity. Initial aquifer hydraulic conductivity was estimated from specific-capacity data. The average (1975-83) water budget for the Little Lehigh Creek basin was simulated. The simulated base flow from the carbonate rocks of the Little Lehigh Creek basin above gaging station 01451500 is 11.85 inches per year. 
The simulated ground-water underflow from the Little Lehigh Creek basin to the Cedar Creek basin is 4.04 inches per year. For steady-state calibration, the root-mean-squared difference between observed and simulated heads was 21.19 feet. The effects of increased ground-water development on base flow and underflow out of the Little Lehigh Creek basin for average and drought conditions were simulated by locating a hypothetical well field in different parts of the basin. Steady-state simulations were used to represent equilibrium conditions, which would be the maximum expected long-term effect. Increased ground-water development was simulated as hypothetical well fields pumping at the rate of 15, 25, and 45 million gallons per day in addition to existing ground-water withdrawals. Four hypothetical well fields were located near and away from Little Lehigh Creek in upstream and downstream areas. The effects of pumping a well field in different parts of the Little Lehigh Creek basin were compared. Pumping a well field located near the headwaters of Little Lehigh Creek and away from the stream would have the greatest effect on inducing underflow from the Sacony Creek basin and the least effect on reducing base flow and underflow to the Cedar Creek basin. Pumping a well field located near the headwaters of Little Lehigh Creek near the stream would have less impact on inducing underflow from the Sacony Creek basin and a greater impact on reducing the base flow of Little Lehigh Creek because more of the pumpage would come from diverted base flow. Pumping a well field located in the downstream area of the Little Lehigh Creek basin away from the stream would have the greatest effect on the underflow to the Cedar Creek basin. Pumping a well field located in the downstream area of the Little Lehigh Creek basin near the stream would have the greatest effect on reducing the base flow of Little Lehigh Creek. 
Model simulations show that groundwater withdrawals do not cause a proportional reduction in base flow. Under average conditions, ground-water withdrawals are equal to 48 to 70 percent of simulated base-flow reductions; under drought conditions, ground-water withdrawals are equal to 35 to 73 percent of simulated base-flow reductions. The hydraulic effects of pumping largely depend on well location. In the Little Lehigh basin, surface-water and ground-water divides do not coincide, and ground-water development, especially near surface-water divides, can cause ground-water divides to shift and induce ground-water underflow from adjacent basins. Large-scale ground-water pumping in a basin may not produce expected reductions of base flow in that basin because of shifts in the ground-water divide; however, such shifts can reduce base flow in adjacent surface-water basins.
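The 48-70 percent figures above are a simple ratio of pumpage to simulated base-flow reduction. A small sketch of that bookkeeping, with hypothetical numbers rather than values from the Little Lehigh model:

```python
# Express ground-water withdrawal as a percentage of the simulated
# base-flow reduction, the summary statistic the study reports.
# All inputs must share the same units (e.g., Mgal/d equivalents).

def withdrawal_as_percent_of_reduction(withdrawal, baseflow_before, baseflow_after):
    reduction = baseflow_before - baseflow_after
    return 100.0 * withdrawal / reduction

# Hypothetical steady-state pair: a 15-unit well field reduces
# simulated base flow from 100 to 75 units.
pct = withdrawal_as_percent_of_reduction(15.0, 100.0, 75.0)
```

A percentage below 100 means base flow falls by more than the pumpage alone, while a percentage above 100 means part of the pumpage is supplied from elsewhere, which is how shifting divides can export base-flow losses to adjacent basins.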
2016-01-01
Abstract Background Metabarcoding is becoming a common tool used to assess and compare diversity of organisms in environmental samples. Identification of OTUs is one of the critical steps in the process, and several taxonomy assignment methods have been proposed to accomplish this task. This publication evaluates the quality of reference datasets, along with several alignment and phylogeny inference methods used in one of the taxonomy assignment methods, called the tree-based approach. This approach assigns anonymous OTUs to taxonomic categories based on relative placements of OTUs and reference sequences on the cladogram and the support that these placements receive. New information In the tree-based taxonomy assignment approach, reliable identification of anonymous OTUs is based on their placement in monophyletic and highly supported clades together with identified reference taxa. Therefore, it requires a high-quality reference dataset. Resolution of phylogenetic trees is strongly affected by the presence of erroneous sequences as well as by the alignment and phylogeny inference methods used in the process. Two preparation steps are essential for the successful application of the tree-based taxonomy assignment approach. Curated collections of genetic information do include erroneous sequences. These sequences have a detrimental effect on the resolution of cladograms used in the tree-based approach. They must be identified and excluded from the reference dataset beforehand. Various combinations of multiple sequence alignment and phylogeny inference methods provide cladograms with different topology and bootstrap support. These combinations of methods need to be tested in order to determine the one that gives the highest resolution for the particular reference dataset. Completing the above-mentioned preparation steps is expected to decrease the number of unassigned OTUs and thus improve the results of the tree-based taxonomy assignment approach. PMID:27932919
Holovachov, Oleksandr
2016-01-01
Metabarcoding is becoming a common tool used to assess and compare diversity of organisms in environmental samples. Identification of OTUs is one of the critical steps in the process, and several taxonomy assignment methods have been proposed to accomplish this task. This publication evaluates the quality of reference datasets, along with several alignment and phylogeny inference methods used in one of the taxonomy assignment methods, called the tree-based approach. This approach assigns anonymous OTUs to taxonomic categories based on relative placements of OTUs and reference sequences on the cladogram and the support that these placements receive. In the tree-based taxonomy assignment approach, reliable identification of anonymous OTUs is based on their placement in monophyletic and highly supported clades together with identified reference taxa. Therefore, it requires a high-quality reference dataset. Resolution of phylogenetic trees is strongly affected by the presence of erroneous sequences as well as by the alignment and phylogeny inference methods used in the process. Two preparation steps are essential for the successful application of the tree-based taxonomy assignment approach. Curated collections of genetic information do include erroneous sequences. These sequences have a detrimental effect on the resolution of cladograms used in the tree-based approach. They must be identified and excluded from the reference dataset beforehand. Various combinations of multiple sequence alignment and phylogeny inference methods provide cladograms with different topology and bootstrap support. These combinations of methods need to be tested in order to determine the one that gives the highest resolution for the particular reference dataset. Completing the above-mentioned preparation steps is expected to decrease the number of unassigned OTUs and thus improve the results of the tree-based taxonomy assignment approach.
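The placement logic behind the tree-based approach can be sketched with a toy cladogram: walk upward from an anonymous OTU leaf and assign it once the smallest enclosing clade contains reference taxa of a single genus. The tree below, the "Genus_ref" naming convention, and the omission of bootstrap-support thresholds are all simplifications for this sketch; a real workflow would also require the clade to be well supported.

```python
# Toy cladogram as nested tuples; leaves are strings, internal nodes tuples.
tree = ((("OTU1", "Genus1_refA"), "Genus1_refB"),
        ("Genus2_refA", "Genus2_refB"))

def leaves(node):
    """All leaf names under a node."""
    return [node] if isinstance(node, str) else [x for c in node for x in leaves(c)]

def path_to(node, target):
    """Clades from the root down to the target leaf, inclusive."""
    if isinstance(node, str):
        return [node] if node == target else None
    for child in node:
        sub = path_to(child, target)
        if sub is not None:
            return [node] + sub
    return None

def assign_otu(tree, otu):
    """Assign an OTU to a genus via its smallest reference-containing clade."""
    for clade in reversed(path_to(tree, otu)):          # leaf -> root
        refs = [x for x in leaves(clade) if "_ref" in x]
        if refs:
            genera = {r.split("_")[0] for r in refs}
            # Unambiguous only if all references in the clade agree.
            return genera.pop() if len(genera) == 1 else None
    return None
```

An OTU whose nearest reference-containing clade mixes genera stays unassigned, which is exactly why erroneous reference sequences and poorly resolved topologies inflate the number of unassigned OTUs.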
Path Flow Estimation Using Time Varying Coefficient State Space Model
NASA Astrophysics Data System (ADS)
Jou, Yow-Jen; Lan, Chien-Lun
2009-08-01
Dynamic path flow information is crucial in transportation operation and management, e.g., dynamic traffic assignment, scheduling, and signal timing. Time-dependent path information, though important in many applications, is nearly impossible to obtain directly. Consequently, researchers have sought estimation methods that derive valuable path flow information from less expensive traffic data, primarily link traffic counts from surveillance systems. This investigation considers a path flow estimation problem involving a time-varying coefficient state space model, a Gibbs sampler, and a Kalman filter. Numerical examples based on part of a real Taipei Mass Rapid Transit network with real O-D matrices are presented to assess the accuracy of the proposed model. Results of this study show that the time-varying coefficient state space model is much more effective in estimating path flow than the time-invariant model.
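The state space setup can be illustrated with a minimal linear Kalman filter: unobserved path flows are the state, and link counts are observed through a link-path incidence matrix. The incidence matrix, dynamics, and noise levels below are invented for the sketch; the paper additionally lets the coefficients vary over time and samples them with a Gibbs sampler, which is omitted here.

```python
import numpy as np

# Two paths observed through three links; A is the link-path incidence matrix.
A = np.array([[1.0, 0.0],      # link 1 carries path 1
              [1.0, 1.0],      # link 2 carries both paths
              [0.0, 1.0]])     # link 3 carries path 2
F = np.eye(2)                  # random-walk path-flow dynamics
Q = 25.0 * np.eye(2)           # process-noise covariance
R = 4.0 * np.eye(3)            # link-count measurement-noise covariance

def kalman_step(x, P, y):
    """One predict/update cycle: x is the path-flow estimate, y link counts."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = A @ P_pred @ A.T + R                 # innovation covariance
    K = P_pred @ A.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - A @ x_pred)
    P_new = (np.eye(len(x)) - K @ A) @ P_pred
    return x_new, P_new

# Repeated link counts consistent with path flows of 30 and 20.
x, P = np.zeros(2), 100.0 * np.eye(2)
for y in [np.array([30.0, 50.0, 20.0])] * 5:
    x, P = kalman_step(x, P, y)
```

Because the three link counts here fully determine the two path flows, the estimate converges quickly; in real networks the incidence matrix is rank-deficient, which is what makes the prior dynamics and the time-varying coefficients matter.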
Assessing effects of water abstraction on fish assemblages in Mediterranean streams
Benejam, Lluis; Angermeier, Paul L.; Munne, Antoni; García-Berthou, Emili
2010-01-01
1. Water abstraction strongly affects streams in arid and semiarid ecosystems, particularly where there is a Mediterranean climate. Excessive abstraction reduces the availability of water for human uses downstream and impairs the capacity of streams to support native biota. 2. We investigated the flow regime and related variables in six river basins of the Iberian Peninsula and show that they have been strongly altered, with declining flows (autoregressive models) and groundwater levels during the 20th century. These streams had lower flows and more frequent droughts than predicted by the official hydrological model used in this region. Three of these rivers were sometimes dry, whereas they were predicted by the model to be permanently flowing. Meanwhile, there has been no decrease in annual precipitation. 3. We also investigated the fish assemblage of a stream in one of these river basins (Tordera) for 6 years and show that sites more affected by water abstraction display significant differences in four fish metrics (catch per unit effort, number of benthic species, number of intolerant species and proportional abundance of intolerant individuals) commonly used to assess the biotic condition of streams. 4. We discuss the utility of these metrics in assessing impacts of water abstraction and point out the need for detailed characterisation of the natural flow regime (and hence drought events) prior to the application of biotic indices in streams severely affected by water abstraction. In particular, in cases of artificially dry streams, it is more appropriate for regulatory agencies to assign index scores that reflect biotic degradation than to assign ‘missing’ scores, as is presently customary in assessments of Iberian streams.
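The four fish metrics named in point 3 are straightforward to compute from a site sample. A minimal sketch, where the species names and the benthic/intolerant trait labels are invented for illustration, not the study's classification:

```python
# Illustrative trait tables; a real assessment would use a regional
# species-trait classification.
BENTHIC = {"barbel", "loach"}
INTOLERANT = {"trout", "loach"}

def fish_metrics(catch, effort_minutes):
    """The four metrics: CPUE, benthic-species count, intolerant-species
    count, and proportional abundance of intolerant individuals."""
    n = sum(catch.values())
    cpue = n / effort_minutes
    benthic_species = sum(1 for sp in catch if sp in BENTHIC)
    intolerant_species = sum(1 for sp in catch if sp in INTOLERANT)
    prop_intolerant = sum(c for sp, c in catch.items() if sp in INTOLERANT) / n
    return cpue, benthic_species, intolerant_species, prop_intolerant

# Hypothetical electrofishing sample: species -> individuals caught.
catch = {"trout": 10, "barbel": 5, "loach": 3, "chub": 2}
cpue, n_benthic, n_intol, p_intol = fish_metrics(catch, effort_minutes=40)
```

Comparing such metrics between abstraction-affected and reference sites is what points 3 and 4 describe, with the caveat that artificially dry sites should score as degraded rather than missing.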
Effects of individualized assignments on biology achievement
NASA Astrophysics Data System (ADS)
Kremer, Philip L.
A pretest-posttest, randomized, two-group, experimental, factorial design compared effects of detailed and nondetailed assignments on biology achievement over seven and a half months. Detailed assignments (favoring field independence and induction) employed block diagrams and stepwise directions. Nondetailed assignments (favoring field dependence and deduction) virtually lacked these. The accessible population was 101 tenth grade preparatory school male students. The 95 students enrolled in first year biology constituted the sample. A 2 × 3 ANOVA was performed on the students' residualized posttest score means. Overall, the detailed-assignment students achieved significantly higher than the nondetailed students. This significantly higher achievement was only true of detailed students in the middle thirds of the deviation intelligence quotient (DIQ) range and of the grade point average (G.P.A.) range after the breakdown into upper, middle, and lower thirds of intellectual capability (ability and achievement). The upper third detailed DIQ grouping indirectly achieved higher than its peers, whereas the lower detailed DIQ third achieved lower than its peers. Thus, high capability students apparently benefit from flow and block diagrams, inductions, field independence, and high structure, whereas low capability students may be hindered by these.
Meta-cognitive student reflections
NASA Astrophysics Data System (ADS)
Barquist, Britt; Stewart, Jim
2009-05-01
We have recently concluded a project testing the effectiveness of a weekly assignment designed to encourage awareness and improvement of meta-cognitive skills. The project is based on the idea that successful problem solvers implement a meta-cognitive process in which they identify the specific concept they are struggling with, and then identify what they understand, what they don't understand, and what they need to know in order to resolve their problem. The assignment required the students to write an email assessing the level of completion of a weekly workbook assignment and to examine in detail their experiences regarding a specific topic they struggled with. The assignment guidelines were designed to coach them through this meta-cognitive process. We responded to most emails with advice for next week's assignment. Our data follow 12 students through a quarter consisting of 11 email assignments which were scored using a rubric based on the assignment guidelines. We found no correlation between rubric scores and final grades. We do have anecdotal evidence that the assignment was beneficial.
50 CFR 679.93 - Amendment 80 Program recordkeeping, permits, monitoring, and catch accounting.
Code of Federal Regulations, 2012 CFR
2012-10-01
... CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE (CONTINUED... storage. There is sufficient space to accommodate a minimum of 10 observer sampling baskets. This space... manager, and any observers assigned to the vessel. (8) Belt and flow operations. The vessel operator stops...
Modeling post-wildfire hydrological processes with ParFlow
NASA Astrophysics Data System (ADS)
Escobar, I. S.; Lopez, S. R.; Kinoshita, A. M.
2017-12-01
Wildfires alter the natural processes within a watershed, such as surface runoff, evapotranspiration rates, and subsurface water storage. Post-fire hydrologic models are typically one-dimensional, empirically based models or two-dimensional, conceptually based models with lumped parameter distributions. These models are useful for modeling and predictions at the watershed outlet; however, they do not provide detailed, distributed hydrologic processes at the point scale within the watershed. This research uses ParFlow, a three-dimensional, distributed hydrologic model, to simulate post-fire hydrologic processes by representing the spatial and temporal variability of soil burn severity (via hydrophobicity) and vegetation recovery. Using this approach, we are able to evaluate the change in post-fire water components (surface flow, lateral flow, baseflow, and evapotranspiration). This work builds upon previous field and remote sensing analysis conducted for the 2003 Old Fire Burn in Devil Canyon, located in southern California (USA). This model is initially developed for a hillslope defined by a 500 m by 1000 m lateral extent. The subsurface reaches 12.4 m and is assigned a variable cell thickness to explicitly consider soil burn severity throughout the stages of recovery and vegetation regrowth. We consider four slope and eight hydrophobic layer configurations. Evapotranspiration is used as a proxy for vegetation regrowth and is represented by the satellite-based Simplified Surface Energy Balance (SSEBOP) product. The pre- and post-fire surface runoff, subsurface storage, and surface storage interactions are evaluated at the point scale. Results will be used as a basis for developing and fine-tuning a watershed-scale model. Long-term simulations will advance our understanding of post-fire hydrological partitioning between water balance components and the spatial variability of watershed processes, providing improved guidance for post-fire watershed management. 
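The partitioning question above reduces to water-balance bookkeeping: precipitation is split among evapotranspiration, surface flow, lateral flow, base flow, and storage change, and the post-fire shift is the difference in those fractions. A hedged sketch with invented numbers, not ParFlow output:

```python
# Close the water budget P = ET + surface + lateral + baseflow + dS and
# express each component as a fraction of precipitation.

def partition_fractions(P, ET, surface, lateral, baseflow):
    dS = P - (ET + surface + lateral + baseflow)   # storage change closes the budget
    comps = {"ET": ET, "surface": surface, "lateral": lateral,
             "baseflow": baseflow, "storage": dS}
    return {k: v / P for k, v in comps.items()}

# Hypothetical pre- vs. post-fire years (mm): burned hillslopes typically
# show reduced ET and increased surface runoff.
pre  = partition_fractions(P=600.0, ET=350.0, surface=80.0,  lateral=60.0, baseflow=90.0)
post = partition_fractions(P=600.0, ET=220.0, surface=200.0, lateral=70.0, baseflow=90.0)
shift = {k: post[k] - pre[k] for k in pre}
```

A distributed model like ParFlow produces this same partition per grid cell rather than per watershed, which is what the point-scale evaluation refers to.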
In reference to the presenter, Isabel Escobar: Research is funded by the NASA-DIRECT STEM Program. Travel expenses for this presentation are funded by CSU-LSAMP. CSU-LSAMP is supported by the National Science Foundation under Grant # HRD-1302873 and the CSU Office of the Chancellor.
Rickles, Jordan H
2011-10-01
Many inquiries regarding the causal effects of policies or programs are based on research designs where the treatment assignment process is unknown, and thus valid inferences depend on tenuous assumptions about the assignment mechanism. This article draws attention to the importance of understanding the assignment mechanism in policy and program evaluation studies, and illustrates how information collected through interviews can develop a richer understanding of the assignment mechanism. Focusing on the issue of student assignment to algebra in 8th grade, I show how a preliminary data collection effort aimed at understanding the assignment mechanism is particularly beneficial in multisite observational studies in education. The findings, based on ten interviews and administrative data from a large school district, draw attention to the often ignored heterogeneity in the assignment mechanism across schools. These findings likely extend beyond the current research project in question to related educational policy issues such as ability grouping, tracking, differential course taking, and curricular intensity, as well as other social programs in which the assignment mechanism can differ across sites.
ERIC Educational Resources Information Center
Pappas, Ilias O.; Giannakos, Michail N.; Mikalef, Patrick
2017-01-01
The use of video-based open educational resources is widespread, and includes multiple approaches to implementation. In this paper, the term "with-video assignments" is introduced to portray video learning resources enhanced with assignments. The goal of this study is to examine the factors that influence students' intention to adopt…
ERIC Educational Resources Information Center
Cho, Hyun-Jeong; Kingston, Neal
2013-01-01
The purpose of this case study was to determine teachers' rationales for assigning students with mild disabilities to alternate assessment based on alternate achievement standards (AA-AAS). In interviews, special educators stated that their primary considerations in making the assignments were low academic performance, student use of extended…
Quantitative characterization of color Doppler images: reproducibility, accuracy, and limitations.
Delorme, S; Weisser, G; Zuna, I; Fein, M; Lorenz, A; van Kaick, G
1995-01-01
A computer-based quantitative analysis for color Doppler images of complex vascular formations is presented. The red-green-blue signal from an Acuson XP10 is frame-grabbed and digitized. By matching each image pixel with the color bar, color pixels are identified and assigned to the corresponding flow velocity (color value). Data analysis consists of delineation of a region of interest and calculation of the relative number of color pixels in this region (color pixel density) as well as the mean color value. The mean color value was compared to flow velocities in a flow phantom. The thyroid and carotid artery in a volunteer were repeatedly examined by a single examiner to assess intra-observer variability. The thyroids in five healthy controls were examined by three experienced physicians to assess the extent of inter-observer variability and observer bias. The correlation between the mean color value and flow velocity ranged from 0.94 to 0.96 for a range of velocities determined by pulse repetition frequency. The average deviation of the mean color value from the flow velocity was 22% to 41%, depending on the selected pulse repetition frequency (range of deviations, -46% to +66%). Flow velocity was underestimated with an inadequately low pulse repetition frequency or an inadequately high reject threshold. An overestimation occurred with an inadequately high pulse repetition frequency. The highest intra-observer variability was 22% (relative standard deviation) for the color pixel density, and 9.1% for the mean color value. The inter-observer variation was approximately 30% for the color pixel density, and 20% for the mean color value. In conclusion, computer-assisted image analysis permits an objective description of color Doppler images. However, the user must be aware that image acquisition under in vivo conditions as well as physical and instrumental factors may considerably influence the results.
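The pixel-matching step can be sketched directly: match each ROI pixel's RGB value to the nearest color-bar entry, exclude gray (non-color) pixels, and report the two metrics the paper defines. The color bar, velocities, distance threshold, and ROI below are synthetic stand-ins, not the Acuson XP10's actual mapping.

```python
import numpy as np

# Synthetic color bar: each (R, G, B) entry maps to a flow velocity (cm/s).
bar_rgb = np.array([[0, 0, 128], [0, 0, 255], [128, 0, 128], [255, 0, 0]], float)
bar_vel = np.array([-20.0, -10.0, 10.0, 20.0])

def analyze_roi(pixels, max_dist=60.0):
    """Return (color pixel density, mean color value) for an ROI.

    A pixel counts as a color pixel if its nearest color-bar entry lies
    within max_dist in RGB space; grayscale tissue pixels are excluded.
    """
    pixels = np.asarray(pixels, float)                       # shape (n, 3)
    d = np.linalg.norm(pixels[:, None, :] - bar_rgb[None], axis=2)
    nearest = d.argmin(axis=1)                               # bar index per pixel
    is_color = d.min(axis=1) <= max_dist
    density = is_color.mean()
    mean_val = bar_vel[nearest[is_color]].mean() if is_color.any() else 0.0
    return density, mean_val

# Tiny synthetic ROI: three near-bar color pixels plus one gray pixel.
roi = [[0, 0, 250], [30, 30, 30], [250, 5, 5], [0, 0, 130]]
density, mean_val = analyze_roi(roi)
```

The paper's caveats map directly onto this sketch: the velocity assigned to a pixel is only as good as the color-bar calibration, and the pulse repetition frequency sets the velocity range the bar can represent.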
Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun
2015-08-10
With the number of cores increasing, there is an emerging need for a high-bandwidth, low-latency interconnection network serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is carefully arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processing. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.
Phenomenological model for transient deformation based on state variables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, M S; Cho, C W; Alexopoulos, P
The state variable theory of Hart, while providing a unified description of plasticity-dominated deformation, exhibits deficiencies when it is applied to transient deformation phenomena at stresses below yield. It appears that the description of stored anelastic strain is oversimplified. Consideration of a simple physical picture based on continuum dislocation pileups suggests that the neglect of weak barriers to dislocation motion is the source of these inadequacies. An appropriately modified description incorporating such barriers then allows the construction of a macroscopic model including transient effects. Although the flow relations for the microplastic element required in the new theory are not known, tentative assignments may be made for such functions. The model then exhibits qualitatively correct behavior when tensile, loading-unloading, reverse loading, and load relaxation tests are simulated. Experimental procedures are described for determining the unknown parameters and functions in the new model.
Risk analysis of Safety Service Patrol (SSP) systems in Virginia.
Dickey, Brett D; Santos, Joost R
2011-12-01
The transportation infrastructure is a vital backbone of any regional economy as it supports workforce mobility, tourism, and a host of socioeconomic activities. In this article, we specifically examine the incident management function of the transportation infrastructure. In many metropolitan regions, incident management is handled primarily by safety service patrols (SSPs), which monitor and resolve roadway incidents. In Virginia, SSP allocation across highway networks is based typically on average vehicle speeds and incident volumes. This article implements a probabilistic network model that partitions "business as usual" traffic flow with extreme-event scenarios. Results of simulated network scenarios reveal that flexible SSP configurations can improve incident resolution times relative to predetermined SSP assignments. © 2011 Society for Risk Analysis.
Kotb, Magd A.; Elmahdy, Hesham Nabeh; Khalifa, Nour El Deen Mahmoud; El-Deen, Mohamed Hamed Nasr; Lotfi, Mohamed Amr N.
2015-01-01
Evidence-based medicine (EBM) is delivered through didactic, blended learning, and mixed models. Students are expected to construct an answerable question in the PICO (patient, intervention, comparison, and outcome) framework, acquire evidence through a literature search, appraise the evidence, apply it to the clinical case scenario, and assess the evidence in relation to the clinical context. Yet these teaching models have limitations, especially those related to group work, for example, handling uncooperative students, students who fail to contribute, students who domineer, students who have personal conflicts, the impact of such students upon the progress of their groups, and inconsistent individual acquisition of required skills. At the Pediatrics Department, Faculty of Medicine, Cairo University, we designed a novel undergraduate pediatric EBM assignment online system to overcome the shortcomings of the previous didactic method, and aimed to assess its effectiveness by prospective follow-up during the academic years 2012 to 2013 and 2013 to 2014. The novel web-based online interactive system was tailored to provide sequential single and group assignments for each student. The single assignment addressed a specific case scenario question, while the group assignment was teamwork that addressed different questions of the same case scenario. Assignments comprised scholarly content and skills. We objectively analyzed students' performance by criterion-based assessment and subjectively by an anonymous student questionnaire. A total of 2879 students were enrolled consecutively in the 5th-year Pediatrics Course; of them, 2779 (96.5%) logged in and 2554 (88.7%) submitted their work. They were randomly assigned to 292 groups. A total of 2277 (89.15%) achieved ≥80% of the total mark (4/5); of them, 717 (28.1%) achieved a full mark. A total of 2178 (85.27%) and 2359 (92.36%) made evidence-based conclusions and recommendations in the single and group assignments, respectively (P < 0.001).
A total of 1102 (43.1%) answered the student questionnaire; of them, 898 (81.48%) found the e-educational experience satisfactory, 175 (15.88%) disagreed, and 29 (2.6%) could not decide. A total of 964 (87.47%) found the single assignment educational, 913 (82.84%) found the group assignment educational, and 794 (72.3%) enjoyed it. The web-based online interactive undergraduate EBM assignment was found effective in teaching medical students and ensured individual student acquisition of the concepts and skills of pediatric EBM. It was effective in mass education, and in the data collection and storage essential for system and student assessment. PMID:26200621
Reverse logistics in the Brazilian construction industry.
Nunes, K R A; Mahler, C F; Valle, R A
2009-09-01
In Brazil most Construction and Demolition Waste (C&D waste) is not recycled. This situation is expected to change significantly, since new federal regulations oblige municipalities to create and implement sustainable C&D waste management plans which assign an important role to recycling activities. The recycling organizational network and its flows and components are fundamental to C&D waste recycling feasibility. Organizational networks, flows and components involve reverse logistics. The aim of this work is to introduce the concepts of reverse logistics and reverse distribution channel networks and to study the Brazilian C&D waste case.
Oosterhuis, W P; van der Horst, M; van Dongen, K; Ulenkate, H J L M; Volmer, M; Wulkan, R W
2007-10-20
To compare the flow diagram for the diagnosis of anaemia from the guideline 'Anaemia' of the Dutch College of General Practitioners (NHG) with a substantive and logistical alternative protocol. Prospective. For evaluation of anaemia, 124 patients from primary care reported to the laboratories of the St. Elisabeth Hospital in Tilburg (n = 94) and the Scheper Hospital in Emmen (n = 30), the Netherlands. Two flow charts were used: the NHG's flow chart and a self-developed chart in which ferritin concentration, rather than mean corpuscular volume, occupies the central position. All the laboratory tests mentioned in both flow charts were carried out in every patient with, for practical reasons, the exception of Hgb electrophoresis and bone marrow investigations. General practitioners were approached and patient dossiers were consulted to obtain further clinical data. According to the NHG protocol, on the grounds of the laboratory investigations, 64 (52%) of the patients could not be put in a specific category. The majority were patients with normocytary anaemia who did not fulfil the criteria for iron deficiency anaemia or the anaemia of chronic disease. According to the alternative chart, no diagnosis was made in 36 (29%) patients. These were patients in whom no abnormal laboratory findings were observed other than low haemoglobin values. The majority of the patients had normocytary anaemia; in some cases this was interpreted as the anaemia of chronic disease, but more often the anaemia could not be assigned to a particular category. A large number of patients had a raised creatinine value, which did not appear in the NHG protocol. In 15% of patients, more than one cause for anaemia was found. The NHG protocol did not enable these multiple diagnoses to be made. Accordingly, the NHG protocol was difficult to implement in the laboratory. Using the NHG flow diagram, a large percentage of patients could not be assigned to a particular category.
Using the alternative flow diagram, a procedure that is easier to carry out in the laboratory, it was possible to make multiple diagnoses.
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
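The standard error of estimate quoted above summarizes the scatter between measured and simulated values. The report does not give its exact formula; a common hydrologic convention computes it from log residuals and expresses it in percent, which the following sketch assumes (function name illustrative):

```python
import math

def standard_error_percent(measured, simulated):
    """Standard error of estimate, in percent, from log residuals between
    paired measured and simulated values (a common hydrologic convention)."""
    resid = [math.log(s / m) for m, s in zip(measured, simulated)]
    dof = len(resid) - 1                       # sample degrees of freedom
    se_log = math.sqrt(sum(r * r for r in resid) / dof)
    return 100.0 * (math.exp(se_log) - 1.0)

standard_error_percent([2.0, 4.0, 8.0], [2.0, 4.0, 8.0])   # 0.0 for a perfect fit
```

Larger systematic over- or under-simulation inflates the log residuals and hence the reported percentage.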
Ware, Russell E; Davis, Barry R; Schultz, William H; Brown, R Clark; Aygun, Banu; Sarnaik, Sharada; Odame, Isaac; Fuh, Beng; George, Alex; Owen, William; Luchtman-Jones, Lori; Rogers, Zora R; Hilliard, Lee; Gauger, Cynthia; Piccone, Connie; Lee, Margaret T; Kwiatkowski, Janet L; Jackson, Sherron; Miller, Scott T; Roberts, Carla; Heeney, Matthew M; Kalfa, Theodosia A; Nelson, Stephen; Imran, Hamayun; Nottage, Kerri; Alvarez, Ofelia; Rhodes, Melissa; Thompson, Alexis A; Rothman, Jennifer A; Helton, Kathleen J; Roberts, Donna; Coleman, Jamie; Bonner, Melanie J; Kutlar, Abdullah; Patel, Niren; Wood, John; Piller, Linda; Wei, Peng; Luden, Judy; Mortier, Nicole A; Stuber, Susan E; Luban, Naomi L C; Cohen, Alan R; Pressel, Sara; Adams, Robert J
2016-02-13
For children with sickle cell anaemia and high transcranial Doppler (TCD) flow velocities, regular blood transfusions can effectively prevent primary stroke, but must be continued indefinitely. The efficacy of hydroxycarbamide (hydroxyurea) in this setting is unknown; we performed the TWiTCH trial to compare hydroxycarbamide with standard transfusions. TWiTCH was a multicentre, phase 3, randomised, open-label, non-inferiority trial done at 26 paediatric hospitals and health centres in the USA and Canada. We enrolled children with sickle cell anaemia who were aged 4-16 years and had abnormal TCD flow velocities (≥ 200 cm/s) but no severe vasculopathy. After screening, eligible participants were randomly assigned 1:1 to continue standard transfusions (standard group) or hydroxycarbamide (alternative group). Randomisation was done at a central site, stratified by site with a block size of four, and an adaptive randomisation scheme was used to balance the covariates of baseline age and TCD velocity. The study was open-label, but TCD examinations were read centrally by observers masked to treatment assignment and previous TCD results. Participants assigned to standard treatment continued to receive monthly transfusions to maintain 30% sickle haemoglobin or lower, while those assigned to the alternative treatment started oral hydroxycarbamide at 20 mg/kg per day, which was escalated to each participant's maximum tolerated dose. The treatment period lasted 24 months from randomisation. The primary study endpoint was the 24 month TCD velocity calculated from a general linear mixed model, with the non-inferiority margin set at 15 cm/s. The primary analysis was done in the intention-to-treat population and safety was assessed in all patients who received at least one dose of assigned treatment. This study is registered with ClinicalTrials.gov, number NCT01425307. Between Sept 20, 2011, and April 17, 2013, 159 patients consented and enrolled in TWiTCH.
121 participants passed screening and were then randomly assigned to treatment (61 to transfusions and 60 to hydroxycarbamide). At the first scheduled interim analysis, non-inferiority was shown and the sponsor terminated the study. Final model-based TCD velocities were 143 cm/s (95% CI 140-146) in children who received standard transfusions and 138 cm/s (135-142) in those who received hydroxycarbamide, with a difference of 4·54 (0·10-8·98). Non-inferiority (p=8·82 × 10⁻¹⁶) and post-hoc superiority (p=0·023) were met. Of 29 new neurological events adjudicated centrally by masked reviewers, no strokes were identified, but three transient ischaemic attacks occurred in each group. Magnetic resonance brain imaging and angiography (MRI and MRA) at exit showed no new cerebral infarcts in either treatment group, but worsened vasculopathy in one participant who received standard transfusions. 23 severe adverse events in nine (15%) patients were reported for hydroxycarbamide and ten serious adverse events in six (10%) patients were reported for standard transfusions. The most common serious adverse event in both groups was vaso-occlusive pain (11 events in five [8%] patients with hydroxycarbamide and three events in one [2%] patient for transfusions). For high-risk children with sickle cell anaemia and abnormal TCD velocities who have received at least 1 year of transfusions, and have no MRA-defined severe vasculopathy, hydroxycarbamide treatment can substitute for chronic transfusions to maintain TCD velocities and help to prevent primary stroke. National Heart, Lung, and Blood Institute, National Institutes of Health. Copyright © 2016 Elsevier Ltd. All rights reserved.
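The primary endpoint logic above is a standard confidence-interval comparison against the prespecified margin: non-inferiority is shown when the upper bound of the treatment difference stays below 15 cm/s. A minimal sketch of that decision rule (function name illustrative):

```python
def non_inferior(ci_upper_cm_s, margin_cm_s=15.0):
    """Non-inferiority is shown when the upper 95% CI bound of the
    TCD-velocity difference (alternative minus standard) is below the margin."""
    return ci_upper_cm_s < margin_cm_s

non_inferior(8.98)   # reported upper CI bound of 8.98 cm/s is below the 15 cm/s margin
```

The same comparison, run at the interim analysis, is what allowed the sponsor to terminate the study early.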
Simulation of water flow in fractured porous medium by using discretized virtual internal bond
NASA Astrophysics Data System (ADS)
Peng, Shujun; Zhang, Zhennan; Li, Chunfang; He, Guofu; Miao, Guoqing
2017-12-01
The discretized virtual internal bond (DVIB) method is adopted to simulate water flow in fractured porous media. The intact porous medium is permeable because it contains numerous micro cracks and pores. These micro discontinuities construct a fluid channel network. The representative volume of this fluid channel network is modeled as a lattice bond cell with a finite number of bonds in a statistical sense. Each bond serves as a fluid channel. In fractured porous media, many bond cells are cut by macro fractures. The conductivity of the fracture facet in a bond cell is taken over by the bonds parallel to the flow direction. The equivalent permeability and volumetric storage coefficient of a micro bond are calibrated based on the ideal bond cell conception, which makes it unnecessary to consider the detailed geometry of a specific element. Such a parameter calibration method is flexible and applicable to any type of element. The accuracy checks suggest this method has satisfactory accuracy in both steady and transient flow simulation. To simulate the massive fractures in a rock mass, the bond cells intersected by a fracture are assigned aperture values, which are assumed to be random numbers following a certain distribution law. By this method, any number of fractures can be implicitly incorporated into the background mesh, avoiding the setup of fracture elements and mesh modification. The fracture aperture heterogeneity is well represented by this means. The simulation examples suggest that the present method is a feasible, simple, and efficient approach to the numerical simulation of water flow in fractured porous media.
Implementing an obstetric triage acuity scale: interrater reliability and patient flow analysis.
Smithson, David S; Twohey, Rachel; Rice, Tim; Watts, Nancy; Fernandes, Christopher M; Gratton, Robert J
2013-10-01
A 5-category Obstetric Triage Acuity Scale (OTAS) was developed with a comprehensive set of obstetrical determinants. The objectives of this study were as follows: (1) to test the interrater reliability of OTAS and (2) to determine the distribution of patient acuity and flow by OTAS level. To test the interrater reliability, 110 triage charts were used to generate vignettes and the consistency of the OTAS level assigned by 8 triage nurses was measured. OTAS performed with substantial agreement (kappa, 0.61-0.77; OTAS 1-4) and near-perfect agreement (kappa, 0.87; OTAS 5). To assess patient flow, the times to primary and secondary health care provider assessments and lengths of stay stratified by acuity were abstracted from the patient management system. Two-thirds of triage visits were low acuity (OTAS 4, 5). There was a decrease in length of stay (median [interquartile range], minutes) as acuity decreased from OTAS 1 (120.0 [156.0] minutes) to OTAS 3 (75.0 [120.8]). The major contributor to length of stay was time to secondary health care provider assessment, and this did not change with acuity. The percentage of patients admitted to the antenatal or birthing unit decreased from 80% (OTAS 1) to 12% (OTAS 5). OTAS provides a reliable assessment of acuity, and its implementation has allowed for triaging of obstetric patients based on acuity and a more in-depth assessment of patient flow. By standardizing assessment, OTAS allows for opportunities to improve performance and make comparisons of patient care and flow across organizations. Copyright © 2013 Mosby, Inc. All rights reserved.
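Interrater reliability of the kind reported above is conventionally quantified with a kappa statistic, which corrects raw agreement for the agreement expected by chance. A minimal two-rater sketch (the study involved eight nurses, for which a multi-rater variant such as Fleiss' kappa would apply; this illustrates the underlying calculation only):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning the same set of acuity levels."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # chance agreement from each rater's marginal level frequencies
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

cohens_kappa([1, 1, 2, 2], [1, 2, 2, 2])   # 0.5
```

Values of 0.61-0.80 are conventionally read as "substantial" and values above 0.80 as "near-perfect" agreement, matching the bands quoted in the abstract.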
In-vehicle signing concepts: An analytical precursor to an in-vehicle information system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Tufano, D.R.; Knee, H.E.
The purpose of the project described in this report is to develop alternative In-Vehicle Signing (IVS) system concepts based on allocation of the functions associated with driving a road vehicle. In the driving milieu, tasks can be assigned to one of three agents: the driver, the vehicle, or the infrastructure. Assignment of tasks is based on a philosophy of function allocation which can emphasize any of several philosophical approaches. In this project, function allocations were made according to the current practice in vehicle design and signage as well as a human-centered strategy. Several IVS system concepts are presented based on differing functional allocation outcomes. A design space for IVS systems is described, and a technical analysis of map-based and several beacon-based IVS systems is presented. Because of problems associated with both map-based and beacon-based concepts, a hybrid IVS concept was proposed. The hybrid system uses on-board map-based databases to serve those areas in which signage can be anticipated to be relatively static, such as large metropolitan areas where few if any new roads will be built. For areas where sign density is low, and/or where population growth causes changes in traffic flow, beacon-based concepts function best. For this situation, changes need only occur in the central database from which sign information is transmitted. This report presents system concepts which enable progress from the IVS concept-independent functional requirements to a more specific set of system concepts which facilitate analysis and selection of hardware and software to perform the functions of IVS. As such, this phase of the project represents a major step toward the design and development of a prototype IVS system. Once such a system is developed, a program of testing, evaluation, and revision will be undertaken. Ultimately, such a system can become part of the road vehicle of the future.
Ahearn, Elizabeth A.
2010-01-01
Multiple linear regression equations for determining flow-duration statistics were developed to estimate selected flow exceedances ranging from 25 to 99 percent for six 'bioperiods' in Connecticut: Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October). Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics used as explanatory variables in the equations are drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent, with medians of 19.2 and 55.4 percent for predicting the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (greater than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent.
In contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent, with medians of 98.5 and 90.6 percent for predicting the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey World Wide Web application 'StreamStats' (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of selected flow exceedances statewide.
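Weighted least squares of the kind used above fits the regression by minimizing weighted squared residuals, with each streamgage weighted by its record length. A minimal one-predictor sketch solving the weighted normal equations directly (the report's equations use six basin characteristics; names and data here are illustrative):

```python
def wls_simple(x, y, weights):
    """Weighted least-squares fit of y = b0 + b1*x, with weights
    proportional to streamgage record length."""
    sw = sum(weights)
    swx = sum(w * xi for w, xi in zip(weights, x))
    swy = sum(w * yi for w, yi in zip(weights, y))
    swxx = sum(w * xi * xi for w, xi in zip(weights, x))
    swxy = sum(w * xi * yi for w, xi, yi in zip(weights, x, y))
    # solve the 2x2 weighted normal equations
    b1 = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    b0 = (swy - b1 * swx) / sw
    return b0, b1

b0, b1 = wls_simple([0.0, 1.0, 2.0], [2.0, 5.0, 8.0], [1.0, 2.0, 3.0])
# exact fit: b0 = 2.0, b1 = 3.0
```

With multiple predictors the same idea generalizes to solving (XᵀWX)β = XᵀWy, typically via a linear-algebra library.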
Gating mass cytometry data by deep learning.
Li, Huamin; Shaham, Uri; Stanton, Kelly P; Yao, Yi; Montgomery, Ruth R; Kluger, Yuval
2017-11-01
Mass cytometry or CyTOF is an emerging technology for high-dimensional multiparameter single cell analysis that overcomes many limitations of fluorescence-based flow cytometry. New methods for analyzing CyTOF data attempt to improve automation, scalability, performance and interpretation of data generated in large studies. Assigning individual cells into discrete groups of cell types (gating) involves time-consuming sequential manual steps, untenable for larger studies. We introduce DeepCyTOF, a standardization approach for gating, based on deep learning techniques. DeepCyTOF requires labeled cells from only a single sample. It is based on domain adaptation principles and is a generalization of previous work that allows us to calibrate between a target distribution and a source distribution in an unsupervised manner. We show that DeepCyTOF is highly concordant (98%) with cell classification obtained by individual manual gating of each sample when applied to a collection of 16 biological replicates of primary immune blood cells, even when measured across several instruments. Further, DeepCyTOF achieves very high accuracy on the semi-automated gating challenge of the FlowCAP-I competition as well as two CyTOF datasets generated from primary immune blood cells: (i) 14 subjects with a history of infection with West Nile virus (WNV), (ii) 34 healthy subjects of different ages. We conclude that deep learning in general, and DeepCyTOF specifically, offers a powerful computational approach for semi-automated gating of CyTOF and flow cytometry data. Our codes and data are publicly available at https://github.com/KlugerLab/deepcytof.git. yuval.kluger@yale.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
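The 98% concordance figure above is simply the fraction of cells whose automated label matches the manual gate. A minimal sketch (labels here are hypothetical cell-type names, not from the paper):

```python
def concordance(manual, predicted):
    """Fraction of cells whose automated gate agrees with manual gating."""
    assert len(manual) == len(predicted)
    return sum(m == p for m, p in zip(manual, predicted)) / len(manual)

concordance(["CD4", "CD8", "CD4", "NK"], ["CD4", "CD8", "CD4", "CD8"])   # 0.75
```

In practice this is computed per sample and then summarized across replicates and instruments, as in the 16-replicate comparison the authors report.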
NASA Astrophysics Data System (ADS)
Sheridan, M. F.; Stinton, A. J.; Patra, A.; Pitman, E. B.; Bauer, A.; Nichita, C. C.
2005-01-01
The Titan2D geophysical mass-flow model is evaluated by comparing its simulation results, and those obtained from another flow model, FLOW3D, with published data on the 1963 Little Tahoma Peak avalanches on Mount Rainier, Washington. The avalanches, totaling approximately 10×10⁶ m³ of broken lava blocks and other debris, traveled 6.8 km horizontally and fell 1.8 km vertically (H/L = 0.246). Velocities calculated from runup range from 24 to 42 m/s and may have been as high as 130 m/s while the avalanches passed over Emmons Glacier. Titan2D is a code for an incompressible Coulomb continuum; it is a depth-averaged, 'shallow-water', granular-flow model. The conservation equations for mass and momentum are solved with a Coulomb-type friction term at the basal interface. The governing equations are solved on multiple processors using a parallel, adaptive-mesh, Godunov scheme. Adaptive gridding dynamically concentrates computing power in regions of special interest; mesh refinement and coarsening key on the perimeter of the moving avalanche. The model flow initiates as a pile defined as an ellipsoid by a height (z) and an elliptical base defined by radii in the x and y planes. Flow parameters are the internal friction angle and bed friction angle. Results from the model are similar in terms of velocity history, lateral spreading, location of runup areas, and final distribution of the Little Tahoma Peak deposit. The avalanches passed over the Emmons Glacier along their upper flow paths, but lower in the valley they traversed stream gravels and glacial outwash deposits. This presents difficulty in assigning an appropriate bed friction angle for the entire deposit. Incorporation of variable bed friction angles into the model using GIS will help to resolve this issue.
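Two of the figures quoted above follow from simple kinematics: the mobility ratio H/L, and velocity estimated from runup height assuming full conversion of kinetic to potential energy, v = sqrt(2gh). A sketch with illustrative inputs (the published H/L of 0.246 uses the authors' own path geometry, not these round numbers):

```python
import math

def mobility_ratio(fall_height_m, runout_length_m):
    """H/L mobility ratio of a rock avalanche (fall height over runout length)."""
    return fall_height_m / runout_length_m

def velocity_from_runup(runup_height_m, g=9.81):
    """Velocity implied by a runup of height h, assuming frictionless
    energy conversion: v = sqrt(2*g*h)."""
    return math.sqrt(2.0 * g * runup_height_m)

mobility_ratio(1800.0, 7200.0)    # 0.25 for these illustrative inputs
velocity_from_runup(50.0)         # about 31 m/s, within the 24-42 m/s range quoted
```

Because friction is neglected, runup-derived velocities of this kind are lower bounds on the true flow speed.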
Gender Differences in the Determinants of the Willingness to Accept an International Assignment
ERIC Educational Resources Information Center
van der Velde, Mandy E. G.; Bossink, Carin J. H.; Jansen, Paul G. W.
2005-01-01
Multinational organisations experience difficulties in finding managers willing to accept international assignments. This study has therefore focused on factors that can predict males' and females' willingness to accept international assignments, or to follow their partners on international assignments. Hypotheses were formulated based on the…
Gene Polymorphism Studies in a Teaching Laboratory
NASA Astrophysics Data System (ADS)
Shultz, Jeffry
2009-02-01
I present a laboratory procedure for illustrating transcription, post-transcriptional modification, gene conservation, and comparative genetics for use in undergraduate biology education. Students are individually assigned genes in a targeted biochemical pathway, for which they design and test polymerase chain reaction (PCR) primers. In this example, students used genes annotated for the steroid biosynthesis pathway in soybean. The authoritative Kyoto Encyclopedia of Genes and Genomes (KEGG) interactive database and other online resources were used to design primers based first on soybean expressed sequence tags (ESTs), then on ESTs from an alternate organism if soybean sequence was unavailable. Students designed a total of 50 gene-based primer pairs (37 soybean, 13 alternative) and tested these for polymorphism state and similarity between two soybean and two pea lines. Student assessment was based on acquisition of laboratory skills and successful project completion. This simple procedure illustrates conservation of genes and is not limited to soybean or pea. Cost-per-student estimates are included, along with a detailed protocol and flow diagram of the procedure.
Flow Cytometric Human Leukocyte Antigen-B27 Typing with Stored Samples for Batch Testing
Seo, Bo Young
2013-01-01
Background Flow cytometry (FC) HLA-B27 typing is still used extensively for the diagnosis of spondyloarthropathies. If patient blood samples are stored for a prolonged duration, this testing can be performed in a batch manner, and in-house cellular controls could easily be procured. In this study, we investigated various methods of storing patient blood samples. Methods We compared four storage methods: three methods of analyzing lymphocytes (whole blood stored at room temperature, frozen mononuclear cells, and frozen white blood cells [WBCs] after lysing red blood cells [RBCs]), and one method using frozen platelets (FPLT). We used three ratios associated with mean fluorescence intensities (MFI) for HLA-B27 assignment: the B27 MFI ratio (sample/control) for HLA-B27 fluorescein-5-isothiocyanate (FITC); the B7 MFI ratio for HLA-B7 phycoerythrin (PE); and the ratio of these two ratios, the B7/B27 ratio. Results Comparing the B27 MFI ratios of each storage method for the HLA-B27+ samples and the B7/B27 ratios for the HLA-B7+ samples revealed that FPLT was the best of the four methods. FPLT had a sensitivity of 100% and a specificity of 99.3% for HLA-B27 assignment in DNA-typed samples (N=164) when the two criteria, namely, B27 MFI ratio >4.0 and B7/B27 ratio <1.5, were used. Conclusions The FPLT method was found to offer a simple, economical, and accurate method of FC HLA-B27 typing by using stored patient samples. If stored samples are used, this method has the potential to replace the standard FC typing method when used in combination with a complementary DNA-based method. PMID:23667843
Samberg, Leah H; Fishman, Lila; Allendorf, Fred W
2013-01-01
Conservation strategies are increasingly driven by our understanding of the processes and patterns of gene flow across complex landscapes. The expansion of population genetic approaches into traditional agricultural systems requires understanding how social factors contribute to that landscape, and thus to gene flow. This study incorporates extensive farmer interviews and population genetic analysis of barley landraces (Hordeum vulgare) to build a holistic picture of farmer-mediated gene flow in an ancient, traditional agricultural system in the highlands of Ethiopia. We analyze barley samples at 14 microsatellite loci across sites at varying elevations and locations across a contiguous mountain range, and across farmer-identified barley types and management strategies. Genetic structure is analyzed using population-based and individual-based methods, including measures of population differentiation and genetic distance, multivariate Principal Coordinate Analysis, and Bayesian assignment tests. Phenotypic analysis links genetic patterns to traits identified by farmers. We find that differential farmer management strategies lead to markedly different patterns of population structure across elevation classes and barley types. The extent to which farmer seed management appears as a stronger determinant of spatial structure than the physical landscape highlights the need for incorporation of social, landscape, and genetic data for the design of conservation strategies in human-influenced landscapes. PMID:24478796
NASA Astrophysics Data System (ADS)
Farlin, J.; Drouet, L.; Gallé, T.; Pittois, D.; Bayerle, M.; Braun, C.; Maloszewski, P.; Vanderborght, J.; Elsner, M.; Kies, A.
2013-06-01
A simple method to delineate the recharge areas of a series of springs draining a fractured aquifer is presented. Instead of solving the flow and transport equations, the delineation is reformulated as a mass balance problem assigning arable land in proportion to the pesticide mass discharged annually in a spring at minimum total transport cost. The approach was applied to the Luxembourg Sandstone, a fractured-rock aquifer supplying half of the drinking water for Luxembourg, using the herbicide atrazine. Predictions of the recharge areas were most robust in situations of strong competition by neighbouring springs while the catchment boundaries for isolated springs were extremely sensitive to the parameter controlling flow direction. Validation using a different pesticide showed the best agreement with the simplest model used, whereas using historical crop-rotation data and spatially distributed soil-leaching data did not improve predictions. The whole approach presents the advantage of integrating objectively information on land use and pesticide concentration in spring water into the delineation of groundwater recharge zones in a fractured-rock aquifer.
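The mass-balance reformulation described above lends itself to a small linear-programming sketch. The following is a hypothetical illustration only (parcel leaching masses, spring loads, and distance costs are all invented, and the real study's cost model is not reproduced): fractions of arable-land parcels are allocated to springs so that each spring's annual pesticide discharge is matched, at minimum total transport cost.

```python
# Hypothetical sketch of the mass-balance delineation idea (all numbers invented):
# allocate fractions x[i, j] of arable-land parcels j to springs i so that each
# spring's observed annual pesticide mass is matched, at minimum total cost.
import numpy as np
from scipy.optimize import linprog

m = np.array([1.0, 2.0, 1.5, 0.5])       # annual pesticide leaching per parcel (kg/yr)
M = np.array([2.0, 1.5])                 # annual pesticide mass observed in each spring (kg/yr)
cost = np.array([[1.0, 2.0, 3.0, 4.0],   # cost[i, j]: "transport cost" spring i <- parcel j
                 [4.0, 3.0, 2.0, 1.0]])

n_springs, n_parcels = cost.shape
c = cost.ravel()                         # decision vector x[i, j], flattened row-wise

# Mass balance: for each spring i, sum_j m[j] * x[i, j] == M[i]
A_eq = np.zeros((n_springs, n_springs * n_parcels))
for i in range(n_springs):
    A_eq[i, i * n_parcels:(i + 1) * n_parcels] = m

# Each parcel can be assigned at most once in total: sum_i x[i, j] <= 1
A_ub = np.tile(np.eye(n_parcels), n_springs)

res = linprog(c, A_ub=A_ub, b_ub=np.ones(n_parcels),
              A_eq=A_eq, b_eq=M, bounds=(0, 1))
x = res.x.reshape(n_springs, n_parcels)
print(res.status, np.round(x, 3))        # status 0: optimal allocation found
```

The solver returns one minimum-cost allocation; as in the abstract, isolated "springs" with little competition admit many near-optimal allocations, which is where the approach is least constrained.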
Domain decomposition methods for the parallel computation of reacting flows
NASA Technical Reports Server (NTRS)
Keyes, David E.
1988-01-01
Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
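The serial method the abstract judges best, GMRES with ILU preconditioning, can be sketched in a few lines. This is not the paper's code: the matrix below is a small invented nonsymmetric convection-diffusion-like system standing in for one Newton-step Jacobian, solved with SciPy's restarted GMRES and an incomplete-LU preconditioner.

```python
# Minimal sketch (not the paper's implementation): restarted GMRES with an ILU
# preconditioner on a small sparse nonsymmetric system, via SciPy.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Invented 1-D convection-diffusion stencil: diagonally dominant, nonsymmetric.
A = sp.diags([-1.3 * np.ones(n - 1), 2.6 * np.ones(n), -0.7 * np.ones(n - 1)],
             [-1, 0, 1], format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)            # incomplete LU factorisation of A
M = spla.LinearOperator((n, n), ilu.solve)    # preconditioner action M ~ A^-1

x, info = spla.gmres(A, b, M=M, restart=30)
print(info, np.linalg.norm(A @ x - b))        # info == 0 means converged
```

The `drop_tol` and `restart` values are illustrative; in practice they trade preconditioner fill-in and memory against iteration count, the same balance the paper weighs via iteration and inner-product counts.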
Development of a particle method of characteristics (PMOC) for one-dimensional shock waves
NASA Astrophysics Data System (ADS)
Hwang, Y.-H.
2018-03-01
In the present study, a particle method of characteristics is put forward to simulate the evolution of one-dimensional shock waves in barotropic gas, closed-conduit, open-channel, and two-phase flows. All these flow phenomena can be described with the same set of governing equations. The proposed scheme is established based on the characteristic equations and formulated by assigning the computational particles to move along the characteristic curves. Both the right- and left-running characteristics are traced and represented by their associated computational particles. It inherits the computational merits of the conventional method of characteristics (MOC) and the moving particle method, but without their individual deficiencies. In addition, special particles with dual states, constructed by enforcing the Rankine-Hugoniot relation, are deliberately imposed to emulate the shock structure. Numerical tests are carried out by solving some benchmark problems, and the computational results are compared with available analytical solutions. From the derivation procedure and obtained computational results, it is concluded that the proposed PMOC will be a useful tool to replicate one-dimensional shock waves.
An expert system for planning and scheduling in a telerobotic environment
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.; Park, Eui H.
1991-01-01
A knowledge based approach to assigning tasks to multi-agents working cooperatively in jobs that require a telerobot in the loop was developed. The generality of the approach allows for such a concept to be applied in a nonteleoperational domain. The planning architecture known as the task oriented planner (TOP) uses the principle of flow mechanism and the concept of planning by deliberation to preserve and use knowledge about a particular task. The TOP is an open ended architecture developed with a NEXPERT expert system shell and its knowledge organization allows for indirect consultation at various levels of task abstraction. Considering that a telerobot operates in a hostile and nonstructured environment, task scheduling should respond to environmental changes. A general heuristic was developed for scheduling jobs with the TOP system. The technique is not to optimize a given scheduling criterion as in classical job and/or flow shop problems. For a teleoperation job schedule, criteria are situation dependent. A criterion selection is fuzzily embedded in the task-skill matrix computation. However, goal achievement with minimum expected risk to the human operator is emphasized.
A hybrid variational ensemble data assimilation for the HIgh Resolution Limited Area Model (HIRLAM)
NASA Astrophysics Data System (ADS)
Gustafsson, N.; Bojarova, J.; Vignes, O.
2014-02-01
A hybrid variational ensemble data assimilation has been developed on top of the HIRLAM variational data assimilation. It provides the possibility of applying a flow-dependent background error covariance model during the data assimilation at the same time as full rank characteristics of the variational data assimilation are preserved. The hybrid formulation is based on an augmentation of the assimilation control variable with localised weights to be assigned to a set of ensemble member perturbations (deviations from the ensemble mean). The flow-dependency of the hybrid assimilation is demonstrated in single simulated observation impact studies and the improved performance of the hybrid assimilation in comparison with pure 3-dimensional variational as well as pure ensemble assimilation is also proven in real observation assimilation experiments. The performance of the hybrid assimilation is comparable to the performance of the 4-dimensional variational data assimilation. The sensitivity to various parameters of the hybrid assimilation scheme and the sensitivity to the applied ensemble generation techniques are also examined. In particular, the inclusion of ensemble perturbations with a lagged validity time has been examined with encouraging results.
Castillo, Andrés M; Bernal, Andrés; Patiny, Luc; Wist, Julien
2015-08-01
We present a method for the automatic assignment of small molecules' NMR spectra. The method includes an automatic and novel self-consistent peak-picking routine that validates NMR peaks in each spectrum against peaks in the same or other spectra that are due to the same resonances. The auto-assignment routine used is based on branch-and-bound optimization and relies predominantly on integration and correlation data; chemical shift information may be included when available to speed up the search and shorten the list of viable assignments, but in most cases tested, it is not required in order to find the correct assignment. This automatic assignment method is implemented as a web-based tool that runs without any user input other than the acquired spectra. Copyright © 2015 John Wiley & Sons, Ltd.
Ferreira da Silva, Maria Joana; Kopp, Gisela H; Casanova, Catarina; Godinho, Raquel; Minhós, Tânia; Sá, Rui; Zinner, Dietmar; Bruford, Michael W
2018-01-01
Dispersal is a demographic process that can potentially counterbalance the negative impacts of anthropogenic habitat fragmentation. However, mechanisms of dispersal may become modified in populations living in human-dominated habitats. Here, we investigated dispersal in Guinea baboons (Papio papio) in areas with contrasting levels of anthropogenic fragmentation, as a case study. Using molecular data, we compared the direction and extent of sex-biased gene flow in two baboon populations: from Guinea-Bissau (GB, fragmented distribution, human-dominated habitat) and Senegal (SEN, continuous distribution, protected area). Individual-based Bayesian clustering, spatial autocorrelation, assignment tests and migrant identification suggested female-mediated gene flow at a large spatial scale for GB with evidence of contact between genetically differentiated males at one locality, which could be interpreted as male-mediated gene flow in southern GB. Gene flow was also found to be female-biased in SEN for a smaller scale. However, in the southwest coastal part of GB, at the same geographic scale as SEN, no sex-biased dispersal was detected and a modest or recent restriction in GB female dispersal seems to have occurred. This population-specific variation in dispersal is attributed to behavioural responses to human activity in GB. Our study highlights the importance of considering the genetic consequences of disrupted dispersal patterns as an additional impact of anthropogenic habitat fragmentation and is potentially relevant to the conservation of many species inhabiting human-dominated environments.
47 CFR 90.723 - Selection and assignment of frequencies.
Code of Federal Regulations, 2012 CFR
2012-10-01
... shall specify the number of frequencies requested. All frequencies in this band will be assigned by the... assigned only the number of channels justified to meet their requirements. (d) Phase I base or fixed station receivers utilizing 221-222 MHz frequencies assigned from Sub-band A as designated in § 90.715(b...
47 CFR 90.723 - Selection and assignment of frequencies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... shall specify the number of frequencies requested. All frequencies in this band will be assigned by the... assigned only the number of channels justified to meet their requirements. (d) Phase I base or fixed station receivers utilizing 221-222 MHz frequencies assigned from Sub-band A as designated in § 90.715(b...
47 CFR 90.723 - Selection and assignment of frequencies.
Code of Federal Regulations, 2014 CFR
2014-10-01
... shall specify the number of frequencies requested. All frequencies in this band will be assigned by the... assigned only the number of channels justified to meet their requirements. (d) Phase I base or fixed station receivers utilizing 221-222 MHz frequencies assigned from Sub-band A as designated in § 90.715(b...
Strategic planning for health care management information systems.
Rosenberger, H R; Kaiser, K M
1985-01-01
Using a planning methodology and a structured design technique for analyzing data and data flow, information requirements can be derived to produce a strategic plan for a management information system. Such a long-range plan classifies information groups and assigns them priorities according to the goals of the organization. The approach emphasizes user involvement.
Use of an Interactive General-Purpose Computer Terminal to Simulate Training Equipment Operation.
ERIC Educational Resources Information Center
Lahey, George F.; And Others
Trainees from Navy Basic Electricity/Electronics School were assigned to receive either computer-assisted instruction (CAI) or conventional individualized instruction in a segment of a course requiring use of a multimeter to measure resistance and current flow. The (CAI) group used PLATO IV plasma-screen terminals; individualized instruction…
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or timing of cash flows are uncertain and are not fixed under § 436.14, Federal agencies may examine the impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order...
Integrating Business Core Knowledge through Upper Division Report Composition
ERIC Educational Resources Information Center
Roach, Joy; Tracy, Daniel; Durden, Kay
2007-01-01
The most ambitious project of many undergraduate business communication courses is the formal report. This assignment typically requires the use of many writing skills nurtured throughout the course. Skills such as proper style, tone, organization, flow, and mechanics are enhanced through the writing of memos and various types of letters. While…
NASA Technical Reports Server (NTRS)
Smith, Greg
2003-01-01
Schedule risk assessments determine the likelihood of finishing on time. Each task in a schedule has a varying degree of probability of being finished on time. A schedule risk assessment quantifies these probabilities by assigning values to each task. This viewgraph presentation contains a flow chart for conducting a schedule risk assessment, and profiles several applicable methods of data analysis.
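One common way to carry out the quantification described above is Monte Carlo simulation over per-task duration distributions. The sketch below is a generic illustration, not taken from the presentation: task durations and the deadline are invented, tasks are assumed to run in a simple serial chain, and each task gets a triangular (optimistic / most-likely / pessimistic) distribution.

```python
# Hedged illustration of a schedule risk assessment (all numbers invented):
# assign each task a triangular duration distribution, then Monte Carlo the
# serial schedule to estimate the probability of finishing by the deadline.
import random

random.seed(1)
# (optimistic, most likely, pessimistic) durations in days, per task
tasks = [(4, 5, 9), (2, 3, 6), (7, 10, 16), (1, 2, 4)]
deadline = 24.0

n_trials = 20_000
on_time = sum(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks) <= deadline
    for _ in range(n_trials)
)
p_finish = on_time / n_trials
print(f"P(finish by day {deadline:g}) ~ {p_finish:.2f}")
```

A real assessment would also model task dependencies (network paths rather than a single chain) and correlations between task durations, both of which lower the on-time probability relative to this simple sketch.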
An Algorithm for Converting Contours to Elevation Grids.
ERIC Educational Resources Information Center
Reid-Green, Keith S.
Some of the test questions for the National Council of Architectural Registration Boards deal with the site, including drainage, regrading, and the like. Some questions are most easily scored by examining contours, but others, such as water flow questions, are best scored from a grid in which each element is assigned its average elevation. This…
Mahendru, Amita A; Wilhelm-Benartzi, Charlotte S; Wilkinson, Ian B; McEniery, Carmel M; Johnson, Sarah; Lees, Christoph
2016-10-01
Understanding the natural length of human pregnancy is central to clinical care. However, variability in the reference methods used to assign gestational age (GA) confounds our understanding of pregnancy length. Assignment from ultrasound measurement of fetal crown-rump length (CRL) has superseded that based on last menstrual period (LMP). Our aim was to estimate gestational length based on known LMP, ultrasound CRL, and implantation day, and to compare these estimates with pregnancy duration assigned by day of ovulation. Prospective study in 143 women trying to conceive. In 71 ongoing pregnancies, gestational length was estimated from LMP, CRL at 10-14 weeks, ovulation, and implantation day. For each method of GA assignment, the distribution in observed gestational length was derived and both agreement and correlation between the methods determined. Median ovulation and implantation days were 16 and 27, respectively. The gestational length based on LMP, CRL, implantation, and ovulation was similar: 279, 278, 276.5 and 276.5 days, respectively. The distributions for observed gestational length were widest where GA was assigned from CRL and LMP and narrowest when assigned from implantation and ovulation day. The strongest correlation for gestational length assessment was between ovulation and implantation (r = 0.98) and weakest between CRL and LMP (r = 0.88). The most accurate method of predicting gestational length is ovulation day, and this agrees closely with implantation day. Prediction of gestational length from CRL and known LMP are both inferior to ovulation and implantation day. This information could have important implications for the routine assignment of gestational age.
NASA Astrophysics Data System (ADS)
Kestens, Vikram; Roebben, Gert; Herrmann, Jan; Jämting, Åsa; Coleman, Victoria; Minelli, Caterina; Clifford, Charles; De Temmerman, Pieter-Jan; Mast, Jan; Junjie, Liu; Babick, Frank; Cölfen, Helmut; Emons, Hendrik
2016-06-01
A new certified reference material for quality control of nanoparticle size analysis methods has been developed and produced by the Institute for Reference Materials and Measurements of the European Commission's Joint Research Centre. The material, ERM-FD102, consists of an aqueous suspension of a mixture of silica nanoparticle populations of distinct particle size and origin. The characterisation relied on an interlaboratory comparison study in which 30 laboratories of demonstrated competence participated with a variety of techniques for particle size analysis. After scrutinising the received datasets, certified and indicative values for different method-defined equivalent diameters that are specific for dynamic light scattering (DLS), centrifugal liquid sedimentation (CLS), scanning and transmission electron microscopy (SEM and TEM), atomic force microscopy (AFM), particle tracking analysis (PTA) and asymmetrical-flow field-flow fractionation (AF4) were assigned. The value assignment was a particular challenge because metrological concepts were not always interpreted uniformly across all participating laboratories. This paper presents the main elements and results of the ERM-FD102 characterisation study and discusses in particular the key issues of measurand definition and the estimation of measurement uncertainty.
The electronic emission spectrum of methylnitrene
NASA Astrophysics Data System (ADS)
Carrick, P. G.; Engelking, P. C.
1984-08-01
The Ã ³E-X̃ ³A₂ ultraviolet emission spectrum of methylnitrene (CH₃N) was obtained in two ways: (1) by reacting methyl azide (CH₃N₃) with metastable N₂ in a flowing afterglow; and (2) by discharging a mixture of methyl azide (CH₃N₃) and helium in a corona-excited supersonic expansion (CESE). The origin appears at T₀ = 31 811 cm⁻¹. Several vibrational progressions were observed, leading to the determination of a number of vibrational frequencies: ν″₁ = 2938, ν″₂ = 1350, ν″₃ = 1039, ν″₄ = 3065, and ν″₆ = 902 cm⁻¹. Deuterium substitution confirmed the assignments of the vibrational frequencies. The X̃ ³A₂ state is a normal, bound local minimum on the triplet electronic potential surface, and the upper Ã ³E state is able to support at least one quantum of vibration, assigned to ν₃, predominantly a C-N stretch. A comparison of flowing-afterglow hollow-cathode discharge sources and corona-excited supersonic expansion sources shows the advantage of the CESE method of radical production for spectroscopy.
Russ, Daniel E; Ho, Kwan-Yuet; Colt, Joanne S; Armenti, Karla R; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P; Karagas, Margaret R; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T; Johnson, Calvin A; Friesen, Melissa C
2016-06-01
Mapping job titles to standardised occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiological studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14 983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in 2 occupational data sources by comparing SOC codes obtained from SOCcer to expert-assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. For 11 991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6-digit and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (κ 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiological studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
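The scoring idea can be illustrated with a toy re-creation. Everything below is invented for illustration (the weights, classifier outputs, and candidate codes are not SOCcer's actual values): per-candidate classifier scores are combined through a logistic model, and the highest-scoring SOC code is assigned.

```python
# Toy sketch of logistic score combination for SOC assignment (all weights and
# classifier outputs are hypothetical, not SOCcer's trained model).
import math

# Empirical weights a trained logistic model might supply (invented values).
W = {"intercept": -2.0, "title": 3.0, "task": 1.5, "industry": 1.0}

def soc_score(features):
    """Logistic combination of per-candidate classifier scores."""
    z = W["intercept"] + sum(W[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Classifier outputs for one free-text job description vs. three candidate codes.
candidates = {
    "47-2061": {"title": 0.9, "task": 0.7, "industry": 0.6},
    "53-7062": {"title": 0.4, "task": 0.5, "industry": 0.3},
    "37-3011": {"title": 0.2, "task": 0.1, "industry": 0.2},
}

scored = {soc: soc_score(f) for soc, f in candidates.items()}
best = max(scored, key=scored.get)
print(best, round(scored[best], 3))   # the highest-scoring SOC code is assigned
```

Because the score is monotone in the evidence, low-scoring winners flag exactly the assignments the abstract suggests routing to manual review.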
Insights into asthenospheric anisotropy and deformation in Mainland China
NASA Astrophysics Data System (ADS)
Zhu, Tao
2018-03-01
Seismic anisotropy can provide direct constraints on asthenospheric deformation which also can be induced by the inherent mantle flow within our planet. Mantle flow calculations thus have been an effective tool to probe asthenospheric anisotropy. To explore the source of seismic anisotropy, asthenospheric deformation and the effects of mantle flow on seismic anisotropy in Mainland China, mantle flow models driven by plate motion (plate-driven) and by a combination of plate motion and mantle density heterogeneity (plate-density-driven) are used to predict the fast polarization direction of shear wave splitting. Our results indicate that: (1) plate-driven or plate-density-driven mantle flow significantly affects the predicted fast polarization direction when compared with simple asthenospheric flow commonly used in interpreting the asthenospheric source of seismic anisotropy, and thus new insights are presented; (2) plate-driven flow controls the fast polarization direction while thermal mantle flow affects asthenospheric deformation rate and local deformation direction significantly; (3) asthenospheric flow is an assignable contributor to seismic anisotropy, and the asthenosphere is undergoing low, large or moderate shear deformation controlled by the strain model, the flow plane/flow direction model or both in most regions of central and eastern China; and (4) the asthenosphere is under more rapid extension deformation in eastern China than in western China.
Different Simultaneous Sleep States in the Hippocampus and Neocortex.
Emrick, Joshua J; Gross, Brooks A; Riley, Brett T; Poe, Gina R
2016-12-01
Investigators assign sleep-waking states using brain activity collected from a single site, with the assumption that states occur at the same time throughout the brain. We sought to determine if sleep-waking states differ between two separate structures: the hippocampus and neocortex. We measured electrical signals (electroencephalograms and electromyograms) during sleep from the hippocampus and neocortex of five freely behaving adult male rats. We assigned sleep-waking states in 10-sec epochs based on standard scoring criteria across a 4-h recording, then analyzed and compared states and signals from simultaneous epochs between sites. We found that the total amount of each state, assigned independently using the hippocampal and neocortical signals, was similar between the hippocampus and neocortex. However, states at simultaneous epochs were different as often as they were the same (P = 0.82). Furthermore, we found that the progression of states often flowed through asynchronous state-pairs led by the hippocampus. For example, the hippocampus progressed from transition-to-rapid eye movement sleep to rapid eye movement sleep before the neocortex more often than in synchrony with the neocortex (38.7 ± 16.2% versus 15.8 ± 5.6% mean ± standard error of the mean). We demonstrate that hippocampal and neocortical sleep-waking states often differ in the same epoch. Consequently, electrode location affects estimates of sleep architecture, state transition timing, and perhaps even percentage of time in sleep states. Therefore, under normal conditions, models assuming brain state homogeneity should not be applied to the sleeping or waking brain. © 2016 Associated Professional Sleep Societies, LLC.
Kondo, Keiko; Morino, Katsutaro; Nishio, Yoshihiko; Kondo, Motoyuki; Nakao, Keiko; Nakagawa, Fumiyuki; Ishikado, Atsushi; Sekine, Osamu; Yoshizaki, Takeshi; Kashiwagi, Atsunori; Ugi, Satoshi; Maegawa, Hiroshi
2014-07-01
The beneficial effects of fish and n-3 polyunsaturated fatty acids (PUFAs) consumption on atherosclerosis have been reported in numerous epidemiological studies. However, to the best of our knowledge, the effects of a fish-based diet intervention on endothelial function have not been investigated. Therefore, we studied these effects in postmenopausal women with type 2 diabetes mellitus (T2DM). Twenty-three postmenopausal women with T2DM were assigned to two four-week periods of either a fish-based diet (n-3 PUFAs ≧ 3.0 g/day) or a control diet in a randomized crossover design. Endothelial function was measured with reactive hyperemia using strain-gauge plethysmography and compared with the serum levels of fatty acids and their metabolites. Endothelial function was determined with peak forearm blood flow (Peak), duration of reactive hyperemia (Duration) and flow debt repayment (FDR). A fish-based dietary intervention improved Peak by 63.7%, Duration by 27.9% and FDR by 70.7%, compared to the control diet. Serum n-3 PUFA levels increased after the fish-based diet period and decreased after the control diet, compared with the baseline (1.49 vs. 0.97 vs. 1.19 mmol/l, p < 0.0001). There was no correlation between serum n-3 PUFA levels and endothelial function. An increased ratio of epoxyeicosatrienoic acid/dihydroxyeicosatrienoic acid was observed after a fish-based diet intervention, possibly due to the inhibition of the activity of soluble epoxide hydrolase. A fish-based dietary intervention improves endothelial function in postmenopausal women with T2DM. Dissociation between the serum n-3 PUFA concentration and endothelial function suggests that the other factors may contribute to this phenomenon. Copyright © 2014 Elsevier Inc. All rights reserved.
Nurse-patient assignment models considering patient acuity metrics and nurses' perceived workload.
Sir, Mustafa Y; Dundar, Bayram; Barker Steege, Linsey M; Pasupathy, Kalyan S
2015-06-01
Patient classification systems (PCSs) are commonly used in nursing units to assess how many nursing care hours are needed to care for patients. These systems then provide staffing and nurse-patient assignment recommendations for a given patient census based on these acuity scores. Our hypothesis is that such systems do not accurately capture workload and we conduct an experiment to test this hypothesis. Specifically, we conducted a survey study to capture nurses' perception of workload in an inpatient unit. Forty five nurses from oncology and surgery units completed the survey and rated the impact of patient acuity indicators on their perceived workload using a six-point Likert scale. These ratings were used to calculate a workload score for an individual nurse given a set of patient acuity indicators. The approach offers optimization models (prescriptive analytics), which use patient acuity indicators from a commercial PCS as well as a survey-based nurse workload score. The models assign patients to nurses in a balanced manner by distributing acuity scores from the PCS and survey-based perceived workload. Numerical results suggest that the proposed nurse-patient assignment models achieve a balanced assignment and lower overall survey-based perceived workload compared to the assignment based solely on acuity scores from the PCS. This results in an improvement of perceived workload that is upwards of five percent. Copyright © 2015 Elsevier Inc. All rights reserved.
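The balancing objective described above can be sketched with a simple heuristic. This is an illustration only (the workload scores are invented, and a greedy longest-processing-time rule stands in for the paper's optimization models): patients are placed one by one, highest score first, onto the currently least-loaded nurse.

```python
# Illustrative sketch of balanced nurse-patient assignment (scores invented):
# greedy "largest score to the least-loaded nurse" as a stand-in for the
# paper's optimization models.
patient_scores = [7.5, 3.0, 5.5, 4.0, 6.0, 2.5, 5.0, 3.5]   # perceived-workload scores
n_nurses = 3

loads = [0.0] * n_nurses
assignment = {i: [] for i in range(n_nurses)}

# Placing descending scores first tightens the balance (LPT heuristic).
for pid, score in sorted(enumerate(patient_scores), key=lambda t: -t[1]):
    nurse = min(range(n_nurses), key=loads.__getitem__)
    loads[nurse] += score
    assignment[nurse].append(pid)

print(loads)          # per-nurse workload totals, roughly equal
```

An exact model, like those in the paper, would instead minimize the spread of the totals directly (e.g. as an integer program), which matters when scores from the PCS and the survey must be balanced simultaneously.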
21 CFR 1271.27 - Will FDA assign me a registration number?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Will FDA assign me a registration number? 1271.27..., TISSUES, AND CELLULAR AND TISSUE-BASED PRODUCTS Procedures for Registration and Listing § 1271.27 Will FDA assign me a registration number? (a) FDA will assign each location a permanent registration number. (b...
Theoretical and subjective bit assignments in transform picture
NASA Technical Reports Server (NTRS)
Jones, H. W., Jr.
1977-01-01
It is shown that all combinations of symmetrical input distributions with difference distortion measures give a bit assignment rule identical to the well-known rule for a Gaussian input distribution with mean-square error. Published work is examined to show that the bit assignment rule is useful for transforms of full pictures, but subjective bit assignments for transform picture coding using small block sizes are significantly different from the theoretical bit assignment rule. An intuitive explanation is based on subjective design experience, and a subjectively obtained bit assignment rule is given.
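The "well-known rule" referred to here allocates bits to transform coefficients according to their variances. A minimal sketch of that classical rule (Gaussian source, mean-square error), with illustrative variable names not taken from the paper:

```python
import math

# Classical bit-assignment rule: each transform coefficient k receives the
# average rate plus half the log-ratio of its variance to the geometric
# mean of all coefficient variances. Total rate is preserved.

def bit_assignment(variances, avg_bits):
    n = len(variances)
    geo_mean = math.exp(sum(math.log(v) for v in variances) / n)
    return [avg_bits + 0.5 * math.log2(v / geo_mean) for v in variances]

bits = bit_assignment([16.0, 4.0, 1.0, 1.0], avg_bits=2.0)
# High-variance coefficients receive more bits: [3.25, 2.25, 1.25, 1.25]
```

Real coders round or clip these (possibly negative) fractional allocations; the subjective rules discussed in the abstract deviate from this theoretical curve for small block sizes.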
Monleón, Daniel; Colson, Kimberly; Moseley, Hunter N B; Anklin, Clemens; Oswald, Robert; Szyperski, Thomas; Montelione, Gaetano T
2002-01-01
Rapid data collection, spectral referencing, processing by time domain deconvolution, peak picking and editing, and assignment of NMR spectra are necessary components of any efficient integrated system for protein NMR structure analysis. We have developed a set of software tools designated AutoProc, AutoPeak, and AutoAssign, which function together with the data processing and peak-picking programs NMRPipe and Sparky, to provide an integrated software system for rapid analysis of protein backbone resonance assignments. In this paper we demonstrate that these tools, together with high-sensitivity triple resonance NMR cryoprobes for data collection and a Linux-based computer cluster architecture, can be combined to provide nearly complete backbone resonance assignments and secondary structures (based on chemical shift data) for a 59-residue protein in less than 30 hours of data collection and processing time. In this optimum case of a small protein providing excellent spectra, extensive backbone resonance assignments could also be obtained using less than 6 hours of data collection and processing time. These results demonstrate the feasibility of high throughput triple resonance NMR for determining resonance assignments and secondary structures of small proteins, and the potential for applying NMR in large scale structural proteomics projects.
Evacuee Compliance Behavior Analysis using High Resolution Demographic Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Wei; Han, Lee; Liu, Cheng
2014-01-01
The purpose of this study is to examine whether evacuee compliance with route assignments derived from different resolutions of demographic data affects evacuation performance. Most existing evacuation strategies assume that travelers will follow evacuation instructions, while in reality a certain percentage of evacuees do not comply with prescribed instructions. In this paper, a comparison study of evacuation assignment based on Traffic Analysis Zones (TAZ) and on high-resolution LandScan USA Population Cells (LPC) was conducted for the detailed road network representing Alexandria, Virginia. A revised platform for evacuation modeling built on high-resolution demographic data and activity-based microscopic traffic simulation is proposed. The results indicate that evacuee compliance behavior affects evacuation efficiency with traditional TAZ assignment, but it does not significantly compromise efficiency with high-resolution LPC assignment. The TAZ assignment also underestimates the real travel time during evacuation, especially for high-compliance simulations. This suggests that conventional evacuation studies based on TAZ assignment might not be effective at providing efficient guidance to evacuees. From the high-resolution data perspective, traveler compliance behavior is an important factor, but it does not impact system performance significantly. Evacuee compliance behavior analysis should therefore emphasize individual-level route and shelter assignments rather than whole-system performance.
32 CFR 700.721 - Administration and discipline: Staff based ashore.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 5 2011-07-01 2011-07-01 false Administration and discipline: Staff based... discipline: Staff based ashore. When a staff is based ashore, the enlisted persons serving with the staff... discipline. The staff officers may be similarly assigned. Members of a staff assigned for any purpose to a...
NASA Astrophysics Data System (ADS)
Streubel, D. P.; Kodama, K.
2014-12-01
To provide continuous flash flood situational awareness and to better differentiate the severity of ongoing individual precipitation events, the National Weather Service Research Distributed Hydrologic Model (RDHM) is being implemented over Hawaii and Alaska. In the implementation process of RDHM, three gridded precipitation analyses are used as forcing. The first analysis is a radar-only precipitation estimate derived from WSR-88D digital hybrid reflectivity and a Z-R relationship, aggregated onto an hourly ¼ HRAP grid. The second analysis is derived from a rain gauge network and interpolated onto an hourly ¼ HRAP grid using PRISM climatology. The third analysis is derived from a rain gauge network where rain gauges are assigned static pre-determined weights to derive a uniform mean areal precipitation that is applied over a catchment on a ¼ HRAP grid. To assess the effect of the different QPE analyses on the accuracy of RDHM simulations, and to potentially identify a preferred analysis for operational use, each QPE was used to force RDHM to simulate stream flow for 20 USGS peak flow events. The evaluation of the RDHM simulations focused on peak flow magnitude, peak flow timing, and event volume accuracy, as these are most relevant for operational use. Results showed that RDHM simulations based on the observed rain gauge amounts were more accurate in simulating peak flow magnitude and event volume relative to the radar-derived analysis. However, this result was not consistent across all 20 events, nor was it consistent for a few of the rainfall events where an annual peak flow was recorded at more than one USGS gage. This indicates that a more robust QPE forcing, incorporating uncertainty derived from the three analyses, may provide a better input for simulating extreme peak flow events.
Bertlich, Mattis; Ihler, Friedrich; Freytag, Saskia; Weiss, Bernhard G; Strupp, Michael; Canis, Martin
2015-01-01
Betahistine is a histamine-like drug that is considered beneficial in Ménière's disease by increasing cochlear blood flow. Because betahistine acts as an agonist at the histamine H1-receptor and as an inverse agonist at the H3-receptor, these receptors, as well as the adrenergic α2-receptor, were investigated for their role in betahistine's effects on cochlear blood flow. A total of 54 Dunkin-Hartley guinea pigs were randomly assigned to one of nine groups treated with a selection of H1-, H3- or α2-selective agonists and antagonists together with betahistine. Cochlear blood flow and mean arterial pressure were recorded for 3 min before and 15 min after infusion. Blockade of the H3- or α2-receptors suppressed the typical betahistine-mediated changes in cochlear blood flow and blood pressure. Activation of H3-receptors caused a drop in cochlear blood flow and blood pressure. H1-receptors showed no involvement in betahistine-mediated changes of cochlear blood flow. Betahistine most likely affects cochlear blood flow through histaminergic H3-heteroreceptors. © 2015 S. Karger AG, Basel.
Application of dynamic traffic assignment to advanced managed lane modeling.
DOT National Transportation Integrated Search
2013-11-01
In this study, a demand estimation framework is developed for assessing managed lane (ML) strategies by utilizing dynamic traffic assignment (DTA) modeling, instead of the traditional approaches that are based on static traffic assignment...
Carcelén, María; Abascal, Estefanía; Herranz, Marta; Santantón, Sheila; Zenteno, Roberto; Ruiz Serrano, María Jesús; Bouza, Emilio
2017-01-01
The assignment of lineages in Mycobacterium tuberculosis (MTB) provides valuable information for evolutionary and phylogeographic studies and improves knowledge of the distribution of this pathogen worldwide. Differences in virulence have also been found for certain lineages. MTB isolates were initially assigned to lineages based on data obtained from genotyping techniques, such as spoligotyping or MIRU-VNTR analysis, some of which are more suitable for molecular epidemiology studies. However, since these methods are subject to a certain degree of homoplasy, other criteria have been chosen to assign lineages, based on targeting robust SNPs specific to each lineage. Here, we propose two newly designed multiplex targeting methods, both single-tube tests, to optimize the assignment of the six main lineages in MTB. The first method is based on ASO-PCR and offers an inexpensive and easy-to-implement assay for laboratories with limited resources. The other, which is based on SNaPshot, enables more refined, standardized assignment of lineages for laboratories with better resources. Both methods performed well when assigning lineages from cultured isolates from a control panel, a test panel, and a problem panel from an unrelated population, Mexico, which included isolates in which standard genotyping was not able to classify lineages. Both tests were also able to assign lineages from stored isolates, without the need for subculture or purification of DNA, and even directly from clinical specimens with a medium-to-high bacilli burden. Our assays could broaden the contexts in which information on lineages can be acquired, enabling us to quickly update data from retrospective collections and to merge those data with data obtained at the time of diagnosis of a new TB case. PMID:29091913
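The SNP-based assignment principle described above can be sketched very simply: diagnostic SNPs map to lineages, and an isolate is assigned by matching its observed calls. The positions and lineage labels below are invented placeholders, not the markers targeted by the authors' assays.

```python
# Illustrative sketch only: lineage-specific SNPs encoded as a lookup from
# (genome position, base) to lineage. Positions here are hypothetical.

LINEAGE_SNPS = {
    (615938, "A"): "Lineage 1",
    (497491, "G"): "Lineage 2",
    (3273107, "C"): "Lineage 3",
    (931123, "T"): "Lineage 4",
}

def assign_lineage(snp_calls):
    """Return the lineage matching any diagnostic SNP call, else None."""
    for call in snp_calls:
        if call in LINEAGE_SNPS:
            return LINEAGE_SNPS[call]
    return None

lineage = assign_lineage([(931123, "T"), (12345, "A")])
# -> "Lineage 4"
```

Because the diagnostic SNPs are chosen to be robust and lineage-specific, a single matching call suffices, which is what makes single-tube multiplex assays attractive.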
Integration of Grid and Local Batch Resources at DESY
NASA Astrophysics Data System (ADS)
Beyer, Christoph; Finnern, Thomas; Gellrich, Andreas; Hartmann, Thomas; Kemp, Yves; Lewendel, Birgit
2017-10-01
As one of the largest resource centres, DESY has to support differing workflows of users from various scientific backgrounds. Users include HEP experiments in WLCG or Belle II and local HEP users, as well as physicists from other fields such as photon science or accelerator development. By abandoning specific worker-node setups in favour of generic flat nodes with middleware resources provided via CVMFS, we gain the flexibility to subsume different use cases in a homogeneous environment. Grid jobs and the local batch system are managed in an HTCondor-based setup, accepting pilot, user and containerized jobs. The unified setup allows dynamic re-assignment of resources between the different use cases. Monitoring is implemented on global batch system metrics as well as on a per-job level utilizing corresponding cgroup information.
A real-time expert system for self-repairing flight control
NASA Technical Reports Server (NTRS)
Gaither, S. A.; Agarwal, A. K.; Shah, S. C.; Duke, E. L.
1989-01-01
An integrated environment for specifying, prototyping, and implementing a self-repairing flight-control (SRFC) strategy is described. At an interactive workstation, the user can select paradigms such as rule-based expert systems, state-transition diagrams, and signal-flow graphs and hierarchically nest them, assign timing and priority attributes, establish blackboard-type communication, and specify concurrent execution on single or multiple processors. High-fidelity nonlinear simulations of aircraft and SRFC systems can be performed off-line, with the possibility of changing SRFC rules, inference strategies, and other heuristics to correct for control deficiencies. Finally, the off-line-generated SRFC can be transformed into highly optimized application-specific real-time C-language code. An application of this environment to the design of aircraft fault detection, isolation, and accommodation algorithms is presented in detail.
Competitive game theoretic optimal routing in optical networks
NASA Astrophysics Data System (ADS)
Yassine, Abdulsalam; Kabranov, Ognian; Makrakis, Dimitrios
2002-09-01
Optical transport service providers need control and optimization strategies for wavelength management, network provisioning, restoration and protection, allowing them to define and deploy new services and maintain competitiveness. In this paper, we investigate a game-theory-based model for wavelength and flow assignment in multi-wavelength optical networks, consisting of several backbone long-haul optical network transport service providers (TSPs) who offer their services (in terms of bandwidth) to Internet service providers (ISPs). The ISPs act as brokers or agents between the TSPs and end users: the agent (ISP) buys services (bandwidth) from the TSP, while the TSPs compete among themselves to sell their services and maintain profitability. We present a case study demonstrating the impact of different bandwidth-broker demands on the supplier's profit and the price paid by the network broker.
Esralew, Rachel A.; Baker, Ronald J.
2008-01-01
Hydrologic changes in New Jersey stream basins resulting from human activity can affect the flow and ecology of the streams. To assess future changes in streamflow resulting from human activity an understanding of the natural variability of streamflow is needed. The natural variability can be classified using Ecologically Relevant Hydrologic Indices (ERHIs). ERHIs are defined as selected streamflow statistics that characterize elements of the flow regime that substantially affect biological health and ecological sustainability. ERHIs are used to quantitatively characterize aspects of the streamflow regime, including magnitude, duration, frequency, timing, and rate of change. Changes in ERHI values can occur as a result of human activity, and changes in ERHIs over time at various stream locations can provide information about the degree of alteration in aquatic ecosystems at or near those locations. New Jersey streams can be divided into four classes (A, B, C, or D), where streams with similar ERHI values (determined from cluster analysis) are assigned the same stream class. In order to detect and quantify changes in ERHIs at selected streamflow-gaging stations, a 'baseline' period is needed. Ideally, a baseline period is a period of continuous daily streamflow record at a gaging station where human activity along the contributing stream reach or in the stream's basin is minimal. Because substantial urbanization and other development had already occurred before continuous streamflow-gaging stations were installed, it is not possible to identify baseline periods that meet this criterion for many reaches in New Jersey. Therefore, the baseline period for a considerably altered basin can be defined as a period prior to a substantial human-induced change in the drainage basin or stream reach (such as regulations or diversions), or a period during which development did not change substantially. 
Index stations (stations with minimal urbanization) were defined as streamflow-gaging stations in basins that contain less than 15 percent urban land use throughout the period of continuous streamflow record. A minimum baseline period of record for each stream class was determined by comparing the variability of selected ERHIs among consecutive 5-, 10-, 15-, and 20-year time increments for index stations. On the basis of this analysis, stream classes A and D were assigned a minimum of 20 years of continuous record as a baseline period and stream classes B and C, a minimum of 10 years. Baseline periods were calculated for 85 streamflow-gaging stations in New Jersey with 10 or more years of continuous daily streamflow data, and the values of 171 ERHIs also were calculated for these baseline periods for each station. Baseline periods were determined by using historical streamflow-gaging station data, estimated changes in impervious surface in the drainage basin, and statistically significant changes in annual base flow and runoff. Historical records were reviewed to identify years during which regulation, diversions, or withdrawals occurred in the drainage basins. Such years were not included in baseline periods of record. For some sites, the baseline period of record was shorter than the minimum period of record specified for the given stream class. In such cases, the baseline period was rated as 'poor'. Impervious surface was used as an indicator of urbanization and change in streamflow characteristics owing to increases in storm runoff and decreases in base flow. Percentages of impervious surface were estimated for 85 streamflow-gaging stations from available municipal population-density data by using a regression model. Where the period of record was sufficiently long, all years after the impervious surface exceeded 10 to 20 percent were excluded from the baseline period. 
The percentage of impervious surface also was used as a criterion in assigning qualitative ratings to baseline periods. Changes in trends of annual base fl
A New Secondary Structure Assignment Algorithm Using Cα Backbone Fragments
Cao, Chen; Wang, Guishen; Liu, An; Xu, Shutan; Wang, Lincong; Zou, Shuxue
2016-01-01
The assignment of secondary structure elements in proteins is a key step in the analysis of their structures and functions. We have developed an algorithm, SACF (secondary structure assignment based on Cα fragments), for secondary structure element (SSE) assignment based on the alignment of Cα backbone fragments with central poses derived by clustering known SSE fragments. The assignment algorithm consists of three steps: First, the outlier fragments on known SSEs are detected. Next, the remaining fragments are clustered to obtain the central fragments for each cluster. Finally, the central fragments are used as a template to make assignments. Following a large-scale comparison of 11 secondary structure assignment methods, SACF, KAKSI and PROSS are found to have similar agreement with DSSP, while PCASSO agrees with DSSP best. SACF and PCASSO show preference to reducing residues in N and C cap regions, whereas KAKSI, P-SEA and SEGNO tend to add residues to the terminals when DSSP assignment is taken as standard. Moreover, our algorithm is able to assign subtle helices (3₁₀-helix, π-helix and left-handed helix) and make uniform assignments, as well as to detect rare SSEs in β-sheets or long helices as outlier fragments from other programs. The structural uniformity should be useful for protein structure classification and prediction, while outlier fragments underlie the structure–function relationship. PMID:26978354
Automated and assisted RNA resonance assignment using NMR chemical shift statistics
Aeschbacher, Thomas; Schmidt, Elena; Blatter, Markus; Maris, Christophe; Duss, Olivier; Allain, Frédéric H.-T.; Güntert, Peter; Schubert, Mario
2013-01-01
The three-dimensional structure determination of RNAs by NMR spectroscopy relies on chemical shift assignment, which still constitutes a bottleneck. In order to develop more efficient assignment strategies, we analysed relationships between sequence and 1H and 13C chemical shifts. Statistics of resonances from regularly Watson–Crick base-paired RNA revealed highly characteristic chemical shift clusters. We developed two approaches using these statistics for chemical shift assignment of double-stranded RNA (dsRNA): a manual approach that yields starting points for resonance assignment and simplifies decision trees and an automated approach based on the recently introduced automated resonance assignment algorithm FLYA. Both strategies require only unlabeled RNAs and three 2D spectra for assigning the H2/C2, H5/C5, H6/C6, H8/C8 and H1′/C1′ chemical shifts. The manual approach proved to be efficient and robust when applied to the experimental data of RNAs with a size between 20 nt and 42 nt. The more advanced automated assignment approach was successfully applied to four stem-loop RNAs and a 42 nt siRNA, assigning 92–100% of the resonances from dsRNA regions correctly. This is the first automated approach for chemical shift assignment of non-exchangeable protons of RNA and their corresponding 13C resonances, which provides an important step toward automated structure determination of RNAs. PMID:23921634
Monte Carlo calculations of diatomic molecule gas flows including rotational mode excitation
NASA Technical Reports Server (NTRS)
Yoshikawa, K. K.; Itikawa, Y.
1976-01-01
The direct simulation Monte Carlo method was used to solve the Boltzmann equation for flows of an internally excited nonequilibrium gas, namely, of rotationally excited homonuclear diatomic nitrogen. The semi-classical transition probability model of Itikawa was investigated for its ability to simulate flow fields far from equilibrium. The behavior of diatomic nitrogen was examined for several different nonequilibrium initial states that are subjected to uniform mean flow without boundary interactions. A sample of 1000 model molecules was observed as the gas relaxed to a steady state starting from three specified initial states. The initial states considered are: (1) complete equilibrium, (2) nonequilibrium, equipartition (all rotational energy states are assigned the mean energy level obtained at equilibrium with a Boltzmann distribution at the translational temperature), and (3) nonequipartition (the mean rotational energy is different from the equilibrium mean value with respect to the translational energy states). In all cases investigated the present model satisfactorily simulated the principal features of the relaxation effects in nonequilibrium flow of diatomic molecules.
Palæomagnetism of Hawaiian lava flows
Doell, Richard R.; Cox, Allan
1961-01-01
PALÆOMAGNETIC investigations of volcanic rocks extruded in various parts of the world during the past several million years have generally revealed a younger sequence of lava flows magnetized nearly parallel to the field of a theoretical geocentric axial dipole, underlain by a sequence of older flows with exactly the opposite direction of remanent magnetization. A 180-degree reversal of the geomagnetic field, occurring near the middle of the Pleistocene epoch, has been inferred by many workers from such results (refs. 1-3). This is a preliminary report of an investigation of 755 oriented samples collected from 152 lava flows on the island of Hawaii, selected to represent as many stratigraphic horizons as possible. (Sampling details are indicated in Table 1.) This work was undertaken because Hawaii's numerous thick sequences of lava flows, previously mapped as Pliocene to Historic by Stearns and Macdonald (ref. 4), and afterwards assigned ages ranging from later Tertiary to Recent by Macdonald and Davis (ref. 5), appeared to offer an ideal opportunity to examine the most recent reversal of Earth's field.
A new tree-ring date for the "floating island" lava flow, Mount St. Helens, Washington
Yamaguchi, D.K.; Hoblitt, R.P.; Lawrence, D.B.
1990-01-01
Anomalously narrow and missing rings in trees 12 m from Mount St. Helens' "floating island" lava flow, and synchronous growth increases in trees farther from the flow margin, are evidence that this andesitic flow was extruded between late summer 1799 and spring 1800 A.D., within a few months after the eruption of Mount St. Helens' dacitic layer T tephra. For ease of reference, we assign here an 1800 A.D. date to this flow. The new date shows that the start of Mount St. Helens' Goat Rocks eruptive period (1800-1857 A.D.) resembled the recent (1980-1986) activity in both petrochemical trends and timing. In both cases, an initial explosive eruption of dacite was quickly succeeded by the eruption of more mafic lavas; dacite lavas then reappeared during an extended concluding phase of activity. This behavior is consistent with a recently proposed fluid-dynamic model of magma withdrawal from a compositionally zoned magma chamber. © 1990 Springer-Verlag.
Time-resolved flowmetering of gas-liquid two-phase pipe flow by ultrasound pulse Doppler method
NASA Astrophysics Data System (ADS)
Murai, Yuichi; Tasaka, Yuji; Takeda, Yasushi
2012-03-01
The ultrasound pulse Doppler method is applied for componential volumetric flow rate measurement in multiphase pipe flow consisting of gas and liquid phases. The flowmetering is realized by integrating the measured velocity profile over the cross section of the pipe within the liquid phase. The spatio-temporal position of the interface is also detected with the same ultrasound pulse, which further gives the cross-sectional void fraction. A series of experimental demonstrations applied this principle of measurement to air-water two-phase flow in a horizontal tube 40 mm in diameter, with void fraction ranging from 0 to 90% at superficial velocities from 0 to 15 m/s. The measurement accuracy is verified with a volumetric-type flowmeter. We also analyze the accuracy of area integration of the liquid velocity distribution for many different patterns of ultrasound measurement lines assigned on the cross section of the tube. The present method also serves as a pulsation sensor for flow rate that fluctuates with complex gas-liquid interface behavior.
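The core flowmetering step, integrating the liquid velocity profile over the liquid-occupied part of the pipe cross section, can be sketched numerically. This is an illustrative check case under the assumption of an axisymmetric profile, not the measurement-line geometry used in the paper.

```python
import math

# Sketch of the flowmetering principle: Q = integral of v(r) over the
# liquid-occupied cross section, here discretized into annular rings.
# A uniform profile on a fully liquid-filled pipe gives Q = v * A.

def volumetric_flow_rate(radius, velocity_at, liquid_fraction, n_rings=1000):
    """Integrate v(r) over annular rings, weighted by the local liquid fraction."""
    q, dr = 0.0, radius / n_rings
    for i in range(n_rings):
        r = (i + 0.5) * dr
        ring_area = 2.0 * math.pi * r * dr
        q += velocity_at(r) * liquid_fraction(r) * ring_area
    return q

# 40 mm diameter pipe, 1 m/s uniform velocity, no gas phase:
q = volumetric_flow_rate(0.02, lambda r: 1.0, lambda r: 1.0)
# q equals pi * 0.02**2 ~ 1.26e-3 m^3/s
```

In the actual method the void fraction comes from interface detection on the same ultrasound pulse, so `liquid_fraction` would be measured rather than assumed.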
Dick, Gregory M.; Namani, Ravi; Patel, Bhavesh; Kassab, Ghassan S.
2018-01-01
Myogenic responses (pressure-dependent contractions) of coronary arterioles play a role in autoregulation (relatively constant flow vs. pressure). Publications on myogenic reactivity in swine coronaries vary in caliber, analysis, and degree of responsiveness. Further, data on myogenic responses and autoregulation in swine have not been completely compiled, compared, and modeled. Thus, it has been difficult to understand these physiological phenomena. Our purpose was to: (a) analyze myogenic data with standard criteria; (b) assign results to diameter categories defined by morphometry; and (c) use our novel multiscale flow model to determine the extent to which ex vivo myogenic reactivity can explain autoregulation in vivo. When myogenic responses from the literature are an input for our model, the predicted coronary autoregulation approaches in vivo observations. More complete and appropriate data are now available to investigate the regulation of coronary blood flow in swine, a highly relevant model for human physiology and disease. PMID:29875686
2014-01-01
Background Pedigree reconstruction using genetic analysis provides a useful means to estimate fundamental population biology parameters relating to population demography, trait heritability and individual fitness when combined with other sources of data. However, there remain limitations to pedigree reconstruction in wild populations, particularly in systems where parent-offspring relationships cannot be directly observed, there is incomplete sampling of individuals, or molecular parentage inference relies on low quality DNA from archived material. While much can still be inferred from incomplete or sparse pedigrees, it is crucial to evaluate the quality and power of available genetic information a priori to testing specific biological hypotheses. Here, we used microsatellite markers to reconstruct a multi-generation pedigree of wild Atlantic salmon (Salmo salar L.) using archived scale samples collected with a total trapping system within a river over a 10 year period. Using a simulation-based approach, we determined the optimal microsatellite marker number for accurate parentage assignment, and evaluated the power of the resulting partial pedigree to investigate important evolutionary and quantitative genetic characteristics of salmon in the system. Results We show that at least 20 microsatellites (ave. 12 alleles/locus) are required to maximise parentage assignment and to improve the power to estimate reproductive success and heritability in this study system. We also show that 1.5-fold differences can be detected between groups simulated to have differing reproductive success, and that it is possible to detect moderate heritability values for continuous traits (h² ≈ 0.40) with more than 80% power when using 28 moderately to highly polymorphic markers. Conclusion The methodologies and work flow described provide a robust approach for evaluating archived samples for pedigree-based research, even where only a proportion of the total population is sampled. 
The results demonstrate the feasibility of pedigree-based studies to address challenging ecological and evolutionary questions in free-living populations, where genealogies can be traced only using molecular tools, and that significant increases in pedigree assignment power can be achieved by using higher numbers of markers. PMID:24684698
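The marker-number question above can be illustrated with a toy simulation: with more microsatellite loci, the chance that an unrelated individual matches a focal genotype by chance shrinks, which is what sharpens parentage assignment. The allele count and single-allele matching rule below are deliberate simplifications, not the study's simulation design.

```python
import random

# Toy sketch: estimate how often an unrelated individual matches a focal
# genotype by chance, as a function of the number of loci. More loci give
# a (much) smaller random-match probability.

def random_match_rate(n_loci, n_alleles=12, trials=20000, seed=1):
    rng = random.Random(seed)
    matches = 0
    for _ in range(trials):
        # A candidate "matches" only if it draws the same allele as the
        # focal individual at every locus (simplified single-allele model).
        matches += all(rng.randrange(n_alleles) == rng.randrange(n_alleles)
                       for _ in range(n_loci))
    return matches / trials

few, many = random_match_rate(3), random_match_rate(20)
# The 20-locus match rate is far below the 3-locus rate.
```

Real parentage software (e.g. likelihood-based assignment) additionally models allele frequencies, genotyping error and diploid inheritance, but the scaling with marker number is the same.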
Faunt, Claudia C.; Provost, Alden M.; Hill, Mary C.; Belcher, Wayne R.
2011-01-01
Carroll et al. (2009) state that the United States Geological Survey (USGS) Death Valley Regional Flow System (DVRFS) model, which is based on MODFLOW, is “conceptually inaccurate in that it models an unconfined aquifer as a confined system and does not simulate unconfined drawdown in transient pumping simulations.” Carroll et al. (2009) claim that “more realistic estimates of water availability” can be produced by a SURFACT-based model of the DVRFS that simulates unconfined groundwater flow and limits withdrawals from wells to avoid excessive drawdown. Differences in results from the original MODFLOW-based model and the SURFACT-based model stem primarily from application by Carroll et al. (2009) of head limits that can also be applied using the existing MODFLOW model and not from any substantial difference in the accuracy with which the unconfined aquifer is represented in the two models. In a hypothetical 50-year predictive simulation presented by Carroll et al. (2009), large differences between the models are shown when simulating pumping from the lower clastic confining unit, where the transmissivity is nearly two orders of magnitude less than in an alluvial aquifer. Yet even for this extreme example, drawdowns and pumping rates from the MODFLOW and SURFACT models are similar when the head-limit capabilities of the MODFLOW MNW Package are applied. These similarities persist despite possible discrepancies between assigned hydraulic properties. The resulting comparison between the MODFLOW and SURFACT models of the DVRFS suggests that approximating the unconfined system in the DVRFS as a constant-saturated-thickness system (called a “confined system” by Carroll et al., 2009) performs very well.
Non-Uniform Per-Packet Priority Marker for Use with Adaptive Protocols
2014-01-07
through congestion points that would totally stop traffic from a customer using the SLA shown in FIG. 5, though only some fraction of his traffic...assigning priorities to TCP flows. PDQoS has potential to fill the need for a quality-of-service mechanism that is simple to configure and to
77 FR 41202 - Notice of Intent to Co-Exclusive License.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-12
...)(i). NASA hereby gives notice of its intent to grant a co-exclusive license in the United States to... Unsteady Pressure And Flow Rate Distribution In A Fluid Network Version 4, U.S. Patent No. 7,542,885, to... inventions as applicable have been assigned to the United States of America as represented by the...
USDA-ARS?s Scientific Manuscript database
A 4-unit, dual-flow continuous culture fermentor system was used to assess nutrient digestibility, volatile fatty acids (VFA) production, bacterial protein synthesis and CH4 output of warm-season summer annual grasses. Treatments were randomly assigned to fermentors in a 4 × 4 Latin square design us...
Knowledge and Processes in Design
1992-09-03
Organization Name(s) and Address(es). Self-explanatory. Block 16. Price Code. Enter appropriate price Block 8. Performing Organization Report code...(NTIS only). Number. Enter the unique alphanumeric report number(s) assigned by the organization performing the report. Blocks 17.-19...statement codings were then organized into larger control-flow structures centered around design components called modules. The general assumption was
Aquatic habitat measurement and valuation: imputing social benefits to instream flow levels
Douglas, Aaron J.; Johnson, Richard L.
1991-01-01
Instream flow conflicts have been analysed from the perspectives offered by policy oriented applied (physical) science, theories of conflict resolution and negotiation strategy, and psychological analyses of the behavior patterns of the bargaining parties. Economics also offers some useful insights in analysing conflict resolution within the context of these water allocation problems. We attempt to analyse the economics of the bargaining process in conjunction with a discussion of the water allocation process. In particular, we examine in detail the relation between certain habitat estimation techniques, and the socially optimal allocation of non-market resources. The results developed here describe the welfare implications implicit in the contemporary general equilibrium analysis of a competitive market economy. We also review certain currently available techniques for assigning dollar values to the social benefits of instream flow. The limitations of non-market valuation techniques with respect to estimating the benefits provided by instream flows and the aquatic habitat contingent on these flows should not deter resource managers from using economic analysis as a basic tool for settling instream flow conflicts.
Lago, A; Godden, S M; Bey, R; Ruegg, P L; Leslie, K
2011-09-01
The objective of this multi-state, multi-herd clinical trial was to evaluate the efficacy of using an on-farm culture system to guide strategic treatment decisions in cows with clinical mastitis. The study was conducted in 8 commercial dairy farms ranging in size from 144 to 1,795 cows from Minnesota, Wisconsin, and Ontario, Canada. A total of 422 cows affected with mild or moderate clinical mastitis in 449 quarters were randomly assigned to either (1) a positive-control treatment program or (2) an on-farm, culture-based treatment program. Quarter cases assigned to the positive-control group received immediate on-label intramammary treatment with cephapirin sodium. Quarters assigned to the culture-based treatment program were cultured on-farm and treated with cephapirin sodium after 18 to 24h of incubation if they had gram-positive growth or a mixed infection. Quarters with gram-negative or no growth did not receive intramammary therapy. The proportion of quarter cases assigned to positive-control and culture-based treatments that received intramammary antibiotic therapy because of study assignment was 100 and 44%, respectively; the proportion of cases that received secondary antibiotic therapy was 36 and 19%, respectively; and the proportion of cases that received intramammary antibiotic therapy because of study assignment or secondary therapy was 100 and 51%, respectively. A tendency existed for a decrease in the number of days in which milk was discarded from cows assigned to the culture-based treatment program versus cows assigned to the positive-control group (5.9 vs. 5.2 d). No statistically significant differences existed between cases assigned to the positive-control and cases assigned to the culture-based treatment program in days to clinical cure (2.7 vs. 3.2 d), bacteriological cure risk within 21 d of enrollment (71 vs. 60%), new intramammary infection risk within 21 d of enrollment (50 vs. 
50%), and treatment failure risk (presence of infection, secondary treatment, clinical mastitis recurrence, or removal from herd within 21 d after enrollment; 81 vs. 78%). In summary, the use of an on-farm culture system to guide the strategic treatment of clinical mastitis reduced intramammary antibiotic use by half and tended to decrease milk withholding time by 1 d, without significant differences in days to clinical cure, bacteriological cure risk, new intramammary infection risk, and treatment failure risk within 21 d after the clinical mastitis event. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Moreo, Michael T.; Halford, Keith J.; La Camera, Richard J.; Laczniak, Randell J.
2003-01-01
Ground-water withdrawals from 1913 through 1998 from the Death Valley regional flow system have been compiled to support a regional, three-dimensional, transient ground-water flow model. Withdrawal locations and depths of production intervals were estimated and associated errors were reported for 9,300 wells. Withdrawals were grouped into three categories: mining, public-supply, and commercial water use; domestic water use; and irrigation water use. In this report, groupings were based on the method used to estimate pumpage. Cumulative ground-water withdrawals from 1913 through 1998 totaled 3 million acre-feet, most of which was used to irrigate alfalfa. Annual withdrawal for irrigation ranged from 80 to almost 100 percent of the total pumpage. About 75,000 acre-feet was withdrawn for irrigation in 1998. Annual irrigation withdrawals generally were estimated as the product of irrigated acreage and application rate. About 320 fields totaling 11,000 acres were identified in six hydrographic areas. Annual application rates for high water-use crops ranged from 5 feet in Penoyer Valley to 9 feet in Pahrump Valley. The uncertainty in the estimates of ground-water withdrawals was attributed primarily to the uncertainty of application rate estimates. Annual ground-water withdrawal was estimated at about 90,000 acre-feet in 1998 with an assigned uncertainty bounded by 60,000 to 130,000 acre-feet.
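The estimation rule described above, annual withdrawal as the product of irrigated acreage and application rate, can be sketched directly. The acreage and rate figures below are taken from the abstract; applying a single rate to the full acreage is an illustrative simplification:

```python
# Annual irrigation withdrawal (acre-feet) = irrigated acreage (acres) x application rate (feet).

def irrigation_withdrawal(acres, rate_ft):
    """Annual withdrawal in acre-feet for one group of fields."""
    return acres * rate_ft

# ~11,000 irrigated acres identified in the abstract; bracket with the reported rate range.
low = irrigation_withdrawal(11_000, 5.0)   # Penoyer Valley application rate
high = irrigation_withdrawal(11_000, 9.0)  # Pahrump Valley application rate
print(low, high)  # 55000.0 99000.0 -- brackets the ~75,000 acre-ft reported for 1998
```

The bracketed range is consistent with the ~75,000 acre-feet of irrigation withdrawal estimated for 1998.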
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for several management tasks. Planning pressure observations, in terms of their number and spatial distribution, is known as sampling design and has traditionally been addressed in the context of model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g. to manage background leakages, detect anomalies and bursts, and guarantee service quality. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management, has been addressed using optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation identifies network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters that create the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e. the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system based mainly on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
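The modularity metric underlying this strategy can be illustrated with a hand-rolled computation of Newman's weighted modularity on a toy pipe network. The paper's WDN-oriented and sampling-oriented indices are domain-specific variants of this idea; the graph and weights below are invented:

```python
# Weighted Newman modularity Q for a tiny network. Edge weights play the role of the
# task-specific pipe weights mentioned in the abstract.

def modularity(edges, communities):
    """edges: list of (u, v, w); communities: list of node sets. Returns Newman's Q."""
    two_m = 2.0 * sum(w for _, _, w in edges)
    strength = {}                        # weighted degree of each node
    for u, v, w in edges:
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
    q = 0.0
    for comm in communities:
        w_in = sum(w for u, v, w in edges if u in comm and v in comm)
        s_tot = sum(strength[n] for n in comm)
        q += 2.0 * w_in / two_m - (s_tot / two_m) ** 2
    return q

# Two well-separated loops joined by a single pipe: high modularity expected.
edges = [("a", "b", 1), ("b", "c", 1), ("c", "a", 1),
         ("d", "e", 1), ("e", "f", 1), ("f", "d", 1),
         ("c", "d", 1)]
q = modularity(edges, [{"a", "b", "c"}, {"d", "e", "f"}])
print(round(q, 3))  # 0.357
```

A segmentation that cuts the single connecting pipe scores highly, which is exactly why modularity-style indices flag such cuts as candidate gate or meter locations.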
Thermal Analysis of the PediaFlow pediatric ventricular assist device.
Gardiner, Jeffrey M; Wu, Jingchun; Noh, Myounggyu D; Antaki, James F; Snyder, Trevor A; Paden, David B; Paden, Brad E
2007-01-01
Accurate modeling of heat dissipation in pediatric intracorporeal devices is crucial in avoiding tissue and blood thermotrauma. Thermal models of new Maglev ventricular assist device (VAD) concepts for the PediaFlow VAD are developed by incorporating empirical heat transfer equations with thermal finite element analysis (FEA). The models assume three main sources of waste heat generation: copper motor windings, active magnetic thrust bearing windings, and eddy currents generated within the titanium housing due to the two-pole motor. Waste heat leaves the pump by convection into blood passing through the pump and conduction through surrounding tissue. Coefficients of convection are calculated and assigned locally along fluid path surfaces of the three-dimensional pump housing model. FEA thermal analysis yields a three-dimensional temperature distribution for each of the three candidate pump models. Thermal impedances from the motor and thrust bearing windings to tissue and blood contacting surfaces are estimated based on maximum temperature rise at respective surfaces. A new updated model for the chosen pump topology is created incorporating computational fluid dynamics with empirical fluid and heat transfer equations. This model represents the final geometry of the first generation prototype, incorporates eddy current heating, and has 60 discrete convection regions. Thermal analysis is performed at nominal and maximum flow rates, and temperature distributions are plotted. Results suggest that the pump will not exceed a temperature rise of 2 degrees C during normal operation.
Improving Initiation and Tracking of Research Projects at an Academic Health Center: A Case Study.
Schmidt, Susanne; Goros, Martin; Parsons, Helen M; Saygin, Can; Wan, Hung-Da; Shireman, Paula K; Gelfond, Jonathan A L
2017-09-01
Research service cores at academic health centers are important in driving translational advancements. Specifically, biostatistics and research design units provide services and training in data analytics, biostatistics, and study design. However, the increasing demand and complexity of assigning appropriate personnel to time-sensitive projects strains existing resources, potentially decreasing productivity and increasing costs. Improving processes for project initiation, assigning appropriate personnel, and tracking time-sensitive projects can eliminate bottlenecks and utilize resources more efficiently. In this case study, we describe our application of lean six sigma principles to our biostatistics unit to establish a systematic continual process improvement cycle for intake, allocation, and tracking of research design and data analysis projects. The define, measure, analyze, improve, and control methodology was used to guide the process improvement. Our goal was to assess and improve the efficiency and effectiveness of operations by objectively measuring outcomes, automating processes, and reducing bottlenecks. As a result, we developed a web-based dashboard application to capture, track, categorize, streamline, and automate project flow. Our workflow system resulted in improved transparency, efficiency, and workload allocation. Using the dashboard application, we reduced the average study intake time from 18 to 6 days, a 66.7% reduction over 12 months (January to December 2015).
Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.
Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin
2011-11-01
Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.
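The reconstruction-weight idea behind QP assignment can be sketched as follows. For brevity this closed-form version enforces only the sum-to-one constraint (the paper's full quadratic program also enforces non-negativity), and all descriptors and visual words here are synthetic:

```python
import numpy as np

# A local descriptor x is reconstructed from its k nearest visual words; the weights
# then act as a soft, multiple assignment of x to those words.

def reconstruction_weights(x, words):
    """words: (k, d) array of nearest visual words; returns k weights summing to 1."""
    z = words - x                       # shift each word by the descriptor
    cov = z @ z.T                       # local covariance of the shifted words
    cov += 1e-8 * np.trace(cov) * np.eye(len(words))  # regularize for stability
    w = np.linalg.solve(cov, np.ones(len(words)))
    return w / w.sum()                  # enforce the sum-to-one constraint

rng = np.random.default_rng(0)
words = rng.normal(size=(5, 8))         # 5 candidate visual words, 8-dim descriptors
x = 0.6 * words[0] + 0.4 * words[1]     # a descriptor lying between two words
w = reconstruction_weights(x, words)
print(np.isclose(w.sum(), 1.0))         # True: weights form a soft assignment
```

Because the synthetic descriptor lies in the affine hull of the first two words, the recovered weights concentrate on those words and reconstruct the descriptor almost exactly.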
Kimble, Steven J. A.; Rhodes Jr., O. E.; Williams, Rod N.
2014-01-01
Rangewide studies of genetic parameters can elucidate patterns and processes that operate only over large geographic scales. Herein, we present a rangewide population genetic assessment of the eastern box turtle Terrapene c. carolina, a species that is in steep decline across its range. To inform conservation planning for this species, we address the hypothesis that disruptions to demographic and movement parameters associated with the decline of the eastern box turtle have resulted in distinctive genetic signatures in the form of low genetic diversity, high population structuring, and decreased gene flow. We used microsatellite genotype data from 799 individuals sampled across the species range to perform two Bayesian population assignment approaches, two methods for comparing historical and contemporary migration among populations, an evaluation of isolation by distance, and a method for detecting barriers to gene flow. Both Bayesian methods of population assignment indicated that there are two populations rangewide, both of which have maintained high levels of genetic diversity (HO = 0.756). Evidence of isolation by distance was detected in this species at a spatial scale of 300–500 km, and the Appalachian Mountains were identified as the primary barrier to gene flow across the species range. We also found evidence for historical but not contemporary migration between populations. Our prediction of many, highly structured populations across the range was not supported. This may point to cryptic contemporary gene flow, which might in turn be explained by the presence of rare transients in populations. However, these data may be influenced by historical signatures of genetic connectivity because individuals of this species can be long-lived. PMID:24647580
Oppel, S.; Powell, A.N.
2008-01-01
Identification of wintering regions for birds sampled during the breeding season is crucial to understanding how events outside the breeding season may affect populations. We assigned king eiders captured on breeding grounds in northern Alaska to 3 broad geographic wintering regions in the Bering Sea using stable carbon and nitrogen isotopes obtained from head feathers. Using a discriminant function analysis of feathers obtained from birds tracked with satellite transmitters, we estimated that 88 % of feathers were assigned to the region in which they were grown. We then assigned 84 birds of unknown origin to wintering regions based on their head feather isotope ratios, and tested the utility of claws for geographic assignment. Based on the feather results, we estimated that similar proportions of birds in our study area use each of the 3 wintering regions in the Bering Sea. These results are in close agreement with estimates from satellite telemetry and show the usefulness of stable isotope signatures of feathers in assigning marine birds to geographic regions. The use of claws is currently limited by incomplete understanding of claw growth rates. Data presented here will allow managers of eiders, other marine birds, and marine mammals to assign animals to regions in the Bering Sea based on stable isotope signatures of body tissues. ?? Inter-Research 2008.
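As a rough illustration of assignment by isotopic signature, the sketch below uses a nearest-centroid rule on invented δ13C/δ15N values; the study itself fitted a discriminant function analysis calibrated with feathers from satellite-tracked birds, and the region names and numbers here are hypothetical:

```python
import numpy as np

# Hypothetical regional isotope centroids: (d13C, d15N) per wintering region.
centroids = {"region_A": (-18.0, 14.0),
             "region_B": (-16.0, 16.5),
             "region_C": (-20.5, 12.0)}

def assign_region(feather):
    """Assign one (d13C, d15N) feather measurement to the nearest regional centroid."""
    feather = np.asarray(feather, dtype=float)
    return min(centroids, key=lambda r: np.linalg.norm(feather - centroids[r]))

# A feather whose isotope ratios sit close to region_B's signature:
print(assign_region((-16.2, 16.3)))  # region_B
```

A real discriminant function additionally accounts for within-region covariance, which is what allows the study to estimate a per-region classification success rate (88% here).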
Impacts of high resolution data on traveler compliance levels in emergency evacuation simulations
Lu, Wei; Han, Lee D.; Liu, Cheng; ...
2016-05-05
In this article, we conducted a comparison study of evacuation assignment based on Traffic Analysis Zones (TAZ) and high-resolution LandScan USA Population Cells (LPC) with a detailed real-world road network. A platform for evacuation modeling built on high-resolution population distribution data and activity-based microscopic traffic simulation was proposed. This platform can be extended to any city in the world. The results indicated that evacuee compliance behavior affects evacuation efficiency with traditional TAZ assignment, but it did not significantly compromise performance with high-resolution LPC assignment. The TAZ assignment also underestimated the real travel time during evacuation. This suggests that high data resolution can improve the accuracy of traffic modeling and simulation. Evacuation managers should consider more diverse assignments during emergency evacuation to avoid congestion.
Integrated consensus-based frameworks for unmanned vehicle routing and targeting assignment
NASA Astrophysics Data System (ADS)
Barnawi, Waleed T.
Unmanned aerial vehicles (UAVs) are increasingly deployed in complex and dynamic environments to perform multiple tasks cooperatively with other UAVs that contribute to overarching mission effectiveness. Studies by the Department of Defense (DoD) indicate future operations may include anti-access/area-denial (A2AD) environments, which limit human teleoperator decision-making and control. This research addresses the problem of decentralized vehicle re-routing and task reassignment through consensus-based UAV decision-making. An Integrated Consensus-Based Framework (ICF) is formulated as a solution to the combined single task assignment problem and vehicle routing problem. The multiple assignment and vehicle routing problem is solved with the Integrated Consensus-Based Bundle Framework (ICBF). The frameworks are hierarchically decomposed into two levels. The bottom layer utilizes the renowned Dijkstra's Algorithm. The top layer addresses task assignment with two methods. The single assignment approach, the Caravan Auction (CarA) Algorithm, extends the Consensus-Based Auction Algorithm (CBAA) to provide awareness of task completion by agents and to adopt abandoned tasks. The multiple assignment approach, the Caravan Auction Bundle (CarAB) Algorithm, extends the Consensus-Based Bundle Algorithm (CBBA) by providing awareness of lost resources, prioritizing remaining tasks, and adopting abandoned tasks. Research questions are investigated regarding the novelty and performance of the proposed frameworks, and conclusions regarding the research hypotheses are supported through hypothesis testing on Monte Carlo simulation evidence. The approach provided in this research addresses current and future military operations for unmanned aerial vehicles.
However, the general framework implied by the proposed research is adaptable to any unmanned vehicle. Civil missions in which human observability is limited, such as exploration and fire surveillance, could also benefit from independent UAV task assignment.
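The auction idea behind CBAA-style single-task assignment can be sketched in a deliberately centralized, simplified form as a conflict-free greedy assignment; the decentralized frameworks above resolve the same conflicts through consensus, and the bid values below are illustrative:

```python
# scores: each agent's bid value for each task (illustrative numbers).

def greedy_assign(scores):
    """Return {agent: task}, committing the globally highest remaining bid first,
    the conflict-free outcome a CBAA-style consensus resolves toward."""
    agents = set(scores)
    tasks = {t for bids in scores.values() for t in bids}
    assignment = {}
    while agents and tasks:
        candidates = [(a, t) for a in agents for t in scores[a] if t in tasks]
        if not candidates:
            break  # remaining agents cannot bid on any free task
        agent, task = max(candidates, key=lambda p: scores[p[0]][p[1]])
        assignment[agent] = task
        agents.discard(agent)
        tasks.discard(task)
    return assignment

bids = {"uav1": {"search": 9, "relay": 4},
        "uav2": {"search": 7, "relay": 6}}
print(greedy_assign(bids))  # {'uav1': 'search', 'uav2': 'relay'}
```

Note how uav2 loses the "search" conflict to the higher bidder and falls back to "relay"; extensions like CarA add re-auctioning of tasks abandoned by lost agents.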
Flow over a membrane-covered, fluid-filled cavity.
Thomson, Scott L; Mongeau, Luc; Frankel, Steven H
2007-01-01
The flow-induced response of a membrane covering a fluid-filled cavity located in a section of a rigid-walled channel was explored using finite element analysis. The membrane was initially aligned with the channel wall and separated the channel fluid from the cavity fluid. As fluid flowed over the membrane-covered cavity, a streamwise-dependent transmural pressure gradient caused membrane deformation. This model has application to synthetic models of the vocal fold cover layer used in voice production research. In this paper, the model is introduced and responses of the channel flow, the membrane, and the cavity flow are summarized for a range of flow and membrane parameters. It is shown that for high values of cavity fluid viscosity, the intracavity pressure and the beam deflection both reached steady values. For combinations of low cavity viscosity and sufficiently large upstream pressures, large-amplitude membrane vibrations resulted. Asymmetric conditions were introduced by creating cavities on opposing sides of the channel and assigning different stiffness values to the two membranes. The asymmetry resulted in reduction in or cessation of vibration amplitude, depending on the degree of asymmetry, and in significant skewing of the downstream flow field.
Bertlich, Mattis; Ihler, Friedrich; Weiss, Bernhard G; Freytag, Saskia; Jakob, Mark; Strupp, Michael; Pellkofer, Hannah; Canis, Martin
2017-09-01
The potential of fingolimod (FTY-720), a sphingosine-1-phosphate analogue, to revoke the changes in cochlear blood flow induced by tumor necrosis factor (TNF) was investigated. Impairment of cochlear blood flow has often been considered the common final pathway of various inner ear pathologies. TNF, a ubiquitous cytokine, plays a major role in these pathologies, reducing cochlear blood flow via sphingosine-1-phosphate signaling. Fifteen Dunkin-Hartley guinea pigs were randomly assigned to one of three groups (placebo/placebo, TNF/placebo, TNF/FTY-720). Cochlear microcirculation was quantified over 60 minutes by in vivo fluorescence microscopy before and after topical application of placebo or TNF (5 ng/ml) and after subsequent application of placebo or FTY-720 (200 μg/ml). Treatment with TNF led to a significant decrease of cochlear blood flow. Following this, application of placebo caused no significant changes, while application of FTY-720 caused a significant rise in cochlear blood flow. FTY-720 is capable of reversing changes in cochlear blood flow induced by application of TNF. This makes FTY-720 a valid candidate for the potential treatment of numerous inner ear pathologies.
NASA Technical Reports Server (NTRS)
Wargan, K.; Stajner, I.; Pawson, S.
2003-01-01
In a data assimilation system, the forecast error covariance matrix governs the way in which the data information is spread throughout the model grid. Implementation of a correct method of assigning covariances is expected to have an impact on the analysis results. The simplest models assume that correlations are constant in time and isotropic or nearly isotropic. In such models the analysis depends on the dynamics only through assumed error standard deviations. In applications to atmospheric tracer data assimilation this may lead to inaccuracies, especially in regions with strong wind shear or high gradients of potential vorticity, as well as in areas where no data are available. In order to overcome this problem we have developed a flow-dependent covariance model that is based on the short-term evolution of error correlations. The presentation compares the performance of a static and a flow-dependent model applied to a global three-dimensional ozone data assimilation system developed at NASA's Data Assimilation Office. We will present some results of validation against WMO balloon-borne sondes and the Polar Ozone and Aerosol Measurement (POAM) III instrument. Experiments show that allowing forecast error correlations to evolve with the flow results in a positive impact on assimilated ozone within the regions where data were not assimilated, particularly at high latitudes in both hemispheres and in the troposphere. We will also discuss statistical characteristics of both models; in particular, we will argue that including the evolution of error correlations leads to stronger internal consistency of the data assimilation system.
2011-01-01
Background When a specimen belongs to a species not yet represented in DNA barcode reference libraries there is disagreement over the effectiveness of using sequence comparisons to assign the query accurately to a higher taxon. Library completeness and the assignment criteria used have been proposed as critical factors affecting the accuracy of such assignments but have not been thoroughly investigated. We explored the accuracy of assignments to genus, tribe and subfamily in the Sphingidae, using the almost complete global DNA barcode reference library (1095 species) available for this family. Costa Rican sphingids (118 species), a well-documented, diverse subset of the family, with each of the tribes and subfamilies represented were used as queries. We simulated libraries with different levels of completeness (10-100% of the available species), and recorded assignments (positive or ambiguous) and their accuracy (true or false) under six criteria. Results A liberal tree-based criterion assigned 83% of queries accurately to genus, 74% to tribe and 90% to subfamily, compared to a strict tree-based criterion, which assigned 75% of queries accurately to genus, 66% to tribe and 84% to subfamily, with a library containing 100% of available species (but excluding the species of the query). The greater number of true positives delivered by more relaxed criteria was negatively balanced by the occurrence of more false positives. This effect was most sharply observed with libraries of the lowest completeness where, for example at the genus level, 32% of assignments were false positives with the liberal criterion versus < 1% when using the strict. We observed little difference (< 8% using the liberal criterion) however, in the overall accuracy of the assignments between the lowest and highest levels of library completeness at the tribe and subfamily level. 
Conclusions Our results suggest that when using a strict tree-based criterion for higher taxon assignment with DNA barcodes, the likelihood of assigning a query a genus name incorrectly is very low, if a genus name is provided it has a high likelihood of being accurate, and if no genus match is available the query can nevertheless be assigned to a subfamily with high accuracy regardless of library completeness. DNA barcoding often correctly assigned sphingid moths to higher taxa when species matches were unavailable, suggesting that barcode reference libraries can be useful for higher taxon assignments long before they achieve complete species coverage. PMID:21806794
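The strict-versus-liberal trade-off described above can be illustrated with a simplified distance-based analogue of the tree-based criteria (the study used placement on a tree rather than raw distance thresholds; the sequences, genus labels, and threshold below are toy values):

```python
# Liberal criterion: the nearest library record always supplies a genus name.
# Strict criterion: a name is returned only if the nearest record is close enough;
# otherwise the assignment is left ambiguous (None), trading positives for accuracy.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def assign_genus(query, library, strict_threshold=2):
    """library: list of (sequence, genus). Returns (liberal_answer, strict_answer)."""
    seq, genus = min(library, key=lambda rec: hamming(query, rec[0]))
    d = hamming(query, seq)
    liberal = genus
    strict = genus if d <= strict_threshold else None
    return liberal, strict

library = [("ACGTACGT", "Xylophanes"), ("ACGGACGA", "Eumorpha")]
print(assign_genus("ACGTACGA", library))  # ('Xylophanes', 'Xylophanes')
print(assign_genus("TTTTACGA", library))  # liberal still answers; strict returns None
```

With an incomplete library, the liberal rule keeps answering for distant queries, which is precisely where the abstract reports its false-positive rate climbing to 32%.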
High-resolution absorption measurements of NH3 at high temperatures: 2100-5500 cm-1
NASA Astrophysics Data System (ADS)
Barton, Emma J.; Yurchenko, Sergei N.; Tennyson, Jonathan; Clausen, Sønnik; Fateev, Alexander
2017-03-01
High-resolution absorption spectra of NH3 in the region 2100-5500 cm-1 at 1027 °C and approximately atmospheric pressure (1045±3 mbar) are measured. An NH3 concentration of 10% in volume fraction is used in the measurements. Spectra are recorded in a high-temperature gas-flow cell using a Fourier Transform Infrared (FTIR) spectrometer at a nominal resolution of 0.09 cm-1. The spectra are analysed by comparison to a variational line list, BYTe, and experimental energy levels determined using the MARVEL procedure. In total, 2308 lines have been assigned to 45 different bands, of which 1755 lines and 15 bands are assigned or observed for the first time in this work.
Cui, Licong; Xu, Rong; Luo, Zhihui; Wentz, Susan; Scarberry, Kyle; Zhang, Guo-Qiang
2014-08-03
Finding quality consumer health information online can effectively bring important public health benefits to the general population. It can empower people with timely and current knowledge for managing their health and promoting wellbeing. Despite a popular belief that search engines such as Google can solve all information access problems, recent studies show that using search engines and simple search terms is not sufficient. Our objective is to provide an approach to organizing consumer health information for navigational exploration, complementing keyword-based direct search. Multi-topic assignment to health information, such as online questions, is a fundamental step for navigational exploration. We introduce a new multi-topic assignment method combining semantic annotation using UMLS concepts (CUIs) and Formal Concept Analysis (FCA). Each question was tagged with CUIs identified by MetaMap. The CUIs were filtered with term-frequency and a new term-strength index to construct a CUI-question context. The CUI-question context and a topic-subject context were used for multi-topic assignment, resulting in a topic-question context. The topic-question context was then directly used for constructing a prototype navigational exploration interface. Experimental evaluation was performed on the task of automatic multi-topic assignment of 99 predefined topics for about 60,000 consumer health questions from NetWellness. Using example-based metrics, suitable for multi-topic assignment problems, our method achieved a precision of 0.849, recall of 0.774, and F₁ measure of 0.782, using a reference standard of 278 questions with manually assigned topics. Compared to NetWellness' original topic assignment, a 36.5% increase in recall is achieved with virtually no sacrifice in precision. Enhancing the recall of multi-topic assignment without sacrificing precision is a prerequisite for achieving the benefits of navigational exploration. 
Our new multi-topic assignment method, combining term-strength, FCA, and information retrieval techniques, significantly improved recall and performed well according to example-based metrics.
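The example-based metrics reported above are computed as per-question averages over predicted and reference topic sets. A minimal sketch, with an invented two-question reference standard:

```python
# Example-based precision/recall/F1 for multi-topic (multi-label) assignment:
# each question contributes its own precision, recall, and F1, which are then averaged.

def example_based_scores(predicted, reference):
    """predicted, reference: lists of topic sets, one pair per question."""
    p = r = f1 = 0.0
    for pred, ref in zip(predicted, reference):
        overlap = len(pred & ref)
        pi = overlap / len(pred) if pred else 0.0
        ri = overlap / len(ref) if ref else 0.0
        p += pi
        r += ri
        f1 += 2 * pi * ri / (pi + ri) if pi + ri else 0.0
    n = len(reference)
    return p / n, r / n, f1 / n

pred = [{"diabetes", "nutrition"}, {"allergy"}]
ref = [{"diabetes"}, {"allergy", "asthma"}]
print(example_based_scores(pred, ref))  # precision 0.75, recall 0.75, F1 = 2/3
```

Averaging per question, rather than pooling all labels, is what makes these metrics suitable for multi-topic assignment: a question with many topics cannot dominate the score.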
2014-01-01
Background Finding quality consumer health information online can effectively bring important public health benefits to the general population. It can empower people with timely and current knowledge for managing their health and promoting wellbeing. Despite a popular belief that search engines such as Google can solve all information access problems, recent studies show that using search engines and simple search terms is not sufficient. Our objective is to provide an approach to organizing consumer health information for navigational exploration, complementing keyword-based direct search. Multi-topic assignment to health information, such as online questions, is a fundamental step for navigational exploration. Methods We introduce a new multi-topic assignment method combining semantic annotation using UMLS concepts (CUIs) and Formal Concept Analysis (FCA). Each question was tagged with CUIs identified by MetaMap. The CUIs were filtered with term-frequency and a new term-strength index to construct a CUI-question context. The CUI-question context and a topic-subject context were used for multi-topic assignment, resulting in a topic-question context. The topic-question context was then directly used for constructing a prototype navigational exploration interface. Results Experimental evaluation was performed on the task of automatic multi-topic assignment of 99 predefined topics for about 60,000 consumer health questions from NetWellness. Using example-based metrics, suitable for multi-topic assignment problems, our method achieved a precision of 0.849, recall of 0.774, and F1 measure of 0.782, using a reference standard of 278 questions with manually assigned topics. Compared to NetWellness’ original topic assignment, a 36.5% increase in recall is achieved with virtually no sacrifice in precision. Conclusion Enhancing the recall of multi-topic assignment without sacrificing precision is a prerequisite for achieving the benefits of navigational exploration. 
Our new multi-topic assignment method, combining term-strength, FCA, and information retrieval techniques, significantly improved recall and performed well according to example-based metrics. PMID:25086916
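The example-based metrics reported above (precision, recall, F1 averaged per question rather than per label) can be sketched in a few lines. This is a minimal illustration, not the study's evaluation code; the function name and data are hypothetical:

```python
def example_based_metrics(true_sets, pred_sets):
    """Example-based precision, recall, and F1 for multi-topic assignment.

    Each element of true_sets / pred_sets is the set of topics for one
    question.  Metrics are computed per example and then averaged over
    examples, which is the 'example-based' scheme for multi-label problems.
    """
    p_sum = r_sum = f_sum = 0.0
    n = len(true_sets)
    for t, p in zip(true_sets, pred_sets):
        inter = len(t & p)
        prec = inter / len(p) if p else 0.0
        rec = inter / len(t) if t else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        p_sum += prec
        r_sum += rec
        f_sum += f1
    return p_sum / n, r_sum / n, f_sum / n
```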
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Jing; Huang, Hai; Mattson, Earl
Aimed at supporting the design of hydraulic fracturing experiments at the kISMET site, ~1500 m below ground in a deep mine, we performed pre-experimental hydraulic fracturing simulations in order to estimate the breakdown pressure, propagation pressure, fracture geometry, and the magnitude of induced seismicity using a newly developed fully coupled three-dimensional (3D) network flow and quasi-static discrete element model (DEM). The quasi-static DEM model, which is constructed by Delaunay tessellation of the rock volume, considers rock fabric heterogeneities by using the “disordered” DEM mesh and adding random perturbations to the stiffness and tensile/shear strengths of individual DEM elements and the elastic beams between them. A conjugate 3D flow network based on the DEM lattice is constructed to calculate the fluid flow in both the fracture and porous matrix. One distinctive advantage of the model is that fracturing is naturally described by the breakage of elastic beams between DEM elements. It is also extremely convenient to introduce mechanical anisotropy into the model by simply assigning orientation-dependent tensile/shear strengths to the elastic beams. In this paper, the 3D hydraulic fracturing model was verified against the analytic solution for a penny-shaped crack model. We applied the model to simulate fracture propagation from a vertical open borehole based on initial estimates of rock mechanical properties and in-situ stress conditions. The breakdown pressure and propagation pressure are directly obtained from the simulation. In addition, the released elastic strain energies of individual fracturing events were calculated and used as a conservative estimate for the magnitudes of the potential induced seismic activities associated with fracturing. The comparisons between model predictions and experimental results are still ongoing.
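The two ideas above, heterogeneity via random perturbation of per-beam strengths and fracturing as beam breakage, can be sketched as follows. This is a simplified stand-in, not the authors' DEM code; all names are hypothetical:

```python
import random


def assign_strengths(n_beams, mean_strength, perturbation, rng):
    """Heterogeneity sketch: each elastic beam gets a tensile strength
    randomly perturbed around the mean by up to +/- `perturbation`."""
    return [mean_strength * (1.0 + rng.uniform(-perturbation, perturbation))
            for _ in range(n_beams)]


def broken(strains, strengths):
    """Fracturing as beam breakage: return the indices of beams whose
    tensile strain exceeds their (perturbed) strength; each breakage
    creates a new fracture surface in the DEM lattice."""
    return [i for i, (e, s) in enumerate(zip(strains, strengths)) if e > s]
```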
Numerical investigation of tip clearance cavitation in Kaplan runners
NASA Astrophysics Data System (ADS)
Nikiforova, K.; Semenov, G.; Kuznetsov, I.; Spiridonov, E.
2016-11-01
There is a gap between the Kaplan runner blade and the shroud that gives rise to a special kind of cavitation: cavitation in the tip leakage flow. Two types of cavitation caused by the presence of the clearance gap are known: tip vortex cavitation, which appears at the core of the rolled-up vortex on the blade suction side, and tip clearance cavitation, which appears precisely in the gap between the blade tip edge and the shroud. In this work, a numerical investigation of the model Kaplan runner has been performed taking into account variable tip clearance for several cavitation regimes. The focus is on the structure and origination mechanism of cavitation in the tip leakage flow. Calculations have been performed with a 3-D unsteady numerical model for a two-phase medium. Turbulent flow was modeled using the full Reynolds-averaged Navier-Stokes equations with corrections for streamline curvature and system rotation. For the description of the liquid-vapor medium, a simplified Eulerian approach based on the model of interpenetrating continua is used, within which the two-phase medium is considered a quasi-homogeneous mixture with a common velocity field and a continuous density distribution for both phases. As a result, engineering techniques for calculating cavitation conditioned by the tip clearance in a model turbine runner have been developed. The flow was visualized in detail and the vortex structure on the suction side of the blade was reproduced. Based on spectral analysis of the obtained data, the frequency range with the maximum pulsation amplitude was identified and the frequency of maximum energy was determined. Comparison between numerical results and experimental data has also been performed; the location of the cavitation zone agrees well with experiment for all analyzed regimes.
2017-03-23
Fragments: solutions obtained through their proposed method to comparative instances of a generalized assignment problem with either ordinal cost components or… method flag: designates the method by which the changed/new assignment problem instance is solved (methodFlag = 0: SMAWarmstart returns a matching)… of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified…
Trial to assess the utility of genetic sequencing to improve patient outcomes
A pilot trial to assess whether assigning treatment based on specific gene mutations can provide benefit to patients with metastatic solid tumors is being launched this month by the NCI. The Molecular Profiling based Assignment of Cancer Therapeutics, or
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-06
... 252.211-7008, Use of Government-Assigned Serial Numbers, and DFARS 252.232-7006, Wide Area WorkFlow... harmonizing rules, and of promoting flexibility. This is not a significant regulatory action and, therefore... September 30, 1993. This rule is not a major rule under 5 U.S.C. 804. III. Regulatory Flexibility Act DoD...
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Zhou; H. Huang; M. Deo
Log and seismic data indicate that most shale formations have strong heterogeneity. Conventional analytical and semi-analytical fracture models are not sufficient to simulate the complex fracture propagation in these highly heterogeneous formations. Without considering the intrinsic heterogeneity, the predicted morphology of a hydraulic fracture may be biased and misleading when optimizing the completion strategy. In this paper, a fully coupled fluid-flow and geomechanics hydraulic fracture simulator based on the dual-lattice Discrete Element Method (DEM) is used to predict hydraulic fracture propagation in heterogeneous reservoirs. The heterogeneity of the rock is simulated by assigning different material force constants and critical strains to different particles and is adjusted by conditioning to the measured data and observed geological features. Based on the proposed model, the effects of heterogeneity at different scales on micromechanical behavior and induced macroscopic fractures are examined. The numerical results show that microcracks are more inclined to form at weaker grain interfaces. A conventional simulator with a homogeneity assumption is not applicable to highly heterogeneous shale formations.
Mechanisms-based viscoplasticity: Theoretical approach and experimental validation for steel 304L
Zubelewicz, Aleksander; Oliferuk, Wiera
2016-01-01
We propose a mechanisms-based viscoplasticity approach for metals and alloys. First, we derive a stochastic model for thermally activated motion of dislocations and then introduce power-law flow rules. The overall plastic deformation includes local plastic slip events, each taken with an appropriate weight assigned to the angle of the slip-plane misorientation from the direction of maximum shear stress. As deformation progresses, the material experiences successive reorganizations of the slip systems. Owing to this microstructural evolution, a portion of the energy expended on plastic deformation is dissipated and the rest is stored in the defect structures. We show that the reorganizations are stable in a homogeneously deformed material. The concept is tested for steel 304L, where we reproduce experimentally obtained stress-strain responses, construct the Frost-Ashby deformation map, and predict the rate of energy storage. The storage is assessed in terms of synchronized measurements of temperature and displacement distributions on the specimen surface during tensile loading. PMID:27026209
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, H; Leszczynski, K; Lee, Y
Purpose: To evaluate MR-only treatment planning for brain Stereotactic Ablative Radiotherapy (SABR) based on pseudo-CT (pCT) generation using one set of T1-weighted MRI. Methods: T1-weighted MR and CT images from 12 patients who were eligible for brain SABR were retrospectively acquired for this study. MR-based pCT was generated using a newly developed in-house algorithm based on MR tissue segmentation and voxel-based electron density (ED) assignment (pCTv). pCTs using bulk density assignment (pCTb, where bone and soft tissue were assigned 800 HU and 0 HU, respectively) and water density assignment (pCTw, where all tissues were assigned 0 HU) were generated for comparison of ED assignment techniques. The pCTs were registered with CTs, and contours of radiation targets and organs-at-risk (OARs) from clinical CT-based plans were copied to co-registered pCTs. Volumetric Modulated Arc Therapy (VMAT) plans were independently created for pCTv and CT using the same optimization settings and a prescription (50 Gy/10 fractions) to the planning target volume (PTV) mean dose. pCTv-based and CT-based plans were compared using dosimetric parameters and monitor units (MUs). Beam fluence maps of CT-based plans were transferred to co-registered pCTs, and dose was recalculated on the pCTs. Dose distribution agreement between pCT and CT plans was quantified using Gamma analysis (2%/2mm, 1%/1mm with a 10% cut-off threshold) in axial, coronal and sagittal planes across the PTV. Results: The average differences of PTV mean and maximum doses, and monitor units between independently created pCTv-based and CT-based plans were 0.5%, 1.5% and 1.1%, respectively. Gamma analysis of dose distributions of the pCTs and the CT calculated using the same fluence map resulted in average agreements of 92.6%/79.1%/52.6% with the 1%/1mm criterion, and 98.7%/97.4%/71.5% with the 2%/2mm criterion, for pCTv/CT, pCTb/CT and pCTw/CT, respectively.
Conclusion: Plans produced on voxel-based pCTs are dosimetrically more similar to CT plans than plans based on bulk-assignment pCTs. MR-only treatment planning using voxel-based pCT generated from T1-weighted MRI may be feasible.
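The gamma analysis used above compares each reference dose point against all nearby evaluated points, combining dose difference and distance-to-agreement. A minimal 1D sketch with global normalisation and the low-dose cut-off (an illustration with a brute-force search, not a clinical implementation; all names hypothetical):

```python
import math


def gamma_pass_rate(ref, eval_, spacing, dose_tol, dist_tol, cutoff=0.1):
    """1D gamma analysis sketch (global normalisation).

    ref, eval_   : dose profiles sampled on the same grid (step `spacing`, mm)
    dose_tol     : fraction of the max reference dose (0.02 for the 2% criterion)
    dist_tol     : distance-to-agreement in mm (2.0 for the 2mm criterion)
    Points below `cutoff` * max dose are excluded, as in the abstract.
    A point passes when min over evaluated points of
    sqrt((dose diff / dose_tol)^2 + (distance / dist_tol)^2) <= 1.
    """
    dmax = max(ref)
    passed = total = 0
    for i, dr in enumerate(ref):
        if dr < cutoff * dmax:
            continue
        total += 1
        gamma2 = min(
            ((de - dr) / (dose_tol * dmax)) ** 2
            + ((j - i) * spacing / dist_tol) ** 2
            for j, de in enumerate(eval_)
        )
        if math.sqrt(gamma2) <= 1.0:
            passed += 1
    return passed / total if total else 0.0
```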
Validation of a semi-quantitative job exposure matrix at a Söderberg aluminum smelter.
Friesen, M C; Demers, P A; Spinelli, J J; Le, N D
2003-08-01
We tested the validity of a job exposure matrix (JEM) for coal tar pitch volatiles (CTPV) at a Söderberg aluminum smelter. The JEM had been developed by a committee of company hygienists and union representatives for an earlier study of cancer incidence and mortality. Our aim was to test the validity and reliability of the expert-based assignments. Personal CTPV exposure measurements (n = 1879) overlapped 11 yr of the JEM. The arithmetic mean was calculated for 35 job/time period combinations (35% of the exposed work history), categorized using the original exposure intervals, and compared with the expert-based assignments. The expert-based and the measurement-based exposure assignments were only moderately correlated (Spearman's rho = 0.42; weighted kappa = 0.39, CI 0.10-0.69). Only 40% of the expert-based medium category assignments were correctly assigned, with better agreement in the low (84%) and high (100%) categories. Pot operation jobs exhibited better agreement (rho = 0.60) than the maintenance and pot shell repair jobs (rho = 0.25). The mid-point value of the medium category was overestimated by 0.3 mg/m(3). The expert-based exposure assignments may be improved by better characterizing the transitions between exposure categories, by accounting for exposure differences between pot lines and by re-examining the category mid-point values used in calculating the cumulative exposure. Lack of historical exposure measurements often requires reliance on expert knowledge to assess exposure levels. Validating the experts' estimates against available exposure measurements may help to identify weaknesses in the exposure assessment where improvements may be possible, as was shown here.
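The two agreement statistics used above, Spearman's rho on the exposure categories and a weighted kappa, can be sketched in pure Python. This is an illustrative reimplementation (linear weights assumed), not the study's analysis code:

```python
def _ranks(xs):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman_rho(x, y):
    """Spearman's rank correlation = Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5


def weighted_kappa(a, b, n_cat):
    """Linearly weighted kappa for two raters on ordinal categories 0..n_cat-1."""
    w = [[abs(i - j) / (n_cat - 1) for j in range(n_cat)] for i in range(n_cat)]
    n = len(a)
    obs = [[0] * n_cat for _ in range(n_cat)]
    for i, j in zip(a, b):
        obs[i][j] += 1
    pa = [sum(obs[i][j] for j in range(n_cat)) for i in range(n_cat)]
    pb = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    po = sum(w[i][j] * obs[i][j] for i in range(n_cat) for j in range(n_cat)) / n
    pe = sum(w[i][j] * pa[i] * pb[j] for i in range(n_cat) for j in range(n_cat)) / n ** 2
    return 1 - po / pe if pe else 1.0
```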
Inferring Higher Functional Information for RIKEN Mouse Full-Length cDNA Clones With FACTS
Nagashima, Takeshi; Silva, Diego G.; Petrovsky, Nikolai; Socha, Luis A.; Suzuki, Harukazu; Saito, Rintaro; Kasukawa, Takeya; Kurochkin, Igor V.; Konagaya, Akihiko; Schönbach, Christian
2003-01-01
FACTS (Functional Association/Annotation of cDNA Clones from Text/Sequence Sources) is a semiautomated knowledge discovery and annotation system that integrates molecular function information derived from sequence analysis results (sequence inferred) with functional information extracted from text. Text-inferred information was extracted from keyword-based retrievals of MEDLINE abstracts and by matching of gene or protein names to OMIM, BIND, and DIP database entries. Using FACTS, we found that 47.5% of the 60,770 RIKEN mouse cDNA FANTOM2 clone annotations were informative for text searches. MEDLINE queries yielded molecular interaction-containing sentences for 23.1% of the clones. When disease MeSH and GO terms were matched with retrieved abstracts, 22.7% of clones were associated with potential diseases, and 32.5% with GO identifiers. A significant number (23.5%) of disease MeSH-associated clones were also found to have a hereditary disease association (OMIM Morbidmap). Inferred neoplastic and nervous system disease represented 49.6% and 36.0% of disease MeSH-associated clones, respectively. A comparison of sequence-based GO assignments with informative text-based GO assignments revealed that for 78.2% of clones, identical GO assignments were provided for that clone by either method, whereas for 21.8% of clones, the assignments differed. In contrast, for OMIM assignments, only 28.5% of clones had identical sequence-based and text-based OMIM assignments. Sequence, sentence, and term-based functional associations are included in the FACTS database (http://facts.gsc.riken.go.jp/), which permits results to be annotated and explored through web-accessible keyword and sequence search interfaces. The FACTS database will be a critical tool for investigating the functional complexity of the mouse transcriptome, cDNA-inferred interactome (molecular interactions), and pathome (pathologies). PMID:12819151
NASA Technical Reports Server (NTRS)
Barhydt, Richard; Palmer, Michael T.; Eischeid, Todd M.
2004-01-01
NASA Langley Research Center is developing an Autonomous Operations Planner (AOP) that functions as an Airborne Separation Assurance System for autonomous flight operations. This development effort supports NASA's Distributed Air-Ground Traffic Management (DAG-TM) operational concept, designed to significantly increase the capacity of the national airspace system while maintaining safety. Autonomous aircraft pilots use the AOP to maintain traffic separation from other autonomous aircraft and from managed aircraft flying under today's Instrument Flight Rules, while observing traffic flow management constraints assigned by Air Traffic Service Providers. AOP is designed to facilitate eventual implementation through careful modeling of its operational environment, interfaces with other aircraft systems and data links, and conformance with established flight deck conventions and human factors guidelines. AOP uses currently available or anticipated data exchanged over modeled ARINC 429 data buses and an Automatic Dependent Surveillance-Broadcast 1090 MHz link. It provides pilots with conflict detection, prevention, and resolution functions and works with the Flight Management System to maintain assigned traffic flow management constraints. The AOP design has been enhanced over the course of several experiments conducted at NASA Langley and is being prepared for an upcoming Joint Air/Ground Simulation with NASA Ames Research Center.
TH-A-9A-01: Active Optical Flow Model: Predicting Voxel-Level Dose Prediction in Spine SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J; Wu, Q.J.; Yin, F
2014-06-15
Purpose: To predict voxel-level dose distribution and enable effective evaluation of cord dose sparing in spine SBRT. Methods: We present an active optical flow model (AOFM) to statistically describe cord dose variations and train a predictive model to represent correlations between AOFM and PTV contours. Thirty clinically accepted spine SBRT plans are evenly divided into training and testing datasets. The development of the predictive model consists of 1) collecting a sequence of dose maps including PTV and OAR (spinal cord) as well as a set of associated PTV contours adjacent to the OAR from the training dataset, 2) classifying data into five groups based on the PTV's location relative to the OAR: two “Top”s, “Left”, “Right”, and “Bottom”, 3) randomly selecting a dose map as the reference in each group and applying rigid registration and optical flow deformation to match all other maps to the reference, 4) building the AOFM by importing optical flow vectors and dose values into principal component analysis (PCA), 5) applying another PCA to features of PTV and OAR contours to generate an active shape model (ASM), and 6) computing a linear regression model of correlations between AOFM and ASM. When predicting the dose distribution of a new case in the testing dataset, the PTV is first assigned to a group based on its contour characteristics. Contour features are then transformed into the ASM's principal coordinates of the selected group. Finally, voxel-level dose distribution is determined by mapping from the ASM space to the AOFM space using the predictive model. Results: The DVHs predicted by the AOFM-based model and those in clinical plans are comparable in training and testing datasets. At 2% volume the dose difference between predicted and clinical plans is 4.2±4.4% and 3.3±3.5% in the training and testing datasets, respectively. Conclusion: The AOFM is effective in predicting voxel-level dose distribution for spine SBRT.
Partially supported by NIH/NCI under grant #R21CA161389 and a master research grant by Varian Medical System.
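Two of the steps above, assigning a PTV to a location group relative to the OAR and fitting a shape-to-dose regression, might be sketched as follows. This is heavily simplified: 2D centroids and a one-feature least-squares fit stand in for the contour features and the ASM-to-AOFM PCA mapping; all names are hypothetical:

```python
def assign_group(ptv_centroid, oar_centroid):
    """Classify a PTV by its location relative to the OAR (spinal cord) as
    'top', 'bottom', 'left', or 'right'.  The paper's five-group scheme has
    two 'Top' groups; this sketch collapses them into one."""
    dx = ptv_centroid[0] - oar_centroid[0]
    dy = ptv_centroid[1] - oar_centroid[1]
    if abs(dy) >= abs(dx):
        return "top" if dy > 0 else "bottom"
    return "right" if dx > 0 else "left"


def fit_linear(xs, ys):
    """Least-squares fit y ~ a*x + b: a one-feature stand-in for the
    linear regression between the ASM and AOFM principal coordinates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx
```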
Lavretsky, Philip; Peters, Jeffrey L; Winker, Kevin; Bahn, Volker; Kulikova, Irina; Zhuravlev, Yuri N; Wilson, Robert E; Barger, Chris; Gurney, Kirsty; McCracken, Kevin G
2016-02-01
Estimating the frequency of hybridization is important to understand its evolutionary consequences and its effects on conservation efforts. In this study, we examined the extent of hybridization in two sister species of ducks that hybridize. We used mitochondrial control region sequences and 3589 double-digest restriction-associated DNA sequences (ddRADseq) to identify admixture between wild lesser scaup (Aythya affinis) and greater scaup (A. marila). Among 111 individuals, we found one introgressed mitochondrial DNA haplotype in lesser scaup and four in greater scaup. Likewise, based on the site-frequency spectrum from autosomal DNA, gene flow was asymmetrical, with higher rates from lesser into greater scaup. However, using ddRADseq nuclear DNA, all individuals were assigned to their respective species with >0.95 posterior assignment probability. To examine the power for detecting admixture, we simulated a breeding experiment in which empirical data were used to create F1 hybrids and nine generations (F2-F10) of backcrossing. F1 hybrids and F2, F3 and most F4 backcrosses were clearly distinguishable from pure individuals, but evidence of admixed histories was effectively lost after the fourth generation. Thus, we conclude that low interspecific assignment probabilities (0.011-0.043) for two lesser and nineteen greater scaup were consistent with admixed histories beyond the F3 generation. These results indicate that the propensity of these species to hybridize in the wild is low and largely asymmetric. When applied to species-specific cases, our approach offers powerful utility for examining concerns of hybridization in conservation efforts, especially for determining the generational time until admixed histories are effectively lost through backcrossing. © 2015 John Wiley & Sons Ltd.
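The breeding simulation described above, recombining empirical genotypes into F1 hybrids and then backcrossing for successive generations, can be sketched with a toy model of biallelic loci. This is an illustration of the sampling scheme, not the authors' pipeline:

```python
import random


def make_f1(parent_a, parent_b, rng):
    """Simulate an F1 hybrid genotype: at each locus, draw one allele from
    each parental genotype.  Genotypes are lists of (allele, allele) tuples,
    one tuple per locus."""
    return [(rng.choice(a), rng.choice(b)) for a, b in zip(parent_a, parent_b)]


def backcross(hybrid, parent, rng):
    """One generation of backcrossing the hybrid toward `parent`;
    repeating this nine times yields the F2-F10 series of the study."""
    return make_f1(hybrid, parent, rng)
```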
Infrared absorption of CH3OSO detected with time-resolved Fourier-transform spectroscopy.
Chen, Jin-Dah; Lee, Yuan-Pern
2011-03-07
A step-scan Fourier-transform spectrometer coupled with a multipass absorption cell was employed to detect temporally resolved infrared absorption spectra of CH(3)OSO produced upon irradiation of a flowing gaseous mixture of CH(3)OS(O)Cl in N(2) or CO(2) at 248 nm. Two intense transient features with origins near 1152 and 994 cm(-1) are assigned to syn-CH(3)OSO; the former is attributed to overlapping bands at 1154 ± 3 and 1151 ± 3 cm(-1), assigned to the S=O stretching mixed with CH(3) rocking (ν(8)) and the S=O stretching mixed with CH(3) wagging (ν(9)) modes, respectively, and the latter to the C-O stretching (ν(10)) mode at 994 ± 6 cm(-1). Two weak bands at 2991 ± 6 and 2956 ± 3 cm(-1) are assigned as the CH(3) antisymmetric stretching (ν(2)) and symmetric stretching (ν(3)) modes, respectively. Observed vibrational transition wavenumbers agree satisfactorily with those predicted by quantum-chemical calculations at the B3P86/aug-cc-pVTZ level. Based on rotational parameters predicted at that level, the simulated rotational contours of these bands agree satisfactorily with experimental results. The simulation indicates that the S=O stretching mode of anti-CH(3)OSO near 1164 cm(-1) likely makes a small contribution to the observed band near 1152 cm(-1). A simple kinetic model of the self-reaction is employed to account for the decay of CH(3)OSO and yields a second-order rate coefficient k = (4 ± 2) × 10(-10) cm(3) molecule(-1) s(-1). © 2011 American Institute of Physics.
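For a second-order self-reaction, d[A]/dt = -2k[A]^2, so the integrated rate law 1/[A] = 1/[A]0 + 2kt is linear in t, and k can be estimated as half the slope of a least-squares fit of 1/[A] against time. A minimal sketch of that estimate (illustrative only, not the paper's fitting procedure):

```python
def second_order_k(times, conc):
    """Estimate the second-order self-reaction rate coefficient k from a
    concentration-vs-time trace, using 1/[A] = 1/[A]0 + 2*k*t.
    Returns half the least-squares slope of 1/[A] against t."""
    ys = [1.0 / c for c in conc]
    n = len(times)
    mt, my = sum(times) / n, sum(ys) / n
    slope = sum((t - mt) * (y - my) for t, y in zip(times, ys)) / \
        sum((t - mt) ** 2 for t in times)
    return slope / 2.0
```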
von Haller, Priska D; Yi, Eugene; Donohoe, Samuel; Vaughn, Kelly; Keller, Andrew; Nesvizhskii, Alexey I; Eng, Jimmy; Li, Xiao-jun; Goodlett, David R; Aebersold, Ruedi; Watts, Julian D
2003-07-01
Lipid rafts were prepared according to standard protocols from Jurkat T cells stimulated via T cell receptor/CD28 cross-linking and from control (unstimulated) cells. Co-isolating proteins from the control and stimulated cell preparations were labeled with isotopically normal (d0) and heavy (d8) versions of the same isotope-coded affinity tag (ICAT) reagent, respectively. Samples were combined, proteolyzed, and resultant peptides fractionated via cation exchange chromatography. Cysteine-containing (ICAT-labeled) peptides were recovered via the biotin tag component of the ICAT reagents by avidin-affinity chromatography. On-line micro-capillary liquid chromatography tandem mass spectrometry was performed on both avidin-affinity (ICAT-labeled) and flow-through (unlabeled) fractions. Initial peptide sequence identification was by searching recorded tandem mass spectrometry spectra against a human sequence database using SEQUEST software. New statistical data modeling algorithms were then applied to the SEQUEST search results. These allowed for discrimination between likely "correct" and "incorrect" peptide assignments, and from these the inferred proteins that they collectively represented, by calculating estimated probabilities that each peptide assignment and subsequent protein identification was a member of the "correct" population. For convenience, the resultant lists of peptide sequences assigned and the proteins to which they corresponded were filtered at an arbitrarily set cut-off of 0.5 (i.e. 50% likely to be "correct") and above and compiled into two separate datasets. In total, these datasets contained 7667 individual peptide identifications, which represented 2669 unique peptide sequences, corresponding to 685 proteins and related protein groups.
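Filtering peptide assignments at the 0.5 probability cut-off and collapsing them to unique peptides and inferred proteins can be sketched as below. The data layout is hypothetical; this only illustrates the filtering step, not the probability modeling itself:

```python
def filter_assignments(assignments, cutoff=0.5):
    """Keep peptide-spectrum assignments whose estimated probability of
    being 'correct' meets the cut-off, then summarise the unique peptide
    sequences and inferred proteins they represent.

    assignments: list of (peptide_sequence, protein_id, probability).
    """
    kept = [a for a in assignments if a[2] >= cutoff]
    peptides = {a[0] for a in kept}
    proteins = {a[1] for a in kept}
    return kept, peptides, proteins
```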
Different Simultaneous Sleep States in the Hippocampus and Neocortex
Emrick, Joshua J.; Gross, Brooks A.; Riley, Brett T.; Poe, Gina R.
2016-01-01
Study Objectives: Investigators assign sleep-waking states using brain activity collected from a single site, with the assumption that states occur at the same time throughout the brain. We sought to determine if sleep-waking states differ between two separate structures: the hippocampus and neocortex. Methods: We measured electrical signals (electroencephalograms and electromyograms) during sleep from the hippocampus and neocortex of five freely behaving adult male rats. We assigned sleep-waking states in 10-sec epochs based on standard scoring criteria across a 4-h recording, then analyzed and compared states and signals from simultaneous epochs between sites. Results: We found that the total amount of each state, assigned independently using the hippocampal and neocortical signals, was similar between the hippocampus and neocortex. However, states at simultaneous epochs were different as often as they were the same (P = 0.82). Furthermore, we found that the progression of states often flowed through asynchronous state-pairs led by the hippocampus. For example, the hippocampus progressed from transition-to-rapid eye movement sleep to rapid eye movement sleep before the neocortex more often than in synchrony with the neocortex (38.7 ± 16.2% versus 15.8 ± 5.6% mean ± standard error of the mean). Conclusions: We demonstrate that hippocampal and neocortical sleep-waking states often differ in the same epoch. Consequently, electrode location affects estimates of sleep architecture, state transition timing, and perhaps even percentage of time in sleep states. Therefore, under normal conditions, models assuming brain state homogeneity should not be applied to the sleeping or waking brain. Citation: Emrick JJ, Gross BA, Riley BT, Poe GR. Different simultaneous sleep states in the hippocampus and neocortex. SLEEP 2016;39(12):2201–2209. PMID:27748240
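Epoch-by-epoch agreement between two independently scored sites reduces to a simple elementwise comparison of the two state sequences. A sketch (state codes are hypothetical):

```python
def state_agreement(hpc, ctx):
    """Fraction of simultaneous scoring epochs (e.g. 10-s epochs) assigned
    the same sleep-waking state in the hippocampal (hpc) and neocortical
    (ctx) score sequences."""
    same = sum(1 for a, b in zip(hpc, ctx) if a == b)
    return same / len(hpc)
```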
Cerebral palsy characterization by estimating ocular motion
NASA Astrophysics Data System (ADS)
González, Jully; Atehortúa, Angélica; Moncayo, Ricardo; Romero, Eduardo
2017-11-01
Cerebral palsy (CP) covers a large group of motion and posture disorders caused during fetal or infant brain development. Sensory impairment is commonly found in children with CP; between 40 and 75 percent present some form of vision problem or disability. An automatic characterization of cerebral palsy is herein presented by estimating ocular motion during a gaze-pursuit task. Specifically, after automatically detecting the eye location, an optical flow algorithm tracks the eye motion following a pre-established visual assignment. Subsequently, the optical flow trajectories are characterized in the velocity-acceleration phase plane. Differences are quantified in a small set of patients between four and ten years of age.
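The velocity-acceleration characterization can be sketched with finite differences over a tracked gaze trajectory. This is a 1D toy (the study presumably works with 2D optical flow trajectories); names are hypothetical:

```python
def phase_plane(positions, dt):
    """First- and second-order finite differences of a tracked gaze
    trajectory: the (velocity, acceleration) pairs are the coordinates
    used for the phase-plane characterization."""
    vel = [(positions[i + 1] - positions[i]) / dt
           for i in range(len(positions) - 1)]
    acc = [(vel[i + 1] - vel[i]) / dt for i in range(len(vel) - 1)]
    return vel, acc
```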
A comparison of multiprocessor scheduling methods for iterative data flow architectures
NASA Technical Reports Server (NTRS)
Storch, Matthew
1993-01-01
A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.
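The kind of non-preemptive large-grain scheduling these models address can be illustrated with a greedy list scheduler over a data-flow DAG. This is a much-simplified sketch under assumed inputs, not ATAMM or any of the compared models:

```python
def list_schedule(tasks, deps, n_proc):
    """Greedy non-preemptive list scheduling of a data-flow graph.

    tasks: {name: duration}; deps: {name: set of predecessor names}.
    A task becomes ready once all its predecessors have finished; it is
    placed on the processor that can start it earliest.
    Returns {name: (processor, start_time)}.
    """
    finish = {}
    proc_free = [0.0] * n_proc
    schedule = {}
    remaining = set(tasks)
    while remaining:
        ready = sorted(t for t in remaining
                       if deps.get(t, set()) <= finish.keys())
        # among ready tasks, prefer the one whose inputs arrive earliest
        task = min(ready, key=lambda t: max(
            (finish[d] for d in deps.get(t, set())), default=0.0))
        est = max((finish[d] for d in deps.get(task, set())), default=0.0)
        proc = min(range(n_proc), key=lambda p: max(proc_free[p], est))
        start = max(proc_free[proc], est)
        schedule[task] = (proc, start)
        proc_free[proc] = finish[task] = start + tasks[task]
        remaining.remove(task)
    return schedule
```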
Franc-Law, Jeffrey Michael; Ingrassia, Pier Luigi; Ragazzoni, Luca; Della Corte, Francesco
2010-01-01
Training in practical aspects of disaster medicine is often impossible, and simulation may offer an educational opportunity superior to traditional didactic methods. We sought to determine whether exposure to an electronic simulation tool would improve the ability of medical students to manage a simulated disaster. We stratified 22 students by year of education and randomly assigned 50% from each category to form the intervention group, with the remaining 50% forming the control group. Both groups received the same didactic training sessions. The intervention group received additional disaster medicine training on a patient simulator (disastermed.ca), and the control group spent equal time on the simulator in a nondisaster setting. We compared markers of patient flow during a simulated disaster, including mean differences in time and number of patients to reach triage, bed assignment, patient assessment and disposition. In addition, we compared triage accuracy and scores on a structured command-and-control instrument. We collected data on the students' evaluations of the course for secondary purposes. Participants in the intervention group triaged their patients more quickly than participants in the control group (mean difference 43 s, 99.5% confidence interval [CI] 12 to 75 s). The score of performance indicators on a standardized scale was also significantly higher in the intervention group (18/18) when compared with the control group (8/18) (p < 0.001). All students indicated that they preferred the simulation-based curriculum to a lecture-based curriculum. When asked to rate the exercise overall, both groups gave a median score of 8 on a 10-point modified Likert scale. Participation in an electronic disaster simulation using the disastermed.ca software package appears to increase the speed at which medical students triage simulated patients and increase their score on a structured command-and-control performance indicator instrument. 
Participants indicated that the simulation-based curriculum in disaster medicine is preferable to a lecture-based curriculum. Overall student satisfaction with the simulation-based curriculum was high.
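The reported effect sizes above take the form of a mean difference with a 99.5% confidence interval; a normal-approximation sketch is below (z = 2.807 approximates the two-sided 99.5% level; illustrative only, not the study's exact method):

```python
import math


def mean_diff_ci(a, b, z=2.807):
    """Mean difference between two independent groups with a Welch-style
    normal-approximation confidence interval; z = 2.807 corresponds
    roughly to a two-sided 99.5% CI."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    d = ma - mb
    return d, (d - z * se, d + z * se)
```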
Reforming the Eighth-Grade Student Assignment Process for the Philadelphia Public Schools.
ERIC Educational Resources Information Center
Johnson, Michael P.
The eighth grade student assignment project, an initiative of the School District of Philadelphia, assigns students to high school academic programs based on student preferences, academic preparation, program capacity, and desegregation requirements. These programs, called small learning communities (SLCs), emphasize areas such as design and…
Anticipatory Speech Anxiety as a Function of Public Speaking Assignment Type
ERIC Educational Resources Information Center
Witt, Paul L.; Behnke, Ralph R.
2006-01-01
This investigation included two studies relating anticipatory public speaking anxiety to the nature of the speech assignment. Based on uncertainty reduction theory, which suggests that communicators are less comfortable in unfamiliar or unpredictable contexts, two hypotheses were advanced on the presumption that various types of assignments in a…
ERIC Educational Resources Information Center
Kersaint, Gladis
2007-01-01
This article describes a technology integration course planning assignment that was developed to enhance preservice teachers' technological pedagogical content knowledge (TPCK). This assignment required preservice teachers to work with peers to integrate various technological tools (e.g., graphing calculators, web-based mathematics applets, etc.) in a…
A Comparison of Electronic and Paper-Based Assignment Submission and Feedback
ERIC Educational Resources Information Center
Bridge, Pete; Appleyard, Rob
2008-01-01
This paper presents the results of a study evaluating student perceptions of online assignment submission. Forty-seven students submitted assignments and received feedback via features within the Virtual Learning Environment Blackboard[TM]. The students then completed questionnaires comparing their experience of online submission and feedback with…
The Biomes of Homewood: Interactive Map Software
ERIC Educational Resources Information Center
Shingles, Richard; Feist, Theron; Brosnan, Rae
2005-01-01
To build a learning community, the General Biology faculty at Johns Hopkins University conducted collaborative, problem-based learning assignments outside of class in which students are assigned to specific areas on campus, and gather and report data about their area. To overcome the logistics challenges presented by conducting such assignments in…
Automating Formative and Summative Feedback for Individualised Assignments
ERIC Educational Resources Information Center
Hamilton, Ian Robert
2009-01-01
Purpose: The purpose of this paper is to report on the rationale behind the use of a unique paper-based individualised accounting assignment, which automated the provision to students of immediate formative and timely summative feedback. Design/methodology/approach: As students worked towards completing their assignment, the package provided…
2004-03-01
Table-of-contents fragments from the source document: Assignment Sub-Process; Possible Improvements by a Market…; Compensation Strategy; The Right Compensation System; An…; Market-Based Labor Markets (from Gates, 2001); Figure 6, What should a compensation system do? (From…
ERIC Educational Resources Information Center
Jackson, C. Kirabo
2009-01-01
In Trinidad and Tobago students are assigned to secondary schools after fifth grade based on achievement tests, leading to large differences in the school environments to which students of differing initial levels of achievement are exposed. Using both a regression discontinuity design and rule-based instrumental variables to address…
NASA Astrophysics Data System (ADS)
Saurel, Jean-Marie; Randriamora, Frédéric; Bosson, Alexis; Kitou, Thierry; Vidal, Cyril; Bouin, Marie-Paule; de Chabalier, Jean-Bernard; Clouard, Valérie
2010-05-01
Lesser Antilles observatories are in charge of monitoring the volcanoes and earthquakes in the Eastern Caribbean region. During the past two years, our seismic networks have evolved toward fully digital technology. These changes, which include modern three-component sensors, high-dynamic-range digitizers, and high-speed terrestrial and satellite telemetry, improve data quality but also increase the data flows to process and store. Moreover, the generalization of data exchange to build a wide virtual seismic network around the Caribbean domain requires great flexibility to provide and receive data flows in various formats. Like many observatories, we have decided to use the most popular and robust open-source data acquisition systems in today's observatory community: EarthWorm and SeisComP. The former is renowned for its ability to process real-time seismic data flows with a large number of tunable modules (filters, triggers, automatic pickers, locators). The latter is renowned for its ability to exchange seismic data using the international SEED standard (Standard for the Exchange of Earthquake Data), either by producing archive files or by managing output and input SEEDLink flows. The French Antilles Seismological and Volcanological Observatories have chosen to take advantage of the best features of each package to design a new data flow scheme and to integrate it into our global observatory data management system, WebObs [Beauducel et al., 2004]; see the companion paper (Part 2).
We assigned tasks to the different packages according to their main strengths: - EarthWorm first integrates data from heterogeneous sources; - SeisComP takes this homogeneous EarthWorm data flow, adds other sources, and produces SEED archives and a SEED data flow; - EarthWorm is then used again to process this clean and complete SEEDLink data flow, mainly producing triggers, automatic locations, and alarms; - WebObs provides a friendly human interface, both to the administrator for station management and to the regular user for everyday real-time analysis of the seismic data (event classification database, location scripts, automatic shakemaps, and a regional catalog with associated hypocenter maps).
50 CFR 679.50 - Groundfish Observer Program.
Code of Federal Regulations, 2010 CFR
2010-10-01
... following: (A) Identification of the management, organizational structure, and ownership structure of the.../processors. A catcher/processor will be assigned to a fishery category based on the retained groundfish catch... in Federal waters will be assigned to a fishery category based on the retained groundfish catch...
Macias, Cathaleene; Barreira, Paul; Hargreaves, William; Bickman, Leonard; Fisher, William; Aronson, Elliot
2005-04-01
The inability to blind research participants to their experimental conditions is the Achilles' heel of mental health services research. When one experimental condition receives more disappointed participants, or more satisfied participants, research findings can be biased in spite of random assignment. The authors explored the potential for research participants' preference for one experimental program over another to compromise the generalizability and validity of randomized controlled service evaluations as well as cross-study comparisons. Three Cox regression analyses measured the impact of applicants' service assignment preference on research project enrollment, engagement in assigned services, and a service-related outcome, competitive employment. A stated service preference, referral by an agency with a low level of continuity in outpatient care, and willingness to switch from current services were significant positive predictors of research enrollment. Match to service assignment preference was a significant positive predictor of service engagement, and mismatch to assignment preference was a significant negative predictor of both service engagement and employment outcome. Referral source type and service assignment preference should be routinely measured and statistically controlled for in all studies of mental health service effectiveness to provide a sound empirical base for evidence-based practice.
Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment
Karimzadehgan, Maryam; Zhai, ChengXiang
2011-01-01
Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching. PMID:22711970
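The abstract casts committee review assignment as an integer linear program but does not reproduce its formulation, so the following is a minimal brute-force sketch of the same constraint structure on a toy instance. The expertise matrix, paper topic distributions, quota, and the max-over-members coverage score are all illustrative assumptions, not taken from the paper: each paper receives K reviewers, each reviewer serves on at most QUOTA papers, and the objective rewards how well each committee covers each topic aspect of its paper.

```python
from itertools import combinations

# Toy expertise matrix: expertise[r][t] = reviewer r's strength on topic t.
expertise = [
    [0.9, 0.1, 0.0],  # reviewer 0
    [0.2, 0.8, 0.1],  # reviewer 1
    [0.0, 0.3, 0.9],  # reviewer 2
    [0.5, 0.5, 0.2],  # reviewer 3
]

# Each paper is a distribution over the same topics.
papers = [
    [0.7, 0.3, 0.0],  # paper A
    [0.0, 0.2, 0.8],  # paper B
]

K = 2      # reviewers per paper
QUOTA = 1  # max papers per reviewer

def coverage(committee, paper):
    # A committee covers a topic as well as its best member does;
    # the score weights that coverage by the paper's topic distribution.
    return sum(w * max(expertise[r][t] for r in committee)
               for t, w in enumerate(paper))

def solve(papers):
    best, best_score = None, -1.0
    choices = [list(combinations(range(len(expertise)), K)) for _ in papers]

    def rec(i, picked, load, score):
        nonlocal best, best_score
        if i == len(papers):
            if score > best_score:
                best, best_score = list(picked), score
            return
        for comm in choices[i]:
            # Respect the per-reviewer quota constraint.
            if all(load.get(r, 0) < QUOTA for r in comm):
                for r in comm:
                    load[r] = load.get(r, 0) + 1
                rec(i + 1, picked + [comm], load, score + coverage(comm, papers[i]))
                for r in comm:
                    load[r] -= 1

    rec(0, [], {}, 0.0)
    return best

assignment = solve(papers)
```

A real instance would hand the same objective and constraints to an ILP solver rather than enumerating; the sketch only shows what is being optimized.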
Kamali, Fahimeh; Mirkhani, Hossein; Nematollahi, Ahmadreza; Heidari, Saeed; Moosavi, Elahesadat; Mohamadi, Marzieh
2017-04-01
Transcutaneous electrical nerve stimulation (TENS) is widely used in clinical practice to increase blood flow. The best stimulation location for achieving optimal blood flow has not yet been determined. We compared the effect of TENS applied at sympathetic ganglions and at acupuncture points on blood flow in the foot of healthy individuals. Seventy-five healthy individuals were randomly assigned to three groups. The first group received cutaneous electrical stimulation at the thoracolumbar sympathetic ganglions. The second group received stimulation at acupuncture points. The third group received stimulation in the mid-calf area as a control. Blood flow was recorded with a laser Doppler flowmeter at time zero as a baseline and every 3 minutes thereafter during stimulation. Individuals who received sympathetic ganglion stimulation showed significantly greater blood flow than those receiving acupuncture point stimulation or those in the control group (p<0.001). Data analysis revealed that blood flow at different times during stimulation increased significantly from time zero in each group. Therefore, the application of low-frequency TENS at the thoracolumbar sympathetic ganglions was more effective in increasing peripheral blood circulation than stimulation at acupuncture points. Copyright © 2017 Medical Association of Pharmacopuncture Institute. Published by Elsevier B.V. All rights reserved.
da Mata, A D S P; da Silva Marques, D N; Silveira, J M L; Marques, J R O F; de Melo Campos Felino, E T; Guilherme, N F R P M
2009-04-01
To compare salivary pH changes and stimulation efficacy of two different gustatory stimulants of salivary secretion (GSSS). Portuguese Dental Faculty Clinic. Double blind randomized controlled trial. One hundred and twenty volunteers were randomized to two intervention groups. Sample size was calculated using an alpha error of 0.05 and a beta of 0.20. Participants were randomly assigned to receive either a new gustatory stimulant of salivary secretion containing a weaker malic acid, fluoride and xylitol, or a traditional citric acid-based one. Saliva collection was performed by established methods at different times. The salivary pH of the samples was determined with a pH meter and a microelectrode. Salivary pH variations, counts of subjects with pH below 5.5 for over 1 min, and stimulated salivary flow were the main outcome measures. Both GSSS significantly stimulated salivary output, without significant differences between the two groups. The new gustatory stimulant of salivary secretion presented a risk reduction of 80 +/- 10.6% (95% CI) compared with the traditional one. Gustatory stimulants of salivary secretion with fluoride, xylitol and lower acid content maintain similar salivary stimulation capacity while significantly reducing the predicted potential for dental erosion.
NASA Technical Reports Server (NTRS)
Deyoung, R. J.; Walker, G. H.; Williams, M. D.; Schuster, G. L.; Conway, E. J.
1987-01-01
A preliminary conceptual design of a space-based solar pumped iodide laser emitting 1 megawatt of laser power for space-to-space power transmission is described. A near parabolic solar collector focuses sunlight onto the t-C4F9I (perfluoro-t butyl iodide) lasant within a transverse flow optical cavity. Using waste heat, a thermal system was designed to supply compressor and auxiliary power. System components were designed with weight and cost estimates assigned. Although cost is very approximate, the cost comparison of individual system components leads to valuable insights for future research. In particular, it was found that laser efficiency was not a dominant cost or weight factor, the dominant factor being the laser cavity and laser transmission optics. The manufacturing cost was approx. two thirds of the total cost with transportation to orbit the remainder. The flowing nonrenewable lasant comprised 20% of the total life cycle cost of the system and thus was not a major cost factor. The station mass was 92,000 kg without lasant, requiring approx. four shuttle flights to low Earth orbit where an orbital transfer vehicle will transport it to the final altitude of 6378 km.
Global Optimization of Emergency Evacuation Assignments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Lee; Yuan, Fang; Chin, Shih-Miao
2006-01-01
Conventional emergency evacuation plans often assign evacuees to fixed routes or destinations based mainly on geographic proximity. Such approaches can be inefficient if the roads are congested, blocked, or otherwise dangerous because of the emergency. By not constraining evacuees to prespecified destinations, a one-destination evacuation approach provides flexibility in the optimization process. We present a framework for the simultaneous optimization of evacuation-traffic distribution and assignment. Based on the one-destination evacuation concept, we can obtain the optimal destination and route assignment by solving a one-destination traffic-assignment problem on a modified network representation. In a county-wide, large-scale evacuation case study, the one-destination model yields substantial improvement over the conventional approach, with the overall evacuation time reduced by more than 60 percent. More importantly, emergency planners can easily implement this framework by instructing evacuees to go to destinations that the one-destination optimization process selects.
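The modified network representation behind the one-destination concept is commonly realized with a virtual super-sink: every real safe destination is connected to one artificial node by zero-cost links, so a single shortest-path (or traffic-assignment) computation picks both route and destination at once. The toy network, node names, and travel times below are illustrative, not from the case study:

```python
import heapq

# Toy road network: graph[u] = [(v, travel_time), ...]
graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('D1', 5)],
    'C': [('B', 1), ('D2', 8)],
    'D1': [], 'D2': [],
}
safe_destinations = ['D1', 'D2']

# One-destination trick: connect every real destination to a single
# virtual super-sink with zero-cost links, then solve an ordinary
# shortest-path problem to that one node.
graph['SINK'] = []
for d in safe_destinations:
    graph[d] = graph.get(d, []) + [('SINK', 0)]

def shortest_to_sink(graph, source):
    """Dijkstra to the super-sink; returns (cost, assigned destination)."""
    dist, prev = {source: 0}, {}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Walk back from SINK; the node just before SINK is the chosen destination.
    node, path = 'SINK', []
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    path.reverse()
    return dist['SINK'], path[-2]

cost, dest = shortest_to_sink(graph, 'A')
```

The evacuee at A is routed A→C→B→D1; the optimizer, not a proximity rule, selects D1. A full implementation would replace Dijkstra with a congestion-aware traffic-assignment solver on the same augmented network.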
Calibrated peer review assignments for the earth sciences
Rudd, J.A.; Wang, V.Z.; Cervato, C.; Ridky, R.W.
2009-01-01
Calibrated Peer Review[TM] (CPR), a web-based instructional tool developed as part of the National Science Foundation reform initiatives in undergraduate science education, allows instructors to incorporate multiple writing assignments in large courses without overwhelming the instructor. This study reports successful implementation of CPR in a large, introductory geology course and student learning of geoscience content. For each CPR assignment in this study, students studied web-based and paper resources, wrote an essay, and reviewed seven essays (three from the instructor, three from peers, and their own) on the topic. Although many students expressed negative attitudes and concerns, particularly about the peer review process of this innovative instructional approach, they also recognized the learning potential of completing CPR assignments. Comparing instruction on earthquakes and plate boundaries using a CPR assignment vs. an instructional video lecture and homework essay with extensive instructor feedback, students mastered more content via CPR instruction.
NASA Astrophysics Data System (ADS)
Wu, Shanhua; Yang, Zhongzhen
2018-07-01
This paper aims to optimize the locations of manufacturing industries in the context of economic globalization by proposing a bi-level programming model that integrates a location optimization model with a traffic assignment model. In the model, the transport network is divided into subnetworks for raw materials and for products. The upper-level model determines the location of industries and the OD matrices of raw materials and products; the lower-level model calculates the attributes of traffic flow under the given OD matrices. A genetic algorithm is designed to solve the model. The proposed method is tested using the Chinese steel industry as an example. The results indicate that the proposed method can help decision-makers implement location decisions for manufacturing industries effectively.
Hazel, Joseph E.; Kaplinski, Matt; Parnell, Rod; Kohl, Keith; Topping, David J.
2007-01-01
This report presents stage-discharge relations for 47 discrete locations along the Colorado River, downstream from Glen Canyon Dam. Predicting the river stage that results from changes in flow regime is important for many studies investigating the effects of dam operations on resources in and along the Colorado River. The empirically based stage-discharge relations were developed from water-surface elevation data surveyed at known discharges at all 47 locations. The rating curves accurately predict stage at each location for discharges between 141 cubic meters per second and 1,274 cubic meters per second. The coefficient of determination (R2) of the fit to the data ranged from 0.993 to 1.00. Given the various errors contributing to the method, a conservative error estimate of ±0.05 m was assigned to the rating curves.
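The report does not state the functional form of its rating curves, but a common sketch of an empirical stage-discharge relation is a power law fitted by linear regression in log-log space. The discharge endpoints below match the report's stated range; the stage values and coefficients are synthetic, for illustration only:

```python
import numpy as np

# Hypothetical survey at one location: discharge Q (m^3/s) and surveyed
# water-surface elevation (stage, m). Stage values are synthetic,
# generated from an assumed power law stage = a * Q^b.
Q = np.array([141.0, 250.0, 400.0, 700.0, 1274.0])
a_true, b_true = 2.0, 0.4
stage = a_true * Q ** b_true

# Fit stage = a * Q^b: take logs so the model is linear,
# log(stage) = b * log(Q) + log(a), and fit a degree-1 polynomial.
b, log_a = np.polyfit(np.log(Q), np.log(stage), 1)
a = np.exp(log_a)

def predict_stage(q):
    """Rating-curve prediction of stage (m) for discharge q (m^3/s)."""
    return a * q ** b
```

With real survey data the fit would not be exact, and the residuals would feed the R² and the ±0.05 m error estimate quoted in the report.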
Multi-dimension feature fusion for action recognition
NASA Astrophysics Data System (ADS)
Dong, Pei; Li, Jie; Dong, Junyu; Qi, Lin
2018-04-01
Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. The challenge for action recognition is to capture and fuse the multi-dimensional information in video data. In order to take these characteristics into account simultaneously, we present a novel method that fuses features from multiple dimensions, such as chromatic images, depth, and optical flow fields. We built our model on multi-stream deep convolutional networks, with the help of temporal segment networks, and extract discriminative spatial and temporal features by fusing the ConvNet towers across dimensions, assigning different feature weights in order to take full advantage of this multi-dimensional information. Our architecture is trained and evaluated on the largest and most challenging benchmark currently available, the NTU RGB-D dataset. The experiments demonstrate that our method outperforms the state-of-the-art methods.
Definition of a Sinkhole hazard methodology in the Pontina Plain (Latium Region, Central Italy)
NASA Astrophysics Data System (ADS)
Teoli, Pamela; Mazza, Roberto; Capelli, Giuseppe
2010-05-01
The work presented here continues the Sinkhole Project of the Latium Region (2002), carried out by researchers of the Laboratory of Applied Geology and Hydrogeology of the Department of Geological Sciences of the University "Roma Tre", Rome (Italy). That project identified sinkhole-prone areas in the various plains of the Region, using a methodology based on the superimposition of thematic layers corresponding to geological and anthropogenic triggering factors. In recent years the Laboratory's researchers have conducted several specific investigations in the Pontina Plain, located in the southwest of the Latium Region, concerning the geological-stratigraphic setting, the pattern of flow in the aquifers of the Pontine depression, the physicochemical groundwater characteristics, the density of wells, the amount of well pumping, and piezometric changes. This required several piezometric and physicochemical surveys and the collection and validation of a large number of stratigraphic and geophysical data. All archived data have been computerized and the maps vectorized, which now makes it possible to carry out the analysis with Geographical Information Systems and to start numerical flow simulations, regarding both the heavily drained deep confined aquifer and the areas subject to substantial water exchange between the recharge area in the Lepini Mountains (a carbonate ridge) and the terrigenous aquifers of the plain. Among the main causes that trigger catastrophic collapses are, in fact, all the phenomena that reduce mass density through erosion, leaching, and dissolution. All these agents are associated with water circulation: flow, velocity, CO2 saturation rate, and carbonate saturation rate.
Deep, high-yield pumping wells are widespread in the Pontina Plain; where they were badly built, without proper drilling practice and without correctly placed cemented sections, they can allow artesian groundwater to rise into poorly consolidated sandy horizons, triggering liquefaction and collapse phenomena. Thanks to the numerous piezometric surveys, different areas have been identified in the plain: areas of artesian wells with full water rise, areas with few artesian wells, and areas with partial water lift. The analysis of geophysical data has made it possible to draw deep profiles showing that, along the axis of the plain, the roof of the carbonates has very variable elevation, owing to a complex tectonic evolution. In correspondence with one of the highest buried structural highs in the carbonate substratum of the plain, a karst cave was intercepted while drilling on the vertical of a sinkhole that opened in 1989; the cave could be interpreted as a paleospring. The correct application of drilling and hole-completion technologies, however, has prevented a recurrence of the collapse. This poster illustrates a matrix calculation implemented by the authors, which yields an assessment of the distribution of sinkhole hazard in the Pontina Plain. The matrix takes into account different parameters related to the triggering causes of the phenomenon. Each parameter is assigned a value (index) representing its variation. The study area was divided into 150 m square cells, and each cell is assigned a sinkhole hazard index, the sum of the individual indices assigned to that cell. This methodology, widely used in other scientific research, must still be improved by optimizing the values and weights assigned to each parameter and by extending the matrix with additional parameters that influence the phenomenon.
A discussion of the importance of these characterizing parameters is presented to support further development of the methodology.
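The per-cell index summation described in the abstract can be sketched in a few lines. The factor names, weights, and class indices below are entirely hypothetical, since the poster does not list its parameters; the sketch only shows the mechanics of a weighted hazard-index matrix over 150 m cells:

```python
# Hypothetical hazard factors: each 150 m cell gets one class index per
# triggering factor (0 = low, 1 = medium, 2 = high); the cell's hazard
# is the weighted sum of its indices.
factors = ['piezometric_rise', 'soft_cover_thickness', 'well_density']
weights = {'piezometric_rise': 3, 'soft_cover_thickness': 2, 'well_density': 1}

# cells[cell_id][factor] = class index for that factor in that cell
cells = {
    'c01': {'piezometric_rise': 2, 'soft_cover_thickness': 1, 'well_density': 2},
    'c02': {'piezometric_rise': 0, 'soft_cover_thickness': 1, 'well_density': 0},
}

def hazard_index(cell):
    # Sum of weighted factor indices for one cell.
    return sum(weights[f] * cell[f] for f in factors)

# Rank cells from most to least hazardous.
ranking = sorted(cells, key=lambda c: hazard_index(cells[c]), reverse=True)
```

Calibrating the weights (and adding factors) is exactly the refinement step the authors say the methodology still needs.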
Pavelko, Michael T.
2010-01-01
The water-level database for the Death Valley regional groundwater flow system in Nevada and California was updated. The database includes more than 54,000 water levels collected from 1907 to 2007, from more than 1,800 wells. Water levels were assigned a primary flag and multiple secondary flags that describe hydrologic conditions and trends at the time of the measurement and identify pertinent information about the well or water-level measurement. The flags provide a subjective measure of the relative accuracy of the measurements and are used to identify which water levels are appropriate for calculating head observations in a regional transient groundwater flow model. Included in the report appendix are all water-level data and their flags, selected well data, and an interactive spreadsheet for viewing hydrographs and well locations.
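Selecting model-ready measurements from a flagged database like the one described can be sketched as a simple filter. The flag names and records below are invented for illustration; the actual primary and secondary flag vocabulary is defined in the report, not reproduced here:

```python
# Hypothetical flag scheme: each measurement carries a primary flag and
# optional secondary flags; only measurements flagged as representative
# of static conditions feed the transient-model head observations.
records = [
    {'well': 'W1', 'year': 1998, 'level_m': 712.4,
     'primary': 'static', 'secondary': []},
    {'well': 'W1', 'year': 2001, 'level_m': 709.9,
     'primary': 'pumping', 'secondary': ['nearby_withdrawal']},
    {'well': 'W2', 'year': 2005, 'level_m': 640.2,
     'primary': 'static', 'secondary': ['regional_decline']},
]

# Keep static measurements not disturbed by nearby withdrawal.
usable = [r for r in records
          if r['primary'] == 'static'
          and 'nearby_withdrawal' not in r['secondary']]
```

The point of the flags is exactly this kind of subjective-but-systematic screening before water levels become head observations.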
Inhibition of coronary blood flow by a vascular waterfall mechanism.
Downey, J M; Kirk, E S
1975-06-01
The mechanism whereby systole inhibits coronary blood flow was examined. A branch of the left coronary artery was maximally dilated with an adenosine infusion, and the pressure-flow relationship was obtained for the beating and arrested states. The pressure-flow curve for the beating state was shifted toward higher pressures and, in the range of pressures above peak ventricular pressure, was linear and parallel to that for the arrested state. Below this range the curve for the beating state converged toward that for the arrested state and was convex to the pressure axis. These results were compared with a model of the coronary vasculature that consisted of numerous parallel channels, each responding to local intramyocardial pressure by forming vascular waterfalls. When intramyocardial pressure in the model was assigned values from zero at the epicardium to peak ventricular pressure at the endocardium, pressure-flow curves similar to the experimental ones resulted. Thus, we conclude that systole inhibits coronary perfusion by the formation of vascular waterfalls and that the intramyocardial pressures responsible for this inhibition do not significantly exceed peak ventricular pressure.
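The parallel-channel waterfall model lends itself to a short numerical sketch. The channel count, unit resistance, and pressure values are illustrative assumptions, not from the paper; only the structure (a channel conducts when perfusion pressure exceeds its local intramyocardial back pressure, with back pressure rising linearly from epicardium to endocardium) follows the description above:

```python
def total_flow(perfusion_p, peak_lv_p, n_channels=100, resistance=1.0):
    """Mean flow per channel through parallel waterfall channels.
    Intramyocardial pressure rises linearly from 0 (epicardium) to
    peak_lv_p (endocardium); a channel conducts only when perfusion
    pressure exceeds its local back pressure."""
    flow = 0.0
    for i in range(n_channels):
        p_im = peak_lv_p * i / (n_channels - 1)
        flow += max(0.0, perfusion_p - p_im) / resistance
    return flow / n_channels

# Arrested state: peak_lv_p = 0, so every channel conducts at all
# pressures and the pressure-flow curve is a straight line.
# Beating state: channels drop out as perfusion pressure falls below
# their back pressure, bending the curve toward the pressure axis.
```

Above peak ventricular pressure every channel conducts, so the beating-state curve is linear and parallel to the arrested-state curve; below it, channels shut progressively, reproducing the convexity reported in the experiments.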
Graduate Writing Assignments across Faculties in a Canadian University
ERIC Educational Resources Information Center
Shi, Ling; Dong, Yanning
2015-01-01
This study examines 143 graduate assignments across 12 faculties or schools in a Canadian university in order to identify types of writing tasks. Based on the descriptions provided by the instructors, we identified nine types of assignments, with scholarly essay being the most common, followed by summary and response, literature review, project,…
ERIC Educational Resources Information Center
Martini, Tanya S.; Rail, Ashley; Norton, Cole
2015-01-01
We examined first-year psychology majors' (N = 195) beliefs about the relevance of two types of university assignments (individual essay and group wiki) and their connection to the development of career-related skills. Students reported that assignments were only somewhat relevant to their career goals, and relevance ratings were typically…
ERIC Educational Resources Information Center
Sormunen, Eero; Tanni, Mikko; Heinström, Jannica
2013-01-01
Introduction: Information literacy instruction is often undertaken in schools as collaborative source-based writing assignments. This paper presents the findings of a study on collaboration in two school assignments designed for information literacy. Method: The study draws on the models of cooperative and collaborative learning and the task-based…
Evaluation of a Brief Homework Assignment Designed to Reduce Citation Problems
ERIC Educational Resources Information Center
Schuetze, Pamela
2004-01-01
I evaluated a brief homework assignment designed to reduce citation problems in research-based term papers. Students in 2 developmental psychology classes received a brief presentation and handout defining plagiarism with tips on how to cite sources to avoid plagiarizing. In addition, students in 1 class completed 2 brief homework assignments in…
Detecting Plagiarism in MS Access Assignments
ERIC Educational Resources Information Center
Singh, Anil
2013-01-01
Assurance of individual effort from students in computer-based assignments is a challenge. Due to digitization, students can easily use a copy of their friend's work and submit it as their own. Plagiarism in assignments puts students who cheat at par with those who work honestly and this compromises the learning evaluation process. Using a…
Individual Assignments and Academic Dishonesty--Exploring the Learning Conundrum
ERIC Educational Resources Information Center
Leonard, Valorie; LeBrasseur, Rolland
2008-01-01
A survey of university business professors focused on their use of individual assignments in courses and their views on cheating and its impact on student learning. Based on responses from 456 professors (37% response rate) from Ontario, Canada, it was concluded that most faculty believe that individual assignments are effective learning tools and…
75 FR 66038 - Planning Resource Adequacy Assessment Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... (R1 and R2) are assigned a violation risk factor (VRF) and violation severity level (VSL). However... base penalty amount. To do so, RFC is to assign a VRF to each Requirement and sub-Requirement of a... each VRF assignment.\\59\\ \\59\\ See North American Electric Reliability Corp., 119 FERC ] 61,145, order...
Teaching Complaint and Adjustment Letters--And Tact (My Favorite Assignment).
ERIC Educational Resources Information Center
Deimling, Paula
1992-01-01
Describes a three-part assignment in which each student writes a complaint letter and an adjustment letter responding to another student's complaint letter. Discusses how the third part of the assignment--journal entries--allows students to formulate their own criteria for excellent letters based upon their reactions to the letters they receive.…
Integrated model-based retargeting and optical proximity correction
NASA Astrophysics Data System (ADS)
Agarwal, Kanak B.; Banerjee, Shayak
2011-04-01
Conventional resolution enhancement techniques (RET) are becoming increasingly inadequate at addressing the challenges of subwavelength lithography. In particular, features show high sensitivity to process variation in low-k1 lithography. Process variation aware RETs such as process-window OPC (PWOPC) are becoming increasingly important to guarantee high lithographic yield, but such techniques suffer from high runtime impact. An alternative to PWOPC is to perform retargeting, which is a rule-assisted modification of target layout shapes to improve their process window. However, rule-based retargeting is not a scalable technique, since rules cannot cover the entire search space of two-dimensional shape configurations, especially with technology scaling. In this paper, we propose to integrate the processes of retargeting and optical proximity correction (OPC). We utilize the normalized image log slope (NILS) metric, which is available at no extra computational cost during OPC. We use NILS to guide dynamic target modification between iterations of OPC. We utilize the NILS tagging capabilities of Calibre TCL scripting to identify fragments with low NILS. We then perform NILS binning to assign different magnitudes of retargeting to different NILS bins. NILS is determined both for width, to identify regions of pinching, and for space, to locate regions of potential bridging. We develop an integrated flow for 1x metal lines (M1) which exhibits fewer lithographic hotspots compared to a flow with just OPC and no retargeting. We also observe cases where hotspots that existed in the rule-based retargeting flow are fixed using our methodology. We finally also demonstrate that such a retargeting methodology does not significantly alter design properties by electrically simulating a latch layout before and after retargeting.
We observe less than 1% impact on latch Clk-Q and D-Q delays post-retargeting, which makes this methodology an attractive one for use in improving shape process windows without perturbing designed values.
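The NILS-binning step described above can be sketched as a small lookup: fragments with lower NILS receive a larger target-edge bias. The thresholds and bias magnitudes below are hypothetical placeholders, not the paper's calibrated values, and the fragment names are invented:

```python
# Hypothetical NILS bins: (nils_upper_bound, edge_bias_nm). The lower
# the NILS of an OPC fragment, the more aggressively its target edge
# is moved to widen the process window.
BINS = [
    (1.0, 6.0),   # very low NILS: strong retargeting
    (1.5, 4.0),
    (2.0, 2.0),
]                 # NILS >= 2.0: leave the target unchanged

def retarget_bias(nils):
    # Return the edge bias (nm) for the first bin the NILS falls into.
    for upper, bias in BINS:
        if nils < upper:
            return bias
    return 0.0

# (fragment_name, simulated NILS) pairs from an OPC iteration.
fragments = [('f1', 0.8), ('f2', 1.7), ('f3', 2.4)]
biases = {name: retarget_bias(n) for name, n in fragments}
```

In the integrated flow this lookup would run between OPC iterations, with separate width and space bins to address pinching and bridging respectively.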
NASA Astrophysics Data System (ADS)
Prior, Phil; Chen, Xinfeng; Botros, Maikel; Paulson, Eric S.; Lawton, Colleen; Erickson, Beth; Li, X. Allen
2016-05-01
The treatment planning in radiation therapy (RT) can be arranged to combine the benefits of computed tomography (CT) and magnetic resonance imaging (MRI), maintaining dose calculation accuracy while improving target delineation. Our aim is to study the dosimetric impact of uniform relative electron density assignment on IMRT treatment planning, with additional consideration given to the effect of a 1.5 T transverse magnetic field (TMF) in an MR-Linac. A series of intensity modulated RT (IMRT) plans were generated for two representative tumor sites, pancreas and prostate, using CT and MRI datasets. Representative CT-based IMRT plans were generated to assess the impact of different electron density (ED) assignments on plan quality using CT without the presence of a 1.5 T TMF. The relative ED (rED) values used were taken from ICRU report 46. Four types of rED assignment in the organs at risk (OARs), the planning target volumes (PTV), and the non-specified tissue (NST) were considered. Dose was recalculated (without re-optimization) using a Monaco 5.09.07a research planning system employing Monte Carlo calculations with an option to include the TMF. To investigate the dosimetric effect of different rED assignments, the dose-volume parameters (DVPs) obtained from these specific rED plans were compared with those obtained from the original CT-based plans. Overall, we found that uniform rED assignment results in differences in DVPs within 3% for the PTV and 5% for the OARs. The presence of the 1.5 T TMF resulted in differences in IMRT DVPs that were generally within 3% of the gold standard for both the pancreas and prostate. The combination of uniform rED assignment and the TMF produced differences in DVPs within 4-5% of the gold standard. Larger differences in DVPs were observed for OARs on T2-based plans. The effects of using different rED assignments and the presence of the 1.5 T TMF for pancreas and prostate IMRT plans are generally within 3% and 5% of the PTV and OAR gold standard values.
There are noticeable dosimetric differences between the CT- and MRI-based IMRT plans, caused by a combination of anatomical changes between the two image acquisition times, uniform rED assignment, and the 1.5 T TMF. This work was presented in part at the 2014 ASTRO annual meeting.
Joint Inference of Population Assignment and Demographic History
Choi, Sang Chul; Hey, Jody
2011-01-01
A new approach to assigning individuals to populations using genetic data is described. Most existing methods work by maximizing Hardy–Weinberg and linkage equilibrium within populations, neither of which will apply for many demographic histories. By including a demographic model, within a likelihood framework based on coalescent theory, we can jointly study demographic history and population assignment. Genealogies and population assignments are sampled from a posterior distribution using a general isolation-with-migration model for multiple populations. A measure of partition distance between assignments facilitates not only the summary of a posterior sample of assignments, but also the estimation of the posterior density for the demographic history. It is shown that joint estimates of assignment and demographic history are possible, including estimation of population phylogeny for samples from three populations. The new method is compared to results of a widely used assignment method, using simulated and published empirical data sets. PMID:21775468
NASA Astrophysics Data System (ADS)
Costa, A.; Molnar, P.; Schmitt, R. J. P.
2017-12-01
The grain size distribution (GSD) of river bed sediment results from the long-term balance between transport capacity and sediment supply. Changes in climate and human activities may alter the spatial distribution of transport capacity and sediment supply along channels and hence impact local bedload transport and GSD. The effects of changed flow are not easily inferable, due to the non-linear, threshold-based nature of the relation between discharge and sediment mobilization, and to the network-scale control on local sediment supply. We present a network-scale model for fractional sediment transport to quantify the impact of hydropower (HP) operations on river network GSD. We represent the river network as a series of connected links, for which we extract the geometric characteristics from satellite images and a digital elevation model. We assign surface roughness based on the channel bed GSD. Bed shear stress is estimated at the link scale under the assumptions of rectangular prismatic cross sections and normal flow. The mass balance between sediment supply and transport capacity, computed with the Wilcock and Crowe model, determines the transport rates of multiple grain size classes and the resulting GSD. We apply the model to the upper Rhone basin, a large Alpine basin in Switzerland. Since the 1960s, changed flow conditions due to HP operations and sediment storage behind dams have potentially altered the sediment transport of the basin; however, little is known about the magnitude and spatial distribution of these changes. We force the model with time series of daily discharge derived with a spatially distributed hydrological model for pre- and post-HP scenarios. We initialize the GSD under the assumption that coarse grains (d90) are mobilized only during mean annual maximum flows, and on the basis of ratios between d90 and characteristic diameters estimated from field measurements.
Results show that the effects of flow regulation vary significantly in space and time and are grain-size dependent. HP operations led to an overall reduction of sediment transport at the network scale, especially in summer and for coarser grains, leading to a general coarsening of the river bed sediments in the upstream reaches. The model makes it possible to investigate the impact of modified HP operations and of climate change projections on sediment dynamics at the network scale.
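The link-scale hydraulics described above can be sketched minimally (illustrative only: a single Shields threshold stands in for the full Wilcock and Crowe fractional-transport relation, and all channel numbers are invented):

```python
RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81  # water/sediment density (kg/m3), gravity

def bed_shear_stress(width, depth, slope):
    """Link-scale bed shear stress (Pa) for a rectangular prismatic
    cross section under the normal-flow assumption: tau = rho*g*Rh*S."""
    rh = (width * depth) / (width + 2.0 * depth)  # hydraulic radius (m)
    return RHO_W * G * rh * slope

def is_mobile(tau, d, shields_crit=0.045):
    """Crude mobility check: dimensionless (Shields) stress for grain
    size d (m) against a single critical value, a stand-in for the
    full fractional-transport calculation."""
    return tau / ((RHO_S - RHO_W) * G * d) > shields_crit

tau = bed_shear_stress(width=20.0, depth=1.5, slope=0.005)
print(f"tau = {tau:.1f} Pa, d90 = 0.1 m mobile? {is_mobile(tau, 0.10)}")
```

The threshold behavior visible here (coarse fractions immobile except at the highest stresses) is what makes the response to regulated flows non-linear and grain-size dependent.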
Evaluation of a School-Based Teen Obesity Prevention Minimal Intervention
ERIC Educational Resources Information Center
Abood, Doris A.; Black, David R.; Coster, Daniel C.
2008-01-01
Objective: A school-based nutrition education minimal intervention (MI) was evaluated. Design: The design was experimental, with random assignment at the school level. Setting: Seven schools were randomly assigned as experimental and seven as delayed-treatment. Participants: The experimental group included 551 teens, and the delayed treatment group…
Distributed Collaborative Homework Activities in a Problem-Based Usability Engineering Course
ERIC Educational Resources Information Center
Carroll, John M.; Jiang, Hao; Borge, Marcela
2015-01-01
Teams of students in an upper-division undergraduate Usability Engineering course used a collaborative environment to carry out a series of three distributed collaborative homework assignments. Assignments were case-based analyses structured using a jigsaw design; students were provided a collaborative software environment and introduced to a…
Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows
Wang, Di; Kleinberg, Robert D.
2009-01-01
Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
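For intuition, the quantities the flow constructions bound and the persistencies they detect can be checked by brute force on a toy instance (exponential enumeration, not the paper's polynomial-time flow algorithms):

```python
from itertools import product

def qubo_min(lin, quad, n):
    """Brute-force the minimum of f(x) = sum_i lin[i]*x[i]
    + sum_{i<j} quad[(i,j)]*x[i]*x[j] over {0,1}^n, returning the
    minimum value and ALL minimizing assignments."""
    best, argmins = None, []
    for x in product((0, 1), repeat=n):
        v = sum(lin[i] * x[i] for i in range(n))
        v += sum(c * x[i] * x[j] for (i, j), c in quad.items())
        if best is None or v < best:
            best, argmins = v, [x]
        elif v == best:
            argmins.append(x)
    return best, argmins

def persistencies(argmins, n):
    """Variable pairs whose equal/unequal relation is identical across
    every minimizer -- the 'relational persistencies' of the abstract."""
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            rels = {xi[i] == xi[j] for xi in argmins}
            if len(rels) == 1:
                out.append((i, j, rels.pop()))
    return out

lin = [1, -2, 0]
quad = {(0, 1): 2, (1, 2): -1}   # f = x0 - 2*x1 + 2*x0*x1 - x1*x2
best, argmins = qubo_min(lin, quad, 3)
print(best, argmins, persistencies(argmins, 3))  # minimum -3 at x = (0, 1, 1)
```

The paper's contribution is to recover bounds on `best` and a subset of these persistencies from (multicommodity) flow computations rather than enumeration.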
Kugonza, D R; Kiwuwa, G H; Mpairwe, D; Jianlin, H; Nabasirye, M; Okeyo, A M; Hanotte, O
2012-02-01
This study aimed to estimate the level of relatedness within Ankole cattle herds using autosomal microsatellite markers and to assess the accuracy of relationship assignment based on farmers' memory. Eight cattle populations (four from each of two counties in Mbarara district in Uganda) were studied. Cattle in each population shared varying degrees of relatedness (first-, second- and third-degree relatives and unrelated individuals). Only memory-based kinship assignments that farmers knew with some confidence were tested in this experiment. DNA isolated from the blood of a subsample of 304 animals was analysed using 19 microsatellite markers. Average within-population relatedness coefficients ranged from 0.010 ± 0.005 (Nshaara) to 0.067 ± 0.004 (Tayebwa). An exclusion probability of 99.9% was observed for both sire-offspring and dam-offspring relationships using the entire panel of 19 markers. Likelihood tests performed on 292 dyads showed that first-degree relatives were correctly assigned by farmers more often than second-degree ones (p < 0.01), which were in turn easier to assign than third-degree relatives (p < 0.01). Accuracy of kinship assignment by the farmers was 91.9% ± 5.0 for dam-offspring dyads, 85.5% ± 3.4 for sire-offspring dyads, 75.6% ± 12.3 for half-sib dyads and 60.0% ± 5.0 for grand dam-grand offspring dyads. Herd size, the number of dyads assigned and the length of time spent by the herder with the cattle population did not correlate with error in memorizing relationships. However, herd size strongly correlated with the number of dyads assigned by the herder (r = 0.967, p < 0.001). Overall, we conclude that the memorized records of pastoralists can be used to trace relationships and for pedigree reconstruction within Ankole cattle populations, with the caveat that herd size constrains the number of kinship assignments a farmer can remember. © 2011 Blackwell Verlag GmbH.
Advances and Limitations of Modern Macroseismic Data Gathering
NASA Astrophysics Data System (ADS)
Wald, D. J.; Dewey, J. W.; Quitoriano, V. P. R.
2016-12-01
Not all macroseismic data are created equal. At about the time that the European Macroseismic Scale 1998 (EMS-98; itself a revision of EMS-92) formalized a procedure to account for building vulnerability and damage-grade statistics in assigning intensities from traditional field observations, a parallel universe of internet-based intensity reporting was coming online. The divergence of intensities assigned by field reconnaissance from intensities based on volunteered reports poses unique challenges. The U.S. Geological Survey's Did You Feel It? (DYFI) system and its Italian (National Institute of Geophysics and Volcanology) counterpart use questionnaires based on the traditional format, submitted by volunteers. The Italian strategy uses fuzzy logic to assign integer intensity values from questionnaire responses, whereas DYFI assigns weights to macroseismic effects and computes real-valued intensities to a precision of 0.1 MMI units. DYFI responses may be grouped by postal code or by smaller latitude-longitude boxes; the calculated intensities may vary depending on how observations are grouped. New smartphone-based procedures depart further from tradition by asking respondents to select the cartoon, from a set corresponding to various intensity levels, that best fits their experience. While nearly instantaneous, these thumbnail-based intensities are strictly integer-valued and do not record specific macroseismic effects. Finally, a recent variation on traditional intensity assignment derives intensities not from field surveys or questionnaires sent to target audiences but from media reports, photojournalism, and internet posts that may or may not constitute the representative observations needed for consistent EMS-98 assignments. We review these issues and suggest due-diligence strategies for utilizing varied macroseismic data sets within real-time applications and in quantitative hazard and engineering analyses.
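The sensitivity of grouped intensities to the choice of geographic bin can be sketched as follows (an illustrative simple average only, not the USGS DYFI weighting algorithm; all report values are hypothetical):

```python
from collections import defaultdict

def grouped_intensity(reports, key):
    """Average the real-valued intensities of individual reports within
    each geographic bin (plain mean; not the DYFI weighting scheme)."""
    bins = defaultdict(list)
    for r in reports:
        bins[key(r)].append(r["mmi"])
    return {k: round(sum(v) / len(v), 1) for k, v in bins.items()}

# hypothetical felt reports: postal code, location, computed intensity
reports = [
    {"zip": "80401", "lat": 39.76, "lon": -105.22, "mmi": 4.2},
    {"zip": "80401", "lat": 39.71, "lon": -105.18, "mmi": 5.6},
    {"zip": "80403", "lat": 39.83, "lon": -105.23, "mmi": 3.9},
]

by_zip = grouped_intensity(reports, key=lambda r: r["zip"])
# regrouping the same reports into 0.1-degree boxes changes the values
by_box = grouped_intensity(reports, key=lambda r: (round(r["lat"], 1),
                                                   round(r["lon"], 1)))
print(by_zip, by_box)
```

The two groupings pool different subsets of the same three reports, so the mapped intensity at a given location depends on the binning choice, which is the point made in the abstract.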
Constructing cardiovascular fitness knowledge in physical education
Zhang, Tan; Chen, Ang; Chen, Senlin; Hong, Deockki; Loflin, Jerry; Ennis, Catherine
2015-01-01
In physical education, it has become necessary for children to learn kinesiological knowledge for understanding the benefits of physical activity and developing a physically active lifestyle. This study was conducted to determine the extent to which cognitive assignments about healthful living and fitness contributed to knowledge growth on cardiorespiratory fitness and health. Fourth-grade students (N = 616) from 15 randomly sampled urban elementary schools completed 34 cognitive assignments related to the cardiorespiratory physical activities they were engaged in across 10 lessons. Performance on the assignments was analyzed in relation to knowledge gain measured using a standardized knowledge test. A multivariate discriminant analysis revealed that the cognitive assignments contributed to knowledge gain, but the contribution varied assignment by assignment. A multiple regression analysis indicated that students' assignment performance by lesson contributed positively to their knowledge growth scores. A content analysis based on the constructivist learning framework showed that observing-reasoning assignments contributed the most to knowledge growth. Analytical and analytical-application assignments contributed less than constructivist theories would predict. PMID:25995702
USDA-ARS?s Scientific Manuscript database
A 4-unit dual-flow continuous culture fermentor system was used to evaluate the effects of herbage, a total mixed ration (TMR) and flaxseed on nutrient digestibility and microbial N synthesis. Treatments were randomly assigned to fermentors in a 4 x 4 Latin square design. Each fermentor was fed a to...
Code of Federal Regulations, 2010 CFR
2010-07-01
... characteristics; i.e., levels of biochemical oxygen demand, suspended solids, etc. Each class is then assigned its... works. Factors such as strength, volume, and delivery flow rate characteristics shall be considered and... user charges can be developed on a volume basis in accordance with the model below: Cu = CT/VT(Vu) (2...
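Equation (2) in the excerpt, Cu = (CT/VT)Vu, charges each user the works' per-unit treatment cost times that user's delivered volume; a minimal sketch with invented numbers:

```python
def user_charge(total_cost, total_volume, user_volume):
    """Volume-based user charge from the model in the excerpt:
    Cu = (CT / VT) * Vu, where CT is the total annual cost of
    operating the treatment works, VT the total volume treated,
    and Vu the volume delivered by the user."""
    return (total_cost / total_volume) * user_volume

# e.g. $500,000 annual O&M cost, 2.0e6 m3 treated, one user delivering 40,000 m3
print(user_charge(500_000, 2_000_000, 40_000))  # -> 10000.0
```

In practice the regulation also lets strength and delivery flow-rate characteristics enter the charge, which this purely volumetric sketch ignores.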
Hypermixer Pylon Fuel Injection for Scramjet Combustors
2008-09-11
…menu was selected, and the fuel port boundary conditions were assigned the proper mass flow values. Adding Mach number 1.0 ethylene injection from the…
da Silva Marques, Duarte Nuno; da Mata, António Duarte Sola Pereira; Patto, José Maria Vaz; Barcelos, Filipe Alexandre Duarte; de Almeida Rato Amaral, João Pedro; de Oliveira, Miguel Constantino Mendes; Ferreira, Cristina Gutierrez Castanheira
2011-11-01
To compare salivary pH changes and stimulation efficacy of two different gustatory stimulants of salivary secretion (GSSS) in patients with primary Sjögren syndrome. Portuguese Institute for Rheumatological Diseases. Double-blind randomized controlled trial. Eighty patients were randomized to two intervention groups. Sample size was calculated using an alpha error of 0.05 and a beta of 0.20. Participants were randomly assigned to receive either a new GSSS containing a weaker malic acid, fluoride and xylitol or a traditional citric acid-based one. Saliva was collected by established methods at different times. The salivary pH of the samples was determined with a pH meter and a microelectrode. Salivary pH variations, counts of subjects with pH below 4.5 for over 1 min, and stimulated salivary flow were the main outcome measures. Both GSSS significantly stimulated salivary output, without significant differences between the two groups. The new gustatory stimulant of salivary secretion presented an absolute risk reduction of 52.78% [33.42-72.13 (95% CI)] when compared with the traditional one. In xerostomic primary Sjögren syndrome patients, gustatory stimulants of salivary secretion based on malic acid with fluoride and xylitol present salivary stimulation capacity similar to citric acid-based ones, besides significantly reducing the number of salivary pH drops below 4.5. This could be related to a diminished risk of dental erosion and should be confirmed in further studies. © 2011 John Wiley & Sons A/S.
Automatic Classification of Aerial Imagery for Urban Hydrological Applications
NASA Astrophysics Data System (ADS)
Paul, A.; Yang, C.; Breitkopf, U.; Liu, Y.; Wang, Z.; Rottensteiner, F.; Wallner, M.; Verworn, A.; Heipke, C.
2018-04-01
In this paper we investigate the potential of automatic supervised classification for urban hydrological applications. In particular, we contribute to runoff simulations using hydrodynamic urban drainage models. In order to assess whether the capacity of the sewers is sufficient to avoid surcharge within certain return periods, precipitation is transformed into runoff. This transformation requires knowledge about the proportion of drainage-effective areas and their spatial distribution in the catchment area. Common simulation methods use the coefficient of imperviousness as an important parameter to estimate the overland flow, which subsequently contributes to the pipe flow. The coefficient of imperviousness is the percentage of area covered by impervious surfaces such as roofs or road surfaces. It is still common practice to assign the coefficient of imperviousness for each land parcel manually, by visual interpretation of aerial images. Based on the classification results of such imagery, we contribute to an objective, automatic determination of the coefficient of imperviousness. In this context we compare two classification techniques: Random Forests (RF) and Conditional Random Fields (CRF). Experiments on an urban test area show good results and confirm that the automated derivation of the coefficient of imperviousness, apart from being more objective and thus reproducible, delivers more accurate results than the interactive estimation. We achieve an overall accuracy of about 85% for both classifiers. The root mean square error of the coefficient-of-imperviousness differences relative to the reference is 4.4% for the CRF-based classification and 3.8% for the RF-based classification.
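The derived quantity and the reported error measure can be sketched from a classified label raster (a toy illustration, not the paper's RF/CRF pipeline; the class names and pixel counts are invented):

```python
import math

IMPERVIOUS = {"roof", "road"}  # example impervious class labels

def imperviousness(labels):
    """Coefficient of imperviousness of a parcel: percentage of its
    classified pixels labelled as an impervious surface class."""
    return 100.0 * sum(lab in IMPERVIOUS for lab in labels) / len(labels)

def rmse(pred, ref):
    """Root mean square error between predicted and reference values."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred))

# a 100-pixel parcel: 30 roof, 20 road, 40 grass, 10 tree
parcel = ["roof"] * 30 + ["road"] * 20 + ["grass"] * 40 + ["tree"] * 10
c = imperviousness(parcel)           # 50.0 % impervious
print(c, rmse([c], [46.0]))          # compared against a manual reference value
```

Over many parcels, the per-parcel differences against interactively estimated reference coefficients give the 3.8-4.4% RMSE figures quoted in the abstract.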
Colnot, Thomas; Dekant, Wolfgang
2017-02-01
The European Food Safety Authority (EFSA) is developing approaches to cumulative risk assessment of pesticides by assigning individual pesticides to cumulative assessment groups (CAGs). For assignment to CAGs, EFSA recommended relying on adverse effects on the specific target system. Contractors to EFSA have proposed to allocate individual pesticides into CAGs relying on NOAELs for effects on target organs. This manuscript evaluates the assignments by applying EFSA's criteria to the CAGs "Toxicity to the nervous system" and "Toxicity to the thyroid hormone system (gland or hormones)". Assignment to the CAG "Toxicity to the nervous system" based, for example, on neurochemical effects such as cholinesterase inhibition is well supported, whereas assignment to the CAG "Toxicity to the thyroid hormone system (gland or hormones)" has been based, in the examined case studies, on non-reproducible effects seen in single studies or on observations that are not adverse. Therefore, a more detailed evaluation of effects is required to assign a pesticide to a CAG for a target organ where many confounders are present. Relative potency factors in cumulative risk assessment should be based on benchmark doses from studies in one species with identical study design, and the human relevance of effects on specific target organs should be analyzed to define minimal margins of exposure. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Eardley, Debra L; Krumwiede, Kelly A; Secginli, Selda; Garner, Linda; DeBlieck, Conni; Cosansu, Gulhan; Nahcivan, Nursen O
2018-06-01
Advancements in healthcare systems include adoption of health information technology to ensure healthcare quality. Educators are challenged to determine strategies to integrate health information technology into nursing curricula for building a nursing workforce competent with electronic health records, standardized terminology, evidence-based practice, and evaluation. Nursing informatics, a growing specialty field, comprises health information technology relative to the profession of nursing. It is essential to integrate nursing informatics across nursing curricula to effectively position competent graduates in technology-laden healthcare environments. Nurse scholars developed and evaluated a nursing informatics case study assignment used in undergraduate level public health nursing courses. The assignment included an unfolding scenario followed by electronic health record charting using standardized terminology to guide the nursing process. The assignment was delivered either online or in class. Seventy-two undergraduate students completed the assignment and a posttest. Fifty-one students completed a satisfaction survey. Results indicated that students who completed the assignment online demonstrated a higher level of content mastery than those who completed the assignment in class. Content mastery was based on posttest results, which evaluated students' electronic health record charting for the nursing assessment, evidence-based interventions, and evaluations. This innovative approach may be valuable to educators in response to the National Academy of Sciences recommendations for healthcare education reform.
Preliminary geologic map of the Sleeping Butte volcanic centers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowe, B.M.; Perry, F.V.
1991-07-01
The Sleeping Butte volcanic centers comprise two spatially separate, small-volume (<0.1 km³) basaltic centers formed by mildly explosive Strombolian eruptions. The Little Black Peak cone consists of a main scoria cone, two small satellitic scoria mounds, and associated lobate lava flows that vented from sites at the base of the scoria cone. The Hidden Cone center consists of a main scoria cone that developed on the north-facing slope of Sleeping Butte. This center formed during two episodes. The first included the formation of the main scoria cone and the venting of aa lava flows from radial dikes at the northeast base of the cone. The second included the eruption of scoria-fall deposits from the summit crater. The ages of the Little Black Peak and the Hidden Cone are estimated to be between 200 and 400 ka, based on whole-rock K-Ar age determinations with large analytical uncertainty. This age assignment is consistent with qualitative observations of the degree of soil development and geomorphic degradation of volcanic landforms. The younger episode of the Hidden Cone is inferred to be significantly younger, probably of Late Pleistocene or Holocene age, based on the absence of cone-slope rilling, the absence of cone-slope apron deposits, an erosional unconformity between the two episodes, the poor horizon development of soils, and the presence of fall deposits on modern alluvial surfaces. Paleomagnetic data show that the centers record similar but not identical directions of remanent magnetization. Paleomagnetic data have not been obtained for the youngest deposits of the Hidden Cone center. Further geochronology, soils, geomorphic, and petrology studies of the Sleeping Butte volcanic centers are planned. 20 refs., 3 figs.
Macias, Cathaleene; Barreira, Paul; Hargreaves, William; Bickman, Leonard; Fisher, William; Aronson, Elliot
2009-01-01
Objective The inability to blind research participants to their experimental conditions is the Achilles’ heel of mental health services research. When one experimental condition receives more disappointed participants, or more satisfied participants, research findings can be biased in spite of random assignment. The authors explored the potential for research participants’ preference for one experimental program over another to compromise the generalizability and validity of randomized controlled service evaluations as well as cross-study comparisons. Method Three Cox regression analyses measured the impact of applicants’ service assignment preference on research project enrollment, engagement in assigned services, and a service-related outcome, competitive employment. Results A stated service preference, referral by an agency with a low level of continuity in outpatient care, and willingness to switch from current services were significant positive predictors of research enrollment. Match to service assignment preference was a significant positive predictor of service engagement, and mismatch to assignment preference was a significant negative predictor of both service engagement and employment outcome. Conclusions Referral source type and service assignment preference should be routinely measured and statistically controlled for in all studies of mental health service effectiveness to provide a sound empirical base for evidence-based practice. PMID:15800153
NASA Astrophysics Data System (ADS)
Guex, Guillaume
2016-05-01
In recent articles about graphs, different models have proposed a formalism to find a type of path between two nodes, the source and the target, at a crossroads between the shortest path and the random-walk path. These models include a freely adjustable parameter, allowing one to tune the behavior of the path toward randomized movements or direct routes. This article presents a natural generalization of these models, namely a model with multiple sources and targets. In this context, source nodes can be viewed as locations with a supply of a certain good (e.g. people, money, information) and target nodes as locations with a demand for the same good. An algorithm is constructed to display the flow of goods in the network between sources and targets. With, again, a freely adjustable parameter, this flow can be tuned to follow routes of minimum cost, thus displaying the flow in the context of the optimal transportation problem, or, by contrast, a random flow, known to be similar to the electrical current flow if the random walk is reversible. Moreover, a source-target coupling can be retrieved from this flow, offering an optimal assignment for the transportation problem. This algorithm is described in the first part of this article and then illustrated with case studies.
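In the minimum-cost limit with unit supplies and demands, the optimal source-target coupling reduces to an assignment problem; a brute-force sketch (not the article's algorithm, which handles general flows and the randomized regime, and practical only for small n):

```python
from itertools import permutations

def min_cost_coupling(cost):
    """Optimal source-target coupling for unit supplies and demands:
    enumerate the n! one-to-one assignments and keep the cheapest
    (the zero-randomness limit of the tunable flow model)."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if best is None or c < best[0]:
            best = (c, perm)
    return best

# cost[i][j]: cost of routing one unit of the good from source i to target j
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
total, assignment = min_cost_coupling(cost)
print(total, assignment)  # -> 5 (1, 0, 2): source 0 serves target 1, etc.
```

Turning the article's randomness parameter up would spread each source's unit over several targets instead of this deterministic matching.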
Research on Segmentation Monitoring Control of IA-RWA Algorithm with Probe Flow
NASA Astrophysics Data System (ADS)
Ren, Danping; Guo, Kun; Yao, Qiuyan; Zhao, Jijun
2018-04-01
The impairment-aware routing and wavelength assignment algorithm with probe flow (P-IA-RWA) can accurately estimate the transmission quality of a link when a connection request arrives, but it also causes problems: the probe-flow data introduced by the P-IA-RWA algorithm result in competition for wavelength resources. In order to reduce this competition and the blocking probability of the network, a new P-IA-RWA algorithm with a segmentation monitoring-control mechanism (SMC-P-IA-RWA) is proposed. The algorithm reduces the time that network resources are held for the probe flow. It divides the candidate path into segments suitable for data transmission, and the transmission quality of the probe flow sent by the source node is monitored at the endpoint of each segment. The transmission quality of the data can also be monitored, so that appropriate action can be taken to avoid unnecessary probe flows. The simulation results show that the proposed SMC-P-IA-RWA algorithm can effectively reduce the blocking probability. It provides a better solution to the competition for resources between the probe flow and the main data to be transferred, and it is more suitable for scheduling control in large-scale networks.
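Competition for wavelength resources arises because each lightpath must hold the same wavelength on every link it crosses; a minimal first-fit RWA sketch shows how requests get blocked (illustrative core only, with no impairment awareness, probe flows, or segmentation monitoring):

```python
def first_fit_rwa(path_links, usage, n_wavelengths):
    """First-fit wavelength assignment under the wavelength-continuity
    constraint: take the lowest-indexed wavelength that is free on
    EVERY link of the path, or block the request (return None)."""
    for w in range(n_wavelengths):
        if all(w not in usage[link] for link in path_links):
            for link in path_links:
                usage[link].add(w)
            return w
    return None  # blocked

# two-link topology A-B-C with 2 wavelengths per link
usage = {"A-B": set(), "B-C": set()}
requests = [["A-B"], ["A-B", "B-C"], ["A-B"], ["A-B", "B-C"]]
results = [first_fit_rwa(p, usage, n_wavelengths=2) for p in requests]
blocking = results.count(None) / len(results)
print(results, blocking)  # -> [0, 1, None, None] 0.5
```

Any extra traffic that occupies wavelengths, such as probe flows held during quality estimation, raises this blocking probability, which is the contention the SMC mechanism is designed to shorten.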
Flexible Multi agent Algorithm for Distributed Decision Making
2015-01-01
How, J. P. Consensus-Based Auction Approaches for Decentralized Task Assignment. Proceedings of the AIAA Guidance, Navigation, and Control…G.; Kim, Y. Market-Based Decentralized Task Assignment for Cooperative UAV Mission Including Rendezvous. Proceedings of the AIAA Guidance…scalable and adaptable to a variety of specific mission tasks. Additionally, the algorithm could easily be adapted for use on land- or sea-based systems
MacKinnon, Neil; Somashekar, Bagganahalli S; Tripathi, Pratima; Ge, Wencheng; Rajendiran, Thekkelnaycke M; Chinnaiyan, Arul M; Ramamoorthy, Ayyalusamy
2013-01-01
Nuclear magnetic resonance-based measurements of small-molecule mixtures continue to be confronted with the challenge of spectral assignment. While multi-dimensional experiments are capable of addressing this challenge, the imposed time constraint becomes prohibitive, particularly with the large sample sets commonly encountered in metabolomic studies. Thus, one-dimensional spectral assignment is routinely performed, guided by two-dimensional experiments on a selected sample subset; however, a publicly available graphical interface for aiding in this process has been lacking. We have collected spectral information for 360 unique compounds from publicly available databases, including chemical shift lists and authentic full-resolution spectra, supplemented with spectral information for 25 compounds collected in-house at a proton NMR frequency of 900 MHz. This library serves as the basis for MetaboID, a MATLAB-based user interface designed to aid in the one-dimensional spectral assignment process. The tools of MetaboID were built to guide resonance assignment in order of increasing confidence, starting from cursory compound searches based on chemical shift positions and progressing to analysis of authentic spike experiments. Together, these tools streamline the often repetitive task of spectral assignment. The overarching goal of the integrated MetaboID toolbox is to centralize the one-dimensional spectral assignment process, from providing access to large chemical shift libraries to providing a straightforward, intuitive means of spectral comparison. Such a toolbox is expected to be attractive to experienced and new metabolomic researchers alike, as well as to general complex-mixture analysts. Copyright © 2012 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Swaak, Janine; And Others
In this study, learners worked with a simulation of harmonic oscillation. Two supportive measures were introduced: model progression and assignments. In model progression, the model underlying the simulation is not offered in its full complexity from the start, but variables are gradually introduced. Assignments are small exercises that help the…
Online Assignments in Economics: A Test of Their Effectiveness
ERIC Educational Resources Information Center
Kennelly, Brendan; Considine, John; Flannery, Darragh
2011-01-01
This article compares the effectiveness of online and paper-based assignments and tutorials using summative assessment results. All of the students in a large managerial economics course at National University of Ireland, Galway were asked to do six assignments online using Aplia and to do two on paper. The authors examined whether a student's…
Sparks-Thissen, Rebecca L
2017-02-01
Biology education is undergoing a transformation toward a more student-centered, inquiry-driven classroom. Many educators have created engaging assignments designed to help undergraduate students gain exposure to the scientific process and data analysis. One such assignment is the grant proposal assignment. Many instructors have used these assignments in lecture-based courses to help students process information in the literature and apply it to a novel problem, such as the design of an antiviral drug or a vaccine. These assignments have been helpful in engaging students in the scientific process in the absence of an inquiry-driven laboratory. This commentary discusses the application of grant proposal writing assignments to undergraduate biology courses. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Real life working shift assignment problem
NASA Astrophysics Data System (ADS)
Sze, San-Nah; Kwek, Yeek-Ling; Tiong, Wei-King; Chiew, Kang-Leng
2017-07-01
This study concerns the working-shift assignment at an outlet of Supermarket X in Eastern Mall, Kuching. The working-shift assignment must be solved at least once every month. The current approval process for working shifts is troublesome and time-consuming, and the management staff cannot obtain an overview of manpower and the working-shift schedule. Thus, the aim of this study is to develop a working-shift assignment simulation and to propose a working-shift assignment solution. The main objective is to fulfill manpower demand at minimum operating cost, while also satisfying the day-off and meal-break policies. A demand-based heuristic is proposed to assign working shifts, and the quality of the solution is evaluated using real data.
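A demand-based greedy assignment of the kind described can be sketched as follows (a hypothetical illustration, not the study's heuristic; staff names and demands are invented, and the day-off/meal-break policies are reduced to a per-person shift cap):

```python
def assign_shifts(demand, staff, max_shifts):
    """Demand-driven greedy sketch: fill each shift's headcount demand
    with the least-loaded available worker, processing the tightest
    (highest-demand) shifts first."""
    load = {s: 0 for s in staff}
    roster = {shift: [] for shift in demand}
    for shift in sorted(demand, key=demand.get, reverse=True):
        for _ in range(demand[shift]):
            candidates = [s for s in staff
                          if load[s] < max_shifts and s not in roster[shift]]
            if not candidates:
                break  # demand for this shift cannot be met
            worker = min(candidates, key=lambda s: load[s])
            roster[shift].append(worker)
            load[worker] += 1
    return roster

demand = {"morning": 2, "evening": 1, "night": 1}
roster = assign_shifts(demand, staff=["Ana", "Ben", "Cui"], max_shifts=2)
print(roster)
```

Balancing by current load keeps individual workloads, and hence operating cost, low while every shift's headcount demand is met when capacity allows.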
Agagliate, Jacopo; Röttgers, Rüdiger; Twardowski, Michael S; McKee, David
2018-03-01
A flow cytometric (FC) method was developed to retrieve particle size distributions (PSDs) and real refractive index (n_r) information in natural waters. The geometry and signal response of the sensors within the flow cytometer (CytoSense, CytoBuoy b.v., Netherlands) were characterized to form a scattering inversion model based on Mie theory. The procedure produced a mesh of diameter and n_r isolines in which each particle is assigned the diameter and n_r values of the closest node, producing PSDs and particle real refractive index distributions. The method was validated using polystyrene bead standards of known diameter and polydisperse suspensions of oil with known n_r, and subsequently applied to natural samples collected across a broad range of UK shelf seas. FC PSDs were compared with independent PSDs produced from the data of two LISST-100X instruments (type B and type C). PSD slopes and features were found to be consistent between the FC and the two LISST-100X instruments, but the LISST concentrations disagreed both with the FC concentrations and with each other. FC n_r values were found to agree with the expected refractive index values of typical marine particle components across all samples considered. The determination of particle size and refractive index distributions enabled by the FC method has the potential to facilitate identification of the contributions of individual subpopulations to the bulk inherent optical properties and biogeochemical properties of the particle population.
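The closest-node assignment on the (diameter, n_r) mesh can be sketched as a nearest-neighbour lookup (a simplified stand-in: the real method matches measured scattering signals to Mie-modelled node signals, and the grid and distance scaling here are invented):

```python
def build_mesh(diameters, n_rs):
    """Grid of (diameter, n_r) nodes; in the real method each node
    would carry Mie-modelled scattering signals for that combination."""
    return [(d, nr) for d in diameters for nr in n_rs]

def assign_particle(d_obs, nr_obs, mesh, d_scale=10.0):
    """Assign the particle the (diameter, n_r) of the closest mesh node.
    Distance is Euclidean after dividing diameter by d_scale so the
    two axes are numerically comparable."""
    return min(mesh, key=lambda node: ((node[0] - d_obs) / d_scale) ** 2
                                      + (node[1] - nr_obs) ** 2)

# illustrative grid: diameters 1-10 um, n_r from 1.02 to 1.20 in 0.01 steps
mesh = build_mesh(range(1, 11), [round(1.02 + 0.01 * i, 2) for i in range(19)])
print(assign_particle(3.3, 1.051, mesh))  # -> (3, 1.05)
```

Binning every measured particle this way simultaneously yields the PSD (histogram over diameter) and the real refractive index distribution (histogram over n_r).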
Splitting of turbulent spot in transitional pipe flow
NASA Astrophysics Data System (ADS)
Wu, Xiaohua; Moin, Parviz; Adrian, Ronald J.
2017-11-01
A recent study (Wu et al., PNAS, 1509451112, 2015) demonstrated the feasibility and accuracy of direct computation of the Osborne Reynolds pipe transition problem without the unphysical, axially periodic boundary condition. Here we use this approach to study the splitting of turbulent spots in transitional pipe flow, a feature first discovered by E.R. Lindgren (Arkiv Fysik 15, 1959). It has been widely believed that spot splitting is a mysterious stochastic process with general implications for the lifetime and sustainability of wall turbulence. We address the following two questions: (1) What is the dynamics of turbulent spot splitting in pipe transition? Specifically, we look for any possible connection between the instantaneous strain rate field and the spot splitting. (2) How does the passive scalar field behave during pipe spot splitting? In this study, the turbulent spot is introduced at the inlet plane through a sixty-degree-wide numerical wedge within which fully developed turbulent profiles are assigned over a short time interval; the simulation Reynolds numbers are 2400 for a 500-radii-long pipe and 2300 for a 1000-radii-long pipe, respectively. Numerical dye is tagged on the imposed turbulent spot at the inlet. Splitting of the imposed turbulent spot is detected very easily. Preliminary analysis of the DNS results suggests that turbulent spot splitting can be readily understood from the instantaneous strain rate field, and that such spot splitting may not be relevant in external flows such as the flat-plate boundary layer.
Performance in the Citing Behavior of Two Student Writers
ERIC Educational Resources Information Center
Harwood, Nigel; Petric, Bojana
2012-01-01
This article reports the results of an interview-based study which investigated the citation behavior in the assignment writing of two second-language postgraduate business management students, Sofie and Tara. Discourse-based interviews were used to elicit the students' own perspectives on their citation behavior in two of their assignments.…
ERIC Educational Resources Information Center
Attwood, Paul V.
1997-01-01
Describes a self-instructional assignment approach to the teaching of advanced enzymology. Presents an assignment that offers a means of teaching enzymology to students that exposes them to modern computer-based techniques of analyzing protein structure and relates structure to enzyme function. (JRH)
Implications of Income-Based School Assignment Policies for Racial School Segregation
ERIC Educational Resources Information Center
Reardon, Sean F.; Yun, John T.; Kurlaender, Michal
2006-01-01
A number of public school districts in the United States have adopted income-based integration policies--policies that use measures of family income or socioeconomic status--in determining school assignment. Some scholars and policymakers contend that such policies will also reduce racial segregation. In this article this assumption is explored by…
Enhancing Self-Motivation in Learning Programming Using Game-Based Simulation and Metrics
ERIC Educational Resources Information Center
Jiau, H. C.; Chen, J. C.; Ssu, Kuo-Feng
2009-01-01
Game-based assignments typically form an integral component of computer programming courses. The effectiveness of the assignments in motivating students to carry out repetitive programming tasks is somewhat limited since their outcomes are invariably limited to a simple win or loss scenario. Accordingly, this paper develops a simulation…
12 CFR 567.6 - Risk-based capital credit risk-weight categories.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) [Reserved] (vi) Indirect ownership interests in pools of assets. Assets representing an indirect holding of a pool of assets, e.g., mutual funds, are assigned to risk-weight categories under this section based upon the risk weight that would be assigned to the assets in the portfolio of the pool. An...
12 CFR 167.6 - Risk-based capital credit risk-weight categories.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) [Reserved] (vi) Indirect ownership interests in pools of assets. Assets representing an indirect holding of a pool of assets, e.g., mutual funds, are assigned to risk-weight categories under this section based upon the risk weight that would be assigned to the assets in the portfolio of the pool. An...
12 CFR 567.6 - Risk-based capital credit risk-weight categories.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) [Reserved] (vi) Indirect ownership interests in pools of assets. Assets representing an indirect holding of a pool of assets, e.g., mutual funds, are assigned to risk-weight categories under this section based upon the risk weight that would be assigned to the assets in the portfolio of the pool. An...
Technological Change in Assessing Economics: A Cautionary Welcome
ERIC Educational Resources Information Center
Kennelly, Brendan; Considine, John; Flannery, Darragh
2009-01-01
The use of computer-based automated assignment systems in economics has expanded significantly in recent years. The most widely used system is Aplia which was developed by Paul Romer in 2000. Aplia is a computer application designed to replace traditional paper-based assignments in economics. The main features of Aplia are: (1) interactive content…
An initial approach towards quality of service based Spectrum Trading
NASA Astrophysics Data System (ADS)
Bastidas, Carlos E. Caicedo; Vanhoy, Garret; Volos, Haris I.; Bose, Tamal
Spectrum scarcity has become an important issue as demands for higher data rates increase in diverse wireless applications and aerospace communication scenarios. To address this problem, it becomes necessary to manage radio spectrum assignment in a way that optimizes the distribution of spectrum resources among several users while taking into account the quality of service (QoS) characteristics desired by the users of spectrum. In this paper, a novel approach to managing spectrum assignment based on Spectrum Trading (ST) is presented. Market-based spectrum assignment mechanisms such as spectrum trading are of growing interest to many spectrum management agencies that are planning to increase the use of these mechanisms for spectrum management and reduce their emphasis on command-and-control methods. This paper presents some of our initial work on incorporating quality of service information into the mechanisms that determine how spectrum should be traded when using a spectrum exchange. Through simulations and a testbed implementation of a QoS-aware spectrum exchange, our results show the viability of using QoS-based mechanisms in spectrum trading and in the enhancement of dynamic spectrum assignment systems.
NASA Astrophysics Data System (ADS)
Costa, Anna; Molnar, Peter
2017-04-01
Sediment transport rates along rivers and the grain size distribution (GSD) of coarse channel bed sediment are the result of the long-term balance between transport capacity and sediment supply. Transport capacity, mainly a function of channel geometry and flow competence, can be altered by changes in climatic forcing as well as by human activities. In Alpine rivers it is hydropower production systems that are the main cause of modification to the transport capacity of water courses through flow regulation, leading over longer time scales to the adjustment of river bed GSDs. We developed a river network bedload transport model to evaluate the impacts of hydropower on the transfer of sediments and the GSDs of the Upper Rhône basin, a 5,200 km2 catchment located in the Swiss Alps. Many large reservoirs for hydropower production have been built along the main tributaries of the Rhône River since the 1960s, resulting in a complex system of intakes, tunnels, and pumping stations. Sediment storage behind dams and intakes is accompanied by altered discharge due to hydropower operations, mainly higher flows in winter and lower flows in summer. This change in flow regime is expected to have altered bedload transport. However, due to the non-linear, threshold-based nature of the relation between discharge and sediment mobilization, the effects of changed hydraulic conditions are not easily deduced, and because observations of bedload in pre- and post-dam conditions are usually not available, a modelling approach is often necessary. In our modelling approach, the river network is conceptualized as a series of connected links (river reaches). Average geometric characteristics of each link (width, length, and slope of cross section) are extracted from digital elevation data, while surface roughness coefficients are assigned based on the GSD.
Under the assumptions of rectangular prismatic cross sections and normal flow conditions, bed shear stress is estimated from available time series of daily discharge distributed along the river network. Potential bedload transport is estimated by the Wilcock and Crowe surface-based model for the entire GSD. Mass balance between transport capacity and sediment supply, applied to each individual grain size, determines the actual transport and the resulting GSD of the channel bed. Channel bed erosion is allowed through a long-term erosion rate. Sediment input from hillslopes is included as lateral sediment flux. Initial and boundary conditions are set based on available data of GSDs, while an approximation of the depth of the mobile bed is selected through sensitivity analysis. With the river network bedload model we aim to estimate the effect of flow regulation, i.e. altered transport capacity, on sediment transport and GSD of the entire Rhône river system. The model can also be applied as a tool to explore possible changes in bedload transport and channel GSDs under different discharge scenarios based, for example, on climate change projections or modified hydropower operation policies.
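Under the stated assumptions (rectangular prismatic cross section, normal flow), the bed shear stress for a reach follows from the depth-slope product once normal depth is solved from Manning's equation. The sketch below illustrates this step; the Manning roughness and channel dimensions are invented values, not the Rhône dataset.

```python
# Bed shear stress under normal flow in a rectangular channel (sketch).
# Solve Manning's equation for normal depth by bisection, then
# tau = rho * g * R_h * S, with R_h the hydraulic radius.

RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)

def normal_depth(Q, width, slope, n, hi=50.0):
    """Depth h (m) such that Manning discharge equals Q (m^3/s)."""
    def discharge(h):
        area = width * h
        rh = area / (width + 2 * h)            # hydraulic radius
        return area * rh ** (2 / 3) * slope ** 0.5 / n
    lo = 1e-6
    for _ in range(80):                        # bisection on monotone Q(h)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if discharge(mid) < Q else (lo, mid)
    return 0.5 * (lo + hi)

def bed_shear_stress(Q, width, slope, n=0.035):
    h = normal_depth(Q, width, slope, n)
    rh = width * h / (width + 2 * h)
    return RHO * G * rh * slope                # N/m^2

print(round(bed_shear_stress(Q=20.0, width=10.0, slope=0.005), 1))
```

The resulting shear stress would then feed a threshold-based transport formula such as the Wilcock and Crowe model, which is applied per grain-size fraction in the paper's approach.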
Passive larval transport explains recent gene flow in a Mediterranean gorgonian
NASA Astrophysics Data System (ADS)
Padrón, Mariana; Costantini, Federica; Baksay, Sandra; Bramanti, Lorenzo; Guizien, Katell
2018-06-01
Understanding the patterns of connectivity is required by the Strategic Plan for Biodiversity 2011-2020 and will be used to guide the extension of marine protection measures. Despite the increasing accuracy of ocean circulation modelling, the capacity to model the population connectivity of sessile benthic species with dispersal larval stages can be limited due to the potential effect of filters acting before or after dispersal, which modulates offspring release or settlement, respectively. We applied an interdisciplinary approach that combined demographic surveys, genetic methods (assignment tests and coalescent-based analyses) and larval transport simulations to test the relative importance of demographics and ocean currents in shaping the recent patterns of gene flow among populations of a Mediterranean gorgonian ( Eunicella singularis) in a fragmented rocky habitat (Gulf of Lion, NW Mediterranean Sea). We show that larval transport is a dominant driver of recent gene flow among the populations, and significant correlations were found between recent gene flow and larval transport during an average single dispersal event when the pelagic larval durations (PLDs) ranged from 7 to 14 d. Our results suggest that PLDs that efficiently connect populations distributed over a fragmented habitat are filtered by the habitat layout within the species competency period. Moreover, a PLD ranging from 7 to 14 d is sufficient to connect the fragmented rocky substrate of the Gulf of Lion. The rocky areas located in the centre of the Gulf of Lion, which are currently not protected, were identified as essential hubs for the distribution of migrants in the region. We encourage the use of a range of PLDs instead of a single value when estimating larval transport with biophysical models to identify potential connectivity patterns among a network of Marine Protected Areas or even solely a seascape.
Change in oral health status associated with menopause in Japanese dental hygienists.
Yoshida, N; Sugimoto, K; Suzuki, S; Kudo, H
2018-02-01
Oral symptoms such as xerostomia and burning mouth syndrome are recognized to increase in association with menopause. The purpose of this study was to clarify the changes in oral health as well as systemic health due to menopause and their relations with hormonal change and mental status. Ninety-seven female dental hygienists aged 40-59 years were assigned to premenopausal, menopausal and post-menopausal groups based on self-reported menstrual condition. Subjective health statuses were evaluated by questionnaire, and objective holistic and oral statuses were evaluated by measuring serum 17β-estradiol (E2), salivary flow rate, α-amylase and secretory IgA (SIgA) and taste sensitivity. A significant difference among the three groups was observed in the self-rating questionnaire of depression (SRQ-D) score and serum E2 level as well as unstimulated salivary flow rate, whereas no significant difference was observed in the Simplified Menopausal Index, Short-Form 36-Item Health Survey, General Oral Health Assessment Index, salivary α-amylase activity, salivary SIgA concentration and taste threshold. Serum E2 levels positively correlated with unstimulated salivary flow rates and negatively correlated with SRQ-D scores and α-amylase activities. The results demonstrated a negative correlation between E2 levels and SRQ-D scores as well as salivary α-amylase activities, suggesting an influence of E2 on mental condition. Furthermore, the decrease in E2 may reduce salivary flow, which in turn causes various oral health problems. Since the participants were graduates of several dental hygienist schools and work at various places, these results can be generalized to Japanese dental hygienists to some extent. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Simulator test to study hot-flow problems related to a gas cooled reactor
NASA Technical Reports Server (NTRS)
Poole, J. W.; Freeman, M. P.; Doak, K. W.; Thorpe, M. L.
1973-01-01
An advance study of materials, fuel injection, and hot flow problems related to the gas core nuclear rocket is reported. The first task was to test a previously constructed induction heated plasma GCNR simulator above 300 kW. A number of tests are reported operating in the range of 300 kW at 10,000 cps. A second simulator was designed but not constructed for cold-hot visualization studies using louvered walls. A third task was a paper investigation of practical uranium feed systems, including a detailed discussion of related problems. The last assignment resulted in two designs for plasma nozzle test devices that could be operated at 200 atm on hydrogen.
First observation of rotational structures in Re 168
Hartley, D. J.; Janssens, R. V. F.; Riedinger, L. L.; ...
2016-11-30
We assigned the first rotational sequences to the odd-odd nucleus 168Re. Coincidence relationships of these structures with rhenium x rays confirm the isotopic assignment, while arguments based on the γ-ray multiplicity (K-fold) distributions observed with the new bands lead to the mass assignment. Configurations for the two bands were determined through analysis of the rotational alignments of the structures and a comparison of the experimental B(M1)/B(E2) ratios with theory. Tentative spin assignments are proposed for the πh11/2 νi13/2 band, based on energy level systematics for other known sequences in neighboring odd-odd rhenium nuclei, as well as on systematics seen for the signature inversion feature that is well known in this region. Furthermore, the spin assignment for the πh11/2 ν(h9/2/f7/2) structure provides additional validation of the proposed spins and configurations for isomers in the 176Au → 172Ir → 168Re α-decay chain.
The assignment of scores procedure for ordinal categorical data.
Chen, Han-Ching; Wang, Nae-Sheng
2014-01-01
Ordinal data are the most frequently encountered type of data in the social sciences. Many statistical methods can be used to process such data. One common method is to assign scores to the data, convert them into interval data, and then perform statistical analysis. Several authors have recently developed methods for assigning scores to ordered categorical data. This paper proposes an approach that defines a scoring system for an ordinal categorical variable based on an underlying continuous latent distribution, with the interpretation illustrated through three case study examples. The results show that the proposed score system works well for skewed ordinal categorical data.
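One common realization of a latent-distribution score system assigns each category the conditional mean of a standard normal latent variable restricted to that category's quantile interval. The sketch below illustrates this general idea; the category frequencies are made up, and the paper's actual system may differ in its choice of latent distribution.

```python
import math

# Scores for ordinal categories as conditional means of a standard
# normal latent variable within each category's quantile interval:
# score_k = (phi(a_k) - phi(b_k)) / (Phi(b_k) - Phi(a_k)).

def phi(z):    # standard normal pdf
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):    # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def inv_Phi(p, lo=-10.0, hi=10.0):
    for _ in range(80):                # bisection on the monotone cdf
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

def latent_scores(freqs):
    total = sum(freqs)
    cum, cuts = 0, [-math.inf]
    for f in freqs[:-1]:               # cut points at cumulative quantiles
        cum += f
        cuts.append(inv_Phi(cum / total))
    cuts.append(math.inf)
    scores = []
    for a, b in zip(cuts, cuts[1:]):
        pa = 0.0 if a == -math.inf else Phi(a)
        pb = 1.0 if b == math.inf else Phi(b)
        fa = 0.0 if a == -math.inf else phi(a)
        fb = 0.0 if b == math.inf else phi(b)
        scores.append((fa - fb) / (pb - pa))
    return scores

print([round(s, 3) for s in latent_scores([10, 30, 40, 20])])
# → [-1.755, -0.703, 0.266, 1.4]
```

The scores are monotone in category order and, weighted by frequency, average to zero, which is what makes them usable as an interval-scale substitute for the ordinal labels.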
Lava flow risk maps at Mount Cameroon volcano
NASA Astrophysics Data System (ADS)
Favalli, M.; Fornaciai, A.; Papale, P.; Tarquini, S.
2009-04-01
Mount Cameroon, in southwest Cameroon, is one of the most active volcanoes in Africa. Rising 4095 m asl, it has erupted nine times since the beginning of the past century, most recently in 1999 and 2000. Documented eruptions at Mount Cameroon comprise moderate explosive and effusive events from both summit and flank vents. A 1922 SW-flank eruption produced a lava flow that reached the Atlantic coast near the village of Biboundi, and a lava flow from a 1999 south-flank eruption stopped only 200 m from the sea, threatening the villages of Bakingili and Dibunscha. More than 450,000 people live or work around the volcano, making the risk from lava flow invasion a great concern. In this work we propose both conventional hazard and risk maps and novel quantitative risk maps which relate vent locations to the expected total damage to existing buildings. These maps are based on lava flow simulations starting from 70,000 different vent locations, a probability distribution of vent opening, a law for the maximum length of lava flows, and a database of buildings. The simulations were run over the SRTM Digital Elevation Model (DEM) using DOWNFLOW, a fast DEM-driven model that is able to compute detailed invasion areas of lava flows from each vent. We present three different types of risk maps (90-m pixel) for buildings around Mount Cameroon volcano: (1) a conventional risk map that assigns a probability of devastation by lava flows to each pixel representing buildings; (2) a reversed risk map where each pixel expresses the total damage expected as a consequence of vent opening in that pixel (the damage is expressed as the total surface of urbanized areas invaded); (3) maps of the lava catchments of the main towns around the volcano; within each catchment the pixels are classified according to the expected impact they would produce on the relevant town in the case of a vent opening in that pixel.
Maps of type (1) and (3) are useful for long term planning. Maps of type (2) and (3) are useful at the onset of a new eruption, when a vent forms. The combined use of these maps provides an efficient tool for lava flow risk assessment at Mount Cameroon.
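The reversed map of type (2) reduces to a simple aggregation: each candidate vent pixel is mapped to the urbanized surface its simulated flow footprint would invade. A toy sketch follows; the vent labels, flow footprints, and building areas are invented for illustration, not DOWNFLOW output.

```python
# Type-(2) reversed risk map sketch: for each candidate vent pixel,
# total expected damage = urbanized area invaded by the simulated
# flow from that vent (toy footprints, not DOWNFLOW output).

def reversed_risk(vents, building_area):
    """vents: {vent_pixel: set of inundated pixels}; building_area: {pixel: m2}."""
    return {v: sum(building_area.get(p, 0.0) for p in footprint)
            for v, footprint in vents.items()}

vents = {"A": {(0, 0), (0, 1)}, "B": {(0, 1), (1, 1), (2, 1)}}
building_area = {(0, 1): 1200.0, (1, 1): 300.0}
print(reversed_risk(vents, building_area))  # → {'A': 1200.0, 'B': 1500.0}
```

A conventional map of type (1) would instead sum, over all vents that inundate a given building pixel, the probability of each vent opening.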
Infrared absorption spectrum of the simplest deuterated Criegee intermediate CD{sub 2}OO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yu-Hsuan; Nishimura, Yoshifumi; Witek, Henryk A., E-mail: hwitek@mail.nctu.edu.tw, E-mail: yplee@mail.nctu.edu.tw
We report a transient infrared (IR) absorption spectrum of the simplest deuterated Criegee intermediate CD₂OO recorded using a step-scan Fourier-transform spectrometer coupled with a multipass absorption cell. CD₂OO was produced from photolysis of flowing mixtures of CD₂I₂, N₂, and O₂ (13 or 87 Torr) with laser light at 308 nm. The recorded spectrum shows close structural similarity with the spectrum of CH₂OO reported previously [Y.-T. Su et al., Science 340, 174 (2013)]. The four bands observed at 852, 1017, 1054, and 1318 cm⁻¹ are assigned to the OO stretching mode, two distinct in-plane OCD bending modes, and the CO stretching mode of CD₂OO, respectively, according to vibrational wavenumbers, IR intensities, rotational contours, and deuterium-isotopic shifts predicted with extensive quantum-chemical calculations. The CO-stretching mode of CD₂OO at 1318 cm⁻¹ is blue-shifted from the corresponding band of CH₂OO at 1286 cm⁻¹; this can be explained by a mechanism based on mode mixing and isotope substitution. A band near 936 cm⁻¹, observed only at higher pressure (87 Torr), is tentatively assigned to the CD₂ wagging mode of CD₂IOO.
Doppler color imaging. Principles and instrumentation.
Kremkau, F W
1992-01-01
DCI acquires Doppler-shifted echoes from a cross-section of tissue scanned by an ultrasound beam. These echoes are then presented in color and superimposed on the gray-scale anatomic image of non-Doppler-shifted echoes received during the scan. The flow echoes are assigned colors according to the color map chosen. Usually red, yellow, or white indicates positive Doppler shifts (approaching flow) and blue, cyan, or white indicates negative shifts (receding flow). Green is added to indicate variance (disturbed or turbulent flow). Several pulses (the number is called the ensemble length) are needed to generate a color scan line. Linear, convex, phased, and annular arrays are used to acquire the gray-scale and color-flow information. Doppler color-flow instruments are pulsed-Doppler instruments and are subject to the same limitations, such as Doppler angle dependence and aliasing, as other Doppler instruments. Color controls include gain, TGC, map selection, variance on/off, persistence, ensemble length, color/gray priority, Nyquist limit (PRF), baseline shift, wall filter, and color window angle, location, and size. Doppler color-flow instruments generally have output intensities intermediate between those of gray-scale imaging and pulsed-Doppler duplex instruments. Although there is no known risk with the use of color-flow instruments, prudent practice dictates that they be used for medical indications and with the minimum exposure time and instrument output required to obtain the needed diagnostic information.
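The color assignment described above can be illustrated by computing the Doppler shift for a given flow speed and mapping its sign and magnitude to a red/blue shade, with a green channel for variance. This is a sketch of the principle only; the scaling and map shape are illustrative assumptions, not a vendor's color table.

```python
import math

# Doppler color assignment sketch: map a Doppler shift (and variance)
# to an RGB triple the way a simple red/blue color map might
# (scaling and map shape are illustrative, not a vendor's table).

C = 1540.0  # nominal speed of sound in soft tissue, m/s

def doppler_shift(v, f0, angle_deg):
    """Doppler shift (Hz) for flow speed v (m/s) at transmit frequency f0 (Hz)."""
    return 2 * f0 * v * math.cos(math.radians(angle_deg)) / C

def color_for(shift, nyquist, variance=0.0):
    mag = min(abs(shift) / nyquist, 1.0)   # lighter shade = higher velocity
    g = min(variance, 1.0)                 # green channel encodes variance
    # red for approaching flow (positive shift), blue for receding flow
    return (mag, g, 0.0) if shift > 0 else (0.0, g, mag)

f = doppler_shift(v=0.5, f0=5e6, angle_deg=60)
print(round(f), color_for(f, nyquist=4000))
```

Shifts beyond the Nyquist limit saturate here; in a real instrument they alias, wrapping to the opposite color, which is one of the pulsed-Doppler limitations the abstract mentions.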
Zierler, R E; Phillips, D J; Beach, K W; Primozich, J F; Strandness, D E
1987-08-01
The combination of a B-mode imaging system and a single range-gate pulsed Doppler flow velocity detector (duplex scanner) has become the standard noninvasive method for assessing the extracranial carotid artery. However, a significant limitation of this approach is the small area of vessel lumen that can be evaluated at any one time. This report describes a new duplex instrument that displays blood flow as colors superimposed on a real-time B-mode image. Returning echoes from a linear array of transducers are continuously processed for amplitude and phase. Changes in phase are produced by tissue motion and are used to calculate Doppler shift frequency. This results in a color assignment: red and blue indicate direction of flow with respect to the ultrasound beam, and lighter shades represent higher velocities. The carotid bifurcations of 10 normal subjects were studied. Changes in flow velocities across the arterial lumen were clearly visualized as varying shades of red or blue during the cardiac cycle. A region of flow separation was observed in all proximal internal carotids as a blue area located along the outer wall of the bulb. Thus, it is possible to detect the localized flow patterns that characterize normal carotid arteries. Other advantages of color-flow imaging include the ability to rapidly identify the carotid bifurcation branches and any associated anatomic variations.
Web-Based Problem-Solving Assignment and Grading System
NASA Astrophysics Data System (ADS)
Brereton, Giles; Rosenberg, Ronald
2014-11-01
In engineering courses with very specific learning objectives, such as fluid mechanics and thermodynamics, it is conventional to reinforce concepts and principles with problem-solving assignments and to measure success in problem solving as an indicator of student achievement. While the modern-day ease of copying and searching for online solutions can undermine the value of traditional assignments, web-based technologies also provide opportunities to generate individualized, well-posed problems with an essentially unlimited number of different combinations of initial/final/boundary conditions, so that the probability of any two students being assigned identical problems in a course is vanishingly small. Such problems can be designed and programmed to be single- or multiple-step and self-grading; to allow students single or multiple attempts; to provide feedback when incorrect; to be selectable according to difficulty; to be incorporated within gaming packages; etc. In this talk, we discuss the use of a homework/exam generating program of this kind in a single-semester course, within a web-based client-server system that ensures secure operation.
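The individualization idea can be sketched by seeding a random generator with the student's identifier, so each student receives a unique but reproducible parameter set, and the grader regenerates the expected answer on demand. This is a minimal sketch; the problem template and tolerance are invented, not taken from the talk's system.

```python
import random

# Per-student problem generation and self-grading sketch.
# Seeding with the student ID makes the drawn parameters reproducible,
# so the grader can regenerate the expected answer at grading time.

def make_problem(student_id):
    rng = random.Random(student_id)        # reproducible per student
    rho = rng.uniform(800.0, 1200.0)       # fluid density, kg/m^3
    v = rng.uniform(1.0, 5.0)              # flow speed, m/s
    text = f"Compute the dynamic pressure 0.5*rho*v^2 for rho={rho:.1f}, v={v:.2f}."
    return text, 0.5 * rho * v ** 2

def grade(student_id, answer, rel_tol=0.01):
    _, expected = make_problem(student_id)
    return abs(answer - expected) <= rel_tol * abs(expected)

text, expected = make_problem("student42")
print(text)
print(grade("student42", expected * 1.005), grade("student42", expected * 1.2))
# → True False
```

Because no answer key is stored, copying another student's numeric answer fails the tolerance check, while each student's own problem remains stable across attempts.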
The efficacy of IntraFlow intraosseous injection as a primary anesthesia technique.
Remmers, Todd; Glickman, Gerald; Spears, Robert; He, Jianing
2008-03-01
The purpose of this study was to compare the efficacy of intraosseous injection and inferior alveolar (IA) nerve block in anesthetizing mandibular posterior teeth with irreversible pulpitis. Thirty human subjects were randomly assigned to receive either intraosseous injection using the IntraFlow system (Pro-Dex Inc, Santa Ana, CA) or IA block as the primary anesthesia method. Pulpal anesthesia was evaluated via electric pulp testing at 4-minute intervals for 20 minutes. Two consecutive 80/80 readings were considered successful pulpal anesthesia. Anesthesia success or failure was recorded and groups compared. Intraosseous injection provided successful anesthesia in 13 of 15 subjects (87%). The IA block provided successful anesthesia in 9 of 15 subjects (60%). Although this difference was not statistically significant (p = 0.2148), the results of this preliminary study indicate that the IntraFlow system can be used as the primary anesthesia method in teeth with irreversible pulpitis to achieve predictable pulpal anesthesia.
Orbital Evasive Target Tracking and Sensor Management
2012-03-30
The goal of sensor management is to maximize the total information gain in the observer-to-target assignment. We compare the information-based approach to a game-theoretic criterion for tracking with multiple space-borne observers. The results indicate that the game-theoretic approach is more effective than the information-based approach in sensor management.
ERIC Educational Resources Information Center
Cyr, Mary Ann
2013-01-01
The purpose of this qualitative study was to examine the engagement of 11 middle school-aged students from a southeast Michigan public school, who were given laptop computers with twenty-four-hour-a-day Internet access in order to complete homework assignments. Specifically, this study examined the perceptions of sixth-grade students regarding the…
When the Due Date Is Not the "Do" Date!
ERIC Educational Resources Information Center
Mastrianni, Theresa M.
2015-01-01
Our students consistently hand in assignments late, or complete them at the last minute. Why is it that they know about assignments all term long yet only begin them the night before the assignment is due? Based on a career-focused learning community at Kingsborough Community College of the City University of New York, this article looks at ways…
Strucken, Eva M; Al-Mamun, Hawlader A; Esquivelzeta-Rabell, Cecilia; Gondro, Cedric; Mwai, Okeyo A; Gibson, John P
2017-09-12
Smallholder dairy farming in much of the developing world is based on the use of crossbred cows that combine local adaptation traits of indigenous breeds with the high milk yield potential of exotic dairy breeds. Pedigree recording is rare in such systems, which means that it is impossible to make informed breeding decisions. High-density single nucleotide polymorphism (SNP) assays allow accurate estimation of breed composition and parentage assignment but are too expensive for routine application. Our aim was to determine the level of accuracy achieved with low-density SNP assays. We constructed subsets of 100 to 1500 SNPs from the 735k-SNP Illumina panel by selecting: (a) on high minor allele frequencies (MAF) in a crossbred population; (b) on large differences in allele frequency between ancestral breeds; (c) at random; or (d) with a differential evolution algorithm. These panels were tested on a dataset of 1933 crossbred dairy cattle from Kenya/Uganda and on crossbred populations from Ethiopia (N = 545) and Tanzania (N = 462). Dairy breed proportions were estimated by using the ADMIXTURE program, a regression approach, and SNP-best linear unbiased prediction, and tested against estimates obtained by ADMIXTURE based on the 735k-SNP panel. Performance for parentage assignment was based on opposing homozygotes, which were used to calculate the separation value (sv) between true and false assignments. Panels of SNPs based on the largest differences in allele frequency between European dairy breeds and a combined Nelore/N'Dama population gave the best predictions of dairy breed proportion (r² = 0.962 to 0.994 for 100 to 1500 SNPs) with an average absolute bias of 0.026. Panels of SNPs based on the highest MAF in the crossbred population (Kenya/Uganda) gave the most accurate parentage assignments (sv = -1 to 15 for 100 to 1500 SNPs).
Due to the different required properties of SNPs, panels that did well for breed composition did poorly for parentage assignment and vice versa. A combined panel of 400 SNPs was not able to assign parentages correctly, thus we recommend the use of 200 SNPs either for breed proportion prediction or parentage assignment, independently.
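The opposing-homozygotes test can be sketched directly on coded genotypes (0/1/2 copies of an allele): a true parent-offspring pair cannot be homozygous for opposite alleles at the same locus, so such conflicts, beyond genotyping error, exclude the candidate. A minimal sketch with made-up genotypes:

```python
# Count opposing homozygotes between a candidate parent and offspring.
# Genotypes coded 0/1/2 = copies of the reference allele; -1 = missing.
# A 0-vs-2 conflict is impossible for a true parent-offspring pair.

def opposing_homozygotes(parent, offspring):
    return sum(1 for p, o in zip(parent, offspring) if {p, o} == {0, 2})

parent    = [0, 2, 1, 2, 0, -1, 2]
child_ok  = [1, 2, 0, 1, 0, 2, 1]   # consistent: no 0-vs-2 conflicts
child_bad = [2, 0, 1, 0, 2, 2, 0]   # unrelated-looking genotypes
print(opposing_homozygotes(parent, child_ok),
      opposing_homozygotes(parent, child_bad))  # → 0 5
```

The separation value in the study measures how cleanly the count distributions of true and false pairs divide; informative panels push true pairs toward zero conflicts and false pairs toward many.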
Enhancing acronym/abbreviation knowledge bases with semantic information.
Torii, Manabu; Liu, Hongfang
2007-10-11
In the biomedical domain, a terminology knowledge base that associates acronyms/abbreviations (denoted SFs) with their definitions (denoted LFs) is greatly needed. To construct such a terminology knowledge base, we investigate the feasibility of building a system that automatically assigns semantic categories to LFs extracted from text. Given a collection of pairs (SF, LF) derived from text, we i) assess the coverage of LFs and (SF, LF) pairs in the UMLS and justify the need for a semantic category assignment system; and ii) automatically derive name phrases annotated with semantic categories and construct a system using machine learning. Utilizing ADAM, an existing collection of (SF, LF) pairs extracted from MEDLINE, our system achieved an F-measure of 87% when assigning eight UMLS-based semantic groups to LFs. The system has been incorporated into a web interface that integrates SF knowledge from multiple SF knowledge bases. Web site: http://gauss.dbb.georgetown.edu/liblab/SFThesurus.
The Lewis Chemical Equilibrium Program with parametric study capability
NASA Technical Reports Server (NTRS)
Sevigny, R.
1981-01-01
The program was developed to determine chemical equilibrium in complex systems. Using a free energy minimization technique, the program permits calculations such as: chemical equilibrium for assigned thermodynamic states; theoretical rocket performance for both equilibrium and frozen compositions during expansion; incident and reflected shock properties; and Chapman-Jouget detonation properties. It is shown that the same program can handle solid coal in an entrained flow coal gasification problem.
Electronic emission spectrum of methylnitrene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrick, P.G.; Engelking, P.C.
The A ³E–X ³A₂ ultraviolet emission spectrum of methylnitrene (CH₃N) was obtained in two ways: (1) by reacting methyl azide (CH₃N₃) with metastable N₂ in a flowing afterglow; and (2) by discharging a mixture of methyl azide (CH₃N₃) and helium in a corona-excited supersonic expansion (CESE). The origin appears at T₀ = 31 811 cm⁻¹. Several vibrational progressions were observed, leading to the determination of a number of ground-state vibrational frequencies: ν″₁ = 2938, ν″₂ = 1350, ν″₃ = 1039, ν″₄ = 3065, and ν″₆ = 902 cm⁻¹. Deuterium substitution confirmed the assignments of the vibrational frequencies. The X ³A₂ state is a normal, bound local minimum on the triplet electronic potential surface, and the upper A ³E state is able to support at least one quantum of vibration, assigned to ν′₃, predominantly a C–N stretch. A comparison of flowing-afterglow hollow-cathode discharge sources and corona-excited supersonic expansion sources shows the advantage of the CESE method of radical production for spectroscopy.
Parameter interdependence and uncertainty induced by lumping in a hydrologic model
NASA Astrophysics Data System (ADS)
Gallagher, Mark R.; Doherty, John
2007-05-01
Throughout the world, watershed modeling is undertaken using lumped-parameter hydrologic models that are abstract representations of real-world processes, yet rely on algorithms that reflect those processes and on parameters that reflect real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and the corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named, and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process remains open.
NASA Astrophysics Data System (ADS)
Pack, Robert C.; Standiford, Keith; Lukanc, Todd; Ning, Guo Xiang; Verma, Piyush; Batarseh, Fadi; Chua, Gek Soon; Fujimura, Akira; Pang, Linyong
2014-10-01
A methodology is described wherein a calibrated model-based "virtual" Variable Shaped Beam (VSB) mask writer process simulator is used to accurately verify complex Optical Proximity Correction (OPC) and Inverse Lithography Technology (ILT) mask designs prior to Mask Data Preparation (MDP) and mask fabrication. This type of verification addresses physical effects which occur in mask writing that may impact lithographic printing fidelity and variability. The work described here is motivated by requirements for extreme accuracy and control of variations for today's most demanding IC products. These extreme demands necessitate careful and detailed analysis of all potential sources of uncompensated error or variation and extreme control of these at each stage of the integrated OPC/MDP/mask/silicon lithography flow. The important potential sources of variation we focus on here originate in VSB mask writer physics and other errors inherent in the mask writing process. The deposited electron-beam dose distribution may be examined in a manner similar to optical lithography aerial image analysis and image edge log-slope analysis. This approach enables one to catch, grade, and mitigate problems early and thus reduce the likelihood of costly long-loop iterations between OPC, MDP, and wafer fabrication flows. The methodology moreover describes how to detect regions of a layout or mask where hotspots may occur or where the robustness to intrinsic variations may be improved by modification of the OPC, choice of mask technology, or judicious design of VSB shots and dose assignment.
Cueto Díaz, Sergio; Ruiz Encinar, Jorge; García Alonso, J Ignacio
2014-09-24
We present a novel method for the purity assessment of peptide standards which is applicable to any water-soluble peptide. The method is based on the online (13)C isotope dilution approach, in which the peptide is separated from its related impurities by liquid chromatography (LC) and the eluent is mixed post-column with a continuous flow of (13)C-enriched sodium bicarbonate. An online oxidation step using sodium persulfate in acidic media at 99°C provides quantitative oxidation to (12)CO2 and (13)CO2, respectively, which are extracted to a gaseous phase with the help of a gas-permeable membrane. The measurement of the 44/45 isotope ratio in the mass spectrometer allows the construction of the mass flow chromatogram. As the only species finally measured in the mass spectrometer is CO2, the peptide content in the standard can be quantified, on the basis of its carbon content, using a generic primary standard such as potassium hydrogen phthalate. The approach was validated by the analysis of a reference material (NIST 8327) and applied to the quantification of two commercial synthetic peptide standards. In that case, the results were compared with those obtained using alternative methods, such as amino acid analysis and ICP-MS. The results proved the value of the method for the fast, accurate, and precise mass purity assignment of synthetic peptide standards. Copyright © 2014 Elsevier B.V. All rights reserved.
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
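The assembly step described above, local first-order Taylor models blended with radial-basis-function weights, can be sketched as follows. The Gaussian kernel, toy quadratic function, and sampling states are illustrative assumptions, not the paper's aerodynamic models.

```python
import numpy as np

def rbf_blend(x, centers, local_models, width=1.0):
    """Evaluate a nonlinear surrogate at state x by blending local linear
    models f_i(x) = f0_i + J_i @ (x - c_i) with normalized Gaussian weights."""
    w = np.array([np.exp(-np.sum((x - c) ** 2) / width**2) for c in centers])
    w /= w.sum()  # normalize so the weights form a partition of unity
    preds = [f0 + J @ (x - c) for (f0, J), c in zip(local_models, centers)]
    return sum(wi * p for wi, p in zip(w, preds))

# Toy check: local Taylor models of f(x) = x0^2 + x1^2 at two sampling states.
centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
models = [(0.0, np.array([0.0, 0.0])),   # f(c) and grad f(c) at (0, 0)
          (2.0, np.array([2.0, 2.0]))]   # f(c) and grad f(c) at (1, 1)
print(rbf_blend(np.array([0.5, 0.5]), centers, models))
```

The normalized weights make the surrogate reduce to each local model near its own sampling state while varying smoothly in between.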
Smart-Grid Backbone Network Real-Time Delay Reduction via Integer Programming.
Pagadrai, Sasikanth; Yilmaz, Muhittin; Valluri, Pratyush
2016-08-01
This research investigates an optimal delay-based virtual topology design using integer linear programming (ILP), which is applied to current backbone networks such as smart-grid real-time communication systems. A network traffic matrix is applied and the corresponding virtual topology problem is solved using ILP formulations that include a network delay-dependent objective function and lightpath routing, wavelength assignment, wavelength continuity, flow routing, and traffic loss constraints. The proposed optimization approach provides an efficient deterministic integration of intelligent sensing and decision making, and network learning features for superior smart-grid operations by adaptively responding to time-varying network traffic data as well as operational constraints to maintain optimal virtual topologies. A representative optical backbone network has been utilized to demonstrate the proposed optimization framework, whose simulation results indicate that superior smart-grid network performance can be achieved using commercial networks and integer programming.
Some implementational issues of convection schemes for finite volume formulations
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei
1993-01-01
Two higher-order upwind schemes - second-order upwind and QUICK - are examined in terms of their interpretation, implementation as well as performance for a recirculating flow in a lid-driven cavity, in the context of a control volume formulation using the SIMPLE algorithm. The present formulation of these schemes is based on a unified framework wherein the first-order upwind scheme is chosen as the basis, with the remaining terms being assigned to the source term. The performance of these schemes is contrasted with the first-order upwind and second-order central difference schemes. Also addressed in this study is the issue of boundary treatment associated with these higher-order upwind schemes. Two different boundary treatments - one that uses a two-point scheme consistently within a given control volume at the boundary, and the other that maintains consistency of flux across the interior face between the adjacent control volumes - are formulated and evaluated.
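The unified framework described above, with the first-order upwind face value as the basis and the higher-order remainder assigned to the source term, can be sketched on a uniform 1D grid with positive flow. The QUICK face weights below are the standard ones; the cell values are illustrative.

```python
def face_values(phi, i):
    """Face i+1/2 values for positive velocity: first-order upwind and QUICK.
    QUICK uses 3/4 of the upwind cell, 3/8 of the downwind cell, and
    -1/8 of the far-upwind cell on a uniform grid."""
    upwind = phi[i]
    quick = 0.75 * phi[i] + 0.375 * phi[i + 1] - 0.125 * phi[i - 1]
    return upwind, quick

def deferred_correction_source(phi, i):
    """Explicit source-term contribution: (higher-order - upwind) face value,
    so the implicit part keeps the stable first-order upwind stencil."""
    upwind, quick = face_values(phi, i)
    return quick - upwind

phi = [1.0, 2.0, 4.0, 8.0]
up, qk = face_values(phi, 2)          # face between cells 2 and 3
print(up, qk, deferred_correction_source(phi, 2))  # -> 4.0 5.75 1.75
```

Lagging the correction term in the source keeps the coefficient matrix diagonally dominant while recovering higher-order accuracy at convergence.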
ERIC Educational Resources Information Center
Layton, Richard A.; Loughry, Misty L.; Ohland, Matthew W.; Ricco, George D.
2010-01-01
A significant body of research identifies a large number of team composition characteristics that affect the success of individuals and teams in cooperative learning and project-based team environments. Controlling these factors when assigning students to teams should result in improved learning experiences. However, it is very difficult for…
Problem-Based Assignments as a Trigger for Developing Ethical and Reflective Competencies
ERIC Educational Resources Information Center
Euler, Dieter; Kühner, Patrizia
2017-01-01
The following research question serves as the starting point of this research and development project: How, in the context of a didactic design, can problem-based assignments trigger learning activities for the development of ethical and reflective competencies in students in economics courses? This paper focuses on the design of problem-based…
ERIC Educational Resources Information Center
Glaser, Rainer E.
2014-01-01
A writing-intensive, upper-level undergraduate course which integrates content, context, collaboration, and communication in a unique fashion, is described. The topic of the seminar is "Scientific Writing in Chemistry" and an assignment-based curriculum was developed to instruct students on best practices in all aspects of science…
ERIC Educational Resources Information Center
Hutchison, Micol
2016-01-01
Empathy and interdisciplinarity are both concepts that are current and relevant--across professions, in research, and in academia. This paper describes a large, interdisciplinary, project-based assignment, the Empathy Project, which allows students to delve into and increase comfort and skill with interdisciplinary thinking and collaborative…
WWC Review of the Report "Effects of Problem Based Economics on High School Economics Instruction"
ERIC Educational Resources Information Center
What Works Clearinghouse, 2012
2012-01-01
The study described in this report included 128 high school economics teachers from 106 schools in Arizona and California, half of whom were randomly assigned to the "Problem Based Economics Instruction" condition and half of whom were randomly assigned to the comparison condition. High levels of teacher attrition occurred after…
Autonomous Guidance Strategy for Spacecraft Formations and Reconfiguration Maneuvers
NASA Astrophysics Data System (ADS)
Wahl, Theodore P.
A guidance strategy for autonomous spacecraft formation reconfiguration maneuvers is presented. The guidance strategy is presented as an algorithm that solves the linked assignment and delivery problems. The assignment problem is the task of assigning the member spacecraft of the formation to their new positions in the desired formation geometry. The guidance algorithm uses an auction process (also called an "auction algorithm"), presented in the dissertation, to solve the assignment problem. The auction uses the estimated maneuver and time-of-flight costs between the spacecraft and targets to create assignments which minimize a specific "expense" function for the formation. The delivery problem is the task of delivering the spacecraft to their assigned positions, and it is addressed through one of two guidance schemes described in this work. The first is a delivery scheme based on artificial potential function (APF) guidance. APF guidance uses the relative distances between the spacecraft, targets, and any obstacles to design maneuvers based on gradients of potential fields. The second delivery scheme is based on model predictive control (MPC); this method uses a model of the system dynamics to plan a series of maneuvers designed to minimize a unique cost function. The guidance algorithm uses an analytic linearized approximation of the relative orbital dynamics, the Yamanaka-Ankersen state transition matrix, in the auction process and in both delivery methods. The proposed guidance strategy is successful, in simulations, in autonomously assigning the members of the formation to new positions and in delivering the spacecraft to these new positions safely using both delivery methods. This guidance algorithm can serve as the basis for future autonomous guidance strategies for spacecraft formation missions.
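An auction process for the assignment problem can be sketched as follows. This is a generic forward-auction sketch in the style of Bertsekas, with illustrative benefit values (negated costs); it is not the dissertation's expense function or maneuver-cost estimates.

```python
def auction_assign(benefits, eps=0.01):
    """Assign n spacecraft to n targets by bidding: each unassigned spacecraft
    bids its best target's price up by (best value - second-best value + eps).
    benefits[i][j] is the benefit of sending spacecraft i to target j."""
    n = len(benefits)
    prices = [0.0] * n
    owner = [None] * n            # owner[j] = spacecraft currently holding target j
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [benefits[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=lambda j: values[j])
        second = max(v for j, v in enumerate(values) if j != j_best)
        prices[j_best] += values[j_best] - second + eps   # bid up the price
        if owner[j_best] is not None:
            unassigned.append(owner[j_best])              # previous owner is outbid
        owner[j_best] = i
    return {owner[j]: j for j in range(n)}

benefits = [[10, 2, 4],
            [ 8, 7, 3],
            [ 6, 5, 9]]
print(auction_assign(benefits))  # -> {0: 0, 1: 1, 2: 2}
```

With integer benefits and n·eps < 1, the auction terminates at an exactly optimal assignment; larger eps trades optimality for fewer bidding rounds.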
Rule-based support system for multiple UMLS semantic type assignments
Geller, James; He, Zhe; Perl, Yehoshua; Morrey, C. Paul; Xu, Julia
2012-01-01
Background When new concepts are inserted into the UMLS, they are assigned one or several semantic types from the UMLS Semantic Network by the UMLS editors. However, not every combination of semantic types is permissible. It was observed that many concepts with rare combinations of semantic types have erroneous semantic type assignments or prohibited combinations of semantic types. The correction of such errors is resource-intensive. Objective We design a computational system to inform UMLS editors as to whether a specific combination of two, three, four, or five semantic types is permissible, prohibited, or questionable. Methods We identify a set of inclusion and exclusion instructions in the UMLS Semantic Network documentation and derive corresponding rule-categories, as well as rule-categories from the UMLS concept content. We then design an algorithm, adviseEditor, based on these rule-categories. The algorithm specifies how an editor should proceed when considering a tuple (pair, triple, quadruple, quintuple) of semantic types to be assigned to a concept. Results Eight rule-categories were identified. A Web-based system was developed to implement the adviseEditor algorithm, which returns, for an input combination of semantic types, whether it is permitted, prohibited, or (in a few cases) requires more research. The numbers of semantic type pairs assigned to each rule-category are reported. Interesting examples for each rule-category are illustrated. Cases of semantic type assignments that contradict rules are listed, including recently introduced ones. Conclusion The adviseEditor system implements explicit and implicit knowledge available in the UMLS in a system that informs UMLS editors about the permissibility of a desired combination of semantic types. Using adviseEditor might help accelerate the work of the UMLS editors and prevent erroneous semantic type assignments. PMID:23041716
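The rule lookup at the heart of such a system can be sketched as a simple table keyed by unordered combinations of semantic types. The semantic types and verdicts below are purely illustrative placeholders, not the actual UMLS rule-categories.

```python
# Hypothetical rule table: frozensets make the lookup order-independent,
# mirroring the idea that a tuple of semantic types has one verdict.
RULES = {
    frozenset({"Disease or Syndrome", "Finding"}): "permitted",
    frozenset({"Organism", "Pharmacologic Substance"}): "prohibited",
    frozenset({"Gene or Genome", "Enzyme"}): "needs review",
}

def advise_editor(semantic_types):
    """Return the verdict for a combination of semantic types, defaulting to
    'needs review' for combinations not covered by any rule."""
    return RULES.get(frozenset(semantic_types), "needs review")

print(advise_editor(("Organism", "Pharmacologic Substance")))  # -> prohibited
```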
Wright, N C; Foster, P J; Mudano, A S; Melnick, J A; Lewiecki, M E; Shergy, W J; Curtis, J R; Cutter, G R; Danila, M I; Kilgore, M L; Lewis, E C; Morgan, S L; Redden, D T; Warriner, A H; Saag, K G
2017-08-01
The Effectiveness of Discontinuing Bisphosphonates (EDGE) study is a planned pragmatic clinical trial to guide "drug holiday" clinical decision making. This pilot study assessed work flow and feasibility of such a study. While participant recruitment and treatment adherence were suboptimal, administrative procedures were generally feasible and minimally disrupted clinic flow. The comparative effectiveness of continuing or discontinuing long-term alendronate (ALN) on fractures is unknown. A large pragmatic ALN discontinuation study has the potential to answer this question. We conducted a 6-month pilot study of the planned EDGE study among current long-term ALN users (women aged ≥65 with ≥3 years of ALN use) to determine study work flow and feasibility, including evaluating the administrative aspects of trial conduct (e.g., time to contract, institutional review board (IRB) approval), assessing rates of site and participant recruitment, and evaluating post-randomization outcomes, including adherence, bisphosphonate-associated adverse events, and participant and site satisfaction. We assessed outcomes 1 and 6 months after randomization. Nine sites participated, including seven community-based medical practices and two academic medical centers. On average (SD), contract execution took 3.4 (2.3) months and IRB approval took 13.9 (4.1) days. Sites recruited 27 participants (13 to continue ALN and 14 to discontinue ALN). Over follow-up, 22% of participants did not adhere to their randomization assignment: 30.8% in the continuation arm and 14.3% in the discontinuation arm. No fractures or adverse events were reported. Sites reported no issues regarding work flow, and participants were highly satisfied with the study. Administrative procedures of the EDGE study were generally feasible, with minimal disruption to clinic flow. In this convenience sample, participant recruitment was suboptimal across most practice sites.
Accounting for low treatment arm adherence, a comprehensive recruitment approach will be needed to effectively achieve the scientific goals of the EDGE study.
Canovas, Fernando; Ferreira Costa, Joana; Serrão, Ester A.; Pearson, Gareth A.
2011-01-01
Gene flow among hybridizing species with incomplete reproductive barriers blurs species boundaries, while selection under heterogeneous local ecological conditions or along strong gradients may counteract this tendency. Congeneric, externally-fertilizing fucoid brown algae occur as distinct morphotypes along intertidal exposure gradients despite gene flow. Combining analyses of genetic and phenotypic traits, we investigate the potential for physiological resilience to emersion stressors to act as an isolating mechanism in the face of gene flow. Along vertical exposure gradients in the intertidal zone of Northern Portugal and Northwest France, the mid-low shore species Fucus vesiculosus, the upper shore species Fucus spiralis, and an intermediate distinctive morphotype of F. spiralis var. platycarpus were morphologically characterized. Two diagnostic microsatellite loci recovered 3 genetic clusters consistent with prior morphological assignment. Phylogenetic analysis based on single nucleotide polymorphisms in 14 protein coding regions unambiguously resolved 3 clades; sympatric F. vesiculosus, F. spiralis, and the allopatric (in southern Iberia) population of F. spiralis var. platycarpus. In contrast, the sympatric F. spiralis var. platycarpus (from Northern Portugal) was distributed across the 3 clades, strongly suggesting hybridization/introgression with both other entities. Common garden experiments showed that physiological resilience following exposure to desiccation/heat stress differed significantly between the 3 sympatric genetic taxa; consistent with their respective vertical distribution on steep environmental clines in exposure time. Phylogenetic analyses indicate that F. spiralis var. platycarpus is a distinct entity in allopatry, but that extensive gene flow occurs with both higher and lower shore species in sympatry. 
Experimental results suggest that strong selection on physiological traits across steep intertidal exposure gradients acts to maintain the 3 distinct genetic and morphological taxa within their preferred vertical distribution ranges. On the strength of distributional, genetic, physiological and morphological differences, we propose elevation of F. spiralis var. platycarpus from variety to species level, as F. guiryi. PMID:21695117
Apaydin, Mehmet Serkan; Çatay, Bülent; Patrick, Nicholas; Donald, Bruce R
2011-05-01
Nuclear magnetic resonance (NMR) spectroscopy is an important experimental technique that allows one to study protein structure and dynamics in solution. An important bottleneck in NMR protein structure determination is the assignment of NMR peaks to the corresponding nuclei. Structure-based assignment (SBA) aims to solve this problem with the help of a template protein which is homologous to the target, and has applications in the study of structure-activity relationships and protein-protein and protein-ligand interactions. We formulate SBA as a linear assignment problem with additional nuclear Overhauser effect constraints, which can be solved within nuclear vector replacement's (NVR) framework (Langmead, C., Yan, A., Lilien, R., Wang, L. and Donald, B. (2003) A Polynomial-Time Nuclear Vector Replacement Algorithm for Automated NMR Resonance Assignments. Proc. 7th Annual Int. Conf. Research in Computational Molecular Biology (RECOMB), Berlin, Germany, April 10-13, pp. 176-187. ACM Press, New York, NY. J. Comp. Bio., (2004), 11, pp. 277-298; Langmead, C. and Donald, B. (2004) An expectation/maximization nuclear vector replacement algorithm for automated NMR resonance assignments. J. Biomol. NMR, 29, 111-138). Our approach uses NVR's scoring function and data types and also gives the option of using CH and NH residual dipolar couplings (RDCs), instead of the NH RDCs which NVR requires. We test our technique on NVR's data set as well as on four new proteins. Our results are comparable to NVR's assignment accuracy on NVR's test set, but higher on novel proteins. Our approach allows partial assignments. It is also complete and can return the optimum as well as near-optimum assignments. Furthermore, it allows us to analyze the information content of each data type and is easily extendable to accept new forms of input data, such as additional RDCs.
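The linear assignment formulation can be illustrated with a tiny stdlib-only solver. The cost matrix is an assumed stand-in for the NVR scoring function; a real implementation would use a polynomial-time method such as the Hungarian algorithm rather than exhaustive search, and would add the NOE constraints the abstract describes.

```python
from itertools import permutations

def best_assignment(cost):
    """Find the peak-to-residue mapping minimizing total score.
    cost[i][j] is the (assumed) score of assigning peak i to residue j;
    exhaustive search is fine only for tiny n."""
    n = len(cost)
    best_perm, best_total = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_perm, best_total = perm, total
    return best_perm, best_total

# Illustrative 3x3 scores: peak i vs residue j (lower is better).
cost = [[1.0, 9.0, 7.0],
        [8.0, 2.0, 6.0],
        [5.0, 4.0, 3.0]]
perm, total = best_assignment(cost)
print(perm, total)  # -> (0, 1, 2) 6.0
```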
Epps, Clinton W.; Wehausen, John D.; Sloan, William B.; Holt, Stacy; Creech, Tyler G.; Crowhurst, Rachel S.; Jaeger, Jef R.; Longshore, Kathleen M.; Monello, Ryan J.
2013-01-01
Where possible, we revisited many of the water sources and other locations originally investigated by Welles and Welles (1961) and earlier researchers. We extracted DNA from fecal pellets, carcass tissue samples, and blood samples archived from earlier captures and genotyped them using highly variable genetic markers (15 microsatellite loci) with sufficient power to distinguish individuals and characterize gene flow and genetic structure. We also analyzed DNA samples collected from other bighorn sheep populations extending north to the White Mountains, west to the Inyo Mountains, south to the Avawatz Mountains, and southeast to the Clark Mountain Range, Kingston Range, and Spring Mountains of Nevada. We estimated genetic structure and recent gene flow among nearly all known populations of bighorn sheep in and around Death Valley National Park (DEVA), and used assignment tests to evaluate individual and population-level genetic structure to infer connectivity across the region. We found that bighorn sheep are still widely distributed in mountain ranges throughout DEVA, including many of the areas described by Welles and Welles (1961), although some use patterns appear to have changed and other areas still require resurvey. Gene flow was relatively high through some sections of fairly continuous habitat, such as the Grapevine and Funeral Mountains along the eastern side of Death Valley, but other populations were more isolated. Genetic diversity was relatively high throughout the park. Although southern Death Valley populations were genetically distinct from populations to the southeast, population assignment tests and recent gene flow estimates suggested that individuals occasionally migrate between those regions, indicating the potential for the recent outbreak of respiratory disease in the southern Mojave Desert to spread into the Death Valley system. 
We recommend careful monitoring of bighorn sheep using remote cameras to check for signs of respiratory disease in southeastern DEVA and ground surveys in the still-understudied southwestern part of DEVA.
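A population assignment test of the kind used above can be sketched as a multilocus likelihood comparison: assign an individual to the population in which its genotype is most probable. The allele frequencies below are illustrative, not the Death Valley microsatellite data, and Hardy-Weinberg proportions and locus independence are assumed.

```python
import math

def genotype_loglik(genotype, freqs):
    """genotype: list of (allele1, allele2) per locus;
    freqs: per-locus dicts of allele -> population frequency."""
    ll = 0.0
    for (a1, a2), f in zip(genotype, freqs):
        p, q = f.get(a1, 1e-6), f.get(a2, 1e-6)   # small floor avoids log(0)
        ll += math.log(p * q * (2.0 if a1 != a2 else 1.0))  # HW genotype prob
    return ll

def assign_population(genotype, pops):
    """Return the population name maximizing the genotype log-likelihood."""
    return max(pops, key=lambda name: genotype_loglik(genotype, pops[name]))

pops = {
    "north": [{"A": 0.9, "B": 0.1}, {"C": 0.8, "D": 0.2}],
    "south": [{"A": 0.2, "B": 0.8}, {"C": 0.3, "D": 0.7}],
}
individual = [("A", "A"), ("C", "C")]
print(assign_population(individual, pops))  # -> north
```

Real assignment software additionally reports likelihood ratios or exclusion probabilities rather than a bare maximum-likelihood label.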
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the resulting multimodel predictions improve upon any single model. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with an optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (the "abcd" model or the VIC model) with heteroscedastic error variance, as well as from a hydrologic model that exhibits a different structure than that of the candidate models. Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single-model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across all the models, whereas MM-O always assigns higher weights to the candidate model that performed best over the calibration period.
Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
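A much-simplified flavor of skill-weighted model combination can be sketched with inverse-MSE weights computed over a calibration period. Note that MM-1 as described above conditions its weights on the predictor state, which this static sketch does not reproduce; the error series are illustrative.

```python
def inverse_mse_weights(errors_by_model):
    """Weight each model inversely to its calibration mean-squared error,
    normalized so the weights sum to one."""
    mses = [sum(e * e for e in errs) / len(errs) for errs in errors_by_model]
    inv = [1.0 / m for m in mses]
    total = sum(inv)
    return [w / total for w in inv]

def combine(predictions, weights):
    """Weighted multimodel prediction for one time step."""
    return sum(w * p for w, p in zip(weights, predictions))

# Illustrative calibration errors for two candidate hydrologic models.
weights = inverse_mse_weights([[1.0, -1.0, 0.5],    # model 1: MSE 0.75
                               [2.0, -2.0, 1.0]])   # model 2: MSE 3.0
print(weights, combine([10.0, 12.0], weights))  # -> [0.8, 0.2] 10.4
```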
ERIC Educational Resources Information Center
Gupta, Vipul; Ganegoda, Hasitha; Engelhard, Mark H.; Terry, Jeff; Linford, Matthew R.
2014-01-01
The traditional assignment of oxidation states to organic molecules is problematic. Accordingly, in 1999, Calzaferri proposed a simple and elegant solution that is based on the similar electronegativities of carbon and hydrogen: hydrogen would be assigned an oxidation state of zero when bonded to carbon. Here, we show that X-ray photoelectron…
ERIC Educational Resources Information Center
Ness, Bryan M.; Sohlberg, McKay Moore
2013-01-01
The purpose of this study was to evaluate the impact of a classroom-based strategy instruction package grounded in self-regulated learning. The Self-Regulated Assignment Attack Strategy (SAAS) targeted self-regulation of assignment management and related academic-behavioral variables for 6th grade students in resource support classrooms. SAAS was…
The effects of hillslope-scale variability in burn severity on post-fire sediment delivery
NASA Astrophysics Data System (ADS)
Quinn, Dylan; Brooks, Erin; Dobre, Mariana; Lew, Roger; Robichaud, Peter; Elliot, William
2017-04-01
With the increasing frequency of wildfire and the costs associated with managing burned landscapes, there is a growing need for decision support tools to assess the effectiveness of targeted post-fire management strategies. The susceptibility of landscapes to post-fire soil erosion and runoff has been closely linked with the severity of the wildfire. Wildfire severity maps are often spatially complex and largely dependent upon total vegetative biomass, fuel moisture patterns, direction of burn, wind patterns, and other factors. The decision to apply targeted treatment to a specific landscape, and the amount of resources dedicated to treating it, should ideally be based on the potential for excessive sediment delivery from a particular hillslope. Recent work has suggested that the delivery of sediment from a hillslope to a downstream water body is highly influenced by the distribution of wildfire severity across the hillslope, and that models that do not capture this hillslope-scale variability will not provide reliable sediment and runoff predictions. In this project we compare detailed (10 m) grid-based model predictions to lumped and semi-lumped hillslope approaches in which hydrologic parameters are fixed based on hillslope-scale averaging techniques. We use the watershed-scale version of the process-based Water Erosion Prediction Project (WEPP) model and its GIS interface, GeoWEPP, to simulate fire impacts on runoff and sediment delivery using burn severity maps at a watershed scale. The flowpath option in WEPP allows the most detailed representation of wildfire severity patterns (10 m) but, depending upon the size of the watershed, simulations are time-consuming and computationally demanding. The hillslope version is a simpler approach that assigns wildfire severity based on the severity level assigned to the majority of the hillslope area.
In the third approach we divided hillslopes into overland flow elements (OFEs) and assigned representative input values at a finer scale within single hillslopes. Each of these approaches was compared for several large wildfires in the mountain ranges of central Idaho, USA. Simulations indicated that predictions based on lumped hillslope modeling over-predict sediment transport by as much as 4.8x in areas of high to moderate burn severity. Annual sediment yield within the simulated watersheds ranged from 1.7 tonnes/ha to 6.8 tonnes/ha. The disparity between sediment yields simulated with these approaches was attributed to the hydrologic connectivity of the burn patterns within the hillslope. High infiltration rates between high-severity sites can greatly reduce the delivery of sediment. This research underlines the importance of accurately representing soil burn severity along individual hillslopes in hydrologic models and the need for modeling approaches that capture this variability to reliably simulate soil erosion.
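The hillslope-lumping step described above (assigning each hillslope the single burn-severity class covering the majority of its area) can be sketched as follows. This is a minimal illustration only; the function name and severity codes are hypothetical, not part of the WEPP/GeoWEPP interface:

```python
from collections import Counter

def lump_hillslope_severity(cell_severities):
    """Assign a single burn-severity class to a hillslope:
    the class covering the largest number of its 10 m cells."""
    counts = Counter(cell_severities)
    # most_common(1) returns [(class, count)] for the modal class
    return counts.most_common(1)[0][0]

# Hypothetical hillslope with mixed severity cells
cells = ["high", "moderate", "high", "low", "high", "moderate"]
print(lump_hillslope_severity(cells))  # -> high
```

A lumped run driven by this single class discards exactly the within-hillslope severity pattern that the flowpath and OFE approaches preserve, which is the source of the over-prediction reported above.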
Genetic connectivity for two bear species at wildlife crossing structures in Banff National Park.
Sawaya, Michael A; Kalinowski, Steven T; Clevenger, Anthony P
2014-04-07
Roads can fragment and isolate wildlife populations, which will eventually decrease genetic diversity within populations. Wildlife crossing structures may counteract these impacts, but most crossings are relatively new, and there is little evidence that they facilitate gene flow. We conducted a three-year research project in Banff National Park, Alberta, to evaluate the effectiveness of wildlife crossings to provide genetic connectivity. Our main objective was to determine how the Trans-Canada Highway and crossing structures along it affect gene flow in grizzly (Ursus arctos) and black bears (Ursus americanus). We compared genetic data generated from wildlife crossings with data collected from greater bear populations. We detected a genetic discontinuity at the highway in grizzly bears but not in black bears. We assigned grizzly bears that used crossings to populations north and south of the highway, providing evidence of bidirectional gene flow and genetic admixture. Parentage tests showed that 47% of black bears and 27% of grizzly bears that used crossings successfully bred, including multiple males and females of both species. Differentiating between dispersal and gene flow is difficult, but we documented gene flow by showing migration, reproduction and genetic admixture. We conclude that wildlife crossings allow sufficient gene flow to prevent genetic isolation.
Janyacharoen, Taweesak; Kunbootsri, Narupon; Arayawichanon, Preeda; Chainansamit, Seksun; Sawanyawisuth, Kittisak
2015-06-01
Allergic rhinitis is a chronic respiratory disease. Sympathetic hypofunction is identified in all allergic rhinitis patients. Moreover, allergic rhinitis is associated with decreased peak nasal inspiratory flow (PNIF) and impaired lung functions. The aim of this study was to investigate the effects of six weeks of aquatic exercise on autonomic nervous system function, PNIF and lung functions in allergic rhinitis patients. Twenty-six allergic rhinitis patients, 12 males and 14 females, were recruited in this study. Subjects were diagnosed by a physician based on history, physical examination, and a positive reaction to a skin prick test. Subjects were randomly assigned to two groups. The control allergic rhinitis group received education and maintained normal life. The aquatic group performed aquatic exercise for 30 minutes a day, three days a week, for six weeks. Heart rate variability, PNIF and lung functions were measured at the beginning, after three weeks and after six weeks. There were statistically significant increases in low-frequency normal units (LF n.u.) and PNIF, and a decrease in high-frequency normal units (HF n.u.), at six weeks after aquatic exercise compared with the control group. Six weeks of aquatic exercise could increase sympathetic activity and PNIF in allergic rhinitis patients.
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
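The core idea above — iteratively adjusting a parameter to minimize the misfit between measured and simulated water levels — can be sketched with a toy one-parameter Gauss-Newton loop. The simplified Darcy-style head model, parameter names, and all values below are hypothetical illustrations, not the report's actual regional model:

```python
def simulate_heads(K, xs, h0=100.0, q=0.5):
    """Toy 1-D head model: head declines linearly with distance,
    with slope controlled by hydraulic conductivity K (hypothetical units)."""
    return [h0 - q * x / K for x in xs]

def gauss_newton_K(xs, measured, K=1.0, iters=20):
    """Estimate K by minimizing squared measured-simulated head residuals."""
    for _ in range(iters):
        sim = simulate_heads(K, xs)
        resid = [m - s for m, s in zip(measured, sim)]
        # Sensitivity d(sim)/dK = q*x/K^2 with the default q = 0.5
        jac = [0.5 * x / K**2 for x in xs]
        num = sum(j * r for j, r in zip(jac, resid))
        den = sum(j * j for j in jac)
        K += num / den  # one-parameter Gauss-Newton update
    return K

xs = [0.0, 10.0, 20.0, 40.0]          # observation well distances
measured = simulate_heads(2.5, xs)    # synthetic "measured" heads, true K = 2.5
print(round(gauss_newton_K(xs, measured), 3))  # -> 2.5
```

The sensitivity (Jacobian) column in this loop is exactly the "parameter sensitivity" statistic the abstract refers to: where it is near zero, the update is ill-conditioned, which is why insensitive parameters in the full model could not be estimated independently.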
Langille, B L; Perry, R; Keefe, D; Barker, O; Marshall, H D
2016-08-01
Two hundred and eighty-seven longnose sucker Catostomus catostomus were collected from 14 lakes in Labrador, 52 from three lakes in Ontario, 43 from two lakes in British Columbia and 32 from a lake in Yukon; a total of 414 in all. The resulting 34 haplotypes (20 in Labrador) contained moderate haplotypic diversity (h = 0·657) and relatively low nucleotide diversity (π = 3·730 × 10⁻³). Mean ϕST (0·453, P < 0·05) over all populations revealed distinct genetic structuring among C. catostomus populations across Canada, based on province, which was validated by analysis of molecular variance and spatial analysis of molecular variance (c. 80% variation between provinces). These results probably reflect the historical imprint of recolonization from different refugia and possibly indicate limited ongoing gene flow within provinces. A haplotype network revealed one major and two minor clades within Labrador that were assigned to the Atlantic, Beringian and Mississippian refugia, respectively, with tests of neutrality and mismatch distribution indicative of a recent population expansion in Labrador, dated between c. 3500 and 8300 years ago. © 2016 The Fisheries Society of the British Isles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibbett, B.S.; Blackett, R.E.
1982-02-01
Lithologies penetrated throughout the upper 732 to 838 m (2400 to 2750 ft) within the Stillwater prospect area are terrigenous sediments of Pleistocene to Recent age. A sill of dacite to andesite composition with a thickness variable between 122 to 208 m (400 to 680 ft) is present below the terrigenous sediments. Between the base of the sill and the top of the Bunejug Formation are intercalated volcanic and sedimentary rocks. All formations overlying the Bunejug Formation are probably of Pleistocene age. The basalt and basaltic-andesite flows and ash below the depth of approximately 1128 m (3700 ft) are herein assigned to the Bunejug Formation (Morrison, 1964) of Pliocene and possibly early Pleistocene age. The Bunejug Formation is a thick sequence of basalt to andesite flows and hyaloclastite exposed in the mountains surrounding the south half of the Carson Desert. The De Braga No. 2 well bottomed in Bunejug volcanics at a depth of 2109 m (6920 ft). The Richard Weishaupt No. 1 well penetrated the entire Bunejug sequence and entered felsic volcanics and tuffaceous sediments, which possibly represent part of the Truckee Formation, at a depth of approximately 2412 m (7915 ft).
Recharge to the surficial aquifer system in Lee and Hendry counties, Florida
Krulikas, R.K.; Giese, G.L.
1995-01-01
Protection of ground-water recharge areas against contamination is of great interest in Florida, a State whose population depends heavily on ground water and that is experiencing rapid growth. The Florida Legislature is considering implementation of a tax incentive program to owners of high-rate recharge lands that remain undeveloped. High-rate recharge was arbitrarily set at 10 or more inches per year. The U.S. Geological Survey, in cooperation with the South Florida Water Management District, conducted a study to investigate the efficacy of several methods for estimating recharge to the surficial aquifer system in southwestern Florida and to map recharge at a scale of 1:100,000. Four maps were constructed at a scale of 1:100,000 for Lee and Hendry Counties, depicting the configuration of the water table of the surficial aquifer system, direction of ground-water flow, general soil characteristics, and recharge rates. Point recharge rates calculated for 25 sites in Lee County from comparisons of chloride concentrations in precipitation and in water from the surficial aquifer system ranged from 0.6 to 9.0 inches per year. Local recharge rates estimated by increases in flow along theoretical flow tubes in the surficial aquifer system were 8.0 inches per year in a part of Lee County and 8.2 inches per year in a part of Hendry County. Information on oxygen isotopes in precipitation and water from the surficial aquifer system was used to verify that the source of chlorides in the aquifer system was from precipitation rather than upward leakage of saline water. Soil maps and general topographic and hydrologic considerations were used with calculated point and local recharge rates to regionalize rates throughout Lee and Hendry Counties. The areas of greatest recharge were found in soils of flatwoods and sloughs, which were assigned estimated recharge rates of 0 to 10 inches per year. 
Soils of swamps and sloughs were assigned values of 0 to 3.0 inches per year; soils of tidal areas and barrier islands, soils of the Everglades, and soils of sloughs and freshwater marshes were assigned values of 0 to 2.0 inches per year; lastly, soils of manmade areas were assigned values of 0.5 to 1.5 inches per year. Small isolated areas of high-rate recharge (greater than 10 inches per year) might exist in Lee and Hendry Counties, but the maximum rate calculated in this study was 9.0 inches per year. Despite low natural recharge rates, lowering of the water table through pumping or canalization could create a potential for induced recharge in excess of 10 inches per year in parts of Lee and Hendry Counties.
ERIC Educational Resources Information Center
Biddle, Christopher J.
2013-01-01
The purpose of this qualitative holistic multiple-case study was to identify the optimal theoretical approach for a Counter-Terrorism Reality-Based Training (CTRBT) model to train post-9/11 police officers to perform effectively in their counter-terrorism assignments. Post-9/11 police officers assigned to counter-terrorism duties are not trained…
ERIC Educational Resources Information Center
Neiva de Figueiredo, Joao; Mauri, Alfredo J.
2013-01-01
This article describes the "Cross-Cultural Assignment," an experiential learning technique for students of business that deepens self-awareness of their own attitudes toward different cultures and develops international managerial skills. The technique consists of pairing up small teams of U.S.-based business students with small teams of…
Low-Dose, High-Frequency CPR Training Improves Skill Retention of In-Hospital Pediatric Providers
Niles, Dana; Meaney, Peter A.; Aplenc, Richard; French, Benjamin; Abella, Benjamin S.; Lengetti, Evelyn L.; Berg, Robert A.; Helfaer, Mark A.; Nadkarni, Vinay
2011-01-01
OBJECTIVE: To investigate the effectiveness of brief bedside cardiopulmonary resuscitation (CPR) training to improve the skill retention of hospital-based pediatric providers. We hypothesized that a low-dose, high-frequency training program (booster training) would improve CPR skill retention. PATIENTS AND METHODS: CPR recording/feedback defibrillators were used to evaluate CPR quality during simulated arrest. Basic life support–certified, hospital-based providers were randomly assigned to 1 of 4 study arms: (1) instructor-only training; (2) automated defibrillator feedback only; (3) instructor training combined with automated feedback; and (4) control (no structured training). Each session (time: 0, 1, 3, and 6 months after training) consisted of a pretraining evaluation (60 seconds), booster training (120 seconds), and a posttraining evaluation (60 seconds). Excellent CPR was defined as chest compression (CC) depth ≥ one-third anterior-posterior chest depth, rate ≥ 90 and ≤120 CC per minute, ≤20% of CCs with incomplete release (>2500 g), and no flow fraction ≤ 0.30. MEASUREMENTS AND MAIN RESULTS: Eighty-nine providers were randomly assigned; 74 (83%) completed all sessions. Retention of CPR skills was 2.3 times (95% confidence interval [CI]: 1.1–4.5; P = .02) more likely after 2 trainings and 2.9 times (95% CI: 1.4–6.2; P = .005) more likely after 3 trainings. The automated defibrillator feedback only group had lower retention rates compared with the instructor-only training group (odds ratio: 0.41 [95% CI: 0.17–0.97]; P = .043). CONCLUSIONS: Brief bedside booster CPR training improves CPR skill retention. Our data reveal that instructor-led training improves retention compared with automated feedback training alone. Future studies should investigate whether bedside training improves CPR quality during actual pediatric arrests. PMID:21646262
Assigning breed origin to alleles in crossbred animals.
Vandenplas, Jérémie; Calus, Mario P L; Sevillano, Claudia A; Windig, Jack J; Bastiaansen, John W M
2016-08-22
For some species, animal production systems are based on the use of crossbreeding to take advantage of the increased performance of crossbred compared to purebred animals. Effects of single nucleotide polymorphisms (SNPs) may differ between purebred and crossbred animals for several reasons: (1) differences in linkage disequilibrium between SNP alleles and a quantitative trait locus; (2) differences in genetic backgrounds (e.g., dominance and epistatic interactions); and (3) differences in environmental conditions, which result in genotype-by-environment interactions. Thus, SNP effects may be breed-specific, which has led to the development of genomic evaluations for crossbred performance that take such effects into account. However, to estimate breed-specific effects, it is necessary to know breed origin of alleles in crossbred animals. Therefore, our aim was to develop an approach for assigning breed origin to alleles of crossbred animals (termed BOA) without information on pedigree and to study its accuracy by considering various factors, including distance between breeds. The BOA approach consists of: (1) phasing genotypes of purebred and crossbred animals; (2) assigning breed origin to phased haplotypes; and (3) assigning breed origin to alleles of crossbred animals based on a library of assigned haplotypes, the breed composition of crossbred animals, and their SNP genotypes. The accuracy of allele assignments was determined for simulated datasets that include crosses between closely-related, distantly-related and unrelated breeds. Across these scenarios, the percentage of alleles of a crossbred animal that were correctly assigned to their breed origin was greater than 90 %, and increased with increasing distance between breeds, while the percentage of incorrectly assigned alleles was always less than 2 %. For the remaining alleles, i.e. 0 to 10 % of all alleles of a crossbred animal, breed origin could not be assigned. 
The BOA approach accurately assigns breed origin to alleles of crossbred animals, even if their pedigree is not recorded.
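The library-lookup core of the BOA approach described above (step 3: assigning breed origin to a crossbred allele when its haplotype is found in the purebred library) can be sketched in a much-simplified form. This is an illustration only: the actual BOA method also uses phased genotypes, breed composition, and SNP genotypes of the crossbred animals, and the haplotypes and breed labels below are hypothetical:

```python
def build_library(purebred_haplotypes):
    """Map each haplotype (tuple of SNP alleles) to the set of breeds
    in which it was observed among phased purebred animals."""
    library = {}
    for breed, haps in purebred_haplotypes.items():
        for hap in haps:
            library.setdefault(hap, set()).add(breed)
    return library

def assign_breed_origin(crossbred_hap, library):
    """Assign a breed of origin to a crossbred haplotype: unambiguous
    if it occurs in exactly one purebred breed, otherwise unassigned."""
    breeds = library.get(crossbred_hap, set())
    return next(iter(breeds)) if len(breeds) == 1 else None

# Hypothetical phased haplotypes from two purebred populations
purebreds = {
    "A": [(0, 1, 1), (1, 1, 0)],
    "B": [(0, 0, 1), (1, 1, 0)],
}
lib = build_library(purebreds)
print(assign_breed_origin((0, 1, 1), lib))  # -> A (unique to breed A)
print(assign_breed_origin((1, 1, 0), lib))  # -> None (shared by A and B)
```

The unassignable shared haplotype in this sketch mirrors the 0 to 10 % of alleles reported above for which breed origin could not be determined; more distant breeds share fewer haplotypes, which is why assignment accuracy increases with breed divergence.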
Ceresini, Graziano; Marchini, Lorenzo; Rebecchi, Isabella; Morganti, Simonetta; Bertone, Luca; Montanari, Ilaria; Bacchi-Modena, Alberto; Sgarabotto, Maria; Baldini, Monica; Denti, Licia; Ablondi, Fabrizio; Ceda, Gian Paolo; Valenti, Giorgio
2003-03-01
Raloxifene is one of the most important selective estrogen receptor modulators currently employed for the treatment of postmenopausal osteoporosis. However, it has also been suggested that this compound affects the vascular system. We evaluated both carotid blood flow resistance and endothelium-dependent vasodilation in 50 healthy postmenopausal women randomly assigned to receive, in a double blind design, either raloxifene (60 mg per day; N=25 subjects) or placebo (N=25 subjects) for 4 months. Indices of carotid blood flow resistance, such as the pulsatility index (PI) and resistance index (RI), as well as the flow-mediated brachial artery dilation were measured both at baseline and at the end of treatment. Changes in PI were -1.86±2.24 and -2.15±2.22% after placebo and raloxifene treatment, respectively, with no significant differences between groups. Changes in RI were -0.77±1.72 and -1.81±1.54% after placebo and raloxifene treatment, respectively, with no significant differences between groups. At the end of the treatment period, the increments in artery diameter measured after the flow stimulus were 10.79±2.39 and 6.70±1.23% for placebo and raloxifene, respectively, with no significant differences between groups. These results demonstrate no significant effects of raloxifene on either carotid blood flow resistance or brachial artery flow-mediated dilation in postmenopausal women.
NASA Technical Reports Server (NTRS)
Keyes, David E.; Smooke, Mitchell D.
1987-01-01
A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.
Biological species in the viral world.
Bobay, Louis-Marie; Ochman, Howard
2018-06-05
Due to their dependence on cellular organisms for metabolism and replication, viruses are typically named and assigned to species according to their genome structure and the original host that they infect. But because viruses often infect multiple hosts and the numbers of distinct lineages within a host can be vast, their delineation into species is often dictated by arbitrary sequence thresholds, which are highly inconsistent across lineages. Here we apply an approach to determine the boundaries of viral species based on the detection of gene flow within populations, thereby defining viral species according to the biological species concept (BSC). Despite the potential for gene transfer between highly divergent genomes, viruses, like the cellular organisms they infect, assort into reproductively isolated groups and can be organized into biological species. This approach revealed that BSC-defined viral species are often congruent with the taxonomic partitioning based on shared gene contents and host tropism, and that bacteriophages can similarly be classified in biological species. These results open the possibility to use a single, universal definition of species that is applicable across cellular and acellular lifeforms.
Metro passengers’ route choice model and its application considering perceived transfer threshold
Jin, Fanglei; Zhang, Yongsheng; Liu, Shasha
2017-01-01
With the rapid development of the Metro network in China, the greatly increased route alternatives make passengers’ route choice behavior and passenger flow assignment more complicated, which presents challenges to the operation management. In this paper, a path sized logit model is adopted to analyze passengers’ route choice preferences considering such parameters as in-vehicle time, number of transfers, and transfer time. Moreover, the “perceived transfer threshold” is defined and included in the utility function to reflect the penalty difference caused by transfer time on passengers’ perceived utility under various numbers of transfers. Next, based on the revealed preference data collected in the Guangzhou Metro, the proposed model is calibrated. The appropriate perceived transfer threshold value and the route choice preferences are analyzed. Finally, the model is applied to a personalized route planning case to demonstrate the engineering practicability of route choice behavior analysis. The results show that the introduction of the perceived transfer threshold is helpful to improve the model’s explanatory abilities. In addition, personalized route planning based on route choice preferences can meet passengers’ diversified travel demands. PMID:28957376
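The path-size logit structure described above can be sketched as follows. The path-size correction here follows the commonly used length-overlap formulation; the network, link lengths, deterministic utilities, and path-size coefficient are all hypothetical stand-ins, not the calibrated Guangzhou Metro values:

```python
import math

def path_size(route_links, all_routes, link_length):
    """Path-size factor: down-weights a route by how much of its
    length is shared with other routes in the choice set."""
    L = sum(link_length[a] for a in route_links)
    ps = 0.0
    for a in route_links:
        n_using = sum(1 for r in all_routes if a in r)
        ps += (link_length[a] / L) * (1.0 / n_using)
    return ps

def psl_probabilities(routes, utilities, link_length, beta_ps=1.0):
    """Path-size logit choice probabilities over a route set."""
    vals = [u + beta_ps * math.log(path_size(r, routes, link_length))
            for r, u in zip(routes, utilities)]
    exps = [math.exp(v) for v in vals]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 3-link network: two routes sharing link "a"
link_length = {"a": 2.0, "b": 1.0, "c": 1.0}
routes = [{"a", "b"}, {"a", "c"}]
# Deterministic utilities, e.g. from in-vehicle time and a transfer penalty
probs = psl_probabilities(routes, [-1.0, -1.2], link_length)
print([round(p, 2) for p in probs])  # -> [0.55, 0.45]
```

A perceived transfer threshold of the kind the paper introduces would enter through the utilities passed in here, e.g. by applying a larger transfer-time penalty once the number of transfers exceeds the threshold.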
Image categorization for marketing purposes
NASA Astrophysics Data System (ADS)
Almishari, Mishari I.; Lee, Haengju; Gnanasambandam, Nathan
2011-03-01
Images meant for marketing and promotional purposes (i.e. coupons) represent a basic component in incentivizing customers to visit shopping outlets and purchase discounted commodities. They also help department stores attract more customers and, potentially, speed up their cash flow. While coupons are available from various sources - print, web, etc. - categorizing these monetary instruments is a benefit to the users. We are interested in an automatic categorizer system that aggregates these coupons from different sources (web, digital coupons, paper coupons, etc.) and assigns a type to each of them in an efficient manner. While there are several dimensions to this problem, in this paper we study the problem of accurately categorizing/classifying the coupons. We propose and evaluate four different techniques for categorizing the coupons, namely a word-based model, an n-gram-based model, an externally weighing model, and a weight decaying model, which take advantage of known machine learning algorithms. We evaluate these techniques, and they achieve high accuracies in the range of 73.1% to 93.2%. We provide various examples of accuracy optimizations that can be performed and show a progressive increase in categorization accuracy for our test dataset.
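As a hedged illustration of the n-gram idea behind one of the four models above, the sketch below categorizes a coupon's text by cosine similarity of character n-gram counts against per-category profiles. The categories, training snippets, and scoring choice are hypothetical and do not reproduce the paper's actual models:

```python
from collections import Counter

def ngrams(text, n=3):
    """Character n-gram counts of a (lowercased) text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(c1, c2):
    """Cosine similarity between two n-gram count vectors."""
    common = set(c1) & set(c2)
    dot = sum(c1[g] * c2[g] for g in common)
    norm = lambda c: sum(v * v for v in c.values()) ** 0.5
    return dot / (norm(c1) * norm(c2)) if dot else 0.0

def categorize(coupon_text, category_profiles, n=3):
    """Assign the category whose aggregate n-gram profile is most
    similar to the coupon text's n-gram counts."""
    target = ngrams(coupon_text, n)
    return max(category_profiles,
               key=lambda c: cosine(target, category_profiles[c]))

# Hypothetical training snippets per category
profiles = {
    "apparel": ngrams("20% off shirts jeans shoes apparel sale"),
    "grocery": ngrams("save on milk bread eggs grocery coupon"),
}
print(categorize("buy one get one free on shoes and shirts", profiles))
```

Character n-grams are attractive for coupon text because OCR'd paper coupons contain spelling noise that whole-word features handle poorly.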
Fluidity models in ancient Greece and current practices of sex assignment
Chen, Min-Jye; McCann-Crosby, Bonnie; Gunn, Sheila; Georgiadis, Paraskevi; Placencia, Frank; Mann, David; Axelrad, Marni; Karaviti, L.P; McCullough, Laurence B.
2018-01-01
Disorders of sexual differentiation such as androgen insensitivity and gonadal dysgenesis can involve an intrinsic fluidity at different levels, from the anatomical and biological to the social (gender) that must be considered in the context of social constraints. Sex assignment models based on George Engel’s biopsychosocial aspects model of biology accept fluidity of gender as a central concept and therefore help establish expectations within the uncertainty of sex assignment and anticipate potential changes. The biology underlying the fluidity inherent to these disorders should be presented to parents at diagnosis, an approach that the gender medicine field should embrace as good practice. Greek mythology provides many accepted archetypes of change, and the ancient Greek appreciation of metamorphosis can be used as context with these patients. Our goal is to inform expertise and optimal approaches, knowing that this fluidity may eventually necessitate sex reassignment. Physicians should provide sex assignment education based on different components of sexual differentiation, prepare parents for future hormone-triggered changes in their children, and establish a sex-assignment algorithm. PMID:28478088
Kent, Angela D.; Smith, Dan J.; Benson, Barbara J.; Triplett, Eric W.
2003-01-01
Culture-independent DNA fingerprints are commonly used to assess the diversity of a microbial community. However, relating species composition to community profiles produced by community fingerprint methods is not straightforward. Terminal restriction fragment length polymorphism (T-RFLP) is a community fingerprint method in which phylogenetic assignments may be inferred from the terminal restriction fragment (T-RF) sizes through the use of web-based resources that predict T-RF sizes for known bacteria. The process quickly becomes computationally intensive due to the need to analyze profiles produced by multiple restriction digests and the complexity of profiles generated by natural microbial communities. A web-based tool is described here that rapidly generates phylogenetic assignments from submitted community T-RFLP profiles based on a database of fragments produced by known 16S rRNA gene sequences. Users have the option of submitting a customized database generated from unpublished sequences or from a gene other than the 16S rRNA gene. This phylogenetic assignment tool allows users to employ T-RFLP to simultaneously analyze microbial community diversity and species composition. An analysis of the variability of bacterial species composition throughout the water column in a humic lake was carried out to demonstrate the functionality of the phylogenetic assignment tool. This method was validated by comparing the results generated by this program with results from a 16S rRNA gene clone library. PMID:14602639
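The matching step at the heart of such a tool — comparing observed T-RF sizes against database-predicted fragment sizes for known 16S rRNA gene sequences — can be sketched as below. The tolerance, taxa, and fragment sizes are hypothetical illustrations, not the tool's actual database:

```python
def assign_trf(observed_trfs, database, tolerance=1.0):
    """For each observed terminal restriction fragment (T-RF) size,
    list database taxa whose predicted T-RF lies within +/- tolerance bp."""
    assignments = {}
    for size in observed_trfs:
        hits = [taxon for taxon, predicted in database.items()
                if abs(predicted - size) <= tolerance]
        assignments[size] = hits
    return assignments

# Hypothetical predicted T-RF sizes (bp) for one restriction digest
db = {"taxon_1": 205.0, "taxon_2": 288.0, "taxon_3": 204.5}
print(assign_trf([205.2, 288.1], db))
```

The ambiguous first peak (two taxa within tolerance) shows why the real tool analyzes profiles from multiple restriction digests: intersecting the candidate lists across digests narrows each peak to fewer plausible taxa.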
2017-01-24
Objectives: Open surgical reconstruction using expanded polytetrafluoroethylene stent grafts to create a sutureless anastomosis is an alternative to...French Argyle shunt was inserted into one randomly assigned artery, with a self-expanding ePTFE stent deployed in the other. Arterial flow measurements...for histopathology were obtained during the terminal procedure. Results: Angiography revealed no difference in patency at 72 hours. The stent grafts
Population genetic structure of moose (Alces alces) of South-central Alaska
Wilson, Robert E.; McDonough, John T.; Barboza, Perry S.; Talbot, Sandra L.; Farley, Sean D.
2015-01-01
The location of a population can influence its genetic structure and diversity by impacting the degree of isolation and connectivity to other populations. Populations at range margins are often thought to have less genetic variation and increased genetic structure, and a reduction in genetic diversity can have negative impacts on the health of a population. We explored the genetic diversity and connectivity between 3 peripheral populations of moose (Alces alces) with differing potential for connectivity to other areas within interior Alaska. Populations on the Kenai Peninsula and from the Anchorage region were found to be significantly differentiated (FST = 0.071, P < 0.0001) with lower levels of genetic diversity observed within the Kenai population. Bayesian analyses employing assignment methodologies uncovered little evidence of contemporary gene flow between Anchorage and Kenai, suggesting regional isolation. Although gene flow outside the peninsula is restricted, high levels of gene flow were detected within the Kenai that is explained by male-biased dispersal. Furthermore, gene flow estimates differed across time scales on the Kenai Peninsula which may have been influenced by demographic fluctuations correlated, at least in part, with habitat change.
Chemically generated convective transport in microfluidic system
NASA Astrophysics Data System (ADS)
Shklyaev, Oleg; Das, Sambeeta; Altemose, Alicia; Shum, Henry; Balazs, Anna; Sen, Ayusman
High precision manipulation of small volumes of fluid, containing suspended micron-sized objects like cells, viruses, and large molecules, is one of the main goals in designing modern lab-on-a-chip devices, which can find a variety of chemical and biological applications. To transport cargo toward sensing elements, typical microfluidic devices often use pressure-driven flows. Here, we propose to use enzymatic chemical reactions which decompose a reagent into less dense products and generate flows that can transport particles. Density variations that lead to flow in the assigned direction are created between the place where the reagent is fed into the solution and the location where it is decomposed by enzymes attached to the surface of the microchannel. When the reagent is depleted, the fluid motion stops and particles sediment to the bottom. We demonstrate how the choice of chemicals, leading to specific reaction rates, can affect the transport properties. In particular, we show that the intensity of the fluid flow, the final location of the cargo, and the time for cargo delivery are controlled by the amount and type of reagent in the system.
Law, Anandi V; Jackevicius, Cynthia A; Bounthavong, Mark
2011-02-10
To describe the development and assessment of monographs as an assignment to incorporate evidence-based medicine (EBM) and pharmacoeconomic principles into a third-year pharmacoeconomic course. Eight newly FDA-approved drugs were assigned to 16 teams of students, where each drug was assigned to 2 teams. Teams had to research their drug, write a professional monograph, deliver an oral presentation, and answer questions posed by faculty judges. One team was asked to present evidence for inclusion of the drug into a formulary, while another team presented evidence against inclusion. The teams' average score on the written report was 99.1%; on the oral presentation, 92.5%; and on the online quiz given at the end of the presentations, 77%. Monographs are a successful method of incorporating and integrating learning across different concepts, as well as increasing relevance of pharmacoeconomics in the PharmD curriculum.
Jackevicius, Cynthia A.; Bounthavong, Mark
2011-01-01
Objective To describe the development and assessment of monographs as an assignment to incorporate evidence-based medicine (EBM) and pharmacoeconomic principles into a third-year pharmacoeconomic course. Design Eight newly FDA-approved drugs were assigned to 16 teams of students, where each drug was assigned to 2 teams. Teams had to research their drug, write a professional monograph, deliver an oral presentation, and answer questions posed by faculty judges. One team was asked to present evidence for inclusion of the drug into a formulary, while another team presented evidence against inclusion. Assessment The teams' average score on the written report was 99.1%; on the oral presentation, 92.5%; and on the online quiz given at the end of the presentations, 77%. Conclusions Monographs are a successful method of incorporating and integrating learning across different concepts, as well as increasing relevance of pharmacoeconomics in the PharmD curriculum. PMID:21451753
Assigning value to energy storage systems at multiple points in an electrical grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balducci, Patrick J.; Alam, M. Jan E.; Hardy, Trevor D.
This article presents a taxonomy for assigning benefits to the services provided by energy storage systems (ESSs), defines approaches for monetizing the value associated with these services, assigns values to major ESS applications by region based on a review of an extensive set of literature, and summarizes and evaluates the capabilities of several tools currently used to estimate value for specific ESS deployments.
Assigning value to energy storage systems at multiple points in an electrical grid
Balducci, Patrick J.; Alam, M. Jan E.; Hardy, Trevor D.; ...
2018-01-01
This article presents a taxonomy for assigning benefits to the services provided by energy storage systems (ESSs), defines approaches for monetizing the value associated with these services, assigns values to major ESS applications by region based on a review of an extensive set of literature, and summarizes and evaluates the capabilities of several tools currently used to estimate value for specific ESS deployments.
McDonnell, Diana D; Graham, Carrie L
2015-03-01
In 2011 California began transitioning approximately 340,000 seniors and people with disabilities from Medicaid fee-for-service (FFS) to Medicaid managed care plans. When beneficiaries did not actively choose a managed care plan, the state assigned them to one using an algorithm based on their previous FFS primary and specialty care use. When no clear link could be established, beneficiaries were assigned by default to a managed care plan based on weighted randomization. In this article we report the results of a telephone survey of 1,521 seniors and people with disabilities enrolled in Medi-Cal (California Medicaid) and who were recently transitioned to a managed care plan. We found that 48 percent chose their own plan, 11 percent were assigned to a plan by algorithm, and 41 percent were assigned to a plan by default. People in the latter two categories reported being similarly less positive about their experiences compared to beneficiaries who actively chose a plan. Many states in addition to California are implementing mandatory transitions of Medicaid-only beneficiaries to managed care plans. Our results highlight the importance of encouraging beneficiaries to actively choose their health plan; when beneficiaries do not choose, states should employ robust intelligent assignment algorithms. Project HOPE—The People-to-People Health Foundation, Inc.
QUICR-learning for Multi-Agent Coordination
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2006-01-01
Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t′ > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second credit assignment problem is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards-learning" (QUICR-learning), designed to improve both the convergence properties and the performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a tenfold increase in performance over existing methods.
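The single-time-step counterfactual (difference) reward idea that QUICR-learning builds on can be sketched for a toy congestion problem. This illustrates only the counterfactual reward, not the QUICR algorithm itself; the capacity model and routes are invented:

```python
def global_reward(route_counts, capacity=3):
    """System-level reward for a toy congestion problem: each route
    contributes its number of agents, capped at capacity, so agents
    on an over-crowded route add nothing to the system."""
    return sum(min(count, capacity) for count in route_counts.values())

def counterfactual_reward(route_counts, my_route, capacity=3):
    """Difference reward for one agent: the global reward minus the
    global reward recomputed with this agent's action removed.
    A zero value signals the agent contributed nothing."""
    without_me = dict(route_counts)
    without_me[my_route] -= 1
    return (global_reward(route_counts, capacity)
            - global_reward(without_me, capacity))

counts = {"A": 4, "B": 1}                      # route A is over capacity
credit_a = counterfactual_reward(counts, "A")  # crowded route: no credit
credit_b = counterfactual_reward(counts, "B")  # uncongested route: full credit
```

Using such a counterfactual reward in place of the global reward in a standard Q-update gives each agent a learning signal aligned with its own contribution.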
Nishimoto, Naoki; Yokooka, Yuki; Yagahara, Ayako; Uesugi, Masahito; Ogasawara, Katsuhiko
2011-01-01
Our purpose in this study was to investigate differences in expression in report assignments between students in the nursing and radiologic technology departments. Faculty are known to be able to identify such differences, for example in word usage, while grading their students' assignments. However, there are no reports in the literature dealing with expression differences in vocabulary usage in medical informatics education based on statistical techniques or other quantitative measures. The report assignment asked for students' opinions in the event that they found a rare case of a disease in a hospital after graduating from professional school. We processed the student report data automatically, applying the vector space model and TF/IDF (term frequency/inverse document frequency) scoring to 129 report assignments. The similarity-score distributions among the assignments for the two departments were close to normal. We focused on the sets of terms that occurred exclusively in either department. For terms such as "radiation therapy" or "communication skills" that occurred in the radiologic technology department, the TF/IDF score was 8.01. The same score was obtained for terms such as "privacy guidelines" or "consent of patients" that occurred in the nursing department. These results will help faculty to provide a better education informed by expression differences arising from students' background knowledge.
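The TF/IDF weighting used to score department-specific terms can be sketched as follows; the abstract does not state the exact tf and idf variants, so this uses the classic raw-count and log-ratio formulation with invented toy reports:

```python
import math
from collections import Counter

def tf_idf(tokenized_docs):
    """Per-document TF/IDF scores: tf is the raw term count in the
    document; idf = log(N / df), with df the number of documents
    containing the term."""
    n_docs = len(tokenized_docs)
    df = Counter()
    for doc in tokenized_docs:
        df.update(set(doc))          # count each term once per document
    scores = []
    for doc in tokenized_docs:
        tf = Counter(doc)
        scores.append({term: tf[term] * math.log(n_docs / df[term])
                       for term in tf})
    return scores

# Toy reports: a term unique to one report scores high;
# a term present in every report scores zero.
reports = [["radiation", "therapy", "patient"],
           ["privacy", "guidelines", "patient"],
           ["consent", "patient"]]
scores = tf_idf(reports)
```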
Harsch, Tobias; Schneider, Philipp; Kieninger, Bärbel; Donaubauer, Harald; Kalbitzer, Hans Robert
2017-02-01
Side chain amide protons of asparagine and glutamine residues in random-coil peptides are characterized by large chemical shift differences and can be stereospecifically assigned on the basis of their chemical shift values alone. The bimodal chemical shift distributions stored in the Biological Magnetic Resonance Data Bank (BMRB) do not allow such an assignment. However, an analysis of the BMRB shows that a substantial part of all stored stereospecific assignments is not correct. We show here that in most cases stereospecific assignment can also be done for folded proteins using an unbiased artificial chemical shift database (UACSB). For a separation of the chemical shifts of the two amide resonance lines with differences ≥0.40 ppm for asparagine and ≥0.42 ppm for glutamine, the downfield-shifted resonance lines can be assigned to Hδ21 and Hε21, respectively, at a confidence level >95%. A classifier derived from UACSB can also be used to correct the BMRB data. The program tool AssignmentChecker implemented in AUREMOL calculates the Bayesian probability for a given stereospecific assignment and automatically corrects the assignments for a given list of chemical shifts.
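The threshold rule described above can be sketched as a minimal classifier. The cutoffs and labels are taken from the abstract; the shift values in the example are invented:

```python
def assign_sidechain_amide(shift_a, shift_b, residue):
    """Assign the downfield side chain amide line when the separation
    reaches the residue-specific cutoff (0.40 ppm for Asn, 0.42 ppm
    for Gln); otherwise decline to assign."""
    cutoff = {"ASN": 0.40, "GLN": 0.42}[residue]
    downfield, upfield = max(shift_a, shift_b), min(shift_a, shift_b)
    if downfield - upfield >= cutoff:
        # Downfield line -> Hd21 (Asn) or He21 (Gln)
        label = "Hd21" if residue == "ASN" else "He21"
        return (label, downfield)
    return None  # separation too small for a confident assignment

confident = assign_sidechain_amide(7.62, 7.05, "ASN")
ambiguous = assign_sidechain_amide(7.20, 7.00, "GLN")
```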
Tian, Ye; Schwieters, Charles D.; Opella, Stanley J.; Marassi, Francesca M.
2011-01-01
AssignFit is a computer program developed within the XPLOR-NIH package for the assignment of dipolar coupling (DC) and chemical shift anisotropy (CSA) restraints derived from the solid-state NMR spectra of protein samples with uniaxial order. The method is based on minimizing the difference between experimentally observed solid-state NMR spectra and the frequencies back calculated from a structural model. Starting with a structural model and a set of DC and CSA restraints grouped only by amino acid type, as would be obtained by selective isotopic labeling, AssignFit generates all of the possible assignment permutations and calculates the corresponding atomic coordinates oriented in the alignment frame, together with the associated set of NMR frequencies, which are then compared with the experimental data for best fit. Incorporation of AssignFit in a simulated annealing refinement cycle provides an approach for simultaneous assignment and structure refinement (SASR) of proteins from solid-state NMR orientation restraints. The methods are demonstrated with data from two integral membrane proteins, one α-helical and one β-barrel, embedded in phospholipid bilayer membranes. PMID:22036904
NASA Astrophysics Data System (ADS)
Turnbull, S. J.
2017-12-01
Within the US Army Corps of Engineers (USACE), reservoirs are typically operated according to a rule curve that specifies target water levels based on the time of year. The rule curve is intended to maximize flood protection by specifying releases of water before the dominant rainfall period for a region. While some operating allowances are permissible, generally the rule curve elevations must be maintained. While this operational approach provides for the required flood control purpose, it may not result in optimal reservoir operations for multi-use impoundments. In the Russian River Valley of California, a multi-agency research effort called Forecast-Informed Reservoir Operations (FIRO) is assessing the application of forecast weather and streamflow predictions to potentially enhance the operation of reservoirs in the watershed. The focus of the study has been Lake Mendocino, a USACE project important for flood control, water supply, power generation, and ecological flows. As part of this effort, the Engineer Research and Development Center is assessing the ability of the physics-based, distributed Gridded Surface Subsurface Hydrologic Analysis (GSSHA) watershed model to simulate stream flows, reservoir stages, and discharges while being driven by weather forecast products. A key question in this application is the effect of watershed model resolution on forecasted stream flows. To help resolve this question, GSSHA models at three grid resolutions (30, 50, and 270 m) were developed for the upper Russian River, which includes Lake Mendocino. The models were derived from common inputs: DEM, soils, land use, stream network, reservoir characteristics, and specified inflows and discharges.
All the models were calibrated in both event and continuous simulation mode using measured precipitation gages and then driven with the West-WRF atmospheric model in prediction mode to assess the ability of the model to function in short-term (less than one week) forecasting mode. In this presentation we will discuss the effect that grid resolution has on model development, parameter assignment, streamflow prediction, and forecasting capability utilizing the West-WRF forecast hydro-meteorology.
Late quaternary geomagnetic secular variation from historical and 14C-dated lava flows on Hawaii
NASA Astrophysics Data System (ADS)
Hagstrum, Jonathan T.; Champion, Duane E.
1995-12-01
A paleomagnetic record of geomagnetic paleosecular variation (PSV) is constructed for the last 4400 years based on 191 sites in historical and 14C-dated lava flows from Mauna Loa, Kilauea, and Hualalai Volcanoes on the island of Hawaii. The features of this new record are similar to those recorded by sediments from Lake Waiau near the summit of Mauna Kea Volcano, but overall mean inclinations for the lava flows (31° to 33°, depending on window size) are nearer the expected dipole-field value (35°) than is that for the sediments (27°). Divergence of the inclination records with increasing age suggests that the Lake Waiau values at depths below 2 m have been affected by compaction-related inclination shallowing, although magnetic terrain effects cannot be ruled out. The rate of PSV indicated by the record presented here is highly variable (<0.5°/century to >20°/century), and a pronounced shift in inclination from 25° to 40° occurred between ~1030 and ~975 years B.P. Paleomagnetic directions from undated materials can be correlated with our calibrated curve, but the resolution is largely dependent on the PSV rate and data densities for both the reference and unknown directions. The upper part of the Puna Basalt (18 lava flows), previously sampled for paleomagnetism along the northern wall of Kilauea's caldera (Uwekahuna Bluff), was likely deposited sometime between 1030 and 750 years B.P., but the lowest two flows beneath the Uwekahuna Ash (~2100 years B.P.) are correlated with an age of ~3034 years B.P. Paleomagnetic data for 54 lava flows of the Ka'u Basalt, exposed in the northwest wall of Mauna Loa's summit caldera (Mokuaweoweo), indicate that they probably accumulated over a relatively short time interval (~200+years) and are assigned to a 1000 to 1199 year B.P. time window. The mean of ages within this window is ~1030 years B.P., but mapping and other 14C dates indicate that these summit overflows are probably closer to ~1200 years B.P. in age.
Evaluation of an Infiltration Model with Microchannels
NASA Astrophysics Data System (ADS)
Garcia-Serrana, M.; Gulliver, J. S.; Nieber, J. L.
2015-12-01
The goal of this research is to develop and demonstrate the means by which roadside drainage ditches and filter strips can be assigned the appropriate volume reduction credits by infiltration. These vegetated surfaces convey stormwater, infiltrate runoff, and filter and/or settle solids, and are often placed along roads and other impermeable surfaces. Infiltration rates are typically calculated by assuming that water flows as sheet flow over the slope. However, for most rainfall intensities water flow occurs in narrow and shallow micro-channels and concentrates in depressions. This channelization reduces the fraction of the soil surface covered by the water coming from the road. The non-uniform distribution of water along a hillslope directly affects infiltration. First, laboratory and field experiments were conducted to characterize the spatial pattern of flow for stormwater runoff entering the sloped surface of a drainage ditch. In the laboratory experiments, different micro-topographies were tested over bare sandy loam soil: a smooth surface, and three and five parallel rills. All the surfaces experienced erosion; the initially smooth surface developed a system of channels over time that increased runoff generation. On average, the initially smooth surfaces infiltrated 10% more volume than the initially rilled surfaces. The field experiments were performed on the side slopes of established roadside drainage ditches. Three rates of runoff from a road surface into the swale slope were tested, representing runoff from 1-, 2-, and 10-year storm events. The average percentage of input runoff water infiltrated in the 32 experiments was 67%, with a 21% standard deviation. Multiple measurements of saturated hydraulic conductivity were conducted to account for its spatial variability. Second, a rate-based coupled infiltration and overland flow model has been designed that calculates the stormwater infiltration efficiency of swales.
The Green-Ampt-Mein-Larson assumptions were implemented to calculate infiltration, along with a kinematic wave model for overland flow that accounts for short-circuiting of flow. Additionally, a sensitivity analysis of the parameters implemented in the model has been performed. Finally, the field experiment results have been used to quantify the validity of the coupled model.
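The Green-Ampt component of such a model can be sketched as follows, simplified to continuously ponded conditions (the Mein-Larson extension additionally tracks the time to ponding under rainfall; the parameter values below are illustrative, not from the study):

```python
import math

def green_ampt_cumulative(K, psi, dtheta, t, tol=1e-9):
    """Cumulative infiltration F (cm) after time t (h) under ponded
    conditions, from the implicit Green-Ampt equation
        F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t,
    solved by fixed-point iteration (a contraction for F > 0).
    K: saturated hydraulic conductivity (cm/h); psi: wetting-front
    suction head (cm); dtheta: soil moisture deficit (-)."""
    s = psi * dtheta
    F = max(K * t, tol)  # initial guess
    while True:
        F_next = K * t + s * math.log(1.0 + F / s)
        if abs(F_next - F) < tol:
            return F_next
        F = F_next

def infiltration_capacity(K, psi, dtheta, F):
    """Instantaneous infiltration capacity f = K*(1 + psi*dtheta/F),
    which decays toward K as the infiltrated depth F grows."""
    return K * (1.0 + psi * dtheta / F)

# Illustrative sandy-loam-like parameters (assumed)
F2 = green_ampt_cumulative(K=1.0, psi=11.0, dtheta=0.3, t=2.0)
```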
High-resolution photoabsorption spectrum of jet-cooled propyne
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacovella, U.; Holland, D. M. P.; Boyé-Péronne, S.
2014-09-21
The absolute photoabsorption cross section of propyne was recorded between 62 000 and 88 000 cm⁻¹ by using the vacuum-ultraviolet Fourier-transform spectrometer at the Synchrotron Soleil. This cross section spans the region including the lowest Rydberg bands and extends above the Franck-Condon envelope for ionization to the ground electronic state of the propyne cation, X̃⁺. Room-temperature spectra were recorded in a flowing cell at 0.9 cm⁻¹ resolution, and jet-cooled spectra were recorded at 1.8 cm⁻¹ resolution and a rotational temperature of ∼100 K. The reduced widths of the rotational band envelopes in the latter spectra reveal new structure and simplify a number of assignments. Although nf Rydberg series have not been assigned previously in the photoabsorption spectrum of propyne, arguments are presented for their potential importance, and the assignment of one nf series is proposed. As expected from previous photoelectron spectra, Rydberg series are also observed above the adiabatic ionization threshold that converge to the v₃⁺ = 1 and 2 levels of the C≡C stretching vibration.
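The positions of such Rydberg series follow the standard formula ν(n) = IP − R/(n − δ)², where δ is the quantum defect (small for nf series). A minimal sketch, using an illustrative ionization threshold rather than the measured propyne value:

```python
R_INF = 109737.316  # Rydberg constant R_inf in cm^-1 (the reduced-mass
                    # correction is negligible for a heavy molecule)

def rydberg_series(ip_cm, quantum_defect, n_values):
    """Term positions (cm^-1) of a Rydberg series converging to the
    ionization threshold ip_cm: nu(n) = IP - R / (n - delta)^2."""
    return [ip_cm - R_INF / (n - quantum_defect) ** 2 for n in n_values]

# Hypothetical nf-like series (delta ~ 0.05) below an assumed
# 83 700 cm^-1 threshold, for illustration only.
lines = rydberg_series(83700.0, 0.05, range(4, 10))
```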
NASA Astrophysics Data System (ADS)
Gelfan, Alexander; Kalugin, Andrei; Motovilov, Yury
2017-04-01
A regional hydrological model was set up to assess the possible impact of climate change on the hydrological regime of the Amur drainage basin (catchment area 1 855 000 km2). The model is based on the ECOMAG hydrological modeling platform and describes the spatially distributed processes of the water cycle in this great basin, accounting for flow regulation by the Russian and Chinese reservoirs. Earlier, the regional hydrological model was intensively evaluated against 20-year streamflow data over the whole Amur basin and, driven by meteorological observations from 252 stations as input data, demonstrated good performance. In this study, we first assessed the reliability of the model in reproducing the historical streamflow series when Global Climate Model (GCM) simulation data are used as input to the hydrological model. Data from nine GCMs involved in the CMIP5 project were utilized, and we found that the ensemble mean of annual flow is close to the observed flow (the error is about 14%), while data from individual GCMs may result in much larger errors. Reproduction of seasonal flow for the historical period turned out to be weaker, primarily because of large errors in simulated seasonal precipitation, so the hydrological consequences of climate change were estimated only in terms of annual flow. We analyzed the hydrological projections from the climate change scenarios. The impacts were assessed in four 20-year periods: early- (2020-2039), mid- (2040-2059) and two end-century (2060-2079; 2080-2099) periods using an ensemble of nine GCMs and four Representative Concentration Pathways (RCP) scenarios. Mean annual runoff anomalies were calculated as percentages of the future runoff (simulated under 36 GCM-RCP combinations of climate scenarios) relative to the historical runoff (simulated under the corresponding GCM outputs for the reference 1986-2005 period).
The hydrological model gave small negative runoff anomalies for almost all GCM-RCP combinations of climate scenarios and for all 20-year periods. The largest ensemble mean anomaly was about minus 8% by the end of the XXI century under the most severe RCP8.5 scenario. We compared the mean annual runoff anomalies projected under the GCM-based data for the XXI century with the corresponding anomalies projected under a modified observed climatology using the delta-change (DC) method. Use of the modified observed records as driving forces for hydrological model-based projections can be considered an alternative to the GCM-based scenarios if the latter are uncertain. The main advantage of the DC approach is its simplicity: in its simplest version, only differences between present and future climates (i.e. between the long-term means of the climatic variables) are considered as DC-factors. In this study, the DC-factors for the reference meteorological series (1986-2005) of climate parameters were calculated from the GCM-based scenarios. The modified historical data were used as input to the hydrological model. For each of the four 20-year periods, runoff anomalies simulated under the delta-changed historical time series were compared with runoff anomalies simulated under the corresponding GCM data with the same mean. We found that the compared projections are closely correlated. Thus, for the Amur basin, the modified observed climatology can be used as a driving force for hydrological model-based projections and considered an alternative to the GCM-based scenarios if only annual flow projections are of interest.
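In its simplest version, the delta-change method only shifts the observed series by the GCM-simulated change in the long-term mean. A minimal sketch, using additive deltas for temperature-like variables and multiplicative factors for precipitation-like variables (the numbers are invented):

```python
def delta_change(observed, gcm_ref_mean, gcm_future_mean,
                 multiplicative=False):
    """Perturb an observed climate series with a GCM-derived
    delta-change factor.  Additive deltas suit temperature;
    multiplicative factors suit precipitation.  Only the long-term
    mean shifts; the observed variability is preserved."""
    if multiplicative:
        factor = gcm_future_mean / gcm_ref_mean
        return [x * factor for x in observed]
    delta = gcm_future_mean - gcm_ref_mean
    return [x + delta for x in observed]

# Observed temperatures shifted by a +2 degree GCM-projected warming
temps = delta_change([0.0, 2.0, 4.0], gcm_ref_mean=10.0,
                     gcm_future_mean=12.0)
# Observed precipitation scaled by a 1.5x GCM-projected increase
precip = delta_change([1.0, 2.0], 2.0, 3.0, multiplicative=True)
```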
Jan, Yih-Kuen; Lee, Bernard; Liao, Fuyuan; Foreman, Robert D
2012-10-01
The objectives of this study were to investigate the effects of local cooling on the skin blood flow response to prolonged surface pressure and to identify associated physiological controls mediating these responses using wavelet analysis of blood flow oscillations in rats. Twelve Sprague-Dawley rats were randomly assigned to three protocols: pressure with local cooling (Δt = -10 °C), pressure with local heating (Δt = 10 °C) and pressure without temperature changes. Pressure of 700 mmHg was applied to the right trochanter area of the rats for 3 h. Skin blood flow was measured using laser Doppler flowmetry. The 3 h loading period was divided into non-overlapping 30 min epochs for the analysis of the changes of skin blood flow oscillations using wavelet spectral analysis. The wavelet amplitudes and powers of three frequencies (metabolic, neurogenic and myogenic) of skin blood flow oscillations were calculated. The results showed that after an initial loading period of 30 min, skin blood flow continually decreased under the conditions of pressure with heating and of pressure without temperature changes, but remained stable under the condition of pressure with cooling. Wavelet analysis revealed that the stable skin blood flow under pressure with cooling was attributed to changes in the metabolic and myogenic frequencies. This study demonstrates that local cooling may be useful for reducing ischemia of weight-bearing soft tissues and thereby preventing pressure ulcers.
Jan, Yih-Kuen; Lee, Bernard; Liao, Fuyuan; Foreman, Robert D.
2012-01-01
The objectives of this study were to investigate the effects of local cooling on the skin blood flow response to prolonged surface pressure and to identify associated physiological controls mediating these responses using wavelet analysis of blood flow oscillations in rats. Twelve Sprague-Dawley rats were randomly assigned to three protocols: pressure with local cooling (Δt = −10 °C), pressure with local heating (Δt = 10 °C), and pressure without temperature changes. Pressure of 700 mmHg was applied to the right trochanter area of the rats for 3 hours. Skin blood flow was measured using laser Doppler flowmetry. The 3-hour loading period was divided into non-overlapping 30 min epochs for analysis of the changes of skin blood flow oscillations using wavelet spectral analysis. The wavelet amplitudes and powers of three frequencies (metabolic, neurogenic and myogenic) of skin blood flow oscillations were calculated. The results showed that after an initial loading period of 30 min, skin blood flow continually decreased in the conditions of pressure with heating and of pressure without temperature changes, but remained stable in the condition of pressure with cooling. Wavelet analysis revealed that the stable skin blood flow under pressure with cooling was attributed to changes in the metabolic and myogenic frequencies. This study demonstrates that local cooling may be useful for reducing ischemia of weight-bearing soft tissues and thereby preventing pressure ulcers. PMID:23010955
Porous media flux sensitivity to pore-scale geostatistics: A bottom-up approach
NASA Astrophysics Data System (ADS)
Di Palma, P. R.; Guyennon, N.; Heße, F.; Romano, E.
2017-04-01
Macroscopic properties of flow through porous media can be directly computed by solving the Navier-Stokes equations at the scales of the actual flow processes, while representing the porous structures explicitly. The aim of this paper is to investigate the effects of the pore-scale spatial distribution on seepage velocity through numerical simulations of 3D fluid flow performed by the lattice Boltzmann method. To this end, we generate multiple random Gaussian fields whose spatial correlation follows an assigned semi-variogram function. The exponential and Gaussian semi-variograms are chosen as extreme cases of correlation at short distances, and the statistical properties of the resulting porous media (indicator field) are described using the Matérn covariance model, with characteristic lengths of spatial autocorrelation (pore size) varying from 2% to 13% of the linear domain. To assess the sensitivity of the modeling results to the geostatistical representativeness of the domain as well as to the adopted resolution, porous media have been generated repeatedly with re-initialized random seeds, and three different resolutions have been tested for each resulting realization. The main difference among results is observed between the two adopted semi-variograms, indicating that the roughness (short-distance autocorrelation) is the property mainly affecting the flux. However, the computed seepage velocities additionally show a wide variability (about three orders of magnitude) for each semi-variogram model in relation to the assigned correlation length, corresponding to pore size. The spatial resolution affects the results more for short correlation lengths (i.e., small pore sizes), resulting in an increasing underestimation of the seepage velocity with decreasing correlation length. On the other hand, results show an increasing uncertainty as the correlation length approaches the domain size.
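The generation step, a correlated Gaussian field thresholded into a pore/solid indicator field, can be sketched as follows. This uses a dense Cholesky factorization, workable only for small grids; the paper's actual generator and Matérn parameterization are not specified here, and the grid size, correlation length, and porosity below are illustrative:

```python
import numpy as np

def correlated_field(n, corr_len, model="exponential", seed=0):
    """n x n Gaussian random field whose covariance follows an
    assigned model: exponential (rough at short lags) or Gaussian
    (smooth at short lags)."""
    rng = np.random.default_rng(seed)
    grid = np.stack(np.meshgrid(np.arange(n), np.arange(n)), -1)
    pts = grid.reshape(-1, 2).astype(float)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    if model == "exponential":
        cov = np.exp(-dist / corr_len)
    else:
        cov = np.exp(-((dist / corr_len) ** 2))
    # Jitter keeps the (nearly singular) covariance factorizable
    L = np.linalg.cholesky(cov + 1e-6 * np.eye(n * n))
    return (L @ rng.standard_normal(n * n)).reshape(n, n)

def indicator_medium(field, porosity=0.4):
    """Threshold the field into a binary pore (True) / solid (False)
    medium with approximately the target porosity."""
    threshold = np.quantile(field, 1.0 - porosity)
    return field > threshold

medium = indicator_medium(correlated_field(12, corr_len=3.0), porosity=0.4)
```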
Medical Support for ISS Crewmember Training in Star City, Russia
NASA Technical Reports Server (NTRS)
Chough, Natacha; Pattarini, James; Cole, Richard; Patlach, Robert; Menon, Anil
2017-01-01
Medical support of spaceflight training operations across international lines is a unique circumstance with potential applications to other aerospace medicine support scenarios. KBRwyle's Star City Medical Support Group (SCMSG) has fulfilled this role since the Mir-Shuttle era, with extensive experience and updates to share with the greater AsMA community. OVERVIEW: The current Soyuz training flow for assigned ISS crewmembers takes place in Star City, Russia. Soyuz training flow involves numerous activities that pose potential physical and occupational risks to crewmembers, including centrifuge runs and pressurized suit simulations at ambient and hypobaric pressures. In addition, Star City is a relatively remote location in a host nation with variable access to reliable, Western-standard medical care. For these reasons, NASA's Human Health & Performance contract allocates full-time physician support to assigned ISS crewmembers training in Star City. The Star City physician also treats minor injuries and illnesses as needed for both long- and short-term NASA support personnel traveling in the area, while working to develop and maintain relationships with local health care resources in the event of more serious medical issues that cannot be treated on-site. The specifics of this unique scope of practice will be discussed. SIGNIFICANCE: ISS crewmembers training in Star City are at potential physical and occupational risk of trauma or dysbarism during nominal Soyuz training flow, requiring medical support from an on-duty aerospace medicine specialist. This support maintains human health and performance by preserving crewmember safety and well-being for mission success; sharing information regarding this operational model may contribute to advances in other areas of international, military, and civilian operational aerospace medicine.
Abbas, Ahmed; Guo, Xianrong; Jing, Bing-Yi; Gao, Xin
2014-06-01
Despite significant advances in automated nuclear magnetic resonance-based protein structure determination, the high numbers of false positives and false negatives among the peaks selected by fully automated methods remain a problem. These false positives and negatives impair the performance of resonance assignment methods. One of the main reasons for this problem is that the computational research community often considers peak picking and resonance assignment to be two separate problems, whereas spectroscopists use expert knowledge to pick peaks and assign their resonances at the same time. We propose a novel framework that simultaneously conducts slice picking and spin system forming, an essential step in resonance assignment. Our framework then employs a genetic algorithm, directed by both connectivity information and amino acid typing information from the spin systems, to assign the spin systems to residues. The inputs to our framework can be as few as two commonly used spectra, i.e., CBCA(CO)NH and HNCACB. Unlike the existing peak picking and resonance assignment methods that treat peaks as the basic units, our method is based on 'slices', which are one-dimensional vectors in three-dimensional spectra that correspond to certain ([Formula: see text]) values. Experimental results on both benchmark simulated data sets and four real protein data sets demonstrate that our method significantly outperforms the state-of-the-art methods while using fewer spectra than those methods. Our method is freely available at http://sfb.kaust.edu.sa/Pages/Software.aspx.
Interactive visual exploration and refinement of cluster assignments.
Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R
2017-09-12
With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms do not properly account for ambiguity in the source data, as records are often assigned to discrete clusters even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results, and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in the context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.
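One simple way to quantify the per-record assignment ambiguity that such a tool visualizes can be sketched for fuzzy membership matrices. This scoring rule is an assumption for illustration, not StratomeX's actual metric:

```python
import numpy as np

def cluster_ambiguity(memberships):
    """Per-record ambiguity of a fuzzy clustering: 1 minus the gap
    between the highest and second-highest cluster membership.
    0 means a clear-cut assignment; 1 means a full tie."""
    m = np.sort(np.asarray(memberships, dtype=float), axis=1)
    return 1.0 - (m[:, -1] - m[:, -2])

def flag_for_review(memberships, threshold=0.5):
    """Indices of records whose assignment is ambiguous enough to
    warrant manual curation."""
    return np.where(cluster_ambiguity(memberships) > threshold)[0]

scores = cluster_ambiguity([[0.9, 0.1, 0.0],   # clear-cut
                            [0.4, 0.3, 0.3]])  # ambiguous
review = flag_for_review([[0.9, 0.1, 0.0], [0.4, 0.3, 0.3]])
```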
Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring
ERIC Educational Resources Information Center
Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer
2011-01-01
This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…
13 CFR 125.15 - What requirements must an SDVO SBC meet to submit an offer on a contract?
Code of Federal Regulations, 2010 CFR
2010-01-01
... corresponding to the NAICS code assigned to the contract, provided: (A) For a procurement having a revenue-based... to the contract; or (B) For a procurement having an employee-based size standard, the procurement... specific contract: (1) It is an SDVO SBC; (2) It is small under the NAICS code assigned to the procurement...
Do Those Who Benefit the Most Need it the Least? A Four-Year Experiment in Enquiry-Based Feedback
ERIC Educational Resources Information Center
Adcroft, Andy; Willis, Robert
2013-01-01
The aim of this paper is to report on an ongoing experiment in an enquiry-based approach to feedback. Over the course of four years, almost 1800 students have studied a final-year undergraduate module involving a mid-term assignment and end of module examination. Feedback on the assignment is delivered through a process which involves the…
Graham, David Y
2015-10-01
As a general rule, any clinical study where the result is already known or when the investigator(s) compares an assigned treatment against another assigned treatment known to be ineffective in the study population (e.g., in a population with known clarithromycin resistance) is unethical. As susceptibility-based therapy will always be superior to empiric therapy in any population with a prevalence of antimicrobial resistance >0%, any trial that randomizes patients between susceptibility-based therapy and empiric therapy would be unethical. The journal Helicobacter welcomes susceptibility or culture-guided studies, studies of new therapies, and studies of adjuvants and probiotics. However, the journal will not accept for review any study we judge to be lacking clinical equipoise or which assigns subjects to a treatment known to be ineffective, such as a susceptibility-based clinical trial with an empiric therapy comparator. To assist authors, we provide examples and suggestions regarding trial design for comparative studies, for susceptibility-based studies, and for studies testing adjuvants or probiotics. © 2015 John Wiley & Sons Ltd.
Graham, David Y.
2016-01-01
As a general rule, any clinical study where the result is already known or when the investigator(s) compares an assigned treatment against another assigned treatment known to be ineffective in the study population (e.g. in a population with known clarithromycin resistance) is unethical. Since susceptibility-based therapy will always be superior to empiric therapy in any population with a prevalence of antimicrobial resistance greater than 0%, any trial that randomizes patients between susceptibility-based therapy and empiric therapy would be unethical. The journal Helicobacter welcomes susceptibility or culture-guided studies, studies of new therapies and of adjuvants and probiotics. However, the Journal will not accept for review any study we judge to be lacking clinical equipoise or which assigns subjects to a treatment known to be ineffective, such as a susceptibility-based clinical trial with an empiric therapy comparator. To assist authors, we provide examples and suggestions regarding trial design for comparative studies, for susceptibility-based studies, and for studies testing adjuvants or probiotics. PMID:26123529
The reference frame of figure-ground assignment.
Vecera, Shaun P
2004-10-01
Figure-ground assignment involves determining which visual regions are foreground figures and which are backgrounds. Although figure-ground processes provide important inputs to high-level vision, little is known about the reference frame in which the figure's features and parts are defined. Computational approaches have suggested a retinally based, viewer-centered reference frame for figure-ground assignment, but figural assignment could also be computed on the basis of environmental regularities in an environmental reference frame. The present research used a newly discovered cue, lower region, to examine the reference frame of figure-ground assignment. Possible reference frames were misaligned by changing the orientation of viewers by having them tilt their heads (Experiments 1 and 2) or turn them upside down (Experiment 3). The results of these experiments indicated that figure-ground perception followed the orientation of the viewer, suggesting a viewer-centered reference frame for figure-ground assignment.
Rewarding psychiatric aides for the behavioral improvement of assigned patients
Pomerleau, Ovide F.; Bobrove, Philip H.; Smith, Rita H.
1973-01-01
Different ways of modifying the aide-patient relationship to promote improvement in psychiatric patients were investigated. Psychiatric aides were given information about the behavior of assigned patients, cash awards based on the improvement of assigned patients, and different kinds of supervision by the psychology staff; the effects of these variables on a large number of psychiatrically relevant behaviors were measured. Appropriate behavior of patients increased when the aides were given quantitative information about the improvement of assigned patients. Cash awards for aides that were not contingent on the behavior of patients had little effect, while cash awards contingent on the behavior of assigned patients were associated with more appropriate behavior. Direct supervision of aide-patient interactions was associated with an increase in appropriate behavior, while required consultation for the aides about assigned patients was not. Behavior of patients deteriorated when the program was terminated. PMID:16795420
Health Information System Role-Based Access Control Current Security Trends and Challenges.
de Carvalho Junior, Marcelo Antonio; Bandiera-Paiva, Paulo
2018-01-01
The objective of this article is to highlight implementation characteristics, concerns, and limitations of role-based access control (RBAC) use in health information systems (HIS), based on an industry-focused review of the current literature. The findings are then used to assess whether RBAC is obsolete given HIS authorization control needs. We selected articles related to our investigation theme, "RBAC trends and limitations", from 4 different sources in health informatics and the engineering technical field. To do so, we applied the following search query string: "Role-Based Access Control" OR "RBAC" AND "Health information System" OR "EHR" AND "Trends" OR "Challenges" OR "Security" OR "Authorization" OR "Attacks" OR "Permission Assignment" OR "Permission Relation" OR "Permission Mapping" OR "Constraint". We followed the applicable PRISMA flow and the general methodology used in software engineering for systematic reviews. After applying inclusion and exclusion criteria, 20 articles were selected, with contributions from 10 different countries; 17 of them advocate RBAC adaptations. The main security trends and limitations mapped were related to emergency access, grant delegation, and interdomain access control. Several publications proposed RBAC adaptations and enhancements to cope with current HIS use characteristics, although most existing RBAC studies are not related to the health informatics industry. There is no clear indication that RBAC is obsolete for HIS use.
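As a rough illustration of the RBAC model under review, and of the emergency-access ("break-glass") gap the study maps, here is a minimal sketch. The roles, permissions, and break-glass rule are invented for illustration and are not taken from any reviewed system.

```python
# Minimal RBAC: users -> roles -> permissions, plus a simple break-glass
# path for emergency read access, which classic RBAC has no native
# construct for (one of the HIS-specific limitations noted above).
ROLE_PERMS = {
    "nurse": {"read_vitals", "write_vitals"},
    "physician": {"read_vitals", "write_vitals", "read_ehr", "prescribe"},
    "clerk": {"read_demographics"},
}
USER_ROLES = {"alice": {"physician"}, "bob": {"nurse"}, "carol": {"clerk"}}

def check_access(user, permission, emergency=False):
    """True if any of the user's roles grants the permission.

    In an emergency, read access is granted anyway but logged for audit;
    real break-glass schemes are considerably more careful than this."""
    granted = any(permission in ROLE_PERMS[r] for r in USER_ROLES.get(user, ()))
    if not granted and emergency and permission.startswith("read_"):
        print(f"AUDIT: break-glass {permission} by {user}")
        return True
    return granted

print(check_access("bob", "prescribe"))                 # False
print(check_access("bob", "read_ehr", emergency=True))  # True, audited
```

The adaptations surveyed in the article (delegation, interdomain access) would similarly bolt extra rules onto the plain role-to-permission mapping shown here.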
NASA Astrophysics Data System (ADS)
Blais-Stevens, A.; Behnia, P.
2016-02-01
This research activity aimed at reducing risk to infrastructure, such as a proposed pipeline route roughly parallel to the Yukon Alaska Highway Corridor (YAHC), by filling geoscience knowledge gaps in geohazards. Hence, the Geological Survey of Canada compiled an inventory of landslides including debris flow deposits, which were subsequently used to validate two different debris flow susceptibility models. A qualitative heuristic debris flow susceptibility model was produced for the northern region of the YAHC, from Kluane Lake to the Alaska border, by integrating data layers with assigned weights and class ratings. These were slope angle, slope aspect, surficial geology, plan curvature, and proximity to the drainage system. Validation of the model was carried out by calculating a success rate curve, which revealed a good correlation with the susceptibility model and the debris flow deposit inventory compiled from air photos, high-resolution satellite imagery, and field verification. In addition, the quantitative Flow-R method was tested in order to define the potential source and debris flow susceptibility for the southern region of Kluane Lake, an area where documented debris flow events have blocked the highway in the past (e.g. 1988). Trial and error calculations were required for this method because detailed information on debris flows along the YAHC was not available to define threshold values for some parameters when calculating source areas, spreading, and runout distance. Nevertheless, correlation with known documented events helped define these parameters and produce a map that captures most of the known events and displays debris flow susceptibility in other, usually smaller, steep channels that had not been previously documented.
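A qualitative heuristic model of this kind reduces to a weighted sum of class ratings per map cell. The sketch below uses invented weights and ratings, not the Geological Survey of Canada's actual values.

```python
# Illustrative layer weights (must sum to 1.0) and 0-9 class ratings per cell;
# all numbers are made up for the sketch.
WEIGHTS = {"slope": 0.35, "aspect": 0.10, "surficial_geology": 0.25,
           "plan_curvature": 0.10, "drainage_proximity": 0.20}

def susceptibility(cell_ratings):
    """Weighted sum of class ratings -> heuristic susceptibility score."""
    return sum(WEIGHTS[k] * cell_ratings[k] for k in WEIGHTS)

steep_channel = {"slope": 9, "aspect": 5, "surficial_geology": 8,
                 "plan_curvature": 7, "drainage_proximity": 9}
flat_terrace = {"slope": 1, "aspect": 5, "surficial_geology": 3,
                "plan_curvature": 2, "drainage_proximity": 2}

print(susceptibility(steep_channel))  # high score: likely debris flow source
print(susceptibility(flat_terrace))   # low score
```

Applied to every cell of the rated raster layers, this yields the susceptibility map, which is then validated against the deposit inventory via the success rate curve.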
NASA Astrophysics Data System (ADS)
Blais-Stevens, A.; Behnia, P.
2015-05-01
This research activity aimed at reducing risk to infrastructure, such as a proposed pipeline route roughly parallel to the Yukon Alaska Highway Corridor (YAHC), by filling geoscience knowledge gaps in geohazards. Hence, the Geological Survey of Canada compiled an inventory of landslides including debris flow deposits, which were subsequently used to validate two different debris flow susceptibility models. A qualitative heuristic debris flow susceptibility model was produced for the northern region of the YAHC, from Kluane Lake to the Alaska border, by integrating data layers with assigned weights and class ratings. These were slope angle, slope aspect (derived from a 5 m × 5 m DEM), surficial geology, permafrost distribution, and proximity to the drainage system. Validation of the model was carried out by calculating a success rate curve, which revealed a good correlation with the susceptibility model and the debris flow deposit inventory compiled from air photos, high-resolution satellite imagery, and field verification. In addition, the quantitative Flow-R method was tested in order to define the potential source and debris flow susceptibility for the southern region of Kluane Lake, an area where documented debris flow events have blocked the highway in the past (e.g., 1988). Trial and error calculations were required for this method because detailed information on debris flows along the YAHC was not available to define threshold values for some parameters when calculating source areas, spreading, and runout distance. Nevertheless, correlation with known documented events helped define these parameters and produce a map that captures most of the known events and displays debris flow susceptibility in other, usually smaller, steep channels that had not been previously documented.
Closing the loop: from paper to protein annotation using supervised Gene Ontology classification.
Gobeill, Julien; Pasche, Emilie; Vishnyakova, Dina; Ruch, Patrick
2014-01-01
Gene function curation of the literature with Gene Ontology (GO) concepts is a particularly time-consuming task in genomics, and help from bioinformatics is much needed to keep up with the flow of publications. In 2004, the first BioCreative challenge already included a task for automatic assignment of GO concepts from full text. At the time, results were judged far from reaching the performance required by real curation workflows. In particular, supervised approaches produced the most disappointing results because of a lack of training data. Ten years later, the available curation data have massively grown. In 2013, the BioCreative IV GO task revisited the automatic GO assignment task. For this challenge, we investigated the power of our supervised classifier, GOCat. GOCat computes similarities between an input text and already curated instances contained in a knowledge base to infer GO concepts. Subtask A consisted in selecting GO evidence sentences for a relevant gene in a full text. For this, we designed a state-of-the-art supervised statistical approach, using a naïve Bayes classifier and the official training set, and obtained fair results. Subtask B consisted in predicting GO concepts from the previous output. For this, we applied GOCat and reached leading results, up to 65% for hierarchical recall in the top 20 returned concepts. Contrary to previous competitions, machine learning this time outperformed standard dictionary-based approaches. Thanks to BioCreative IV, we were able to design a complete workflow for curation: given a gene name and a full text, this system is able to select evidence sentences for curation and to deliver highly relevant GO concepts. Observed performances are sufficient for use in a real semiautomatic curation workflow. GOCat is available at http://eagl.unige.ch/GOCat/ and http://eagl.unige.ch/GOCat4FT/.
© The Author(s) 2014. Published by Oxford University Press.
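The supervised statistical approach mentioned for subtask A was a naïve Bayes classifier trained on curated instances. A minimal multinomial naïve Bayes over a toy "knowledge base" conveys the idea; the sentences and GO-style labels below are invented, and real systems use far richer features.

```python
import math
from collections import Counter, defaultdict

# Toy curated knowledge base: evidence sentences already labelled with
# GO-style concepts (all examples invented for illustration).
TRAIN = [
    ("kinase phosphorylates substrate proteins", "GO:protein_phosphorylation"),
    ("phosphorylation of the substrate by the kinase domain", "GO:protein_phosphorylation"),
    ("transcription factor binds promoter DNA", "GO:dna_binding"),
    ("protein binds to the DNA upstream region", "GO:dna_binding"),
]

def train(examples):
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts, alpha=1.0):
    """Multinomial naive Bayes with add-alpha (Laplace) smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_lp = None, -math.inf
    for label, n in label_counts.items():
        lp = math.log(n / total)  # log prior
        denom = sum(word_counts[label].values()) + alpha * len(vocab)
        for w in text.split():    # log likelihood of each token
            lp += math.log((word_counts[label][w] + alpha) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

wc, lc = train(TRAIN)
print(classify("the kinase phosphorylates its substrate", wc, lc))
```

GOCat itself ranks GO concepts by similarity to curated instances rather than by a per-label generative model, but the supervised, knowledge-base-driven flavor is the same.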
Disability Evaluation System and Temporary Limited Duty Assignment Process: A Qualitative Review.
1998-03-01
Statement addressing the requirement for monitoring, frequency of treatments/therapy, and the associated operational assignment limitation; Informed... ACC does not exist in the EAIS, ARIS, or the EMF databases. The system is able to track changes in duty station, but not ACCs. If a member is on... specific geographic assignment. 4. Requires extensive or prolonged medical therapy. 5. Who through continued military service would probably result in
ERIC Educational Resources Information Center
Bolton, William; Clyde, Albert
This document provides guidelines for the development of interdisciplinary assignments to help prepare learners for the developing needs of industry; it also contains a collection of model assignments produced by 12 British colleges. An introduction explains how to use the document and offers a checklist for the development of interdisciplinary…
Belcher, Wayne R.; Faunt, Claudia C.; D'Agnese, Frank A.
2002-01-01
The U.S. Geological Survey, in cooperation with the Department of Energy and other Federal, State, and local agencies, is evaluating the hydrogeologic characteristics of the Death Valley regional ground-water flow system. The ground-water flow system covers an area of about 100,000 square kilometers from latitude 35° to 38°15' North and longitude 115° to 118° West, with the flow system proper comprising about 45,000 square kilometers. The Death Valley regional ground-water flow system is one of the larger flow systems within the Southwestern United States and includes in its boundaries the Nevada Test Site, Yucca Mountain, and much of Death Valley. Part of this study includes the construction of a three-dimensional hydrogeologic framework model to serve as the foundation for the development of a steady-state regional ground-water flow model. The digital framework model provides a computer-based description of the geometry and composition of the hydrogeologic units that control regional flow. The framework model of the region was constructed by merging two previous framework models constructed for the Yucca Mountain Project and the Environmental Restoration Program Underground Test Area studies at the Nevada Test Site. The hydrologic characteristics of the region result from a currently arid climate and complex geology. Interbasinal regional ground-water flow occurs through a thick carbonate-rock sequence of Paleozoic age, a locally thick volcanic-rock sequence of Tertiary age, and basin-fill alluvium of Tertiary and Quaternary age. Throughout the system, deep and shallow ground-water flow may be controlled by extensive and pervasive regional and local faults and fractures. The framework model was constructed using data from several sources to define the geometry of the regional hydrogeologic units.
These data sources include (1) a 1:250,000-scale hydrogeologic-map compilation of the region; (2) regional-scale geologic cross sections; (3) borehole information; and (4) gridded surfaces from a previous three-dimensional geologic model. In addition, digital elevation model data were used in conjunction with these data to define ground-surface altitudes. These data, properly oriented in three dimensions by using geographic information systems, were combined and gridded to produce the upper surfaces of the hydrogeologic units used in the flow model. The final geometry of the framework model is constructed as a volumetric model by incorporating the intersections of these gridded surfaces and by applying fault truncation rules to structural features from the geologic map and cross sections. The cells defining the geometry of the hydrogeologic framework model can be assigned several attributes such as lithology, hydrogeologic unit, thickness, and top and bottom altitudes.
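The cell attributes listed in the last sentence can be pictured as a simple record type. The field names and values below are assumptions for illustration, not the schema or data of the USGS model.

```python
from dataclasses import dataclass

# Illustrative cell record for a volumetric framework model; every field
# name here is a hypothetical stand-in, not the actual model attribute.
@dataclass
class FrameworkCell:
    row: int
    col: int
    hydrogeologic_unit: str   # e.g. "Paleozoic carbonate", "Tertiary volcanic"
    lithology: str
    top_altitude_m: float     # from the gridded unit surfaces
    bottom_altitude_m: float

    @property
    def thickness_m(self) -> float:
        # Thickness is derived from the top and bottom surface altitudes.
        return self.top_altitude_m - self.bottom_altitude_m

cell = FrameworkCell(10, 42, "Paleozoic carbonate", "dolomite", 850.0, -1250.0)
print(cell.thickness_m)  # 2100.0
```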
Midwifery students' evaluation of team-based academic assignments involving peer-marking.
Parratt, Jenny A; Fahy, Kathleen M; Hastie, Carolyn R
2014-03-01
Midwives should be skilled team workers in maternity units and in group practices. Poor teamwork skills are a significant cause of adverse maternity care outcomes. Despite Australian and International regulatory requirements that all midwifery graduates are competent in teamwork, the systematic teaching and assessment of teamwork skills is lacking in higher education. How do midwifery students evaluate participation in team-based academic assignments, which include giving and receiving peer feedback? First and third year Bachelor of Midwifery students who volunteered (24 of 56 students). Participatory Action Research with data collection via anonymous online surveys. There was general agreement that team-based assignments: (i) should have peer-marking; (ii) help clarify what is meant by teamwork; (iii) develop communication skills; (iv) promote student-to-student learning. Third year students strongly agreed that teams: (i) are valuable preparation for teamwork in practice, (ii) help meet Australian midwifery competency 8, and (iii) were enjoyable. The majority of third year students agreed with statements that their teams were effectively coordinated and team members shared responsibility for work equally; first year students strongly disagreed with these statements. Students' qualitative comments substantiated and expanded on these findings. The majority of students valued teacher feedback on well-developed drafts of the team's assignment prior to marking. Based on these findings we changed practice and created more clearly structured team-based assignments with specific marking criteria. We are developing supporting lessons to teach specific teamwork skills: together these resources are called "TeamUP". TeamUP should be implemented in all pre-registration Midwifery courses to foster students' teamwork skills and readiness for practice. Copyright © 2013 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
Do Doppler color flow algorithms for mapping disturbed flow make sense?
Gardin, J M; Lobodzinski, S M
1990-01-01
It has been suggested that a major advantage of Doppler color flow mapping is its ability to visualize areas of disturbed ("turbulent") flow, for example, in valvular stenosis or regurgitation and in shunts. To investigate how various color flow mapping instruments display disturbed flow information, color image processing was used to evaluate the most common velocity-variance color encoding algorithms of seven commercially available ultrasound machines. In six of seven machines, green was reportedly added by the variance display algorithms to map areas of disturbed flow. The amount of green intensity added to each pixel along the red and blue portions of the velocity reference color bar was calculated for each machine. In this study, velocities displayed on the reference color bar ranged from ±46 to ±64 cm/sec, depending on the Nyquist limit. Of note, changing the Nyquist limits depicted on the color reference bars did not change the distribution of the intensities of red, blue, or green within the contour of the reference map, but merely assigned different velocities to the pixels. Most color flow mapping algorithms in our study added increasing intensities of green to increasing positive (red) or negative (blue) velocities along their color reference bars. Most of these machines also added increasing green to red and blue color intensities horizontally across their reference bars as a marker of increased variance (spectral broadening). However, at any given velocity, marked variations were noted between different color flow mapping instruments in the amount of green added to their color velocity reference bars. (ABSTRACT TRUNCATED AT 250 WORDS)
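A velocity-variance encoding of the kind compared in this study can be sketched as a per-pixel mapping: signed velocity drives the red (toward) or blue (away) channel, and variance adds green. The mapping below is illustrative, not any vendor's actual algorithm; note that the Nyquist limit only rescales which velocity saturates the channel, mirroring the finding that changing it does not alter the color distribution along the bar.

```python
def encode_pixel(velocity_cm_s, variance_norm, nyquist_cm_s=64.0):
    """Return an (R, G, B) triple, each channel 0-255.

    velocity_cm_s: signed Doppler velocity (positive = toward transducer);
    variance_norm: spectral variance normalized to 0..1."""
    frac = max(-1.0, min(1.0, velocity_cm_s / nyquist_cm_s))  # clamp at Nyquist
    red = int(255 * frac) if frac > 0 else 0        # flow toward transducer
    blue = int(255 * -frac) if frac < 0 else 0      # flow away from transducer
    green = int(255 * max(0.0, min(1.0, variance_norm)))  # disturbed flow
    return red, green, blue

print(encode_pixel(32.0, 0.0))   # laminar flow toward: (127, 0, 0)
print(encode_pixel(32.0, 0.8))   # disturbed flow, green added: (127, 204, 0)
print(encode_pixel(-64.0, 0.0))  # laminar flow away: (0, 0, 255)
```

The inter-machine variation the study reports amounts to different shapes for the green term as a function of velocity and variance.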
Ramos, M H; Kerley, M S
2012-03-01
Continuous culture and in vivo experiments were conducted to measure changes in ruminal fermentation and animal performance when crude glycerol was added to diets. For the continuous culture experiment (n = 6), diets consisted of 4 levels of crude glycerol (0, 5, 10, and 20%) that replaced corn grain. Dry matter and OM digestibility decreased linearly (P < 0.05) when crude glycerol increased in the diet, and no effect (P = 0.20 and 0.65, respectively) was observed for CP and NDF digestibility. Total VFA concentration and ammonia did not change (P > 0.05) due to crude glycerol level. Microbial efficiency increased quadratically (P = 0.012) as crude glycerol increased, whereas microbial N flow did not differ (P = 0.36) among treatments. As crude glycerol increased in the diet, crude glycerol digestibility decreased (P < 0.05). Seventy-two crossbred steer calves (250 ± 2.0 kg) were assigned to 4 treatments: 0, 5, 10, and 20% crude glycerol that replaced corn grain. Animals were fed for a total of 150 d. No differences (P = 0.08) between treatments were measured for DMI. Average daily gain and GF responded quadratically (P < 0.05), with 10% crude glycerol resulting in the greatest values. In the second in vivo experiment, 100 crossbred steer calves (300 ± 2.0 kg) were assigned to 5 treatments: 0, 5, 10, 12.5, or 15% crude glycerol replaced corn grain. Calves were fed for a total of 135 d. No significant differences (P > 0.05) were measured in growth performance. For Exp. 3, one hundred heifer calves (270 ± 2.0 kg) were assigned to 4 treatments: 0, 5, 10, or 20% crude glycerol that replaced hay. No differences (P > 0.05) were measured in animal performance. We concluded that crude glycerol addition to a diet did not negatively affect ruminal fermentation, and addition of up to 20% in concentrate and hay-based diets should not affect performance or carcass characteristics.
High-resolution absorption measurements of NH3 at high temperatures: 500-2100 cm-1
NASA Astrophysics Data System (ADS)
Barton, Emma J.; Yurchenko, Sergei N.; Tennyson, Jonathan; Clausen, Sønnik; Fateev, Alexander
2015-12-01
High-resolution absorption spectra of NH3 in the region 500-2100 cm-1 at temperatures up to 1027 °C and approximately atmospheric pressure (1013±20 mbar) are measured. NH3 concentrations of 1000 ppm, 0.5% and 1% in volume fraction were used in the measurements. Spectra are recorded in high-temperature gas flow cells using a Fourier Transform Infrared (FTIR) spectrometer at a nominal resolution of 0.09 cm-1. Measurements at 22.7 °C are compared to high-resolution cross sections available from the Pacific Northwest National Laboratory (PNNL). The higher temperature spectra are analysed by comparison to a variational line list, BYTe, and experimental energy levels determined using the MARVEL procedure. Approximately 2000 lines have been assigned, of which 851 are newly assigned, mainly to hot bands involving vibrational states as high as v2=5.
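At its core, line assignment of this kind matches measured line positions against a computed line list within a wavenumber tolerance. The sketch below uses invented wavenumbers, labels, and tolerance; the actual BYTe/MARVEL comparison weighs intensities and quantum-number consistency as well.

```python
# Hypothetical computed transitions: (wavenumber in cm-1, assignment label).
# All values and labels are invented for illustration.
COMPUTED = [
    (930.757, "nu2 P-branch J=3"),
    (931.640, "nu2 P-branch J=2"),
    (967.346, "nu2 Q-branch"),
]

def assign_lines(measured, computed, tol=0.02):
    """Pair each measured position with the nearest computed line within tol."""
    assignments = []
    for m in measured:
        best = min(computed, key=lambda c: abs(c[0] - m))
        if abs(best[0] - m) <= tol:
            assignments.append((m, best[1]))
        else:
            assignments.append((m, None))  # line remains unassigned
    return assignments

print(assign_lines([930.76, 967.35, 950.00], COMPUTED))
```

Lines that fall outside the tolerance of every computed transition stay unassigned, which is how candidate new (e.g. hot-band) assignments are separated from already-known ones.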