Sample records for efficiency run area

  1. Energy consumption behavior of submersible pumps used in the Barind area of Bangladesh

    NASA Astrophysics Data System (ADS)

    Haque, M. E.; Islam, M. R.; Masud, M. H.; Ferdous, J.; Haniu, H.

    2017-06-01

    In this work, the groundwater level and water pumping for irrigation and drinking purposes in the Barind area of Bangladesh have been studied. The groundwater level remains below 30 ft throughout the year, which necessitates the use of submersible pumps in most parts of the Barind zone. The Barind Multipurpose Development Authority (BMDA) and Rajshahi WASA are the major water supply authorities in the northern part of Bangladesh, operating 14,386 and 87 submersible pumps, respectively. An investigation of the life cycle cost elements of submersible pumps has also been carried out. The performance of pumps running at different sites in the Barind area was investigated and compared with lab test results for new pumps. Energy consumption dominates the life cycle cost of the pumps used in the Barind region, and improper matching between pump standard running conditions and operation/system requirements is the main cause of lower efficiency. The efficiency of the running pumps was found to be 20-40% lower than the lab test results.
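
    The finding that energy consumption dominates life cycle cost can be illustrated with a minimal sketch; all figures below are hypothetical assumptions for illustration, not values from the study.

```python
def life_cycle_cost(purchase, annual_energy_kwh, tariff, annual_maintenance, years):
    """Return (total life cycle cost, fraction of it due to energy)."""
    energy_cost = annual_energy_kwh * tariff * years
    total = purchase + energy_cost + annual_maintenance * years
    return total, energy_cost / total

# Hypothetical figures (not from the study): a 15 kW pump running 8 h/day
# for 10 years at a tariff of 0.08 per kWh.
total, energy_share = life_cycle_cost(
    purchase=2000.0,
    annual_energy_kwh=15 * 8 * 365,
    tariff=0.08,
    annual_maintenance=150.0,
    years=10,
)
```

    Even with modest assumed tariffs, the energy term dwarfs purchase and maintenance over a decade of operation, which is consistent with the abstract's conclusion.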

  2. ERP=Efficiency

    ERIC Educational Resources Information Center

    Violino, Bob

    2008-01-01

    This article discusses the enterprise resource planning (ERP) system. Deploying an ERP system is one of the most extensive--and expensive--IT projects a college or university can undertake. The potential benefits of ERP are significant: a more smoothly running operation with efficiencies in virtually every area of administration, from automated…

  3. Stormwater run-off and pollutant transport related to the activities carried out in a modern waste management park.

    PubMed

    Marques, M; Hogland, W

    2001-02-01

    Stormwater run-off from twelve different areas and roads has been characterized in a modern waste disposal site, where several waste management activities are carried out. Using nonparametric statistics, medians and confidence intervals of the medians, 22 stormwater quality parameters were calculated. Suspended solids, chemical oxygen demand, biochemical oxygen demand, total nitrogen and total phosphorus, as well as run-off from several areas, showed measured values above standard limits for discharge into recipient waters--even higher than those of leachate from covered landfill cells. Of the heavy metals analyzed, copper, zinc and nickel were the most prevalent, being detected in every sample. Higher concentrations of metals such as zinc, nickel, cobalt, iron and cadmium were found in run-off from composting areas, compared to areas containing stored and exposed scrap metal. This suggests that factors other than the total amount of exposed material affect the concentration of metals in run-off, such as binding to organic compounds and hydrological transport efficiency. The pollutants transported by stormwater represent a significant environmental threat, comparable to leachate. Careful design, monitoring and maintenance of stormwater run-off drainage systems and infiltration elements are needed if infiltration is to be used as an on-site treatment strategy.

  4. Economic barriers to implementation of innovations in health care: is the long run-short run efficiency discrepancy a paradox?

    PubMed

    Adang, Eddy M M; Wensing, Michel

    2008-12-01

    Favourable cost-effectiveness of innovative technologies is increasingly a necessary condition for implementation in clinical practice. But proven cost-effectiveness itself does not guarantee successful implementation. The reason is a potential discrepancy between long run efficiency, on which cost-effectiveness is based, and short run efficiency. Both long run and short run efficiency depend on economies of scale. This paper addresses the potential discrepancy between long run and short run efficiency of innovative technologies in healthcare, explores diseconomies of scale in Dutch hospitals, and suggests strategies that might help overcome the hurdles to implementing innovations that arise from this discrepancy.

  5. Analysis of Consequences of Using Gas Fuels for Running Auxiliary Ship Boilers in the Light of Contemporary Environmental Protection Requirements

    NASA Astrophysics Data System (ADS)

    Adamkiewicz, Andrzej; Bartoszewski, Marek; Kendra, Martin

    2016-09-01

    The article justifies the application of gas fuels for supplying auxiliary ship boilers. It presents legal regulations on maritime environmental protection areas and the requirements currently in force. It shows the chronology of introduced limits on sulphur and nitrogen oxide emissions and the thresholds of carbon dioxide emission reduction expressed by the EEDI (Energy Efficiency Design Index) and EEOI (Energy Efficiency Operational Indicator). Ways to decrease the values of EEDI and EEOI in ship energy effectiveness management are shown. The consequences of replacing marine fuels with LNG for running auxiliary ship boilers are considered thoroughly, taking into account ecological, constructional, operational, procedural and logistic limitations as well as economic consequences. The summary shows the influence of the particular consequences of using LNG on the maintenance methods of auxiliary boilers.

  6. Effect of Minimalist Footwear on Running Efficiency: A Randomized Crossover Trial.

    PubMed

    Gillinov, Stephen M; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M

    2015-05-01

    Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes.

  7. Effect of Minimalist Footwear on Running Efficiency

    PubMed Central

    Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.

    2015-01-01

    Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304

  8. Volume sharing of reservoir water

    NASA Astrophysics Data System (ADS)

    Dudley, Norman J.

    1988-05-01

    Previous models optimize short-, intermediate-, and long-run irrigation decision making in a simplified river valley system characterized by highly variable water supplies and demands for a single decision maker controlling both reservoir releases and farm water use. A major problem in relaxing the assumption of one decision maker is communicating the stochastic nature of supplies and demands between reservoir and farm managers. In this paper, an optimizing model is used to develop release rules for reservoir management when all users share equally in releases, and computer simulation is used to generate an historical time sequence of announced releases. These announced releases become a state variable in a farm management model which optimizes farm area-to-irrigate decisions through time. Such modeling envisages the use of growing area climatic data by the reservoir authority to gauge water demand and the transfer of water supply data from reservoir to farm managers via computer data files. Alternative model forms, including allocating water on a priority basis, are discussed briefly. Results show lower mean aggregate farm income and lower variance of aggregate farm income than in the single decision-maker case. This short-run economic efficiency loss coupled with likely long-run economic efficiency losses due to the attenuated nature of property rights indicates the need for quite different ways of integrating reservoir and farm management.

  9. Exploring storage and runoff generation processes for urban flooding through a physically based watershed model

    NASA Astrophysics Data System (ADS)

    Smith, B. K.; Smith, J. A.; Baeck, M. L.; Miller, A. J.

    2015-03-01

    A physically based model of the 14 km2 Dead Run watershed in Baltimore County, MD was created to test the impacts of detention basin storage and soil storage on the hydrologic response of a small urban watershed during flood events. The Dead Run model was created using the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) algorithms and validated using U.S. Geological Survey stream gaging observations for the Dead Run watershed and 5 subbasins over the largest 21 warm season flood events during 2008-2012. Removal of the model detention basins resulted in a median peak discharge increase of 11% and a detention efficiency of 0.5, which was defined as the percent decrease in peak discharge divided by percent detention controlled area. Detention efficiencies generally decreased with increasing basin size. We tested the efficiency of detention basin networks by focusing on the "drainage network order," akin to the stream order but including storm drains, streams, and culverts. The detention efficiency increased dramatically between first-order detention and second-order detention but was similar for second and third-order detention scenarios. Removal of the soil compacted layer, a common feature in urban soils, resulted in a 7% decrease in flood peak discharges. This decrease was statistically similar to the flood peak decrease caused by existing detention. Current soil storage within the Dead Run watershed decreased flood peak discharges by a median of 60%. Numerical experiment results suggested that detention basin storage and increased soil storage have the potential to substantially decrease flood peak discharges.
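
    The detention-efficiency metric defined above (percent decrease in peak discharge divided by percent of detention-controlled area) can be sketched directly; the 22% controlled-area figure below is an assumption for illustration, not a number from the paper.

```python
def detention_efficiency(peak_with, peak_without, controlled_frac):
    """Percent decrease in peak discharge divided by percent of the
    watershed area controlled by detention basins."""
    pct_decrease = 100.0 * (peak_without - peak_with) / peak_without
    return pct_decrease / (100.0 * controlled_frac)

# Illustrative: basins cut the peak from 111 to 100 units (roughly the
# paper's ~11% change) with an assumed 22% of the area under detention.
eff = detention_efficiency(peak_with=100.0, peak_without=111.0, controlled_frac=0.22)
```

    With these assumed inputs the metric comes out near the 0.5 efficiency reported in the abstract.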

  10. Comparing Run-Out Efficiency of Fluidized Ejecta on Mars with Terrestrial and Martian Mass Movements

    NASA Technical Reports Server (NTRS)

    Barnouin-Jha, O. S.; Baloga, S.

    2003-01-01

    We broadly characterize the rheology of fluidized ejecta on Mars as it flows during its final stages of emplacement by using the concept of run-out efficiency. Run-out efficiency for ejecta can be obtained through an energy balance between the kinetic energy of the excavated ejecta and the total work lost during its deposition. Such an efficiency is directly comparable to the run-out efficiency (i.e., L/H analyses, where L is the run-out distance and H is the onset height) of terrestrial and extraterrestrial mass movements. Determination of the L/H ratio is commonly used in terrestrial geology to broadly determine the type and rheology of mass movements.
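
    The L/H ratio described above is a one-line computation; the sketch below uses made-up example values, not measurements from the study.

```python
def runout_efficiency(run_out_distance, onset_height):
    """L/H ratio: run-out distance L divided by onset height H.
    Larger values indicate more mobile (lower effective friction) flows."""
    return run_out_distance / onset_height

# Made-up example values (not from the study): a highly mobile flow
# versus a short, frictional one.
long_runout = runout_efficiency(run_out_distance=5000.0, onset_height=1000.0)
short_runout = runout_efficiency(run_out_distance=1200.0, onset_height=800.0)
```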

  11. An efficient way of layout processing based on Calibre DRC and pattern matching for defect inspection applications

    NASA Astrophysics Data System (ADS)

    Li, Helen; Lee, Robben; Lee, Tyzy; Xue, Teddy; Liu, Hermes; Wu, Hall; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang

    2018-03-01

    As technology advances, escalating layout design complexity and chip size make defect inspection more challenging than ever before. Yield enhancement (YE) engineers are seeking an efficient strategy that ensures accuracy without sacrificing run time. A smart approach is to set different resolutions for different pattern structures; for example, logic pattern areas get a higher scan resolution while dummy areas get a lower one, and SRAM areas may get yet another resolution. This can significantly reduce scan processing time while accuracy does not suffer. Due to the limitations of the inspection equipment, the layout must be processed to output the Care Area markers in line with the equipment's requirements; for instance, the marker shapes must be rectangles, and the number of rectangles should be as small as possible. The challenge is how to select the different Care Areas by pattern structure, merge the areas efficiently, and then partition them into rectangles. This paper presents a solution based on Calibre DRC and Pattern Matching. Calibre equation-based DRC is a powerful layout processing engine, and Calibre Pattern Matching's automated visual capture capability enables designers to define these geometries as layout patterns and store them in libraries that can be re-used in multiple design layouts. Pattern Matching simplifies the description of very complex relationships between pattern shapes efficiently and accurately, and its true power is on display when it is integrated with a normal DRC deck. In this defect inspection application, we first run Calibre DRC to get the rule-based Care Area, then use Calibre Pattern Matching's automated pattern capture capability to capture the Care Area shapes that need a higher scan resolution, with a tunable pattern halo. In the pattern matching step, when the patterns are matched, a bounding box marker is output to identify the high-resolution area. The equation-based DRC and Pattern Matching work together effectively for the different scan phases.

  12. Within-Subject Correlation Analysis to Detect Functional Areas Associated With Response Inhibition.

    PubMed

    Yamasaki, Tomoko; Ogawa, Akitoshi; Osada, Takahiro; Jimura, Koji; Konishi, Seiki

    2018-01-01

    Functional areas in fMRI studies are often detected by brain-behavior correlation, calculating the across-subject correlation between a behavioral index and the brain activity related to a function of interest. Within-subject correlation analysis is also employed at the single-subject level; it exploits cognitive fluctuations over shorter time periods by correlating the behavioral index with the brain activity across trials. In the present study, the within-subject analysis was applied to the stop-signal task, a standard task to probe response inhibition, where the efficiency of response inhibition can be evaluated by the stop-signal reaction time (SSRT). Since the SSRT is estimated, by definition, not on a trial basis but from pooled trials, the correlation across runs was calculated between the SSRT and the brain activity related to response inhibition. The within-subject correlation revealed negative correlations in the anterior cingulate cortex and the cerebellum. Moreover, a dissociation pattern was observed in the within-subject analysis when earlier vs. later parts of the runs were analyzed: negative correlation was dominant in earlier runs, whereas positive correlation was dominant in later runs. Region-of-interest analyses revealed that the negative correlation in the anterior cingulate cortex, but not in the cerebellum, was dominant in earlier runs, suggesting multiple mechanisms associated with inhibitory processes that fluctuate on a run-by-run basis. These results indicate that the within-subject analysis complements the across-subject analysis by highlighting different aspects of cognitive/affective processes related to response inhibition.
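
    The across-run correlation described above boils down to a Pearson correlation between per-run SSRT estimates and per-run activity in a region of interest; the sketch below uses hypothetical data for a single subject, not values from the study.

```python
from math import sqrt
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-run data for one subject: SSRT (ms) and mean activity
# (arbitrary units) in a region of interest, one value per run.
ssrt = [210, 195, 230, 250, 205, 240]
activity = [1.8, 2.1, 1.4, 1.1, 1.9, 1.3]
r = pearson(ssrt, activity)  # negative r: shorter SSRT with higher activity
```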

  13. Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.

    PubMed

    He, Lifeng; Chao, Yuyan; Suzuki, Kenji

    2011-08-01

    Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images. One is voxel based and the other is run based. For the voxel-based one, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based one, instead of assigning each foreground voxel, we assign each run a provisional label. Moreover, we use run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
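
    The label-equivalence idea behind both algorithms can be illustrated with a minimal two-pass, union-find sketch. Note this is a 2-D, pixel-by-pixel illustration only; the paper's algorithms target 3-D images, add an optimized voxel-checking order, and include a run-based variant that labels whole runs instead of individual voxels.

```python
def label_components(img):
    """Two-pass label-equivalence connected-component labeling
    (4-connectivity) on a 2-D binary image given as nested lists."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # union-find over provisional labels; index 0 unused

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    # First scan: assign provisional labels and record equivalences.
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = min(find(up), find(left))
                union(up, left)
            elif up or left:
                labels[y][x] = up or left
            else:
                parent.append(next_label)
                labels[y][x] = next_label
                next_label += 1
    # Second scan: replace provisional labels with their representatives.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
out = label_components(img)  # two connected components
```

    The run-based variant in the paper applies the same equivalence machinery to maximal runs of foreground voxels, so the second scan never touches background voxels.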

  14. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  15. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  16. The influence of wind resistance in running and walking and the mechanical efficiency of work against horizontal or vertical forces

    PubMed Central

    Pugh, L. G. C. E.

    1971-01-01

    1. O2 intakes were determined on subjects running and walking at various constant speeds, (a) against wind of up to 18·5 m/sec (37 knots) in velocity, and (b) on gradients ranging from 2 to 8%. 2. In running and walking against wind, O2 intakes increased as the square of wind velocity. 3. In running on gradients the relation of O2 intake and lifting work was linear and independent of speed. In walking on gradients the relation was linear at work rates above 300 kg m/min, but curvilinear at lower work rates. 4. In a 65 kg athlete running at 4·45 m/sec (marathon speed) V̇O2 increased from 3·0 l./min with minimal wind to 5·0 l./min at a wind velocity of 18·5 m/sec. The corresponding values for a 75 kg subject walking at 1·25 m/sec were 0·8 l./min with minimal wind and 3·1 l./min at a wind velocity of 18·5 m/sec. 5. Direct measurements of wind pressure on shapes of similar area to one of the subjects yielded higher values than those predicted from the relation of wind velocity and lifting work at equal O2 intakes. Horizontal work against wind was more efficient than vertical work against gravity. 6. The energy cost of overcoming air resistance in track running may be 7·5% of the total energy cost at middle distance speed and 13% at sprint speed. Running 1 m behind another runner virtually eliminated air resistance and reduced V̇O2 by 6·5% at middle distance speed. PMID:5574828
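
    Point 2's quadratic relation between O2 intake and wind velocity can be checked against the reported marathon-speed figures (3.0 l/min in still air, 5.0 l/min against an 18.5 m/sec wind); the simple additive model below is a sketch, not the paper's fitted equation.

```python
def vo2_with_wind(vo2_still, k, wind_speed):
    """O2 intake rises with the square of wind velocity (point 2 above)."""
    return vo2_still + k * wind_speed ** 2

# Solve k from the reported runner: 3.0 l/min in still air,
# 5.0 l/min against an 18.5 m/s wind.
k = (5.0 - 3.0) / 18.5 ** 2          # l/min per (m/s)^2
vo2_half_wind = vo2_with_wind(3.0, k, 18.5 / 2)
```

    Halving the wind speed quarters the extra cost, so the added O2 demand drops from 2.0 to 0.5 l/min.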

  17. Comparison of the influence of age on cycling efficiency and the energy cost of running in well-trained triathletes.

    PubMed

    Peiffer, Jeremiah; Abbiss, Chris R; Sultana, Frederic; Bernard, Thierry; Brisswalter, Jeanick

    2016-01-01

    Locomotive efficiency is cited as an important component of endurance performance; however, inconsistent observations of age-related changes in efficiency call into question its influence on the performance of masters athletes. This study examined locomotive efficiency in young and masters triathletes during both a run and a cycle test. Twenty young (28.5 ± 2.6 years) and 20 masters (59.8 ± 1.3 years) triathletes completed an incremental cycling and running test to determine maximal oxygen consumption (VO2max) and the first ventilatory threshold (VT1). Participants then completed 10-min submaximal running and cycling tests at VT1, during which locomotive efficiency was calculated from expired ventilation. Additionally, body fat percentage was determined using skin-fold assessment. During the cycle and run, VO2max was lower in the masters (48.3 ± 5.4 and 49.6 ± 4.8 ml kg(-1) min(-1), respectively) compared with the young (61.6 ± 5.7 and 62.4 ± 5.2 ml kg(-1) min(-1), respectively) cohort. Maximal running speed and the cycling power output corresponding to VO2max were also lower in the masters (15.1 ± 0.8 km h(-1) and 318.6 ± 26.0 W) compared with the young (19.5 ± 1.3 km h(-1) and 383.6 ± 35.0 W) cohort. Cycling efficiency was lower (-11.2%) in the masters compared with the young cohort. Similar results were observed for the energy cost of running (+10.8%); however, when scaled to lean body mass, the changes were more pronounced during the run (+22.1%). Within trained triathletes, ageing can influence efficiency in both the run and cycle disciplines. While disregarded in the past, efficiency should be considered in research examining performance in ageing athletes.

  18. Characterization and modeling of turbidity density plume induced into stratified reservoir by flood runoffs.

    PubMed

    Chung, S W; Lee, H S

    2009-01-01

    In monsoon climate areas, turbidity flows typically induced by flood runoffs cause numerous environmental impacts such as impairment of fish habitat and river attraction, and degradation of water supply efficiency. This study aimed to characterize the physical dynamics of a turbidity plume induced into a stratified reservoir using field monitoring and numerical simulations, and to assess the effect of different withdrawal scenarios on the control of downstream water quality. Three turbidity models (RUN1, RUN2, RUN3) were developed based on a two-dimensional, laterally averaged hydrodynamic and transport model, and validated against field data. RUN1 assumed a constant settling velocity of suspended sediment, while RUN2 estimated the settling velocity as a function of particle size, density, and water temperature to account for vertical stratification. RUN3 included a lumped first-order turbidity attenuation rate taking into account the effects of particle aggregation and degradable organic particles. RUN3 showed the best performance in replicating the observed variations of in-reservoir and release turbidity. Numerical experiments implemented to assess the effectiveness of different withdrawal depths showed that altering the withdrawal depth can modify the pathway and flow regimes of the turbidity plume, but its effect on the control of release water quality could be trivial.
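
    The abstract does not give RUN2's settling-velocity formula; Stokes' law is a standard choice for relating settling velocity to particle size, density, and (via water viscosity) temperature, sketched here with illustrative values.

```python
def stokes_settling_velocity(diameter, rho_particle, rho_water, viscosity, g=9.81):
    """Stokes' law terminal settling velocity (m/s) for a small sphere:
    w_s = g * d**2 * (rho_p - rho_w) / (18 * mu)."""
    return g * diameter ** 2 * (rho_particle - rho_water) / (18.0 * viscosity)

# A 10-micron silt grain in ~20 degC water (dynamic viscosity ~1.0e-3 Pa*s);
# warmer water lowers the viscosity, which raises the settling velocity.
w_s = stokes_settling_velocity(
    diameter=10e-6, rho_particle=2650.0, rho_water=998.0, viscosity=1.0e-3
)
```

    For these assumed values the grain settles at roughly 0.09 mm/s, illustrating why fine suspended sediment can stay in the water column for days.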

  19. Analytical Cost Metrics: Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore, the major challenge we face in computing systems research is: "how to solve massive-scale computational problems in the most time/power/energy efficient manner?"

  20. Scale-dependency of macroinvertebrate communities: responses to contaminated sediments within run-of-river dams.

    PubMed

    Colas, Fanny; Archaimbault, Virginie; Devin, Simon

    2011-03-01

    Due to their nutrient recycling function and their importance in food webs, macroinvertebrates are essential for the functioning of aquatic ecosystems. These organisms also constitute an important component of biodiversity. Sediment evaluation and monitoring is an essential aspect of ecosystem monitoring, since sediments represent an important component of aquatic habitats and are also a potential source of contamination. In this study, we focused on macroinvertebrate communities within run-of-river dams, which are prime areas for sediment and pollutant accumulation. Little is known about littoral macroinvertebrate communities within run-of-river dams or their response to sediment levels and pollution. We therefore aimed to evaluate the following aspects: the functional and structural composition of macroinvertebrate communities in run-of-river dams; the impact of pollutant accumulation on such communities; and the most efficient scales and tools for the biomonitoring of contaminated sediments in such environments. Two run-of-river dams located in the French alpine area were selected, and three spatial scales were examined: transversal (banks and channel), transversal x longitudinal (banks/channel x tail/middle/dam) and patch scale (erosion, sedimentation and vegetation habitats). At the patch scale, we noted that the heterogeneity of littoral habitats provided many available niches that allow for the development of diversified macroinvertebrate communities. This implies highly variable responses to contamination. Once combined on a global 'banks' spatial scale, littoral habitats can highlight the effects of toxic disturbances.

  1. Multi-GPGPU Tsunami simulation at Toyama-bay

    NASA Astrophysics Data System (ADS)

    Furuyama, Shoichi; Ueda, Yuki

    2017-07-01

    Accelerated multi-General-Purpose Graphics Processing Unit (GPGPU) computation of Tsunami run-up was achieved over a wide area (the whole of Toyama Bay in Japan) using a faster computation technique. Toyama Bay has active faults in the sea bed, so there is a high probability of earthquakes, and of Tsunami waves in the case of a huge earthquake; predicting the Tsunami run-up area is therefore important for reducing the damage the disaster causes to residents. However, the simulation is a very demanding task because of the computer resources it requires. A resolution on the order of several meters is needed for a Tsunami run-up calculation, because artificial structures on the ground such as roads, buildings, and houses are very small, while at the same time a huge simulation area is required. In the Toyama Bay case the area is 42 [km] × 15 [km]. When 5 [m] × 5 [m] computational cells are used, over 26,000,000 cells are generated, and a normal desktop CPU took about 10 hours for the calculation. Decreasing this calculation time is an important problem for an immediate Tsunami run-up prediction system, which would in turn help protect residents of the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA Tesla K20X cards, with InfiniBand connections between computer nodes via the MVAPICH library. As a result, the calculation on six GPUs was 5.16 times faster than the one-GPU case, an 86% parallel efficiency relative to linear speedup.
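
    The reported 86% parallel efficiency follows directly from the 5.16x speedup on six GPUs, since efficiency is the measured speedup divided by the ideal linear speedup:

```python
def parallel_efficiency(speedup, n_devices):
    """Measured speedup divided by the ideal (linear) speedup."""
    return speedup / n_devices

# Reported result: 5.16x faster on six GPUs than on one GPU.
eff = parallel_efficiency(speedup=5.16, n_devices=6)  # ~0.86
```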

  2. Do Generous Welfare States Generate Efficiency Gains Which Counterbalance Short Run Losses? Testing Downside Risk Theory with Economic Panel Data for the U.S., Germany and the Netherlands

    ERIC Educational Resources Information Center

    Headey, Bruce; Muffels, Ruud

    2008-01-01

    The purpose of the paper is to assess the theory that the downside risk insurance provided by more generous welfare states generates long run efficiency gains, which counterbalance the short run efficiency losses caused by work disincentives in these states (Feldstein 1974, 1976; Sinn 1995, 1996). Testing downside risk theory requires long term…

  3. Progress in amorphous silicon based large-area multijunction modules

    NASA Astrophysics Data System (ADS)

    Carlson, D. E.; Arya, R. R.; Bennett, M.; Chen, L.-F.; Jansen, K.; Li, Y.-M.; Maley, N.; Morris, J.; Newton, J.; Oswald, R. S.; Rajan, K.; Vezzetti, D.; Willing, F.; Yang, L.

    1996-01-01

    Solarex, a business unit of Amoco/Enron Solar, is scaling up its a-Si:H/a-SiGe:H tandem device technology for the production of 8 ft² modules. The current R&D effort is focused on improving the performance, reliability and cost-effectiveness of the tandem-junction technology by systematically optimizing the materials and interfaces in small-area single- and tandem-junction cells. Average initial conversion efficiencies of 8.8% at 85% yield have been obtained in pilot production runs with 4 ft² tandem modules.

  4. Versatile large-mode-area femtosecond laser-written Tm:ZBLAN glass chip lasers.

    PubMed

    Lancaster, D G; Gross, S; Fuerbach, A; Ebendorff-Heidepriem, H; Monro, T M; Withford, M J

    2012-12-03

    We report performance characteristics of a thulium-doped ZBLAN waveguide laser that supports the largest fundamental modes reported in a rare-earth-doped planar waveguide laser (to the best of our knowledge). The high mode quality of waveguides up to 45 μm in diameter (~1075 μm² mode-field area) is validated by a measured beam quality of M² ~ 1.1 ± 0.1. The benefits of these large mode areas are demonstrated by achieving Q-switched pulses with 1.9 kW peak output power. The 1.89 μm free-running cw laser produces 205 mW and achieves a 67% internal slope efficiency, corresponding to a quantum efficiency of 161%. The 9 mm long planar chip developed for concept demonstration is rapidly fabricated by single-step optical processing, contains 15 depressed-cladding waveguides, and can operate in semi-monolithic or external-cavity laser configurations.
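    The >100% quantum efficiency reported above (a signature of two-for-one cross-relaxation in thulium) can be sanity-checked against the Stokes limit. The pump wavelength is not stated in the abstract, so the ~0.79 μm value below is an assumption (the usual Tm³⁺ pump band); only the 67% slope efficiency and 1.89 μm laser wavelength come from the record:

```python
# Hedged sanity check: pump wavelength is an assumption, not from the abstract.
lam_pump_um = 0.79    # assumed Tm3+ pump wavelength
lam_laser_um = 1.89   # laser wavelength (from the abstract)
slope_eff = 0.67      # internal slope efficiency (from the abstract)

stokes_limit = lam_pump_um / lam_laser_um   # max power efficiency at one photon out per photon in
quantum_eff = slope_eff / stokes_limit      # laser photons emitted per pump photon absorbed
print(f"quantum efficiency ~ {quantum_eff:.0%}")
```

A ratio above 100% is only possible because cross-relaxation can convert one pump photon into two excited ions; the result lands close to the 161% quoted in the abstract.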

  5. [Impacts on skin blood flow under moving cupping along meridians in different directions].

    PubMed

    Tian, Yu-Ying; Wang, Guang-Jun; Huang, Tao; Jia, Shu-Yong; Zhang, Yu-Qin; Zhang, Wei-Bo

    2013-03-01

    To compare the impacts on skin blood flow between moving cupping following the meridian running direction and moving cupping against it. A JLG-2 meridian cupping drainage instrument was used for moving cupping on the back along the running course of the Bladder Meridian, in a single direction (either following or against the course), 20 times. The cupping device was a Bian stone cup, 44 mm in inner diameter, with negative pressure from -0.03 to -0.04 MPa. A PeriScan PIM II laser Doppler perfusion imager was used to observe the changes in skin blood flow on the running course of the Bladder Meridian as the cup was moved up and down, and in the same region on the contralateral Bladder Meridian. Blood flow was measured before cupping, immediately after cupping and 10 min after cupping. Fourteen healthy volunteers received the test. The measuring region was subdivided into a moving cupping area, an upstream area, a downstream area, a contralateral moving cupping area, a contralateral upstream area and a contralateral downstream area. The mean blood flow was calculated in each area. Blood flow increased significantly in each area, most apparently in the moving cupping area. Comparing the changing rate of blood flow between cupping following the meridian running direction and cupping against it, only the changing rate in the upstream area with moving cupping against the running direction was significantly higher than that following the running direction (P < 0.05). The differences among the other areas were not statistically significant. Additionally, the changing rates of blood flow in the upstream and downstream areas of the Bladder Meridian were increased significantly compared with the contralateral Bladder Meridian. The local effects are similar between moving cupping following the meridian running direction and moving cupping against it. The abscopal effect of moving cupping against the running direction is superior to that of cupping following it. It is suggested that dual-directional moving cupping is applicable for the treatment of local disorders, and that the abscopal effect is better with moving cupping against the meridian running direction.

  6. Slicing of silicon into sheet material: Silicon sheet growth development for the large area silicon sheet task of the Low Cost Silicon Solar Array project

    NASA Technical Reports Server (NTRS)

    Fleming, J. R.

    1978-01-01

    The limits of blade tolerance were defined. The standard blades are of T-2 thickness tolerance. Good results were obtained by using a slurry fluid consisting of mineral oil and a lubricity additive. Adjustments of the formulation and fine tuning of the cutting process with the new fluid are necessary. Test results and consultation indicate that the blade breakage encountered with water-based slurries is unavoidable. Two full-capacity (974 wafer) runs were made on the large prototype saw. Both runs resulted in extremely low yield; however, the reasons for the low yield were lack of proper technique rather than problems with machine function. The tests on the effect of the amount of material etched off an as-sawn wafer on solar cell efficiency were completed. The results agree with previous work at JPL in that the minimum material removed per side that gives maximum efficiency is on the order of 10 microns.

  7. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John

    Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds Average Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).
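    The Lagrangian dispersion component described above can be illustrated with a minimal random-walk particle sketch. This is not Aeolus code; the wind speed, diffusivity, and particle counts are illustrative values only:

```python
import random

def disperse(n_particles=1000, steps=100, dt=1.0, u=2.0, k=0.5, seed=42):
    """Minimal Lagrangian dispersion: advect each particle with a mean wind u
    and add a Gaussian random-walk step representing turbulent diffusion
    with diffusivity k (step std dev = sqrt(2 k dt))."""
    rng = random.Random(seed)
    sigma = (2.0 * k * dt) ** 0.5
    xs = [0.0] * n_particles
    for _ in range(steps):
        xs = [x + u * dt + rng.gauss(0.0, sigma) for x in xs]
    return xs

positions = disperse()
mean_x = sum(positions) / len(positions)
print(round(mean_x, 1))  # plume centre advects to roughly u * steps * dt = 200
```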

  8. An 81.6 μW FastICA processor for epileptic seizure detection.

    PubMed

    Yang, Chia-Hsiang; Shih, Yi-Hsin; Chiueh, Herming

    2015-02-01

    To improve the performance of epileptic seizure detection, independent component analysis (ICA) is applied to multi-channel signals to separate artifacts from signals of interest. FastICA is an efficient algorithm to compute ICA. To reduce the energy dissipation, eigenvalue decomposition (EVD) is utilized in the preprocessing stage to reduce the convergence time of the iterative calculation of ICA components. EVD is computed efficiently through an array structure of processing elements running in parallel. An area-efficient EVD architecture is realized by leveraging the approximate Jacobi algorithm, leading to a 77.2% area reduction. By choosing a proper memory element and reduced wordlength, the power and area of the storage memory are reduced by 95.6% and 51.7%, respectively. The chip area is minimized through fixed-point implementation and architectural transformations. Given a latency constraint of 0.1 s, an 86.5% area reduction is achieved compared to the direct-mapped architecture. Fabricated in 90 nm CMOS, the core area of the chip is 0.40 mm². The FastICA processor, part of an integrated epileptic control SoC, dissipates 81.6 μW at 0.32 V. The computation delay for a frame of 256 samples over 8 channels is 84.2 ms. Compared to prior work, 0.5% power dissipation, 26.7% silicon area, and a 3.4× computation speedup are achieved. The performance of the chip was verified on a human dataset.
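    In software terms, the EVD preprocessing stage described above is a whitening step: after it, the data covariance is the identity, which shortens FastICA's iterative convergence. A minimal floating-point NumPy sketch (illustrative only; the chip uses a fixed-point approximate-Jacobi implementation):

```python
import numpy as np

def evd_whiten(X):
    """Whiten multi-channel data via eigenvalue decomposition of its covariance.
    X has shape (channels, samples); the returned data has identity covariance."""
    Xc = X - X.mean(axis=1, keepdims=True)          # remove per-channel mean
    cov = Xc @ Xc.T / Xc.shape[1]                   # channel covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # the EVD step
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
    return W @ Xc

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 256))   # 8 channels, 256-sample frame, as in the abstract
Z = evd_whiten(X)
print(np.allclose(Z @ Z.T / Z.shape[1], np.eye(8)))  # True: unit covariance
```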

  9. Virtualization and cloud computing in dentistry.

    PubMed

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer needs to be running. This virtualization platform is the basis for cloud computing, and it has expanded into the areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computing needs continue to grow, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article provides some useful information on current uses of cloud computing.

  10. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  11. Price Analysis and the Effects of Competition.

    DTIC Science & Technology

    1985-10-01

    state of the market. For instance, is it possible that competition can squeeze a company to greater efficiency or lower profits in the short run, but...dual-source competition. The Stackelberg model recognizes two types of firm behavior. A firm may choose to be a leader and pursue a dominant market ...strategies in areas of potential competition. In this instance, the follower firm will serve that segment of the market that the leader firm cannot

  12. High power operation of cladding pumped holmium-doped silica fibre lasers.

    PubMed

    Hemming, Alexander; Bennetts, Shayne; Simakov, Nikita; Davidson, Alan; Haub, John; Carter, Adrian

    2013-02-25

    We report the highest power operation of a resonantly cladding-pumped, holmium-doped silica fibre laser. The cladding pumped all-glass fibre utilises a fluorine doped glass layer to provide low loss cladding guidance of the 1.95 µm pump radiation. The operation of both single mode and large-mode area fibre lasers was demonstrated, with up to 140 W of output power achieved. A slope efficiency of 59% versus launched pump power was demonstrated. The free running emission was measured to be 2.12-2.15 µm demonstrating the potential of this architecture to address the long wavelength operation of silica based fibre lasers with high efficiency.

  13. Grinding efficiency of abutment tooth with both dentin and core composite resin on axial plane.

    PubMed

    Miho, Otoaki; Sato, Toru; Matsukubo, Takashi

    2015-01-01

    The purpose of this study was to evaluate grinding efficiency in abutment teeth comprising both dentin and core composite resin in the axial plane. Grinding was performed over 5 runs at two loads (0.5 or 0.25 N) and two feed rates (1 or 2 mm/sec). The grinding surface was observed with a 3-D laser microscope. Tomographic images of the grinding surfaces captured perpendicular to the feed direction were also analyzed. Using a non-ground surface as a reference, areas comprising only dentin, both dentin and core composite resin, or only core composite resin were analyzed to determine the angle of the grinding surface. Composite resins were subjected to the Vickers hardness test and scanning electron microscopy. Data were statistically analyzed using a one-way analysis of variance and multiple comparison tests. Multiple regression analysis was performed for load, feed rate, and Vickers hardness of the build-up material depending on number of runs. When grinding was performed at a constant load and feed rate, a greater grinding angle was observed in areas comprising both dentin and composite resin or only composite resin than in areas consisting of dentin alone. A correlation was found between machinability and load or feed rate in areas comprising both dentin and composite resin or composite resin alone, with a particularly high correlation being observed between machinability and load. These results suggest that great caution should be exercised in a clinical setting when the boundary between the dentin and composite resin is to be ground, as the angle of the grinding surface changes when the rotating diamond point begins grinding the composite resin.

  14. Oxygen production on Mars and the Moon

    NASA Technical Reports Server (NTRS)

    Sridhar, K. R.; Vaniman, B.; Miller, S.

    1992-01-01

    Significant progress was made in the area of in-situ oxygen production in the last year. In order to reduce sealing problems due to thermal expansion mismatch in the disk configuration, several all-Zirconia cells were constructed and are being tested. Two of these cells were run successfully for extended periods of time. One was run for over 200 hours and the other for over 800 hours. These extended runs, along with gas sample analysis, showed that the oxygen being produced is definitely from CO2 and not from air leaks or from the disk material. A new tube system is being constructed that is more rugged, portable, durable, and energy efficient. The important operating parameters of this system will be better controlled compared to previous systems. An electrochemical compressor will also be constructed with a similar configuration. The electrochemical compressor will use less energy since the feed stock is already heated in the separation unit. In addition, it does not have moving parts.

  15. Continuous Czochralski growth: Silicon sheet growth development of the large area silicon sheet task of the Low Cost Silicon Solar Array project

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The primary objective of this contract is to develop equipment and methods for the economic production of single crystal ingot material by the continuous Czochralski (CZ) process. Continuous CZ is defined for the purpose of this work as the growth of at least 100 kilograms of ingot from only one melt container. During the reporting period (October, 1977 - September, 1978), a modified grower was made fully functional and several recharge runs were performed. The largest run lasted 44 hours and over 42 kg of ingot was produced. Little, if any, degradation in efficiency was observed as a result of pulling multiple crystals from one crucible. Solar efficiencies observed were between 9.3 and 10.4% AM0 (13.0 and 14.6% AM1), compared to 10.5% AM0 (14.7% AM1) for optimum CZ material control samples. Using the SAMICS/IPEG format, economic analysis of continuous CZ suggests that 1986 DoE cost goals can only be met by the growth of large-diameter, large-mass crystals.

  16. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed Central

    Nadkarni, P. M.; Miller, P. L.

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632

  17. Silicon-on-ceramic Process: Silicon Sheet Growth and Device Development for the Large-area Silicon Sheet and Cell Development Tasks of the Low-cost Solar Array Project

    NASA Technical Reports Server (NTRS)

    Chapman, P. W.; Zook, J. D.; Heaps, J. D.; Grung, B. L.; Koepke, B.; Schuldt, S. B.

    1979-01-01

    Significant progress is reported in fabricating a 4 sq cm cell having a 10.1 percent conversion efficiency and a 10 sq cm cell having a 9.2 percent conversion efficiency. The continuous (SCIM) coater succeeded in producing a 16 sq cm coating exhibiting unidirectional solidification and large grain size. A layer was grown at 0.2 cm/sec in the experimental coater which was partially dendritic but also contained a large smooth area approximately 100 μm thick. The dark characteristic measurements of a typical SCC solar cell yield shunt resistance values of 10K ohms and series resistance values of 0.4 ohm. The production dip-coater is operating at over 50 percent yield in terms of good cell quality material. The most recent run yielded 13 good substrates out of 15.

  18. An Efficient Remote Authentication Scheme for Wireless Body Area Network.

    PubMed

    Omala, Anyembe Andrew; Kibiwott, Kittur P; Li, Fagen

    2017-02-01

    Wireless body area networks (WBANs) provide a mechanism for transmitting a person's physiological data to application providers, e.g. a hospital. Given the limited connectivity range of a WBAN, an intermediate portable device, e.g. a smartphone, placed within the WBAN's range forwards the data to a remote server. This data, if not protected from unauthorized access and modification, may lead to poor diagnosis. In order to ensure security and privacy between the WBAN and a server at the application provider, several authentication schemes have been proposed. Recently, Wang and Zhang proposed an authentication scheme for WBAN using bilinear pairing. However, in their scheme, an application provider could easily impersonate a client. In order to overcome this weakness, we propose an efficient remote authentication scheme for WBAN. In terms of performance, our scheme can not only provide malicious insider security, but also reduce the running time of the WBAN (client) by 51% as compared to the Wang and Zhang scheme.

  19. The Weekly Fab Five: Things You Should Do Every Week To Keep Your Computer Running in Tip-Top Shape.

    ERIC Educational Resources Information Center

    Crispen, Patrick

    2001-01-01

    Describes five steps that school librarians should follow every week to keep their computers running at top efficiency. Explains how to update virus definitions; run Windows update; run ScanDisk to repair errors on the hard drive; run a disk defragmenter; and backup all data. (LRW)

  20. Two Blades-Up Runs Using the JetStream Navitus Atherectomy Device Achieve Optimal Tissue Debulking of Nonocclusive In-Stent Restenosis: Observations From a Porcine Stent/Balloon Injury Model.

    PubMed

    Shammas, Nicolas W; Aasen, Nicole; Bailey, Lynn; Budrewicz, Jay; Farago, Trent; Jarvis, Gary

    2015-08-01

    To determine the number of runs with blades up (BU) using the JetStream Navitus needed to achieve optimal debulking in a porcine model of femoropopliteal artery in-stent restenosis (ISR). In this porcine model, 8 limbs were implanted with overlapping nitinol self-expanding stents. ISR was treated initially with 2 blades-down (BD) runs followed by 4 BU runs (BU1 to BU4). Quantitative vascular angiography (QVA) was performed at baseline, after the 2 BD runs, and after each BU run. Plaque surface area and percent stenosis within the treated stented segment were measured. Intravascular ultrasound (IVUS) was used to measure minimum lumen area (MLA) and determine IVUS-derived plaque surface area. QVA showed that plaque surface area was significantly reduced between baseline (83.9%±14.8%) and the 2 BD (67.7%±17.0%, p=0.005) and BU1 (55.4%±9.0%, p=0.005) runs, and between the BU1 and BU2 runs (50.7%±9.7%, p<0.05). Percent stenosis behaved similarly, with no further reduction after BU2. There were no further reductions in plaque surface area or percent stenosis with the BU3 and BU4 runs (p=0.10). Similarly, IVUS (24 lesions) confirmed optimal results with 2 BU runs and no additional gain in MLA or reduction in plaque surface area with BU3 and BU4. IVUS confirmed no orbital cutting with the JetStream Navitus. There were no stent strut discontinuities on high-resolution radiographs following atherectomy. The JetStream Navitus achieved optimal tissue debulking after 2 BD and 2 BU runs, with no further statistical gain in debulking after the BU2 run. Operators treating ISR with the JetStream Navitus may be advised to limit their debulking to 2 BD and 2 BU runs to achieve optimal debulking. © The Author(s) 2015.

  1. Running Parallel Discrete Event Simulators on Sierra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, P. D.; Jefferson, D. R.

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.

  2. 10 CFR 431.446 - Small electric motors energy conservation standards and their effective dates.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... full load efficiency Capacitor-start capacitor-run and capacitor-start induction-run Open motors... 10 Energy 3 2014-01-01 2014-01-01 false Small electric motors energy conservation standards and... EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Small Electric Motors Energy Conservation...

  3. 10 CFR 431.446 - Small electric motors energy conservation standards and their effective dates.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... full load efficiency Capacitor-start capacitor-run and capacitor-start induction-run Open motors... 10 Energy 3 2012-01-01 2012-01-01 false Small electric motors energy conservation standards and... EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Small Electric Motors Energy Conservation...

  4. 10 CFR 431.446 - Small electric motors energy conservation standards and their effective dates.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... full load efficiency Capacitor-start capacitor-run and capacitor-start induction-run Open motors... 10 Energy 3 2013-01-01 2013-01-01 false Small electric motors energy conservation standards and... EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Small Electric Motors Energy Conservation...

  5. Efficiently Distributing Component-based Applications Across Wide-Area Environments

    DTIC Science & Technology

    2002-01-01

    a variety of sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange...product database Customer Serves as a façade to Order and Account Stateful Session Beans ShoppingCart Maintains list of items to be bought by customer...Pet Store tests; and JBoss 3.0.3 with Jetty 4.1.0, for the RUBiS tests) and a single database server (Oracle 8.1.7 Enterprise Edition), each running

  6. Efficiently Distributing Component-Based Applications Across Wide-Area Environments

    DTIC Science & Technology

    2002-01-01

    Oracle 8.1.7 Enterprise Edition), each running on a dedicated 1GHz dual-processor Pentium III workstation. For the RUBiS tests, we used a MySQL 4.0.12...a variety of sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange...Beans Catalog Handles read-only queries to product database Customer Serves as a façade to Order and Account Stateful Session Beans ShoppingCart

  7. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.

  8. Effects of human running cadence and experimental validation of the bouncing ball model

    NASA Astrophysics Data System (ADS)

    Bencsik, László; Zelei, Ambrus

    2017-05-01

    The biomechanical analysis of human running is a complex problem because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by some fundamental parameters, like step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model when the aim is to estimate the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on the energy efficiency of running and on ground-foot impact intensity. Furthermore, it shows that higher cadence implies lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded phase ratio in different running speed ranges.
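    The cadence effect described above can be illustrated with a back-of-the-envelope ballistic sketch. This is not the authors' bouncing ball model; the aerial (flight) fraction of each step is an assumed constant chosen only for illustration:

```python
G = 9.81  # gravitational acceleration, m/s^2

def impact_speed(cadence_spm, aerial_fraction=0.35):
    """Vertical touchdown speed for a purely ballistic aerial phase.
    Each step lasts 60/cadence seconds; aerial_fraction of it (assumed) is
    flight time, so the vertical landing speed is g * t_flight / 2."""
    step_time = 60.0 / cadence_spm
    t_flight = aerial_fraction * step_time
    return G * t_flight / 2.0

# Higher cadence -> shorter flight phase -> gentler ground-foot impact.
print(round(impact_speed(160), 3), round(impact_speed(180), 3))
```

Under these assumptions, the touchdown speed falls as cadence rises, consistent with the model's conclusion that higher cadence lowers ground-foot impact intensity.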

  9. Performance verification of the CMS Phase-1 Upgrade Pixel detector

    NASA Astrophysics Data System (ADS)

    Veszpremi, V.

    2017-12-01

    The CMS tracker consists of two tracking systems utilizing semiconductor technology: the inner pixel and the outer strip detectors. The tracker detectors occupy the volume around the beam interaction region between 3 cm and 110 cm in radius and up to 280 cm along the beam axis. The pixel detector consists of 124 million pixels, corresponding to about 2 m² total area. It plays a vital role in the seeding of the track reconstruction algorithms and in the reconstruction of primary interactions and secondary decay vertices. It is surrounded by the strip tracker with 10 million read-out channels, corresponding to 200 m² total area. The tracker is operated in a high-occupancy and high-radiation environment established by particle collisions in the LHC. The current strip detector continues to perform very well. The pixel detector that was used in Run 1 and in the first half of Run 2 was, however, replaced with the so-called Phase-1 Upgrade detector. The new system is better suited to match the increased instantaneous luminosity the LHC would reach before 2023. It was built to operate at an instantaneous luminosity of around 2×10³⁴ cm⁻²s⁻¹. The detector's new layout has an additional inner layer with respect to the previous one; it allows for more efficient tracking with a smaller fake rate at higher event pile-up. The paper focuses on the first results obtained during the commissioning of the new detector. It also includes challenges faced during the first data taking to reach the optimal measurement efficiency. Details are given on the performance at high occupancy with respect to observables such as data rate, hit reconstruction efficiency, and resolution.

  10. Centrifugal Contactor Efficiency Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mincher, Bruce Jay; Tillotson, Richard Dean; Grimes, Travis Shane

    2017-01-01

    The contactor efficiency of a 2-cm acrylic centrifugal contactor, fabricated by ANL using 3D printer technology, was measured by comparing a contactor test run to 5-min batch contacts. The aqueous phase was ~3 ppm depleted uranium in 3 M HNO3, and the organic phase was 1 M DAAP/dodecane. Sampling during the contactor run showed that equilibrium was achieved within <3 minutes. The contactor efficiency at equilibrium was 95% to 100%, depending on flow rate.
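    A contactor efficiency measured against batch contacts, as above, is conventionally the ratio of the distribution ratio achieved in the contactor to that of the equilibrium batch contact. A minimal sketch of that arithmetic; the distribution-ratio values below are purely illustrative, not the report's data, and the definition itself is the conventional one rather than one stated in the abstract:

```python
def stage_efficiency(d_contactor, d_batch):
    """Stage efficiency as the ratio of distribution ratios (contactor run
    vs. equilibrium batch contact), expressed as a percentage."""
    return 100.0 * d_contactor / d_batch

# Illustrative distribution ratios only; the report quotes 95-100% efficiency.
print(round(stage_efficiency(19.0, 20.0), 1))  # 95.0
```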

  11. Energy-Efficiency Retrofits in Small-Scale Multifamily Rental Housing: A Business Model

    NASA Astrophysics Data System (ADS)

    DeChambeau, Brian

    The goal of this thesis is to develop a real estate investment model that creates a financial incentive for property owners to perform energy efficiency retrofits in small multifamily rental housing in southern New England. The medium for this argument is a business plan that is backed by a review of the literature and input from industry experts. In addition to industry expertise, the research covers four main areas: the context of green building, efficient building technologies, precedent programs, and the Providence, RI real estate market for the business plan. The thesis concludes that the proposed model can improve the profitability of real estate investment in small multifamily rental properties, though the extent to which this is possible depends partially on utility-run incentive programs and the capital available to invest in retrofit measures.

  12. Spatially explicit shallow landslide susceptibility mapping over large areas

    USGS Publications Warehouse

    Bellugi, Dino; Dietrich, William E.; Stock, Jonathan D.; McKean, Jim; Kazian, Brian; Hargrove, Paul

    2011-01-01

    Recent advances in downscaling climate model precipitation predictions now yield spatially explicit patterns of rainfall that could be used to estimate shallow landslide susceptibility over large areas. In California, the United States Geological Survey is exploring community emergency response to the possible effects of a very large simulated storm event, and to do so it has generated downscaled precipitation maps for the storm. To predict the corresponding pattern of shallow landslide susceptibility across the state, we have used the model Shalstab (a coupled steady-state runoff and infinite slope stability model), which produces spatially explicit estimates of relative potential instability. Slope stability models that include the effects of subsurface runoff on potentially destabilizing pore-pressure evolution require water routing and hence the definition of the upslope drainage area of each cell. To calculate drainage area efficiently over a large area we developed a parallel framework to scale up Shalstab, and specifically introduce a new efficient parallel drainage area algorithm which produces seamless results. The single seamless shallow landslide susceptibility map for all of California was produced in a short run time, indicating that much larger areas can be efficiently modelled. Because landslide maps generally overpredict the extent of instability for any given storm, local empirical data on the fraction of predicted unstable cells that failed for observed rainfall intensity can be used to specify the likely extent of hazard for a given storm. This suggests that campaigns to collect local precipitation data and detailed shallow landslide location maps after major storms could be used to calibrate models and improve their use in hazard assessment for individual storms.
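
    The infinite-slope core of a Shalstab-style calculation can be sketched as follows. This is an illustrative sketch only, not the authors' implementation; the parameter names and default values (density ratio, friction angle) are assumptions. Each cell's critical log10(q/T) (steady rainfall over soil transmissivity) falls as slope and drainage area per unit contour width grow, so lower values mean higher relative instability.

```python
import math

def shalstab_log_qt(slope_rad, area_per_width_m, rho_ratio=1.6,
                    phi_rad=math.radians(35.0)):
    """Critical log10(q/T) for an infinite-slope cell (Shalstab-style).

    slope_rad: local slope angle; area_per_width_m: upslope drainage area per
    unit contour width (a/b, from the water-routing step); rho_ratio: bulk soil
    density over water density; phi_rad: soil friction angle. Lower values mean
    the cell fails under lighter steady rainfall, i.e. higher instability.
    """
    wet = rho_ratio * (1.0 - math.tan(slope_rad) / math.tan(phi_rad))
    if wet <= 0.0:
        return float("-inf")  # unstable even when dry (tan(slope) >= tan(phi))
    qt = math.sin(slope_rad) / area_per_width_m * wet
    return math.log10(qt)
```

    A steep cell with a large contributing area ranks as less stable (lower critical log10(q/T)) than a gentle cell with a small contributing area, which is the relative ordering the susceptibility map encodes.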

  13. A New Concept of Dual Fuelled SI Engines Run on Gasoline and Alcohol

    NASA Astrophysics Data System (ADS)

    Stelmasiak, Zdzisław

    2011-06-01

    The paper discusses tests results of dual-fuel spark ignition engine with multipoint injection of alcohol and gasoline, injected in area of inlet valve. Fuelling of the engine was accomplished via prototype inlet system comprising duplex injectors controlled electronically. Implemented system enables feeding of the engine with gasoline only or alcohol only, and simultaneous combustion of a mixture of the both fuels with any fraction of alcohol. The tests were performed on four cylinders, spark ignition engine of Fiat 1100 MPI type. The paper presents comparative results of dual-fuel engine test when the engine runs on changing fraction of methyl alcohol. The tests have demonstrated an advantageous effect of alcohol additive on efficiency and TCH and NOx emission of the engine, especially in case of bigger shares of the alcohol and higher engine loads.

  14. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    PubMed

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run-time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel i7, and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run-time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run-time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
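
    For reference, the classical O(n³) Nussinov recurrence that all of these variants traverse can be sketched as below (a minimal illustrative sketch, not the authors' code). The cache-efficient ByRow/ByRowSegment/ByBox schemes change only the order in which the dp cells are filled, not the recurrence itself.

```python
def nussinov(seq, min_loop=0):
    """Maximum number of nested complementary base pairs (classical O(n^3) DP)."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]          # dp[i][j]: best pairing of seq[i..j]
    for span in range(1, n):                  # fill by increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]               # case 1: base j left unpaired
            for k in range(i, j - min_loop):  # case 2: base j pairs with base k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```

    The anti-diagonal fill order shown here is exactly what causes the poor cache behavior the paper addresses: dp is touched both by row and by column in each step.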

  15. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters on 1024 × 1024 images with up to 255 × 255 kernels in around 8.4 milliseconds (120 frames per second) at a clock frequency of 250 MHz. The implementation is highly scalable in the kernel size, with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
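
    The HGW trick that makes the comparison count independent of k can be sketched in one dimension as follows (an illustrative software sketch; the paper's FPGA architecture pipelines these scans in hardware). For each k-sample block a running prefix max R and suffix max S are computed; any length-k window then spans at most two blocks, so its max is max(S[j], R[j+k-1]), about three comparisons per sample regardless of k.

```python
import math

def hgw_max(x, k):
    """Causal running max: out[j] = max(x[j], ..., x[j+k-1]) via HGW scans."""
    n = len(x)
    if k <= 1:
        return list(x)
    a = list(x) + [-math.inf] * (k - 1)   # pad so every window is defined
    while len(a) % k:                     # round up to whole k-sample blocks
        a.append(-math.inf)
    R = [0.0] * len(a)                    # prefix max within each block
    S = [0.0] * len(a)                    # suffix max within each block
    for b in range(0, len(a), k):
        R[b] = a[b]
        for i in range(b + 1, b + k):
            R[i] = max(R[i - 1], a[i])
        S[b + k - 1] = a[b + k - 1]
        for i in range(b + k - 2, b - 1, -1):
            S[i] = max(S[i + 1], a[i])
    # any window [j, j+k-1] spans at most two blocks
    return [max(S[j], R[j + k - 1]) for j in range(n)]
```

    A 2-D k × k filter follows by kernel decomposition: run this 1-D filter over the rows, then over the columns of the result.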

  16. 600 C Logic Gates Using Silicon Carbide JFET's

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.; Beheim, Glenn M.; Salupo, Carl S.

    2000-01-01

    Complex electronics and sensors are increasingly being relied on to enhance the capabilities and efficiency of modern jet aircraft. Some of these electronics and sensors monitor and control vital engine components and aerosurfaces that operate at high temperatures above 300 C. However, since today's silicon-based electronics technology cannot function at such high temperatures, these electronics must reside in environmentally controlled areas. This necessitates either the use of long wire runs between sheltered electronics and hot-area sensors and controls, or the fuel cooling of electronics and sensors located in high-temperature areas. Both of these low-temperature-electronics approaches suffer from serious drawbacks in terms of increased weight, decreased fuel efficiency, and reduced aircraft reliability. A family of high-temperature electronics and sensors that could function in hot areas would enable substantial aircraft performance gains, especially since, in the future, some turbine-engine electronics may need to function at temperatures as high as 600 C. This paper reports the fabrication and demonstration of the first semiconductor digital logic gates ever to function at 600 C. Key obstacles blocking the realization of useful 600 C turbine-engine integrated sensor and control electronics are outlined.

  17. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from computational resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
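
    The overlap idea can be illustrated in miniature with a Python stand-in for the C++ runtime (all names here are illustrative, not HPX APIs): the next "communication" future is launched before the current block is processed, so the transfer latency is hidden behind local computation instead of blocking it.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_remote_block(start):
    """Stand-in for a data transfer; the sleep mimics network latency."""
    time.sleep(0.02)
    return list(range(start, start + 4))

def local_compute(block):
    """Stand-in for local work (sum of squares of a block)."""
    return sum(v * v for v in block)

results = []
with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(fetch_remote_block, 0)       # first transfer in flight
    for i in range(1, 4):
        nxt = pool.submit(fetch_remote_block, i * 4)  # next transfer overlaps...
        results.append(local_compute(future.result()))  # ...with this compute
        future = nxt
    results.append(local_compute(future.result()))
```

    HPX generalizes this pattern with lightweight tasks and futures across distributed nodes rather than a single thread pool.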

  18. Ethanol production in small- to medium-size facilities

    NASA Astrophysics Data System (ADS)

    Hiler, E. A.; Coble, C. G.; Oneal, H. P.; Sweeten, J. M.; Reidenbach, V. G.; Schelling, G. T.; Lawhon, J. T.; Kay, R. D.; Lepori, W. A.; Aldred, W. H.

    1982-04-01

    In early 1980, system design criteria were developed for a small-scale ethanol production plant. The plant was eventually installed on November 1, 1980. It has a production capacity of 30 liters per hour; this can be increased easily (if desired) to 60 liters per hour with additional fermentation tanks. Sixty-six test runs have been conducted to date in the alcohol production facility. Feedstocks evaluated in these tests include: corn (28 runs); grain sorghum (33 runs); grain sorghum grits (1 run); half corn/half sorghum (1 run); and sugarcane juice (3 runs). In addition, a small bench-scale fermentation and distillation system was used to evaluate sugarcane and sweet sorghum feedstocks prior to their evaluation in the larger unit. In each of these tests, the following items were evaluated: preprocessing requirements; operational problems; conversion efficiency (for example, liters of alcohol produced per kilogram of feedstock); energy balance and efficiency; nutritional recovery from stillage; solids separation by screw press; chemical characterization of stillage including liquid and solids fractions; wastewater requirements; and air pollution potential.

  19. Large area thinned planar sensors for future high-luminosity-LHC upgrades

    NASA Astrophysics Data System (ADS)

    Wittig, T.; Lawerenz, A.; Röder, R.

    2016-12-01

    Planar hybrid silicon sensors are a well proven technology for past and current particle tracking detectors in HEP experiments. However, the future high-luminosity upgrades of the inner trackers at the LHC experiments pose big challenges to the detectors. A first challenge is an expected radiation damage level of up to 2×10¹⁶ neq/cm². For planar sensors, one way to counteract the charge loss and thus increase the radiation hardness is to decrease the thickness of their active area. A second challenge is the large detector area, which has to be built as cost-efficiently as possible. The CiS research institute has accomplished a proof-of-principle run with n-in-p ATLAS-Pixel sensors in which a cavity is etched into the sensor's back side to reduce its thickness. One advantage of this technology is the fact that thick frames remain at the sensor edges and guarantee mechanical stability on wafer level while the sensor is left on the resulting thin membrane. For this cavity etching technique, no handling wafers are required, which represents a benefit in terms of process effort and cost savings. The membranes with areas of up to ~4 × 4 cm² and thicknesses of 100 and 150 μm feature a sufficiently good homogeneity across the whole wafer area. The processed pixel sensors show good electrical behaviour with an excellent yield for such a prototype run. First sensors with electroless Ni- and Pt-UBM have already been successfully assembled with read-out chips.

  20. NONMEMory: a run management tool for NONMEM.

    PubMed

    Wilkins, Justin J

    2005-06-01

    NONMEM is an extremely powerful tool for nonlinear mixed-effect modelling and simulation of pharmacokinetic and pharmacodynamic data. However, it is a console-based application whose output does not lend itself to rapid interpretation or efficient management. NONMEMory has been created to be a comprehensive project manager for NONMEM, providing detailed summary, comparison and overview of the runs comprising a given project, including the display of output data, simple post-run processing, fast diagnostic plots and run output management, complementary to other available modelling aids. Analysis time ought not to be spent on trivial tasks, and NONMEMory's role is to eliminate these as far as possible by increasing the efficiency of the modelling process. NONMEMory is freely available from http://www.uct.ac.za/depts/pha/nonmemory.php.

  1. Fast discovery and visualization of conserved regions in DNA sequences using quasi-alignment

    PubMed Central

    2013-01-01

    Background Next Generation Sequencing techniques are producing enormous amounts of biological sequence data, and analysis becomes a major computational problem. Currently, most analysis, especially the identification of conserved regions, relies heavily on Multiple Sequence Alignment and its various heuristics such as progressive alignment, whose run time grows with the square of the number and the length of the aligned sequences and which requires significant computational resources. In this work, we present a method to efficiently discover regions of high similarity across multiple sequences without performing expensive sequence alignment. The method is based on approximating edit distance between segments of sequences using p-mer frequency counts. Then, efficient high-throughput data stream clustering is used to group highly similar segments into so-called quasi-alignments. Quasi-alignments have numerous applications such as identifying species and their taxonomic class from sequences, comparing sequences for similarities, and, as in this paper, discovering conserved regions across related sequences. Results In this paper, we show that quasi-alignments can be used to discover highly similar segments across multiple sequences from related or different genomes efficiently and accurately. Experiments on a large number of unaligned 16S rRNA sequences obtained from the Greengenes database show that the method is able to identify conserved regions which agree with known hypervariable regions in 16S rRNA. Furthermore, the experiments show that the proposed method scales well for large data sets, with a run time that grows only linearly with the number and length of sequences, whereas for existing multiple sequence alignment heuristics the run time grows super-linearly. Conclusion Quasi-alignment-based algorithms can detect highly similar regions and conserved areas across multiple sequences. 
Since the run time is linear and the sequences are converted into a compact clustering model, we are able to identify conserved regions fast or even interactively using a standard PC. Our method has many potential applications such as finding characteristic signature sequences for families of organisms and studying conserved and variable regions in, for example, 16S rRNA. PMID:24564200
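
    The p-mer approximation at the heart of this approach can be sketched as below (an illustrative sketch, not the authors' implementation): each segment is summarized by its p-mer count profile, and the L1 distance between profiles serves as a cheap proxy for edit distance. The standard q-gram bound guarantees that edit distance is at least the profile distance divided by 2p, so segments with distant profiles cannot be close alignments.

```python
from collections import Counter

def pmer_profile(seq, p=3):
    """Count profile of all overlapping p-mers in a sequence segment."""
    return Counter(seq[i:i + p] for i in range(len(seq) - p + 1))

def profile_distance(a, b, p=3):
    """L1 distance between p-mer profiles; a cheap proxy for edit distance.
    (q-gram bound: edit_distance >= profile_distance / (2 * p))"""
    ca, cb = pmer_profile(a, p), pmer_profile(b, p)
    return sum(abs(ca[m] - cb[m]) for m in set(ca) | set(cb))
```

    Segments whose profiles fall within a clustering threshold of an existing cluster centre are grouped into the same quasi-alignment; no pairwise alignment is ever computed.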

  2. Fast discovery and visualization of conserved regions in DNA sequences using quasi-alignment.

    PubMed

    Nagar, Anurag; Hahsler, Michael

    2013-01-01

    Next Generation Sequencing techniques are producing enormous amounts of biological sequence data, and analysis becomes a major computational problem. Currently, most analysis, especially the identification of conserved regions, relies heavily on Multiple Sequence Alignment and its various heuristics such as progressive alignment, whose run time grows with the square of the number and the length of the aligned sequences and which requires significant computational resources. In this work, we present a method to efficiently discover regions of high similarity across multiple sequences without performing expensive sequence alignment. The method is based on approximating edit distance between segments of sequences using p-mer frequency counts. Then, efficient high-throughput data stream clustering is used to group highly similar segments into so-called quasi-alignments. Quasi-alignments have numerous applications such as identifying species and their taxonomic class from sequences, comparing sequences for similarities, and, as in this paper, discovering conserved regions across related sequences. In this paper, we show that quasi-alignments can be used to discover highly similar segments across multiple sequences from related or different genomes efficiently and accurately. Experiments on a large number of unaligned 16S rRNA sequences obtained from the Greengenes database show that the method is able to identify conserved regions which agree with known hypervariable regions in 16S rRNA. Furthermore, the experiments show that the proposed method scales well for large data sets, with a run time that grows only linearly with the number and length of sequences, whereas for existing multiple sequence alignment heuristics the run time grows super-linearly. Quasi-alignment-based algorithms can detect highly similar regions and conserved areas across multiple sequences. 
Since the run time is linear and the sequences are converted into a compact clustering model, we are able to identify conserved regions fast or even interactively using a standard PC. Our method has many potential applications such as finding characteristic signature sequences for families of organisms and studying conserved and variable regions in, for example, 16S rRNA.

  3. 76 FR 52972 - United States v. Regal Beloit Corp. and A.O. Smith Corp.; Proposed Final Judgment and Competitive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-24

    ... magnet technology, thereby allowing the motor to run more efficiently. 15. Motors sold for use in pool...-efficient motors because pool pumps typically run for many hours a day, sometimes even continuously. Pool... and fan blades are among the more difficult design aspects of furnace draft inducers. 51. Furnaces are...

  4. Application configuration selection for energy-efficient execution on multicore systems

    DOE PAGES

    Wang, Shinan; Luo, Bing; Shi, Weisong; ...

    2015-09-21

    Balanced performance and energy consumption are incorporated in the design of modern computer systems. Several run-time factors, such as concurrency levels, thread mapping strategies, and dynamic voltage and frequency scaling (DVFS), should be considered in order to achieve optimal energy efficiency for a workload. Selecting appropriate run-time factors, however, is one of the most challenging tasks because the run-time factors are architecture-specific and workload-specific. While most existing works concentrate on either static analysis of the workload or run-time prediction results, we present a hybrid two-step method that utilizes concurrency levels and DVFS settings to achieve the energy-efficient configuration for a workload. The experimental results based on a Xeon E5620 server with the NPB and PARSEC benchmark suites show that the model is able to predict the energy-efficient configuration accurately. On average, an additional 10% EDP (Energy Delay Product) saving is obtained by using run-time DVFS for the entire system. An off-line optimal solution is used to compare with the proposed scheme. Finally, the experimental results show that the average extra EDP saved by the optimal solution is within 5% on selective parallel benchmarks.
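
    The selection criterion can be illustrated with a toy sketch (the measurements below are hypothetical; the paper's hybrid method predicts such values rather than measuring every configuration): each (concurrency, DVFS) configuration is scored by its Energy Delay Product, and the configuration with the lowest EDP wins.

```python
def edp(energy_j, delay_s):
    """Energy Delay Product: lower is better (penalizes both waste and slowness)."""
    return energy_j * delay_s

# Hypothetical measurements for three (concurrency, DVFS) configurations.
configs = [
    {"threads": 2, "freq_ghz": 2.4, "energy_j": 150.0, "delay_s": 4.0},  # EDP 600
    {"threads": 4, "freq_ghz": 2.4, "energy_j": 170.0, "delay_s": 2.2},  # EDP 374
    {"threads": 4, "freq_ghz": 1.6, "energy_j": 140.0, "delay_s": 3.0},  # EDP 420
]
best = min(configs, key=lambda c: edp(c["energy_j"], c["delay_s"]))
```

    Note that neither the lowest-energy nor the fastest configuration necessarily wins; EDP trades the two off, which is why the concurrency level and DVFS setting must be chosen jointly.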

  5. The design of sport and touring aircraft

    NASA Technical Reports Server (NTRS)

    Eppler, R.; Guenther, W.

    1984-01-01

    General considerations concerning the design of a new aircraft are discussed, taking into account the objective to develop an aircraft that can economically satisfy a certain spectrum of tasks. Requirements related to the design of sport and touring aircraft in the past included mainly a high cruising speed and short take-off and landing runs. Additional requirements for new aircraft are now low fuel consumption and optimal efficiency. A computer program for the computation of flight performance makes it possible to vary automatically a number of parameters, such as flight altitude, wing area, and wing span. The appropriate design characteristics are to a large extent determined by the selection of the flight altitude. Three different wing profiles are compared. Potential improvements with respect to the performance of the aircraft and its efficiency are related to the use of fiber composites, the employment of better propeller profiles, more efficient engines, and the utilization of suitable instrumentation for optimal flight management.

  6. Investigations on the carbon contaminations on the alkali cells of DPAL with hydrocarbon buffer gas

    NASA Astrophysics Data System (ADS)

    Li, Zhiyong; Tan, Rongqing; Wang, Yujie; Ye, Qing; Bian, Jintian; Huang, Wei; Li, Hui; Han, Gaoce

    2017-10-01

    Diode pumped alkali lasers (DPALs) with hydrocarbon buffer gases feature low threshold and high efficiency. The chemical reaction between the alkali and the hydrocarbon gases affects the lifetime of a DPAL. In this paper, a method based on Fourier transform infrared spectroscopy and the Lambert-Beer law is adopted to find a safe temperature at which a DPAL can run long-term. A theoretical model is established to figure out ways to reduce the peak temperature in the cell window. The results indicate that 170 °C is a safe temperature. Although the absorbance of the cell window at the pump-light and alkali-laser wavelengths is low, there is still a temperature increase. A small light-transmitting area and air blowing on the windows can reduce the peak temperature effectively. Cooling the cell window is essential and critical for a long-term running DPAL.

  7. Financial advantages. Preventative measures ensure the health of your accounts receivable.

    PubMed

    Duda, Michelle

    2009-11-01

    Running a dental practice is no small task, from staying on the leading edge of new medical developments and products, to monitoring ever-changing dental insurance plans, to simply overseeing the fundamental day-to-day operations. But there is one area of your practice that can be streamlined to significantly improve your cash flow, minimize delinquencies and optimize fiscal operations. Your accounts receivable and collections can be economically and efficiently managed by a savvy combination of internal efforts and the partnership of a third-party resource.

  8. European Scientific Notes, Volume 35, Number 11

    DTIC Science & Technology

    1981-11-30

    US Office of Naval Research Branch Office London, Box 39… presentation in both sides of the tree. Such methods can be spectacularly efficient: in one case a run was reduced to a few minutes. One talk was "Non-Standard Uses of the word If," by D.S. Bred and R. Smit, in which a probability sample of if uses was analyzed; of these, some 60 percent of the If's were… At least two game programs are now…

  9. The Study of the Impacts of "Running" on the Contact Area of Soles and Maximal Strength among Elite Middle Distance Runners

    ERIC Educational Resources Information Center

    Uzun, Ahmet; Aydos, Latif; Kaya, Metin; Yuksel, Mehmet Fatih; Pekel, Haci Ahmet

    2017-01-01

    It is possible that years of running training in athletics affect athletes' running patterns and sole structure. The main aim of this study is to examine the maximal force applied to the ground and the sole contact area of athletes in relation to middle-distance training. 18 male athletes who represent Turkey on the…

  10. 33 CFR 100.718 - Annual Suncoast Kilo Run; Sarasota Bay, Sarasota, FL.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Annual Suncoast Kilo Run; Sarasota Bay, Sarasota, FL. 100.718 Section 100.718 Navigation and Navigable Waters COAST GUARD, DEPARTMENT... Suncoast Kilo Run; Sarasota Bay, Sarasota, FL. (a) Regulated area. The regulated area is established in...

  11. [Foundations--a means for implementing strategic goals and measures in public health].

    PubMed

    Brand, A; Brand, H

    2000-03-01

    Social innovations are happening in many critical areas. Foundations make an enduring contribution towards increasing access to innovations in public welfare, based on the philosophy that state-run organizations are neither efficient nor responsive to people's changing needs. In this sense, foundations help to close the gap by turning hitherto tolerated conditions into problems and claims to action. The effectiveness of voluntary bodies as advocates of change owes much to their informal nature. In Germany, voluntary action in Public Health is still underrepresented. Therefore, donors should receive more support from state government through adequate regulations. In addition, funders need to concentrate their efforts on the limited number of areas where they can have the greatest impact.

  12. Performance of a small compression ignition engine fuelled by liquified petroleum gas

    NASA Astrophysics Data System (ADS)

    Ambarita, Himsar; Yohanes Setyawan, Eko; Ginting, Sibuk; Naibaho, Waldemar

    2017-09-01

    In this work, a small air-cooled single-cylinder diesel engine with a rated power of 2.5 kW at 3000 rpm is tested in two different modes. In the first mode, the CI engine runs on diesel fuel. In the second mode, the CI engine runs on liquified petroleum gas (LPG). In order to simulate the load, a generator is employed. The load is fixed at 800 W and the engine speed varies from 2400 rpm to 3400 rpm. The output power, specific fuel consumption, and brake thermal efficiency of the engine in both modes are compared. The results show that the output power of the CI engine run on LPG fuel is comparable with that of the engine run on diesel fuel. However, the specific fuel consumption of the CI engine with LPG fuel is higher by 17.53% on average in comparison with the CI engine run on diesel fuel. The efficiency of the CI engine with LPG fuel is lower by 21.43% on average in comparison with the CI engine run on diesel fuel.
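
    The link between specific fuel consumption and brake thermal efficiency can be made concrete with a back-of-envelope sketch (the BSFC and heating-value numbers below are illustrative assumptions, not the paper's data): efficiency is the useful output energy (3.6 MJ per kWh) divided by the fuel energy consumed per kWh.

```python
def brake_thermal_efficiency(bsfc_kg_per_kwh, lhv_mj_per_kg):
    """eta = useful output energy (3.6 MJ per kWh) / fuel energy consumed."""
    return 3.6 / (bsfc_kg_per_kwh * lhv_mj_per_kg)

# Illustrative values only: typical lower heating values of ~42.5 MJ/kg for
# diesel and ~46 MJ/kg for LPG, with an assumed higher BSFC on LPG.
eta_diesel = brake_thermal_efficiency(0.30, 42.5)
eta_lpg = brake_thermal_efficiency(0.35, 46.0)  # higher BSFC -> lower efficiency
```

    This is why a higher mass-based specific fuel consumption on LPG translates into a lower brake thermal efficiency even though LPG's heating value per kilogram is somewhat higher than diesel's.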

  13. Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David

    1995-01-01

    Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.

  14. A variational conformational dynamics approach to the selection of collective variables in metadynamics.

    PubMed

    McCarty, James; Parrinello, Michele

    2017-11-28

    In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal, leading to slow convergence. However, by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
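
    The signal such a time-lagged analysis looks for can be illustrated with a single-variable sketch (a deliberate simplification; real TICA solves a generalized eigenvalue problem over time-lagged covariance matrices of many candidate variables): a good slow collective variable retains high autocorrelation at the lag time, while a fast, poorly chosen one decorrelates immediately.

```python
def lagged_autocorr(x, lag):
    """Normalized autocorrelation of series x at the given lag."""
    n = len(x) - lag
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return sum((x[i] - mu) * (x[i + lag] - mu) for i in range(n)) / (n * var)

# Toy trajectories of two candidate collective variables:
slow = [0.0] * 20 + [1.0] * 20   # one rare transition: a slow, informative CV
fast = [0.0, 1.0] * 20           # rapid flipping: a poor CV for rare events
```

    TICA generalizes this idea: the linear combinations of candidate variables with the largest time-lagged autocorrelation are the slowest modes, and these are the ones worth biasing in a subsequent metadynamics run.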

  15. A variational conformational dynamics approach to the selection of collective variables in metadynamics

    NASA Astrophysics Data System (ADS)

    McCarty, James; Parrinello, Michele

    2017-11-01

    In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal, leading to slow convergence. However, by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.

  16. Seasonal and Interannual Variations of Evaporation and their Relations with Precipitation, Net Radiation, and Net Carbon Accumulation for the Gediz Basin Area

    NASA Technical Reports Server (NTRS)

    Choudhury, Bhaskar J.

    1999-01-01

    A model combining the rate of carbon assimilation with water and energy balance equations has been run using satellite and ancillary data for a period of 60 months (January 1986 to December 1990). Calculations for the Gediz basin area give mean annual evaporation as 395 mm, which is composed of 45% transpiration, 42% soil evaporation and 13% interception. The coefficient of interannual variation of evaporation is found to be 6%, while those for precipitation and net radiation are, respectively, 16% and 2%, illustrating that net radiation has an important effect in modulating interannual variation of evaporation. The mean annual water use efficiency (i.e., the ratio of net carbon accumulation to total evaporation) is ca. 1 g/sq m/mm, and has a coefficient of interannual variation of 5%. A comparison of the mean water use efficiency with field observations suggests that evaporation over the area is utilized well for biomass production. The reference crop evaporation for irrigated areas has an annual mean of 1176 mm and a coefficient of variation of 3%. The total evaporation during the three summer months of peak evaporation (June-August) is estimated to be about 575 mm for irrigated crops like maize and cotton. Seasonal variations of the fluxes are presented.

  17. CE-MS analysis of heroin and its basic impurities using a charged polymer-protected gold nanoparticle-coated capillary.

    PubMed

    Zhang, Zhengxiang; Yan, Bo; Liu, Kelin; Liao, Yiping; Liu, Huwei

    2009-01-01

    The first application of charged polymer-protected gold nanoparticles (Au NPs) as semi-permanent capillary coating in CE-MS was presented. Poly(diallyldimethylammonium chloride) (PDDA) was the only reducing and stabilizing agent for Au NPs preparation. Stable and repeatable coating with good tolerance to 0.1 M HCl, methanol, and ACN was obtained via a simple rinsing procedure. Au NPs enhanced the coating stability toward flushing by methanol, improved the run-to-run and capillary-to-capillary repeatabilities, and improved the separation efficiency of heroin and its basic impurities for tracing geographical origins of illicit samples. Baseline resolution of eight heroin-related alkaloids was achieved on the PDDA-protected Au NPs-coated capillary under the optimum conditions: 120 mM ammonium acetate (pH 5.2) with addition of 13% methanol, separation temperature 20 degrees C, applied voltage -20 kV, and capillary effective length 60.0 cm. CE-MS analysis with run-to-run RSDs (n=5) of migration time in the range of 0.43-0.62% and RSDs (n=5) of peak area in the range of 1.49-4.68% was obtained. The established CE-MS method would offer sensitive detection and confident identification of heroin and related compounds and provide an alternative to LC-MS and GC-MS for illicit drug control.

  18. Endurance exercise performance: the physiology of champions

    PubMed Central

    Joyner, Michael J; Coyle, Edward F

    2008-01-01

    Efforts to understand human physiology through the study of champion athletes and record performances have been ongoing for about a century. For endurance sports, three main factors – maximal oxygen consumption (VO2max), the so-called 'lactate threshold' and efficiency (i.e. the oxygen cost to generate a given running speed or cycling power output) – appear to play key roles in endurance performance. VO2max and the lactate threshold interact to determine the 'performance VO2', which is the oxygen consumption that can be sustained for a given period of time. Efficiency interacts with the performance VO2 to establish the speed or power that can be generated at this oxygen consumption. This review focuses on what is currently known about how these factors interact, their utility as predictors of elite performance, and areas where there is relatively less information to guide current thinking. In this context, definitive ideas about the physiological determinants of running and cycling efficiency are relatively lacking in comparison with VO2max and the lactate threshold, and there is surprisingly limited and clear information about the genetic factors that might predispose to elite performance. It should also be cautioned that complex motivational and sociological factors play important roles in who does or does not become a champion, and these factors go far beyond simple physiological explanations. Therefore, the performance of elite athletes is likely to defy the types of easy explanations sought by scientific reductionism and remain an important puzzle for those interested in physiological integration well into the future. PMID:17901124

  19. Efficient and flexible memory architecture to alleviate data and context bandwidth bottlenecks of coarse-grained reconfigurable arrays

    NASA Astrophysics Data System (ADS)

    Yang, Chen; Liu, LeiBo; Yin, ShouYi; Wei, ShaoJun

    2014-12-01

    The computational capability of a coarse-grained reconfigurable array (CGRA) can be significantly restrained by data and context memory bandwidth bottlenecks. Traditionally, two methods have been used to resolve this problem. One method loads the context into the CGRA at run time. This method occupies very small on-chip memory but induces very large latency, which leads to low computational efficiency. The other method adopts a multi-context structure: the context is loaded into the on-chip context memory at the boot phase, and broadcasting the pointer of a set of contexts changes the hardware configuration on a cycle-by-cycle basis. The size of the context memory induces a large area overhead in multi-context structures, which places major restrictions on application complexity. This paper proposes a Predictable Context Cache (PCC) architecture to address these context issues by buffering the context inside the CGRA. In this architecture, context is dynamically transferred into the CGRA. Utilizing a PCC significantly reduces the on-chip context memory, and the complexity of the applications running on the CGRA is no longer restricted by its size. For the data bandwidth issue, data preloading is the most frequently used approach to hide input data latency and speed up data transmission; rather than fundamentally reducing the amount of input data, it processes data transfer and computation in parallel. However, data preloading cannot work efficiently because data transmission becomes the critical path as the reconfigurable array scale increases. This paper also presents a Hierarchical Data Memory (HDM) architecture as a solution to this efficiency problem. In this architecture, high internal bandwidth is provided to buffer both reused input data and intermediate data. The HDM architecture relieves the external memory of the data transfer burden, so that performance is significantly improved. As a result of using PCC and HDM, experiments running mainstream video decoding programs achieved performance improvements of 13.57%-19.48% with a reasonable memory size. H.264 high-profile video decoding at 1080p@35.7 fps can thus be achieved on the PCC and HDM architecture at a 200 MHz working frequency. Furthermore, the size of the on-chip context memory no longer restricts complex applications, which are executed efficiently on the PCC and HDM architecture.

  20. Development of Analytical Algorithm for the Performance Analysis of Power Train System of an Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon

    Power train system design is one of the key R&D areas in the development process of a new automobile, because an optimum engine size with an adaptable power transmission that meets the design requirements of the new vehicle can be obtained through the system design. For electric vehicle design in particular, a highly reliable design algorithm for the power train system is required for energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power train system of an electric vehicle. The principal theory of the simulation algorithm is conservation of energy, with several analytical and experimental data such as rolling resistance, aerodynamic drag, and the mechanical efficiency of the power transmission. From the analytical calculation results, the running resistance of a designed vehicle is obtained as a function of the operating conditions of the vehicle, such as road inclination angle and vehicle speed. The tractive performance of the model vehicle with a given power train system is also calculated at each gear ratio of the transmission. Through analysis of these two calculation results, running resistance and tractive performance, the driving performance of a designed electric vehicle is estimated, and this estimate is used to evaluate the suitability of the designed power train system for the vehicle.
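
    The road-load calculation described above can be sketched as follows. The coefficient values below (rolling resistance, drag coefficient, frontal area, driveline efficiency, wheel radius) are illustrative assumptions, not figures from the study:

```python
import math

def running_resistance(mass_kg, speed_mps, grade_rad,
                       c_rr=0.012, cd=0.30, frontal_area_m2=2.2,
                       air_density=1.225, g=9.81):
    """Total road-load force [N]: rolling + grade + aerodynamic drag."""
    rolling = mass_kg * g * c_rr * math.cos(grade_rad)
    grade = mass_kg * g * math.sin(grade_rad)
    aero = 0.5 * air_density * cd * frontal_area_m2 * speed_mps ** 2
    return rolling + grade + aero

def tractive_force(motor_torque_nm, gear_ratio, final_drive,
                   efficiency, wheel_radius_m):
    """Force at the wheel for a given motor torque and gear ratio [N]."""
    return motor_torque_nm * gear_ratio * final_drive * efficiency / wheel_radius_m

# Example: a hypothetical 1500 kg EV at 100 km/h on a level road
resistance = running_resistance(1500, 100 / 3.6, 0.0)
drive = tractive_force(150, 8.0, 1.0, 0.92, 0.31)
surplus = drive - resistance  # > 0 means the vehicle can still accelerate
```

    Comparing the two curves over speed and grade, as the abstract describes, yields the estimated driving performance at each gear ratio.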

  1. Ontological Model of Business Process Management Systems

    NASA Astrophysics Data System (ADS)

    Manoilov, G.; Deliiska, B.

    2008-10-01

    The activities which constitute business process management (BPM) can be grouped into five categories: design, modeling, execution, monitoring and optimization. Dedicated software packages for business process management systems (BPMS) are available on the market, but the efficiency of their exploitation depends on the ontological model used at development time and run time. In this article an ontological model of a BPMS in the area of the software industry is investigated. The model building is preceded by a conceptualization of the domain and a taxonomy of BPMS development. On the basis of the taxonomy, a simple online thesaurus is created.

  2. Geographic Information System and Remote Sensing Approach with Hydrologic Rational Model for Flood Event Analysis in Jakarta

    NASA Astrophysics Data System (ADS)

    Aditya, M. R.; Hernina, R.; Rokhmatuloh

    2017-12-01

    Rapid development in Jakarta, which generates more impervious surface, has reduced the amount of rainfall infiltrating into the soil layer and increased run-off. In some events, continuous high-intensity rainfall can create sudden floods in Jakarta City. This article used rainfall data for Jakarta during 10 February 2015 to compute rainfall intensity, which was then interpolated with the ordinary kriging technique. The spatial distribution of rainfall intensity was then overlaid with run-off coefficients based on the land use type of the study area. Peak run-off within each cell, computed with the hydrologic rational model, was then summed over the whole study area to generate the total peak run-off. For this study area, land use consisted of 51.9% industrial, 37.57% parks, and 10.54% residential, with estimated total peak run-off of 6.04 m3/sec, 0.39 m3/sec, and 0.31 m3/sec, respectively.
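
    The cell-wise rational-method computation can be sketched as follows (Q = 0.278 · C · i · A, with intensity i in mm/h and area A in km², giving Q in m³/s). The run-off coefficients and inputs below are illustrative assumptions, not the study's values:

```python
# Assumed run-off coefficients per land use type (illustrative only)
RUNOFF_COEFF = {"industrial": 0.80, "parks": 0.20, "residential": 0.50}

def peak_runoff(cells):
    """Sum rational-method peak run-off (m^3/s) over grid cells.

    cells: iterable of (land_use, intensity_mm_per_h, area_km2) tuples.
    """
    return sum(0.278 * RUNOFF_COEFF[use] * i * a for use, i, a in cells)

# Hypothetical cells under a uniform 30 mm/h rainfall intensity
q = peak_runoff([("industrial", 30.0, 0.9),
                 ("parks", 30.0, 0.65),
                 ("residential", 30.0, 0.18)])
```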

  3. Providing Specialty Care for the Poor and Underserved at Student-Run Free Clinics in the San Francisco Bay Area.

    PubMed

    Liu, Max Bolun; Xiong, Grace; Boggiano, Victoria Lynn; Ye, Patrick Peiyong; Lin, Steven

    2017-01-01

    This report describes the model of specialty clinics implemented at Stanford University's two student-run free clinics, Arbor Free Clinic and Pacific Free Clinic, in the San Francisco Bay Area. We describe our patient demographic characteristics and the specialty services provided. We discuss challenges in implementing specialty care at student-run free clinics.

  4. The relationship between gamma frequency and running speed differs for slow and fast gamma rhythms in freely behaving rats

    PubMed Central

    Zheng, Chenguang; Bieri, Kevin Wood; Trettel, Sean Gregory; Colgin, Laura Lee

    2015-01-01

    In hippocampal area CA1 of rats, the frequency of gamma activity has been shown to increase with running speed (Ahmed and Mehta, 2012). This finding suggests that different gamma frequencies simply allow for different timings of transitions across cell assemblies at varying running speeds, rather than serving unique functions. However, accumulating evidence supports the conclusion that slow (~25–55 Hz) and fast (~60–100 Hz) gamma are distinct network states with different functions. If slow and fast gamma constitute distinct network states, then it is possible that slow and fast gamma frequencies are differentially affected by running speed. In this study, we tested this hypothesis and found that slow and fast gamma frequencies change differently as a function of running speed in hippocampal areas CA1 and CA3, and in the superficial layers of the medial entorhinal cortex (MEC). Fast gamma frequencies increased with increasing running speed in all three areas. Slow gamma frequencies changed significantly less across different speeds. Furthermore, at high running speeds, CA3 firing rates were low, and MEC firing rates were high, suggesting that CA1 transitions from CA3 inputs to MEC inputs as running speed increases. These results support the hypothesis that slow and fast gamma reflect functionally distinct states in the hippocampal network, with fast gamma driven by MEC at high running speeds and slow gamma driven by CA3 at low running speeds. PMID:25601003

  5. AGS vertical beta function measurements for Run 15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harper, C.; Ahrens, L.; Huang, H.

    2016-10-07

    One key to running the AGS efficiently is maintaining a low emittance. To measure the emittance, one needs to measure the beta function throughout the cycle, which can be done at the ionization profile monitors (IPMs) in the AGS. This tech note covers the motivation, the measurement, and some strides made throughout Run 15.

  6. Job Priorities on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    allocation when run with qos=high. Requesting a Node Reservation If you are doing work that requires real scheduler more efficiently plan resources for larger jobs. When projects reach their allocation limit, jobs associated with those projects will run at very low priority, which will ensure that these jobs run only when

  7. Home care business management software not just for scheduling.

    PubMed

    Morey, Rick

    2012-10-01

    Rule number one for running a successful, profitable home care company: it is essential to have an efficient, cost-effective administrative operation. A hard fact of the home care industry is that the location of an agency, to a large extent, dictates the billing rates as well as caregiver pay. Therefore, agency profitability is primarily dependent on how efficiently the company is run. Software, used in the right way, helps agencies become more productive and more profitable.

  8. Lubricant Effects on Efficiency of a Helicopter Transmission

    NASA Technical Reports Server (NTRS)

    Mitchell, A. M.; Coy, J. J.

    1982-01-01

    Eleven different lubricants were used in efficiency tests conducted on the OH-58A helicopter main transmission using the NASA Lewis Research Center's 500 hp torque-regenerative helicopter transmission test stand. Tests were run at oil-in temperatures of 355 K and 372 K. The efficiency was calculated from a heat balance on the water running through an oil-to-water heat exchanger, while the transmission itself was heavily insulated. Results show an efficiency range from 98.3% to 98.8%, which is a 50% variation relative to the losses associated with the maximum efficiency measured. For a given lubricant, the efficiency increased as temperature increased and viscosity decreased; there were two exceptions which could not be explained. Between lubricants, efficiency was not correlated with viscosity, and there were relatively large variations in efficiency among the different lubricants, whose viscosities generally fell in the 5 to 7 centistoke range. The lubricants had no significant effect on the vibration signature of the transmission.

  9. Stochastic Modelling of Wireless Energy Transfer

    NASA Technical Reports Server (NTRS)

    Veilleux, Shaun; Almaghasilah, Ahmed; Abedi, Ali; Wilkerson, DeLisa

    2017-01-01

    This study investigates the efficiency of a new method of powering remote sensors by means of wireless energy transfer. The increased use of sensors for data collection comes with the inherent cost of supplying power from sources such as power cables or batteries. Wireless energy transfer technology eliminates the need for power cables or periodic battery replacement. The time and cost of setting up or expanding a sensor network are reduced, while sensors can be placed in areas where running power cables or replacing batteries is not feasible. This paper models the wireless channels for power and data separately. Smart scheduling for the data channel is proposed to avoid transmitting data on a noisy channel, where the probability of data loss is high, thereby improving power efficiency. Analytical models have been developed and verified using simulations.
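
    The smart-scheduling idea, skipping slots known to be noisy so that transmit energy is not wasted on lost packets, can be illustrated with a toy Monte Carlo model. The memoryless loss channel and unit energy cost below are assumptions for illustration, not the paper's analytical model:

```python
import random

def energy_per_delivered(n_slots, p_bad, tx_energy, gated, seed=1):
    """Monte Carlo sketch: energy spent per successfully delivered packet.

    The channel is 'bad' (packet lost) with probability p_bad in each slot.
    A gated (smart-scheduled) sender skips slots it knows are bad; a naive
    sender transmits every slot and wastes energy on losses.
    """
    rng = random.Random(seed)
    spent, delivered = 0.0, 0
    for _ in range(n_slots):
        bad = rng.random() < p_bad
        if gated and bad:
            continue  # skip the known-noisy slot, saving energy
        spent += tx_energy
        if not bad:
            delivered += 1
    return spent / delivered

naive = energy_per_delivered(100_000, 0.3, 1.0, gated=False)
smart = energy_per_delivered(100_000, 0.3, 1.0, gated=True)
# smart spends ~1 unit per delivered packet; naive spends ~1/(1-p_bad)
```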

  10. Large Area Silicon Sheet by EFG. [quality control and productivity of edge-defined film-fed growth of ribbons

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Influences on ribbon quality which might be caused by the various materials of construction used in the growth furnace were assessed. At the present level of ribbon quality, which has produced 8.5% to 9.5% efficient solar cells, no particular influence of any furnace part was detected. The experiments led to the suspicion that the general environment and the somewhat unoptimized materials handling procedures might be responsible for the current variations in ribbon quality, and that, therefore, continued work with this furnace under more stringent environmental conditions and operating procedures could improve materials quality to some extent. The work on the multiple furnace was continued with two multiple growth runs being performed. In these runs the melt replenishment system performed poorly, and extensive modifications to it were designed to make reliable melt feeding for five-ribbon growth possible. Additional characterization techniques for wide ribbons, stress measurements, and growth dynamics experiments are reported.

  11. CMS event processing multi-core efficiency status

    NASA Astrophysics Data System (ADS)

    Jones, C. D.; CMS Collaboration

    2017-10-01

    In 2015, CMS was the first LHC experiment to begin using a multi-threaded framework for event processing. This framework utilizes Intel's Threading Building Blocks (TBB) library to manage concurrency via a task-based processing model. During the 2015 LHC run period, CMS ran only reconstruction jobs with multiple threads, because only those jobs were sufficiently thread-efficient. Recent work now allows simulation and digitization to be thread-efficient as well. In addition, during 2015 the multi-threaded framework could run events in parallel but could only use one thread per event; work done in 2016 now allows multiple threads to be used while processing one event. In this presentation we will show how these recent changes have improved CMS's overall threading and memory efficiency, and we will discuss work to be done to further increase those efficiencies.

  12. Experimental Realization of High-Efficiency Counterfactual Computation.

    PubMed

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-21

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
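
    The quantum Zeno mechanism invoked above can be illustrated numerically: if a π/2 rotation out of the "not-running" subspace is interrupted by n equally spaced projective measurements, the probability of remaining in that subspace is cos^(2n)(π/2n), which approaches unity as n grows. This is a minimal textbook sketch of the Zeno effect, not the experimental protocol itself:

```python
import math

def zeno_survival(n_projections):
    """Probability of remaining in the initial ('not-running') subspace after
    n equally spaced projective measurements during a pi/2 rotation."""
    return math.cos(math.pi / (2 * n_projections)) ** (2 * n_projections)

# Survival probability approaches 1 as measurements become more frequent:
# n=5 -> ~0.61, n=50 -> ~0.95, n=500 -> ~0.995
```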

  13. Experimental Realization of High-Efficiency Counterfactual Computation

    NASA Astrophysics Data System (ADS)

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-01

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  14. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim at uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations, such as read/write/transfer and access via WAN from grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.

  15. Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, J. R.; Urban, N. M.

    2015-12-01

    Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos sea ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies, where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters, such as conductivity and grain size, and to the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized toward more accurately determining the values of these most influential parameters, through observational studies or by improving existing parameterizations in the sea ice model.
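
    The variance-based sensitivity computation can be sketched with the standard pick-freeze (Saltelli-style) estimator for first-order Sobol indices. The toy additive model below stands in for the sea ice model or its emulator, and the sample count is illustrative; random sampling replaces the Sobol sequences used in the study:

```python
import random

def sobol_first_order(model, n_params, n_samples=20000, seed=0):
    """Estimate first-order Sobol indices with the pick-freeze estimator.

    model: callable taking a list of n_params values, each in [0, 1).
    Returns a list of first-order indices, one per parameter.
    """
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    fA = [model(x) for x in A]
    mean = sum(fA) / n_samples
    var = sum((y - mean) ** 2 for y in fA) / n_samples
    indices = []
    for i in range(n_params):
        # A with column i replaced by B's column i (the "pick-freeze" matrix)
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        s = sum(fb * (fabi - fa) for fb, fabi, fa in
                zip((model(x) for x in B), (model(x) for x in ABi), fA))
        indices.append(s / (n_samples * var))
    return indices

# Additive test model 2*x1 + x2: variance splits 4:1 between the inputs,
# so the expected indices are S1 ~ 0.8 and S2 ~ 0.2
s1, s2 = sobol_first_order(lambda x: 2 * x[0] + x[1], 2)
```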

  16. Stream Restoration to Manage Nutrients in Degraded Watersheds

    EPA Science Inventory

    Historic land-use change can reduce water quality by impairing the ability of stream ecosystems to efficiently process nutrients such as nitrogen. Study results of two streams (Minebank Run and Big Spring Run) affected by urbanization, quarrying, agriculture, and impoundments in...

  17. Effects of voluntary wheel running on satellite cells in the rat plantaris muscle.

    PubMed

    Kurosaka, Mitsutoshi; Naito, Hisashi; Ogura, Yuji; Kojima, Atsushi; Goto, Katsumasa; Katamoto, Shizuo

    2009-01-01

    This study investigated the effects of voluntary wheel running on satellite cells in the rat plantaris muscle. Seventeen 5-week-old male Wistar rats were assigned to a control (n = 5) or training (n = 12) group. Each rat in the training group ran voluntarily in a running-wheel cage for 8 weeks. After the training period, the animals were anesthetized, and the plantaris muscles were removed, weighed, and analyzed immunohistochemically and biochemically. Although there were no significant differences in muscle weight or fiber area between the groups, the numbers of satellite cells and myonuclei per muscle fiber, the percentage of satellite cells, and citrate synthase activity were significantly higher in the training group than in the control group (p < 0.05). The percentage of satellite cells was also positively correlated with distance run in the training group (r = 0.61, p < 0.05). Voluntary running can induce an increase in the number of satellite cells without changing the mean fiber area in the rat plantaris muscle; this increase in satellite cell content is a function of distance run. Key points: There is no previous study of the effect of voluntary running on satellite cells in the rat plantaris muscle. Voluntary running training increases citrate synthase activity in the rat plantaris muscle but does not affect muscle weight or mean fiber area. Voluntary running can induce an increase in the number of satellite cells without hypertrophy of the rat plantaris muscle.

  18. Effects of Ramadan intermittent fasting on middle-distance running performance in well-trained runners.

    PubMed

    Brisswalter, Jeanick; Bouhlel, Ezzedine; Falola, Jean Marie; Abbiss, Christopher R; Vallier, Jean Marc; Hausswirth, Christophe

    2011-09-01

    To assess whether Ramadan intermittent fasting (RIF) affects 5000-m running performance and physiological parameters classically associated with middle-distance performance, two experimental groups (Ramadan fasting, n = 9, vs control, n = 9) participated in 2 experimental sessions, one before RIF and the other during the last week of fasting. For each session, subjects completed 4 tests in the same order: a maximal running test, a maximal voluntary contraction (MVC) of the knee extensors, 2 rectangular submaximal exercises on a treadmill for 6 minutes at an intensity corresponding to the first ventilatory threshold (VT1), and a 5000-m running performance test. Participants were 18 well-trained middle-distance runners. Maximal oxygen consumption, MVC, running performance, running efficiency, submaximal VO2 kinetics parameters (baseline VO2b, time constant τ, and amplitude A1) and anthropometric parameters were recorded or calculated. At the end of Ramadan fasting, a decrease in MVC was observed (-3.2%; P < 0.00001; η, 0.80), associated with an increase in the time constant of oxygen kinetics (+51%; P < 0.00007; η, 0.72) and a decrease in performance (-5%; P < 0.0007; η, 0.51). No effect was observed on running efficiency or maximal aerobic power. These results suggest that Ramadan-induced changes in muscular performance and oxygen kinetics could affect performance during middle-distance events and need to be considered when choosing training protocols during RIF.
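
    The submaximal VO2 kinetics parameters named above are conventionally combined in a mono-exponential primary-component model. A minimal sketch of that model follows; parameter names follow the abstract, and the values used are illustrative:

```python
import math

def vo2_kinetics(t, vo2_baseline, amplitude, tau, time_delay=0.0):
    """Mono-exponential primary-component model of submaximal VO2 kinetics:
    VO2(t) = VO2b + A1 * (1 - exp(-(t - TD) / tau)) for t >= TD."""
    if t < time_delay:
        return vo2_baseline
    return vo2_baseline + amplitude * (1.0 - math.exp(-(t - time_delay) / tau))

# A 51% longer time constant tau (as reported after fasting) means a slower
# rise: at t = tau the response has reached only ~63% of its amplitude.
```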

  19. 78 FR 14697 - Final Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ... Communities affected elevation above ground [caret] Elevation in meters (MSL) Modified Cecil County, Maryland... 1 to Stone Run At the Stone Run +271 Town of Rising Sun, confluence. Unincorporated Areas of Cecil County. Approximately 460 feet +359 downstream of Pierce Road. Tributary 2 to Stone Run At the Stone Run...

  20. Tagging Efficiency for Nuclear Physics Measurements at MAX-lab

    NASA Astrophysics Data System (ADS)

    Miller, Nevin; Elofson, David; Lewis, Codie; O'Brien, Erin; Buggelli, Kelsey; O'Connor, Kyle; O'Rielly, Grant; Maxtagg Team

    2014-09-01

    A careful study of the tagging efficiency during measurements of near-threshold pion photoproduction and high-energy Compton scattering has been performed. These experiments were carried out at the MAX-lab tagged-photon facility during the June 2014 run period. The determination of the final results from these experiments depends on knowledge of the incident photon flux, and the tagging efficiency is a critical part of the photon flux calculation. In addition to daily measurements of the tagging efficiency, a beam monitor was used during the production data runs to monitor the relative tagging efficiency. Two trigger types were used in the daily measurements: one was a logical OR from the tagger array and the other was from the Pb-glass photon detector. Investigations were made to explore the effect of the different trigger conditions and the differences between single- and multi-hit TDCs on the tagging efficiency. In addition, the time evolution and overall uncertainty in the tagging efficiency for each tagger channel was determined. The results will be discussed.

  1. Running as an Adjunct to Psychotherapy.

    ERIC Educational Resources Information Center

    Leer, Frederic

    1980-01-01

    Physical benefits of running have been highly publicized. Explores the equally valuable psychological benefits to be derived from running and examines how mastering a physical skill can be generalized to mastery in other areas of life. (Author)

  2. Voluntary resistance running wheel activity pattern and skeletal muscle growth in rats.

    PubMed

    Legerlotz, Kirsten; Elliott, Bradley; Guillemin, Bernard; Smith, Heather K

    2008-06-01

    The aims of this study were to characterize the pattern of voluntary activity of young rats in response to resistance loading on running wheels and to determine the effects of the activity on the growth of six limb skeletal muscles. Male Sprague-Dawley rats (4 weeks old) were housed individually with a resistance running wheel (R-RUN, n = 7) or a conventional free-spinning running wheel (F-RUN, n = 6) or without a wheel, as non-running control animals (CON, n = 6). The torque required to move the wheel in the R-RUN group was progressively increased, and the activity (velocity, distance and duration of each bout) of the two running wheel groups was recorded continuously for 45 days. The R-RUN group performed many more, shorter and faster bouts of running than the F-RUN group, yet the mean daily distance was not different between the F-RUN (1.3 +/- 0.2 km) and R-RUN group (1.4 +/- 0.6 km). Only the R-RUN resulted in a significantly (P < 0.05) enhanced muscle wet mass, relative to the increase in body mass, of the plantaris (23%) and vastus lateralis muscle (17%), and the plantaris muscle fibre cross-sectional area, compared with CON. Both F-RUN and R-RUN led to a significantly greater wet mass relative to increase in body mass and muscle fibre cross-sectional area in the soleus muscle compared with CON. We conclude that the pattern of voluntary activity on a resistance running wheel differs from that on a free-spinning running wheel and provides a suitable model to induce physiological muscle hypertrophy in rats.

  3. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
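
    Run-length coding, one of the two standard methods named in this record, can be sketched in a few lines. This is a generic illustration of the technique, not the report's implementation:

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, count) pairs."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [(v, c) for v, c in runs]

def rle_decode(runs):
    """Invert rle_encode."""
    return [v for v, c in runs for _ in range(c)]

# Class-map rows with long constant runs (as produced by K-means
# classification of map data) compress well:
row = [3, 3, 3, 3, 7, 7, 1, 1, 1, 1, 1]
assert rle_encode(row) == [(3, 4), (7, 2), (1, 5)]
assert rle_decode(rle_encode(row)) == row
```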

  4. Alteration in cardiac uncoupling proteins and eNOS gene expression following high-intensity interval training in favor of increasing mechanical efficiency.

    PubMed

    Fallahi, Ali Asghar; Shekarfroush, Shahnaz; Rahimi, Mostafa; Jalali, Amirhossain; Khoshbaten, Ali

    2016-03-01

    High-intensity interval training (HIIT) increases energy expenditure and mechanical energy efficiency. Although both uncoupling proteins (UCPs) and endothelial nitric oxide synthase (eNOS) affect mechanical efficiency and antioxidant capacity, their effects are inverse. The aim of this study was to determine whether the alterations of cardiac UCP2, UCP3, and eNOS mRNA expression following HIIT are in favor of increased mechanical efficiency or decreased oxidative stress. Wistar rats were divided into five groups: a control group (n=12), HIIT for an acute bout (AT1), short-term HIIT for 3 and 5 sessions (ST3 and ST5), and long-term training for 8 weeks (LT) (6 in each group). The rats of the training groups were made to run on a treadmill for 60 min in three stages: 6 min of running for warm-up, 7 intervals of 7 min of running on a treadmill with a slope of 5° to 20° (4 min at an intensity of 80-110% VO2max and 3 min at 50-60% VO2max), and 5 min of running for cool-down. The control group did not participate in any exercise program. Rats were sacrificed and the hearts were extracted to analyze the levels of UCP2, UCP3 and eNOS mRNA by RT-PCR. UCP3 expression was increased significantly following an acute training bout. Repeated HIIT for 8 weeks resulted in a significant decrease in UCP mRNA and a significant increase in eNOS expression in cardiac muscle. This study indicates that long-term HIIT, through decreasing UCP mRNA and increasing eNOS mRNA expression, may enhance energy efficiency and physical performance.

  5. Alteration in cardiac uncoupling proteins and eNOS gene expression following high-intensity interval training in favor of increasing mechanical efficiency

    PubMed Central

    Fallahi, Ali Asghar; Shekarfroush, Shahnaz; Rahimi, Mostafa; Jalali, Amirhossain; Khoshbaten, Ali

    2016-01-01

    Objective(s): High-intensity interval training (HIIT) increases energy expenditure and mechanical energy efficiency. Although both uncoupling proteins (UCPs) and endothelial nitric oxide synthase (eNOS) affect the mechanical efficiency and antioxidant capacity, their effects are inverse. The aim of this study was to determine whether the alterations of cardiac UCP2, UCP3, and eNOS mRNA expression following HIIT are in favor of increased mechanical efficiency or decreased oxidative stress. Materials and Methods: Wistar rats were divided into five groups: control group (n=12), HIIT for an acute bout (AT1), short-term HIIT for 3 and 5 sessions (ST3 and ST5), long-term training for 8 weeks (LT) (6 in each group). The rats of the training groups were made to run on a treadmill for 60 min in three stages: 6 min running for warm-up, 7 intervals of 7 min running on a treadmill with a slope of 5° to 20° (4 min with an intensity of 80-110% VO2max and 3 min at 50-60% VO2max), and 5 min running for cool-down. The control group did not participate in any exercise program. Rats were sacrificed and the hearts were extracted to analyze the levels of UCP2, UCP3 and eNOS mRNA by RT-PCR. Results: UCP3 expression was increased significantly following an acute training bout. Repeated HIIT for 8 weeks resulted in a significant decrease in UCP mRNA and a significant increase in eNOS expression in cardiac muscle. Conclusion: This study indicates that long-term HIIT, by decreasing UCP mRNA and increasing eNOS mRNA expression, may enhance energy efficiency and physical performance. PMID:27114795

  6. User Instructions for the Policy Analysis Modeling System (PAMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNeil, Michael A.; Letschert, Virginie E.; Van Buskirk, Robert D.

    PAMS uses country-specific and product-specific data to calculate estimates of the impacts of a Minimum Efficiency Performance Standard (MEPS) program. The analysis tool is self-contained in a Microsoft Excel spreadsheet and requires no links to external data or special code additions to run. The analysis can be customized to a particular program without additional user input through the use of the pull-down menus located on the Summary page. In addition, the spreadsheet contains many areas into which user-generated input data can be entered for increased accuracy of projection. The following is a step-by-step guide for using and customizing the tool.

  7. Emission spectroscopy analysis during Nopal cladodes dethorning by laser ablation

    NASA Astrophysics Data System (ADS)

    Peña-Díaz, M.; Ponce, L.; Arronte, M.; Flores, T.

    2007-04-01

    Optical emission spectroscopy of the pulsed laser ablation of spines and glochids from Opuntia (Nopal) cladodes was performed. Nopal cladodes were irradiated with Nd:YAG free-running laser pulses on their body, glochids and spines. Emission spectroscopy analyses in the 350-1000 nm region of the laser-induced plasma were made. Plasma plume evolution characterization, theoretical calculations of plasma plume temperature, and experiments varying the processing atmosphere showed that the process is dominated by a thermally activated combustion reaction which increases the dethorning efficiency. Therefore, an appropriate laser pulse energy can be chosen that minimizes damage to the cladode body and to the area beneath glochids and spines.

  8. Building bridges from process R&D: from a customer-supplier relationship to full partnership.

    PubMed

    Federsel

    2000-08-01

    A new and forward-looking way of running process R&D is introduced that integrates this core business in an efficient manner into the network of activities in different disciplines, which constitute the arena for the development of pharmaceutical products. The interfaces with surrounding areas are discussed in addition to the novel organizational principles implemented in process R&D and the workflow emanating from this. Furthermore, the Tollgate model used to keep track of the progress in a project and the pre-study concept are presented in detail. Finally, the main differences between operating modes in the past and in the future are highlighted.

  9. Invisible transportation infrastructure technology to mitigate energy and environment.

    PubMed

    Hossain, Md Faruque

    2017-01-01

    Traditional transportation infrastructure is built from heat-trapping materials, and the vehicles that run on it burn fossil fuels; both drive dangerous climate change. Thus, a new invisible flying transportation technology is proposed to mitigate the energy and environmental crisis caused by the traditional infrastructure system. An underground maglev system is modeled for all transportation corridors that would run vehicles smoothly about two feet above the earth's surface, held at the flying stage by propulsive and impulsive forces. Wind-energy modeling is also included to meet a vehicle's energy demand when it runs in a non-maglev area. The entire maglev infrastructure network, except pedestrian walkways, would be covered with evergreen vegetation to absorb CO2, ambient heat, and moisture (vapor) from the surrounding environment and keep it cool. The research revealed that a vehicle would require no external energy, since it would be driven by superconducting electromagnetic force while running on maglev infrastructure and by wind energy while running in non-maglev areas. The proposed maglev transportation infrastructure technology would be an innovative development in modern engineering science, dramatically reducing fossil fuel consumption and climate change.

  10. Paddys Run Streambank Stabilization Project at the Fernald Preserve, Harrison, OH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooten, Gwendolyn; Hertel, Bill; Homer, John

    The Fernald Preserve is a former uranium-processing plant that underwent extensive remediation pursuant to CERCLA and is now managed by the US DOE Office of Legacy Management. While remediation of buildings and soil contamination was completed in 2006, aquifer remediation is ongoing. Paddys Run is a second-order stream that runs to the south along the western side of the Fernald Preserve. The Paddys Run watershed encompasses nearly 41 km2 (16 mi2), including most of the Fernald site. Field personnel conducting routine site inspections in March 2014 observed that Paddys Run was migrating east via bank erosion into the “Pit 3 Swale,” an area of known surface-water contamination. The soil there was certified pursuant to site regulatory agreements and meets all final remediation levels. However, weekly surface-water monitoring is conducted from two puddles within the swale area, when water that exceeds the final remediation levels is present. Paddys Run had migrated east approximately 4 m (13 ft) in 2 years and was approximately 29 m (95 ft) from the sample location. This rapid migration threatened existing conditions that allowed for continued monitoring of the swale area and also threatened Paddys Run water quality. Therefore, DOE and regulators determined that the east bank of Paddys Run required stabilization. This was accomplished with a design that included the following components: relocation of approximately 145 m (475 ft) of streambed 9 m (30 ft) west, installation of a rock toe along the east bank, installation of two cross-vane in-stream grade-control structures, stabilization of a portion of the east bank using soil encapsulated lifts, and regrading, seeding, and planting within remaining disturbed areas. In an effort to take advantage of low-flow conditions in Paddys Run, construction was initiated in September 2014.
Weather delays and subsurface flow within the Paddys Run streambed resulted in an interim shutdown of the project area in December 2014. Construction activities resumed in April 2015, with completion in November 2015. To date, this stabilization project has been successful. The regraded bank and streambed have remained stable, and no compromise to installed cross-vanes, the rock toe, or the soil encapsulated lifts has been observed.

  11. Large Area Silicon Sheet by EFG

    NASA Technical Reports Server (NTRS)

    Wald, F. V.

    1979-01-01

    Progress made in the development of EFG ribbon growth is discussed. Specific areas covered include: (1) demonstration of multiple growth for ribbons 5 cm wide in runs of 12 and 20 hours duration; (2) construction of a single-cartridge crystal growth station, which expanded observational capacity through an anamorphic optical-video system that allows close observation of the meniscus over a 7.5 cm width, as well as video taping of the ribbon growth process; (3) reproducible and reliable growth of 7.5 cm wide ribbon at speeds up to 4 cm/min in growth station no. 1; (4) introduction of the 'mini cold shoe'; (5) increases in cell efficiency due to interface shaping using the 'displaced die' concept; and (6) clarification of the role of gaseous impurities in cartridge furnaces and mitigation of their destabilizing influence on growth.

  12. Human-motion energy harvester for autonomous body area sensors

    NASA Astrophysics Data System (ADS)

    Geisler, M.; Boisseau, S.; Perez, M.; Gasnier, P.; Willemin, J.; Ait-Ali, I.; Perraud, S.

    2017-03-01

    This paper reports on a method to optimize an electromagnetic energy harvester converting low-frequency body motion, aimed at powering wireless body area sensors. The method is based on recorded accelerations, and on mechanical and transduction models that enable an efficient joint optimization of the structural parameters. An optimized prototype of 14.8 mm diameter × 52 mm, weighing 20 g, generated up to 4.95 mW in a resistive load when worn on the arm during a run, and 6.57 mW when hand-shaken. Among the inertial electromagnetic energy harvesters reported so far, this one exhibits one of the highest power densities (up to 730 μW cm-3). The energy harvester was finally used to power a Bluetooth low energy wireless sensor node with acceleration measurements at 25 Hz.
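
The quoted power density follows from the cylinder dimensions: a 14.8 mm diameter by 52 mm device encloses about 8.9 cm³, so 6.57 mW corresponds to roughly 730 μW/cm³. A quick arithmetic check (illustrative only; it uses just the figures quoted in the record):

```python
import math

diameter_mm, length_mm = 14.8, 52.0
# cylinder volume in cm^3: pi * r^2 * h with r in cm (mm/20) and h in cm (mm/10)
volume_cm3 = math.pi * (diameter_mm / 20) ** 2 * (length_mm / 10)

power_uW = 6.57e3  # 6.57 mW expressed in microwatts
density = power_uW / volume_cm3  # ~734 uW/cm^3, consistent with the reported ~730
```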

  13. Local efficiency in fluvial systems: Lessons from Icicle Bend

    NASA Astrophysics Data System (ADS)

    Jerin, Tasnuba; Phillips, Jonathan

    2017-04-01

    Development of fluvial systems is often described and modeled in terms of principles related to maxima, minima, or optima of various hydraulic or energy parameters that can generally be encompassed by a principle of efficiency selection (more efficient flow routes tend to be preferentially selected and enhanced). However, efficiency selection is highly localized, and the cumulative effects of these local events may or may not produce more efficient pathways at a broader scale. This is illustrated by the case of Icicle Bend on Shawnee Run, a limestone bedrock stream in central Kentucky. Field evidence indicates that a paleochannel was abandoned during downcutting of the stream, and the relocation was analyzed using a flow partitioning model. The bend represents abandonment of a steeper, straighter, more efficient channel at the reach scale in favor of a longer, currently less steep and less efficient flow path. This apparently occurred owing to capture of Shawnee Run flow by a subsurface karst flow path that was subsequently exhumed. The development of Icicle Bend illustrates the local nature of efficiency selection and the role of historical contingency in geomorphic evolution.

  14. Evaluation of a low-end architecture for collaborative software development, remote observing, and data analysis from multiple sites

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro; Otruba, Wolfgang; Hanslmeier, Arnold

    2000-06-01

    The Kanzelhoehe Solar Observatory is an observing facility located in Carinthia (Austria) and operated by the Institute of Geophysics, Astrophysics and Meteorology of the Karl-Franzens University Graz. A set of instruments for solar surveillance in different wavelength bands is continuously operated in automatic mode and is presently being upgraded to supply near-real-time solar activity indexes for space weather applications. In this frame, we tested a low-end software/hardware architecture running on the PC platform in a non-homogeneous, remotely distributed environment that allows efficient or moderately efficient application sharing at the Intranet and Extranet (i.e., wide area network) levels, respectively. Due to the geographical distribution of the participating teams (Trieste, Italy; Kanzelhoehe and Graz, Austria), we have been using such features for collaborative remote software development and testing, data analysis and calibration, and observing run emulation from multiple sites as well. In this work, we describe the architecture used and its performance, based on a series of application sharing tests carried out to ascertain its effectiveness in real collaborative remote work, observations and data exchange. The system proved to be reliable at the Intranet level for most distributed tasks, limited to less demanding ones at the Extranet level, but quite effective in remote instrument control when real-time response is not needed.

  15. Redesigned PDC bit solves low hydraulic hp problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-06-01

    A new PDC bit design was created to solve a problem a drilling contractor had due to hydraulic horsepower limitations on rigs used in a particular geographical area. The new bit design, which arose from a formal alliance between Exeter Drilling Co. and Hughes Christensen Co., has greatly improved bit cleaning and overall drilling efficiency in applications where only low hydraulic hp is available. The new design has been run successfully in the Denver-Julesburg (D-J) basin of Colorado. The development was described in detail in paper IADC/SPE 35109, "Unique PDC bit configuration dramatically improves hole cleaning, drilling efficiency in low hydraulic applications," presented by G.J. Hertzler III, Exeter Drilling Co. and J.T. Wankier, Hughes Christensen Co., at the 1996 IADC/SPE Drilling Conference, New Orleans, La., March 12-15. This article is an abstract of that paper, which contains significantly more technical data.

  16. Hollow mesoporous TiO2 microspheres for enhanced photocatalytic degradation of acetaminophen in water.

    PubMed

    Lin, Chin Jung; Yang, Wen-Ta; Chou, Chen-Yi; Liou, Sofia Ya Hsuan

    2016-06-01

    Hollow core-shell mesoporous TiO2 microspheres were synthesized by a template-free solvothermal route for efficient photocatalytic degradation of acetaminophen. X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and Barrett-Joyner-Halenda data revealed a micrometer-sized mesoporous anatase TiO2 hollow sphere with large surface area and efficient light harvesting. For the photocatalytic degradation of acetaminophen in 60 min, the conversion fraction of the drug increased from 88% over commercial Degussa P25 TiO2 to 94% over hollow spheres with about 25% increase in the initial reaction rate. Even after 10 repeated runs, the recycled hollow spheres showed good photodegradation activity. The intermediates generated in the photocatalytic reactions were eventually converted into molecules that are easier to handle. The simple fabrication route would facilitate the development of photocatalysts for the decomposition of environmental contaminants. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Fuzzy Document Clustering Approach using WordNet Lexical Categories

    NASA Astrophysics Data System (ADS)

    Gharib, Tarek F.; Fouad, Mohammed M.; Aref, Mostafa M.

    Text mining refers generally to the process of extracting interesting information and knowledge from unstructured text. This area is growing rapidly, mainly because of the strong need to analyse the huge amount of textual data that resides on internal file systems and the Web. Text document clustering provides an effective navigation mechanism to organize this large amount of data by grouping documents into a small number of meaningful classes. In this paper we propose a fuzzy text document clustering approach using WordNet lexical categories and the Fuzzy c-Means algorithm. Experiments were performed to compare the efficiency of the proposed approach with recently reported approaches. Experimental results show that fuzzy clustering achieves strong performance: the Fuzzy c-Means algorithm outperforms classical clustering algorithms such as k-means and bisecting k-means in both clustering quality and running-time efficiency.
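
Fuzzy c-Means differs from k-means in that each point gets a graded membership in every cluster rather than a hard assignment. A minimal one-dimensional sketch of the standard update rules (not the paper's implementation, which clusters documents via WordNet features; variable names are illustrative):

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=50):
    """Minimal 1-D Fuzzy c-Means. xs: data, c: clusters (>= 2), m: fuzzifier (> 1).
    Returns the cluster centres after `iters` alternating updates."""
    span = max(xs) - min(xs)
    centres = [min(xs) + i * span / (c - 1) for i in range(c)]  # spread initial centres
    for _ in range(iters):
        # membership u[k][i] of point k in cluster i: 1 / sum_j (d_i/d_j)^(2/(m-1))
        u = []
        for x in xs:
            d = [abs(x - ck) or 1e-12 for ck in centres]  # guard zero distances
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
                      for i in range(c)])
        # centres become membership-weighted means of the data
        centres = [sum(u[k][i] ** m * xs[k] for k in range(len(xs))) /
                   sum(u[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return centres

centres = sorted(fuzzy_c_means([1.0, 1.2, 0.8, 9.8, 10.0, 10.2]))
# centres converge near the two group means, ~1.0 and ~10.0
```

Document clustering applies the same updates in a high-dimensional term space; only the distance computation changes.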

  18. Run-up of Tsunamis in the Gulf of Mexico caused by the Chicxulub Impact Event

    NASA Astrophysics Data System (ADS)

    Weisz, R.; Wünnenmann, K.; Bahlburg, H.

    2003-04-01

    The Chicxulub impact event can be investigated at (1) local, (2) regional, and (3) global scales. Our investigations focus on the regional scale, especially on the run-up of tsunami waves on the coast around the Gulf of Mexico caused by the impact. An impact produces two types of tsunami waves: (1) the rim wave and (2) the collapse wave. Both waves propagate over long distances and reach coastal areas. Depending on the tsunami wave characteristics, they have a potentially large influence on the coastal areas. Run-up distance and run-up height can be used as parameters for assessing this influence. To calculate these parameters, we are using a multi-material hydrocode (SALE) to simulate the generation of the tsunami wave and a non-linear shallow water approach for the propagation, and we implemented a special open boundary to handle the run-up of tsunami waves. With the help of the one-dimensional shallow water approach, we will give run-up heights and distances for the coastal area around the Gulf of Mexico. The calculations are done along several sections from the impact site towards the coast. These are a first approximation to run-up calculations for the entire coast of the Gulf of Mexico. The bathymetric data along the sections, used in the wave propagation and run-up, correspond to a linearized bathymetry of the recent Gulf of Mexico. Additionally, we will present preliminary results from our first two-dimensional experiments of propagation and run-up. These results will be compared with the one-dimensional approach.
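
In the shallow water (long-wave) approximation used for such propagation models, wave speed depends only on gravity and local depth, c = sqrt(g * h), which is why bathymetry controls travel time and shoaling. A quick illustrative calculation with assumed depths (these are not values from the study):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2
for depth_m in (3000.0, 200.0, 10.0):  # assumed basin, shelf, and nearshore depths
    c = math.sqrt(g * depth_m)  # long-wave phase speed, m/s
    print(f"depth {depth_m:6.0f} m -> wave speed {c:6.1f} m/s")
```

The slowdown from ~170 m/s in the deep basin to ~10 m/s nearshore is what concentrates wave energy and amplifies run-up at the coast.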

  19. Mineral resource potential map of the Dolly Ann Roadless Area, Alleghany County, Virginia

    USGS Publications Warehouse

    Lesure, Frank G.; Jones, Jay G.

    1983-01-01

    The Dolly Ann Roadless Area comprises 7,900 acres (3,200 ha) in the George Washington National Forest in the Valley and Ridge physiographic province of west-central Virginia. The area is at the southern end of Warm Springs Mountain in Alleghany County just northeast of Covington, the county seat (index map). U.S. Highway 220 forms part of the western boundary, and U.S. Forest Service Road 125, which parallels Pounding Mill Creek, forms the eastern boundary. The principal streams draining the area are Pounding Mill Creek, Dry Run, and Roaring Run, all tributaries of the Jackson River. The highest point in the area is Big Knob at the north end, 4,072 ft (1,241 m) above sea level; the lowest points, about 1,400 ft (427 m) above sea level, are at the south side, along Dry Run and Pounding Mill Creek. In general, the hill slopes are steep and heavily wooded with second- or third-growth hardwoods and scattered pine and hemlock. Dolly Ann Hollow near the east end of the area is a steep, boulder-strewn gorge, quite picturesque, but containing no good trails. A good trail up Dry Run connects to a trail crossing the ridge between Bald Knob and Big Knob. No other trails cross the area.

  20. Ground-water/surface-water relations along Honey Creek, Washtenaw County, Michigan, 2003

    USGS Publications Warehouse

    Healy, Denis F.

    2005-01-01

    The U.S. Geological Survey (USGS), in cooperation with the city of Ann Arbor, Mich., investigated the ground-water/surface-water relations along the lower reaches of Honey Creek, Washtenaw County, Mich., and an unnamed tributary to Honey Creek (the discharge tributary) from June through October 2003. Streamflow in these reaches was artificially high during a naturally low-flow period due to an anthropogenic discharge. Ground-water/surface-water relations were examined by seepage runs (series of streamflow measurements for the computation of stream gains or losses) and measurements of the difference in head between the stream surface and shallow aquifer. Specific conductance and water-temperature measurements were used as ancillary data to help identify gaining and losing reaches. Three seepage runs and four runs in which hydraulic-head differences between the stream and shallow aquifer were measured (piezometer runs) were made during periods of base flow. Streamflow measurements were made at 18 sites for the seepage runs. Instream piezometers were installed at 16 sites and bank piezometers were installed at 2 sites. Two deeper instream piezometers were installed at site 13 on September 4, 2003, to collect additional data on the ground-water/surface-water relations at that site. The seepage runs indicate that the main stem of Honey Creek and the discharge tributary in the study area are overall gaining reaches. The seepage runs also indicate that smaller reaches of Honey Creek and the discharge tributary may be losing reaches and that this relation may change over time with changing hydraulic conditions. The piezometer-run measurements support the seepage-run results on the main stem, whereas piezometer-run measurements both support and conflict with seepage-run measurements on the discharge tributary. Seepage runs give an average for the reach, whereas piezometer head-difference measurements are for a specific area around the piezometer.
Data that may appear to be conflicting actually may be showing that within a gaining reach there are localized areas that lose streamflow. The overall gain in streamflow along with specific measurements of head differences, specific conductance, and water temperature indicate that ground water is discharging to Honey Creek and the discharge tributary. Although reaches and areas that lose streamflow have been identified, data collected during this study cannot confirm or disprove that the loss is to the regional ground-water system.

  1. Relative importance of impervious area, drainage density, width function, and subsurface storm drainage on flood runoff from an urbanized catchment

    NASA Astrophysics Data System (ADS)

    Ogden, Fred L.; Raj Pradhan, Nawa; Downer, Charles W.; Zahner, Jon A.

    2011-12-01

    The literature contains contradictory conclusions regarding the relative effects of urbanization on peak flood flows due to increases in impervious area, drainage density and width function, and the addition of subsurface storm drains. We used data from an urbanized catchment, the 14.3 km2 Dead Run watershed near Baltimore, Maryland, USA, and the physics-based gridded surface/subsurface hydrologic analysis (GSSHA) model to examine the relative effect of each of these factors on flood peaks, runoff volumes, and runoff production efficiencies. GSSHA was used because the model explicitly includes the spatial variability of land-surface and hydrodynamic parameters, including subsurface storm drains. Results indicate that increases in drainage density, particularly increases in density from low values, produce significant increases in the flood peaks. For a fixed land-use and rainfall input, the flood magnitude approaches an upper limit regardless of the increase in the channel drainage density. Changes in imperviousness can have a significant effect on flood peaks for both moderately extreme and extreme storms. For an extreme rainfall event with a recurrence interval in excess of 100 years, imperviousness is relatively unimportant in terms of runoff efficiency and volume, but can affect the peak flow depending on rainfall rate. Changes to the width function affect flood peaks much more than runoff efficiency, primarily in the case of lower density drainage networks with less impermeable area. Storm drains increase flood peaks, but are overwhelmed during extreme rainfall events when they have a negligible effect. Runoff in urbanized watersheds with considerable impervious area shows a marked sensitivity to rainfall rate. This sensitivity explains some of the contradictory findings in the literature.

  2. Experience with a vectorized general circulation weather model on Star-100

    NASA Technical Reports Server (NTRS)

    Soll, D. B.; Habra, N. R.; Russell, G. L.

    1977-01-01

    A version of an atmospheric general circulation model was vectorized to run on a CDC STAR 100. The numerical model was coded and run in two different vector languages, CDC and LRLTRAN. A factor of 10 speed improvement over an IBM 360/95 was realized. Efficient use of the STAR machine required some redesigning of algorithms and logic. This precludes the application of vectorizing compilers on the original scalar code to achieve the same results. Vector languages permit a more natural and efficient formulation for such numerical codes.

  3. Modeling the impacts of climate change and technical progress on the wheat yield in inland China: An autoregressive distributed lag approach.

    PubMed

    Zhai, Shiyan; Song, Genxin; Qin, Yaochen; Ye, Xinyue; Lee, Jay

    2017-01-01

    This study aims to evaluate the impacts of climate change and technical progress on the wheat yield per unit area from 1970 to 2014 in Henan, the largest agricultural province in China, using an autoregressive distributed lag approach. The bounded F-test for cointegration among the model variables yielded evidence of a long-run relationship among climate change, technical progress, and the wheat yield per unit area. In the long run, agricultural machinery and fertilizer use both had significantly positive impacts on the per unit area wheat yield. A 1% increase in the aggregate quantity of fertilizer use increased the wheat yield by 0.19%. Additionally, a 1% increase in machine use increased the wheat yield by 0.21%. In contrast, precipitation during the wheat growth period (from emergence to maturity, consisting of the period from last October to June) led to a decrease in the wheat yield per unit area. In the short run, the coefficient of the aggregate quantity of fertilizer used was negative. Land size had a significantly positive impact on the per unit area wheat yield in the short run. There was no significant short-run or long-run impact of temperature on the wheat yield per unit area in Henan Province. The results of our analysis suggest that climate change had a weak impact on the wheat yield, while technical progress played an important role in increasing the wheat yield per unit area. The results of this study have implications for national and local agriculture policies under climate change. To design well-targeted agriculture adaptation policies for the future and to reduce the adverse effects of climate change on the wheat yield, climate change and technical progress factors should be considered simultaneously. In addition, adaptive measures associated with technical progress should be given more attention.
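
The reported coefficients are long-run elasticities from a log-log specification, so a 1% rise in fertilizer use maps to about a 0.19% rise in yield. A hedged worked example (the 0.19 and 0.21 values are taken from the abstract; the constant-elasticity functional form is an assumption of this sketch, and the helper function is my own):

```python
fertilizer_elasticity = 0.19  # long-run elasticity from the abstract
machinery_elasticity = 0.21

def yield_change(pct_fertilizer, pct_machinery):
    """Approximate % change in wheat yield for given % input changes,
    assuming a log-log (constant-elasticity) relationship."""
    factor = ((1 + pct_fertilizer / 100) ** fertilizer_elasticity *
              (1 + pct_machinery / 100) ** machinery_elasticity)
    return (factor - 1) * 100

# A 1% increase in fertilizer alone raises yield by ~0.19%;
# a 1% increase in machinery alone by ~0.21%.
print(round(yield_change(1.0, 0.0), 3))
print(round(yield_change(0.0, 1.0), 3))
```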

  4. Modeling the impacts of climate change and technical progress on the wheat yield in inland China: An autoregressive distributed lag approach

    PubMed Central

    Qin, Yaochen; Lee, Jay

    2017-01-01

    This study aims to evaluate the impacts of climate change and technical progress on the wheat yield per unit area from 1970 to 2014 in Henan, the largest agricultural province in China, using an autoregressive distributed lag approach. The bounded F-test for cointegration among the model variables yielded evidence of a long-run relationship among climate change, technical progress, and the wheat yield per unit area. In the long run, agricultural machinery and fertilizer use both had significantly positive impacts on the per unit area wheat yield. A 1% increase in the aggregate quantity of fertilizer use increased the wheat yield by 0.19%. Additionally, a 1% increase in machine use increased the wheat yield by 0.21%. In contrast, precipitation during the wheat growth period (from emergence to maturity, consisting of the period from last October to June) led to a decrease in the wheat yield per unit area. In the short run, the coefficient of the aggregate quantity of fertilizer used was negative. Land size had a significantly positive impact on the per unit area wheat yield in the short run. There was no significant short-run or long-run impact of temperature on the wheat yield per unit area in Henan Province. The results of our analysis suggest that climate change had a weak impact on the wheat yield, while technical progress played an important role in increasing the wheat yield per unit area. The results of this study have implications for national and local agriculture policies under climate change. To design well-targeted agriculture adaptation policies for the future and to reduce the adverse effects of climate change on the wheat yield, climate change and technical progress factors should be considered simultaneously. In addition, adaptive measures associated with technical progress should be given more attention. PMID:28950027

  5. Reliability of Vibrating Mesh Technology.

    PubMed

    Gowda, Ashwin A; Cuccia, Ann D; Smaldone, Gerald C

    2017-01-01

    For delivery of inhaled aerosols, vibrating mesh systems are more efficient than jet nebulizers and do not require added gas flow. We assessed the reliability of a vibrating mesh nebulizer (Aerogen Solo, Aerogen Ltd, Galway, Ireland) suitable for use in mechanical ventilation. An initial observational study was performed with 6 nebulizers to determine run time and efficiency using normal saline and distilled water. Nebulizers were run until cessation of aerosol production was noted, with residual volume and run time recorded. Three controllers were used to assess the impact of the controller on nebulizer function. Following the observational study, a more detailed experimental protocol was performed using 20 nebulizers. For this analysis, 2 controllers were used, and time to cessation of aerosol production was noted. Gravimetric techniques were used to measure residual volume. Total nebulization time and residual volume were recorded. Failure was defined as premature cessation of aerosol production, represented by a residual volume of > 10% of the nebulizer charge. In the initial observational protocol, an unexpected sporadic failure rate of 25% was noted in 55 experimental runs. In the experimental protocol, a failure rate of 30% was noted in 40 experimental runs. Failed runs in the experimental protocol exhibited a wide range of retained volume, averaging (mean ± SD) 36 ± 21.3%, compared with 3.2 ± 1.5% (P = .001) in successful runs. Small but significant differences existed in nebulization time between controllers. Aerogen Solo nebulization was often randomly interrupted, with a wide range of retained volumes. Copyright © 2017 by Daedalus Enterprises.

  6. Preparing CAM-SE for Multi-Tracer Applications: CAM-SE-Cslam

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Taylor, M.; Goldhaber, S.

    2014-12-01

    The NCAR-DOE spectral element (SE) dynamical core comes from HOMME (the High-Order Modeling Environment; Dennis et al., 2012) and is available in CAM. The CAM-SE dynamical core is designed with intrinsic mimetic properties guaranteeing total energy conservation (to time-truncation errors) and mass conservation, and has demonstrated excellent scalability on massively parallel compute platforms (Taylor, 2011). For applications involving many tracers, such as chemistry and biochemistry modeling, CAM-SE has been found to be significantly more computationally costly than the current "workhorse" model CAM-FV (Finite-Volume; Lin 2004). Hence a multi-tracer-efficient scheme, CSLAM (Conservative Semi-Lagrangian Multi-tracer; Lauritzen et al., 2011), has been implemented in HOMME (Erath et al., 2012). The CSLAM scheme has recently been cast in flux form in HOMME so that it can be coupled to the SE dynamical core through conventional flux-coupling methods, where the SE dynamical core provides background air mass fluxes to CSLAM. Since the CSLAM scheme makes use of a finite-volume gnomonic cubed-sphere grid, and hence does not operate on the SE quadrature grid, the capability of running tracer advection, the physical parameterization suite, and dynamics on separate grids has been implemented in CAM-SE. The default CAM-SE-CSLAM setup is to run physics on the quasi-equal-area CSLAM grid. The capability of running physics on a different grid than the SE dynamical core may provide a more consistent coupling, since the physics grid option operates with quasi-equal-area cell average values rather than non-equidistant grid-point (SE quadrature point) values. Preliminary results on the performance of CAM-SE-CSLAM will be presented.

  7. Optimal muscle fascicle length and tendon stiffness for maximising gastrocnemius efficiency during human walking and running.

    PubMed

    Lichtwark, G A; Wilson, A M

    2008-06-21

    Muscles generate force to resist gravitational and inertial forces and/or to undertake work, e.g. on the centre of mass. A trade-off in muscle architecture exists in muscles that do both; the fibres should be as short as possible to minimise activation cost but long enough to maintain an appropriate shortening velocity. Energetic cost is also influenced by tendon compliance, which modulates the timecourse of muscle mechanical work. Here we use a Hill-type muscle model of the human medial gastrocnemius to determine the muscle fascicle length and Achilles tendon compliance that maximise efficiency during the stance phase of walking (1.2 m/s) and running (3.2 and 3.9 m/s). A broad range of muscle fascicle lengths (ranging from 45 to 70 mm) and tendon stiffness values (150-500 N/mm) can achieve close to optimal efficiency at each speed of locomotion; however, efficient walking requires shorter muscle fascicles and a more compliant tendon than running. The values that maximise efficiency are within the range measured in normal populations. A non-linear toe region of the tendon force-length properties may further influence the optimal values, requiring a stiffer tendon with slightly longer muscle fascicles; however, it does not alter the main results. We conclude that muscle fibre length and tendon compliance combinations may be tuned to maximise efficiency under a given gait condition. Efficiency is maximised when the required volume of muscle is minimised, which may also help reduce limb inertia and basal metabolic costs.
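
    The optimisation described above — searching fascicle length (45-70 mm) and tendon stiffness (150-500 N/mm) for the efficiency-maximising combination at each gait — can be sketched as a simple grid search. The efficiency surface below is a made-up smooth surrogate, not the paper's Hill-type model; only the search structure, and the qualitative result that walking favours shorter fascicles and a softer tendon, is illustrated.

```python
def toy_efficiency(fascicle_mm, stiffness_n_per_mm, gait="walking"):
    # Hypothetical quadratic surrogate with invented optima: walking
    # favours shorter fascicles and a more compliant tendon than running.
    opt_len = 50 if gait == "walking" else 60
    opt_k = 200 if gait == "walking" else 400
    return (0.25
            - 1e-4 * (fascicle_mm - opt_len) ** 2
            - 1e-7 * (stiffness_n_per_mm - opt_k) ** 2)

def best_params(gait):
    # Exhaustive search over the parameter ranges quoted in the abstract.
    grid = [(l, k) for l in range(45, 71) for k in range(150, 501, 10)]
    return max(grid, key=lambda p: toy_efficiency(*p, gait=gait))

walk, run = best_params("walking"), best_params("running")
# Shorter fascicles and a more compliant tendon for walking than running.
assert walk[0] < run[0] and walk[1] < run[1]
```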

  8. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  9. 40 CFR 63.9323 - How do I determine the add-on control device emission destruction or removal efficiency?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... device emission destruction or removal efficiency? 63.9323 Section 63.9323 Protection of Environment... determine the add-on control device emission destruction or removal efficiency? You must use the procedures... removal efficiency as part of the performance test required by § 63.9310. You must conduct three test runs...

  10. 40 CFR 63.9323 - How do I determine the add-on control device emission destruction or removal efficiency?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... device emission destruction or removal efficiency? 63.9323 Section 63.9323 Protection of Environment... determine the add-on control device emission destruction or removal efficiency? You must use the procedures... removal efficiency as part of the performance test required by § 63.9310. You must conduct three test runs...

  11. Policy evaluation for a bus-based transit system: the case study of Busan, Korea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lew, K.S.

    This study considers a quite specific set of dimensions of transit evaluation as a comprehensive management-strategy approach for addressing the ongoing problems of urban public transit systems, and applies them in a case study. With increasing difficulties in providing effective public transportation, attention in this study was placed on providing for the movement of people in an efficient and equitable manner. This study was therefore concerned with identifying and evaluating policy options that are feasible within the socio-economic and political context of the Busan metropolitan area and, in particular, with how the criteria of efficiency and equity can best be achieved by implementing transit policy alternatives. In the absence of long-run major public investment in urban transportation, the criteria of both efficiency and equity can only be furthered through managerial strategies applied within a systematic evaluative framework. By emphasizing the planning, operational, and managerial considerations of fixed-route bus transit in Busan, it has been acknowledged that efficient and equitable public transportation can be provided through a variety of social, economic, political, and institutional arrangements that are possible to apply from a policy viewpoint in the immediate future.

  12. Large area, low cost space solar cells with optional wraparound contacts

    NASA Technical Reports Server (NTRS)

    Michaels, D.; Mendoza, N.; Williams, R.

    1981-01-01

    Design parameters for two large area, low cost solar cells are presented, and electron irradiation testing, thermal alpha testing, and cell processing are discussed. The devices are a 2 ohm-cm base resistivity silicon cell with an evaporated aluminum reflector produced in a dielectric wraparound cell, and a 10 ohm-cm silicon cell with the BSF/BSR combination and a conventional contact system. Both cells are 5.9 x 5.9 cm and require 200 micron thick silicon material due to mission weight constraints. Normalized values for open circuit voltage, short circuit current, and maximum power calculations derived from electron radiation testing are given. In addition, thermal alpha testing values of absorptivity and emittance are included. A pilot cell processing run produced cells averaging 14.4% efficiencies at AMO 28 C. Manufacturing for such cells will be on a mechanized process line, and the area of coverslide application technology must be considered in order to achieve cost effective production.

  13. Quantification aspects of constant pressure (ultra) high pressure liquid chromatography using mass-sensitive detectors with a nebulizing interface.

    PubMed

    Verstraeten, M; Broeckhoven, K; Lynen, F; Choikhet, K; Landt, K; Dittmann, M; Witt, K; Sandra, P; Desmet, G

    2013-01-25

    The present contribution investigates the quantitation aspects of mass-sensitive detectors with a nebulizing interface (ESI-MS, ELSD, CAD) in the constant-pressure gradient elution mode. In this operation mode, the pressure is controlled and maintained at a set value and the liquid flow rate varies according to the inverse of the mobile phase viscosity. As the pressure is continuously kept at the allowable maximum during the entire gradient run, the average liquid flow rate is higher than in the conventional constant flow rate operation mode, thus shortening the analysis time. The following three mass-sensitive detectors were investigated: the mass spectrometry detector (MS), the evaporative light scattering detector (ELSD) and the charged aerosol detector (CAD), and a wide variety of samples (phenones, polyaromatic hydrocarbons, wine, cocoa butter) has been considered. It was found that the nebulizing efficiency of the LC interfaces of the three detectors under consideration changes with increasing liquid flow rate. For the MS, the increasing flow rate leads to a lower peak area, whereas for the ELSD the peak area increases compared to the constant flow rate mode. The peak area obtained with a CAD is rather insensitive to the liquid flow rate. The reproducibility of the peak area remains similar in both modes, although variation in system permeability compromises the long-term reproducibility. This problem can, however, be overcome by running a flow rate program with an optimized flow rate and composition profile obtained from the constant pressure mode. In this case, the quantification remains reproducible, despite any occurring variations of the system permeability. Furthermore, the same fragmentation pattern (MS) has been found in the constant pressure mode as in the customary constant flow rate mode. Copyright © 2012 Elsevier B.V. All rights reserved.
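
    The constant-pressure behaviour described above — the liquid flow rate varying with the inverse of mobile-phase viscosity at a fixed pressure drop — can be sketched with a Darcy-like relation. The permeability constant and the viscosity values below are hypothetical; the point is only that, as the viscosity of the gradient mixture passes through a maximum, the flow at constant pressure dips and then recovers.

```python
def flow_rate(delta_p_bar, viscosity_mpas, permeability_const):
    # Constant-pressure mode: at a fixed pressure drop the flow rate is
    # inversely proportional to the mobile-phase viscosity (Darcy-like
    # column behaviour). permeability_const lumps the column geometry;
    # its units are hypothetical.
    return permeability_const * delta_p_bar / viscosity_mpas

# During a water/organic gradient the mixture viscosity typically passes
# through a maximum, so the flow dips before recovering (illustrative
# viscosity values in mPa*s at a notional 1000 bar).
rates = [flow_rate(1000.0, eta, 1e-3) for eta in (0.9, 1.1, 0.6)]
assert rates[1] < rates[0] < rates[2]
```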

  14. Comparison of 2D numerical models for river flood hazard assessment: simulation of the Secchia River flood in January, 2014

    NASA Astrophysics Data System (ADS)

    Shustikova, Iuliia; Domeneghetti, Alessio; Neal, Jeffrey; Bates, Paul; Castellarin, Attilio

    2017-04-01

    Hydrodynamic modeling of inundation events still carries a large array of uncertainties. This effect is especially evident in models run for geographically large areas. Recent studies suggest using fully two-dimensional (2D) models with high resolution in order to avoid the uncertainties and limitations that come from an incorrect interpretation of flood dynamics and an unrealistic reproduction of the terrain topography. This, however, affects computational efficiency, increasing the running time and hardware demands. Concerning this point, our study evaluates and compares numerical models of different complexity by testing them on a flood event that occurred in the basin of the Secchia River, Northern Italy, on 19th January, 2014. The event was characterized by a levee breach and consequent flooding of over 75 km2 of the plain behind the dike within 48 hours, causing population displacement, one death and economic losses in excess of 400 million Euro. We test the well-established TELEMAC 2D and LISFLOOD-FP codes, together with the recently launched HEC-RAS 5.0.3 (2D model); all models are implemented using different grid sizes (2-200 m) derived from a 1 m digital elevation model. TELEMAC is a fully 2D hydrodynamic model based on the finite-element or finite-volume approach, whereas HEC-RAS 5.0.3 and LISFLOOD-FP are both coupled 1D-2D models. All models are calibrated against observed inundation extent and maximum water depths, which are retrieved from remotely sensed data and field survey reports. Our study quantitatively compares the three modeling strategies, highlighting differences in terms of ease of implementation, accuracy of representation of hydraulic processes within floodplains, and computational efficiency. Additionally, we look into the different grid resolutions in terms of result accuracy and computation time. 
Our study is a preliminary assessment that focuses on smaller areas in order to identify potential modeling schemes that would be efficient for simulating flooding scenarios for large and very large floodplains. This research aims at contributing to the reduction of uncertainties and limitations in hazard and risk assessment.
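
    The trade-off between grid resolution and computation time noted above can be made concrete with a standard back-of-the-envelope scaling for explicit 2D schemes: halving the cell size quadruples the cell count and, via the CFL condition, roughly halves the stable time step. The constant factors are ignored here; only the cubic scaling with inverse cell size is the point.

```python
def relative_cost(dx_coarse_m, dx_fine_m):
    # Rough cost ratio for an explicit 2D scheme: cell count scales as
    # (1/dx)^2 and the CFL-limited time step shrinks linearly with dx,
    # so total work grows as ~(dx_coarse/dx_fine)^3.
    return (dx_coarse_m / dx_fine_m) ** 3

# Refining from the coarsest (200 m) to the finest (2 m) grid in the
# tested range implies roughly a million-fold increase in work, which is
# why fully 2D high-resolution runs over large floodplains are expensive.
assert abs(relative_cost(200.0, 2.0) - 1e6) < 1e-3
```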

  15. Neuromuscular adaptations to training, injury and passive interventions: implications for running economy.

    PubMed

    Bonacci, Jason; Chapman, Andrew; Blanch, Peter; Vicenzino, Bill

    2009-01-01

    Performance in endurance sports such as running, cycling and triathlon has long been investigated from a physiological perspective. A strong relationship between running economy and distance running performance is well established in the literature. From this established base, improvements in running economy have traditionally been achieved through endurance training. More recently, research has demonstrated that short-term resistance and plyometric training results in enhanced running economy. This improvement in running economy has been hypothesized to be a result of enhanced neuromuscular characteristics such as improved muscle power development and more efficient use of stored elastic energy during running. Changes in indirect measures of neuromuscular control (i.e. stance phase contact times, maximal forward jumps) have been used to support this hypothesis. These results suggest that neuromuscular adaptations in response to training (i.e. neuromuscular learning effects) are an important contributor to enhancements in running economy. However, there is no direct evidence to suggest that these adaptations translate into more efficient muscle recruitment patterns during running. Optimization of training and run performance may be facilitated through direct investigation of muscle recruitment patterns before and after training interventions. There is emerging evidence that demonstrates neuromuscular adaptations during running and cycling vary with training status. Highly trained runners and cyclists display more refined patterns of muscle recruitment than their novice counterparts. In contrast, interference with motor learning and neuromuscular adaptation may occur as a result of ongoing multidiscipline training (e.g. triathlon). In the sport of triathlon, impairments in running economy are frequently observed after cycling. 
This impairment is related mainly to physiological stress, but an alteration in lower limb muscle coordination during running after cycling has also been observed. Muscle activity during running after cycling has yet to be fully investigated, and to date, the effect of alterations in muscle coordination on running economy is largely unknown. Stretching, which is another mode of training, may induce acute neuromuscular effects but does not appear to alter running economy. There are also factors other than training structure that may influence running economy and neuromuscular adaptations. For example, passive interventions such as shoes and in-shoe orthoses, as well as the presence of musculoskeletal injury, may be considered important modulators of neuromuscular control and run performance. Alterations in muscle activity and running economy have been reported with different shoes and in-shoe orthoses; however, these changes appear to be subject-specific and non-systematic. Musculoskeletal injury has been associated with modifications in lower limb neuromuscular control, which may persist well after an athlete has returned to activity. The influence of changes in neuromuscular control as a result of injury on running economy has yet to be examined thoroughly, and should be considered in future experimental design and training analysis.

  16. An efficient, modular and simple tape archiving solution for LHC Run-3

    NASA Astrophysics Data System (ADS)

    Murray, S.; Bahyl, V.; Cancio, G.; Cano, E.; Kotlyar, V.; Kruse, D. F.; Leduc, J.

    2017-10-01

    The IT Storage group at CERN develops the software responsible for archiving to tape the custodial copy of the physics data generated by the LHC experiments. Physics Run 3 will start in 2021 and will introduce two major challenges for which the tape archive software must be evolved. Firstly, the software will need to make more efficient use of tape drives in order to sustain the predicted data rate of 150 petabytes per year, as opposed to the current 50 petabytes per year. Secondly, the software will need to be seamlessly integrated with EOS, which has become the de facto disk storage system provided by the IT Storage group for physics data. The tape storage software for LHC physics Run 3 is code-named CTA (the CERN Tape Archive). This paper describes how CTA will introduce a pre-emptive drive scheduler to use tape drives more efficiently, will encapsulate all tape software into a single module that will sit behind one or more EOS systems, and will be simpler by dropping support for obsolete backwards compatibility.
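
    A pre-emptive drive scheduler of the kind described can be sketched as a priority queue in which a newly arrived, strictly higher-priority request bumps the lowest-priority mounted job back into the waiting queue. This is an illustrative toy under invented names and semantics, not CTA's actual scheduler.

```python
import heapq

class DriveScheduler:
    # Toy pre-emptive scheduler for a fixed pool of tape drives.
    def __init__(self, n_drives):
        self.free = n_drives
        self.running = []   # min-heap of (priority, job) currently on a drive
        self.waiting = []   # max-priority queue (priority stored negated)

    def submit(self, priority, job):
        if self.free > 0:
            # A drive is idle: mount immediately.
            self.free -= 1
            heapq.heappush(self.running, (priority, job))
        elif self.running and self.running[0][0] < priority:
            # Pre-empt the lowest-priority mounted job.
            low_prio, bumped = heapq.heapreplace(self.running, (priority, job))
            heapq.heappush(self.waiting, (-low_prio, bumped))
        else:
            heapq.heappush(self.waiting, (-priority, job))

# One drive: a low-priority archival job is pre-empted by a
# higher-priority retrieve request.
sched = DriveScheduler(1)
sched.submit(1, "archive-backlog")
sched.submit(5, "physics-retrieve")
assert sched.running[0] == (5, "physics-retrieve")
assert sched.waiting[0] == (-1, "archive-backlog")
```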

  17. 76 FR 26982 - Proposed Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-10

    .... Specifically, it addresses the flooding source Licking River (Cave Run Lake). DATES: Comments are to be... Incorporated Areas,'' addressed the flooding source Licking River (Cave Run Lake). That table contained... River (Cave Run Lake)....... At the Buck Creek None +765 City of Frenchburg, confluence. Unincorporated...

  18. LCA-based optimization of wood utilization under special consideration of a cascading use of wood.

    PubMed

    Höglmeier, Karin; Steubing, Bernhard; Weber-Blaschke, Gabriele; Richter, Klaus

    2015-04-01

    Cascading, the use of the same unit of a resource in multiple successional applications, is considered a viable means to improve the efficiency of resource utilization and to decrease environmental impacts. Wood, as a regrowing but nevertheless limited and increasingly in-demand resource, can be used in cascades, thereby increasing the potential efficiency per unit of wood. This study aims to assess the influence of cascading wood utilization on optimizing the overall environmental impact of wood utilization. By combining a material flow model of existing wood applications - both for materials provision and energy production - with an algebraic optimization tool, the effects of the use of wood in cascades can be modelled and quantified based on life cycle impact assessment results for all production processes. To identify the most efficient wood allocation, the effects of a potential substitution of non-wood products were taken into account in part of the model runs. The environmental indicators considered were global warming potential, particulate matter formation, land occupation and an aggregated single-score indicator. We found that optimizing either the overall global warming potential or the value of the single-score indicator of the system leads to a simultaneous relative decrease in all other considered environmental impacts. The relative differences between the impacts of the model runs with and without the possibility of a cascading use of wood were 7% for global warming potential and the single-score indicator, despite cascading only influencing a small part of the overall system, namely wood panel production. Cascading led to savings of up to 14% of the annual primary wood supply of the study area. We conclude that cascading can improve the overall performance of a wood utilization system. Copyright © 2015 Elsevier Ltd. All rights reserved.
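
    The allocation step — choosing, per unit of wood, between direct energy recovery and a cascade, so as to minimise an aggregated impact score under a fixed wood supply — can be sketched as a tiny brute-force optimisation. The option names and impact factors below are invented placeholders, not the study's LCA results; a real model would use a proper solver over many more processes.

```python
from itertools import product

# Hypothetical aggregated impact score per tonne for each routing option.
OPTIONS = {"energy_only": 1.00, "cascade_then_energy": 0.93}

def best_allocation(tonnes):
    # Brute-force search over all per-tonne routings, minimising the
    # total impact score of the whole supply.
    choices = list(OPTIONS)
    best = min(product(choices, repeat=tonnes),
               key=lambda alloc: sum(OPTIONS[c] for c in alloc))
    return best, sum(OPTIONS[c] for c in best)

alloc, score = best_allocation(3)
# With a lower per-tonne impact, the cascade wins for every tonne.
assert set(alloc) == {"cascade_then_energy"}
```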

  19. Effect of organic loading rates and proton exchange membrane surface area on the performance of an up-flow cylindrical microbial fuel cell.

    PubMed

    Jana, Partha S; Behera, Manaswini; Ghangrekar, M M

    2012-01-01

    The effect of organic loading rates (OLRs) and proton exchange membrane (PEM) surface area on the performance of microbial fuel cells (MFCs) was evaluated. Three MFCs (MFC-1, MFC-2 and MFC-3) having PEM surface areas of 10 cm2, 20 cm2 and 40 cm2, respectively, were used in the study. The MFCs were operated at an influent chemical oxygen demand (COD) of 500 mg L(-1) and hydraulic retention times (HRTs) of 20 h, 17 h, 13 h and 6 h in experimental Run-1 to Run-4. MFC-3, with the highest PEM surface area, showed the highest power generation throughout the study. The optimum performance was obtained at an HRT of 13 h. In Run-5 and Run-6, the influent COD was increased to 1000 mg L(-1) and 1500 mg L(-1), respectively, maintaining the HRT at 13 h. Maximum volumetric powers of 4.26 W m(-3), 9.41 W m(-3) and 17.24 W m(-3) were obtained in MFC-1, MFC-2 and MFC-3, respectively, in Run-5 under the OLR of 1.84 kg COD m(-3) d(-1). These power values are among the higher values reported in the literature. MFCs with higher PEM surface area showed better electricity generation, which clearly demonstrates that proton mass transfer is the main constraint limiting the power output of MFCs. A combined effect of influent COD and HRT on electricity generation was also observed.
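
    The organic loading rate reported above follows directly from the influent COD and the HRT. A minimal sketch (function names assumed, not from the paper) reproduces the Run-5 figure of 1.84 kg COD m(-3) d(-1):

```python
def organic_loading_rate(cod_mg_per_l, hrt_h):
    # OLR in kg COD m^-3 d^-1 for a flow-through reactor: influent COD
    # concentration divided by the hydraulic retention time.
    cod_kg_per_m3 = cod_mg_per_l / 1000.0   # 1 mg/L == 1 g/m^3
    return cod_kg_per_m3 / (hrt_h / 24.0)

def volumetric_power(power_w, volume_m3):
    # Power output normalised by reactor volume (W m^-3).
    return power_w / volume_m3

# Run-5 conditions: 1000 mg/L influent COD at 13 h HRT -> ~1.85,
# matching the reported 1.84 kg COD m^-3 d^-1 to rounding.
assert abs(organic_loading_rate(1000.0, 13.0) - 1.846) < 0.005
```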

  20. Bacterial disinfection in a sunlight/visible-light-driven photocatalytic reactor by recyclable natural magnetic sphalerite.

    PubMed

    Peng, Xingxing; Ng, Tsz Wai; Huang, Guocheng; Wang, Wanjun; An, Taicheng; Wong, Po Keung

    2017-01-01

    A 5-L reactor was designed and used to enhance the sunlight/visible-light-driven (VLD) photocatalytic disinfection efficiency towards a Gram-negative bacterium (Escherichia coli). Natural magnetic sphalerite (NMS) was used as the photocatalyst, which could be easily recycled by applying a magnetic field. Results showed that NMS with irradiation by a blue light emitting diode (LED) lamp could completely inactivate 1.5 × 10⁵ cfu/mL of E. coli within 120 min in the first three runs. However, the inactivation efficiency started to decrease in the 4th run, and in the 5th run the E. coli, at an initial concentration of 5 logs, was inactivated to 3.3 logs (blue light) and 3.5 logs (sunlight), respectively. Moreover, the stability and deactivation mechanism of NMS during subsequent runs were also studied. The results showed that the decline of the photocatalytic activity was possibly attributable to adsorption of the bacterial decomposition compounds on the active sites. In addition, the photocatalytic bactericidal mechanism of NMS was investigated by using multiple scavengers to remove specific reactive species. Finally, various Gram-positive bacteria including Staphylococcus aureus, Microbacterium barkeri, and Bacillus subtilis could also be efficiently inactivated in the photocatalytic system. Copyright © 2016 Elsevier Ltd. All rights reserved.
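
    Disinfection performance of this kind is conventionally expressed as a log10 reduction in viable counts. A minimal sketch (hypothetical function name) shows how the abstract's "5 logs" follows from the initial load of 1.5 × 10⁵ cfu/mL:

```python
import math

def log_inactivation(n0_cfu_per_ml, nt_cfu_per_ml):
    # Log10 reduction of viable bacteria between time zero and time t.
    return math.log10(n0_cfu_per_ml / nt_cfu_per_ml)

# The initial load of 1.5 x 10^5 cfu/mL corresponds to ~5.2 logs,
# consistent with the abstract's "initial concentration of 5 logs".
assert abs(math.log10(1.5e5) - 5.18) < 0.01
```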

  1. Shadow: Running Tor in a Box for Accurate and Efficient Experimentation

    DTIC Science & Technology

    2011-09-23

    Modeling the speed of a target CPU is done by running an OpenSSL [31] speed test on a real CPU of that type. This provides us with the raw CPU processing rate, but we are also interested in the processing speed of an application. By running application benchmarks on the same CPU as the OpenSSL speed test... simulation, saving CPU cycles on our simulation host machine. Shadow removes cryptographic processing by preloading the main OpenSSL [31] functions used...

  2. FreeSASA: An open source C library for solvent accessible surface area calculations.

    PubMed

    Mitternacht, Simon

    2016-01-01

    Calculating solvent accessible surface areas (SASA) is a run-of-the-mill calculation in structural biology. Although there are many programs available for this calculation, there are no free-standing, open-source tools designed for easy tool-chain integration. FreeSASA is an open source C library for SASA calculations that provides both command-line and Python interfaces in addition to its C API. The library implements both Lee and Richards' and Shrake and Rupley's approximations, and is highly configurable to allow the user to control molecular parameters, accuracy and output granularity. It only depends on standard C libraries and should therefore be easy to compile and install on any platform. The library is well-documented, stable and efficient. The command-line interface can easily replace closed source legacy programs, with comparable or better accuracy and speed, and with some added functionality.
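
    Shrake and Rupley's approximation, one of the two algorithms FreeSASA implements, is straightforward to sketch: scatter test points on each atom's solvent-expanded sphere and count the points not buried inside any neighbouring expanded sphere. The code below is a slow, stdlib-only reference sketch under assumed parameter conventions (a 1.4 Å water probe, 100 test points), not FreeSASA's optimised C implementation.

```python
import math

def sphere_points(n):
    # Approximately uniform points on the unit sphere (golden-spiral method).
    pts = []
    golden = math.pi * (3.0 - math.sqrt(5.0))
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(1.0 - y * y)
        pts.append((math.cos(golden * i) * r, y, math.sin(golden * i) * r))
    return pts

def shrake_rupley(atoms, probe=1.4, n_points=100):
    # SASA per atom: fraction of test points on the solvent-expanded
    # sphere lying inside no neighbour's expanded sphere, times the
    # expanded sphere's area. atoms: list of (x, y, z, vdw_radius).
    pts = sphere_points(n_points)
    areas = []
    for i, (xi, yi, zi, ri) in enumerate(atoms):
        r_exp = ri + probe
        exposed = 0
        for px, py, pz in pts:
            tx, ty, tz = xi + r_exp * px, yi + r_exp * py, zi + r_exp * pz
            buried = any(
                (tx - xj) ** 2 + (ty - yj) ** 2 + (tz - zj) ** 2
                < (rj + probe) ** 2
                for j, (xj, yj, zj, rj) in enumerate(atoms) if j != i)
            if not buried:
                exposed += 1
        areas.append(4.0 * math.pi * r_exp ** 2 * exposed / n_points)
    return areas

# A lone atom is fully exposed: its SASA is the full expanded-sphere area.
lone = shrake_rupley([(0.0, 0.0, 0.0, 1.7)])[0]
assert abs(lone - 4.0 * math.pi * (1.7 + 1.4) ** 2) < 1e-9
```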

  3. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    NASA Astrophysics Data System (ADS)

    Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.

    2011-12-01

    In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to any site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier1 site of WLCG. The test bed used and the results are presented in this paper.

  4. Integrated Health Care Barcelona Esquerra (Ais-Be): A Global View of Organisational Development, Re-Engineering of Processes and Improvement of the Information Systems. The Role of the Tertiary University Hospital in the Transformation

    PubMed Central

    Escarrabill, Joan; Gómez, Mónica; Ruiz, Rafael; Enfedaque, Belén; Altimiras, Xavier

    2016-01-01

    The Integrated Health Area “Barcelona Esquerra” (Área Integral de Salud de Barcelona Esquerra – AIS-BE), which covers a population of 524,000 residents in Barcelona city, is running a project to improve healthcare quality and efficiency based on co-ordination between the different suppliers in its area through the participation of their professionals. Endowed with an organisational model in which decision-taking starts out from clinical knowledge, and with Information Systems tools that facilitate this co-ordination (an interoperability platform and a website), the project presents important results in the structured programmes that have been implemented, such as the Reorganisation of Emergency Care, Screening for Colorectal Cancer, the Onset of Type 2 Diabetes Mellitus, Teledermatology and the Development of Cross-sectional Healthcare Policies for Care in Chronicity. PMID:27616964

  5. Integrated Health Care Barcelona Esquerra (Ais-Be): A Global View of Organisational Development, Re-Engineering of Processes and Improvement of the Information Systems. The Role of the Tertiary University Hospital in the Transformation.

    PubMed

    Font, David; Escarrabill, Joan; Gómez, Mónica; Ruiz, Rafael; Enfedaque, Belén; Altimiras, Xavier

    2016-05-23

    The Integrated Health Area "Barcelona Esquerra" (Área Integral de Salud de Barcelona Esquerra - AIS-BE), which covers a population of 524,000 residents in Barcelona city, is running a project to improve healthcare quality and efficiency based on co-ordination between the different suppliers in its area through the participation of their professionals. Endowed with an organisational model in which decision-taking starts out from clinical knowledge, and with Information Systems tools that facilitate this co-ordination (an interoperability platform and a website), the project presents important results in the structured programmes that have been implemented, such as the Reorganisation of Emergency Care, Screening for Colorectal Cancer, the Onset of Type 2 Diabetes Mellitus, Teledermatology and the Development of Cross-sectional Healthcare Policies for Care in Chronicity.

  6. A MAC Protocol for Medical Monitoring Applications of Wireless Body Area Networks

    PubMed Central

    Shu, Minglei; Yuan, Dongfeng; Zhang, Chongqing; Wang, Yinglong; Chen, Changfang

    2015-01-01

    Targeting the medical monitoring applications of wireless body area networks (WBANs), a hybrid medium access control protocol using an interrupt mechanism (I-MAC) is proposed to improve the energy and time slot utilization efficiency and to meet the data delivery delay requirement at the same time. Unlike existing hybrid MAC protocols, a superframe structure with a longer length is adopted to avoid unnecessary beacons. The time slots are mostly allocated to nodes with periodic data sources. Short interruption slots are inserted into the superframe to convey the urgent data and to guarantee the real-time requirements of these data. During these interruption slots, the coordinator can break the running superframe and start a new superframe. A contention access period (CAP) is only activated when there are more data that need to be delivered. Experimental results show the effectiveness of the proposed MAC protocol in WBANs with low urgent traffic. PMID:26046596

  7. Cascaded Ga1-xAlxAs/GaAs solar cell with graded i-region

    NASA Astrophysics Data System (ADS)

    Mil'shtein, Sam; Halilov, Samed

    2018-02-01

    In the current study we designed a p-i-n junction with an extended intrinsic layer, in which linearly graded AlxGa1-xAs provides the variable energy gap needed for effective harvesting of solar radiation. The design involves two regions of compositional structure in the stacking direction. The top AlxGa1-xAs layer, of 1 um total thickness, has the stoichiometric structure x = 0.3 - 0.2d, where the depth d runs from 0 to 1 um; the topmost 200 nm is Be-doped. The bottom AlxGa1-xAs layer, of 3 um total thickness, has a variable composition x = 0.133 - 0.033d, where d runs from 1 to 4 um; the very bottom 10 nm is Si-doped. On the top surface there is a 50 nm layer of p+ doped GaAs as a spacer for growing the AuGe/Ni anode electrode, which covers 20% of the surface area; the bottom is coated with an AuGe/Ni cathode electrode. The designed cell demonstrates an 89% fill factor and 30% conversion efficiency without an anti-reflection coating.
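
    The two graded-composition segments quoted above can be checked for continuity at d = 1 um and mapped to an energy gap with the commonly quoted Eg(x) ≈ 1.424 + 1.247x eV relation for direct-gap AlxGa1-xAs (valid for x < 0.45). That bowing relation is a literature assumption, not taken from this paper; the composition profiles are as stated in the abstract.

```python
def al_fraction(depth_um):
    # Al fraction x vs. depth for the graded stack in the abstract:
    # x = 0.3 - 0.2*d on the top 1 um, x = 0.133 - 0.033*d for d in [1, 4] um.
    if 0.0 <= depth_um <= 1.0:
        return 0.3 - 0.2 * depth_um
    if depth_um <= 4.0:
        return 0.133 - 0.033 * depth_um
    raise ValueError("outside the 4 um graded region")

def direct_gap_ev(x):
    # Room-temperature direct gap of AlxGa1-xAs for x < 0.45; the
    # 1.247 eV/x coefficient is a commonly quoted literature value.
    return 1.424 + 1.247 * x

# The two compositional segments meet (near-)continuously at d = 1 um...
assert abs(al_fraction(0.999) - al_fraction(1.0)) < 0.01
# ...and the gap narrows toward the bottom, grading the absorption edge.
assert direct_gap_ev(al_fraction(0.0)) > direct_gap_ev(al_fraction(4.0))
```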

  8. Simulation-Based Learning: The Learning-Forgetting-Relearning Process and Impact of Learning History

    ERIC Educational Resources Information Center

    Davidovitch, Lior; Parush, Avi; Shtub, Avy

    2008-01-01

    The results of empirical experiments evaluating the effectiveness and efficiency of the learning-forgetting-relearning process in a dynamic project management simulation environment are reported. Sixty-six graduate engineering students performed repetitive simulation-runs with a break period of several weeks between the runs. The students used a…

  9. Can Graduated Compressive Stockings Reduce Muscle Activity during Running?

    ERIC Educational Resources Information Center

    Lucas-Cuevas, Ángel Gabriel; Priego Quesada, José Ignacio; Giménez, José Vicente; Aparicio, Inmaculada; Cortell-Tormo, Juan Manuel; Pérez-Soriano, Pedro

    2017-01-01

    Purpose: Graduated compressive stockings (GCS) have been suggested to influence performance by reducing muscle oscillations and improving muscle function and efficiency. However, no study to date has analyzed the influence of GCS on muscle activity during running. The objective of the study was to analyze the influence of GCS on the perception of…

  10. Further Refinement of the LEWICE SLD Model

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    2006-01-01

    A research project is underway at NASA Glenn Research Center to produce computer software that can accurately predict ice growth for any meteorological conditions for any aircraft surface. This report will present results from version 3.2 of this software, which is called LEWICE. This version differs from previous releases in that it incorporates additional thermal analysis capabilities, a pneumatic boot model, interfaces to external computational fluid dynamics (CFD) flow solvers and has an empirical model for the supercooled large droplet (SLD) regime. An extensive comparison against the database of ice shapes and collection efficiencies that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. The complete set of data used for this comparison will eventually be available in a contractor report. This paper will show the differences in collection efficiency and ice shape between LEWICE 3.2 and experimental data. This report will first describe the LEWICE 3.2 SLD model. A semi-empirical approach was used to incorporate first order physical effects of large droplet phenomena into icing software. Comparisons are then made to every two-dimensional case in the water collection database and the ice shape database. Each collection efficiency condition was run using the following four assumptions: 1) potential flow, no splashing; 2) potential flow, with splashing; 3) Navier-Stokes, no splashing; 4) Navier-Stokes, with splashing. All cases were run with 21-bin drop size distributions and a lift correction (angle of attack adjustment). Quantitative comparisons are shown for impingement limit, maximum water catch, and total collection efficiency. Due to the large number of ice shape cases, comprehensive comparisons were limited to potential flow cases with and without splashing. Quantitative comparisons are shown for horn height, horn angle, icing limit, area, and leading edge thickness. 
The results show that the predicted results for both ice shape and water collection are within the accuracy limits of the experimental data for the majority of cases.

  11. Comparison of Varying Heel to Toe Differences and Cushion to Barefoot Running in Novice Minimalist Runners

    PubMed Central

    MOODY, DANNY; HUNTER, IAIN; RIDGE, SARAH; MYRER, J. WILLIAM

    2018-01-01

    There are many different types of footwear available to runners in today's market. Many of these shoes claim to help runners run more efficiently by altering an individual's stride mechanics. Minimalist footwear claims to encourage runners to land more on their forefeet, whereas more traditional footwear provides more cushioning, specifically for a heel-first landing. The purpose of this paper was to determine whether runners who were accustomed to running in traditional footwear would alter their running mechanics while acutely running in various types of minimalist footwear. Twelve subjects, accustomed to running in traditional 12 mm heel/toe differential footwear, ran in five footwear conditions on a treadmill at a controlled pace for two minutes after warming up in each condition for 5 minutes. While running in 12 mm heel/toe differential footwear compared to barefoot, subjects ran with a significantly longer ground time, a lower stride rate, and greater vertical oscillation. There were no differences in these variables among the shod conditions despite the varying heel/toe differentials. Running barefoot proved to be different from running in traditional 12 mm drop cushioned footwear. PMID:29795721

  13. Simultaneous determination of phenylethanoid glycosides and aglycones by capillary zone electrophoresis with running buffer modifier.

    PubMed

    Dong, Shuqing; Gao, Ruibin; Yang, Yan; Guo, Mei; Ni, Jingman; Zhao, Liang

    2014-03-15

    Although the separation efficiency of capillary electrophoresis (CE) is much higher than that of other chromatographic methods, it is sometimes difficult to adequately separate the complex ingredients in biological samples. This article describes how one effective and simple way to improve separation efficiency in CE is to add modifiers to the running buffer. β-Cyclodextrin (β-CD) was explored as a running buffer modifier to rapidly and completely separate four phenylethanoid glycosides and aglycones (homovanillyl alcohol, hydroxytyrosol, 3,4-dimethoxycinnamic acid, and caffeic acid) in Lamiophlomis rotata (Lr) and Cistanche by capillary zone electrophoresis with ultraviolet (UV) detection. It was found that when β-CD was used as the running buffer modifier, a baseline separation of the four analytes could be accomplished in less than 20 min, with detection limits as low as 10⁻³ mg L⁻¹. Other factors affecting the CE separation, such as working potential, pH value and ionic strength of the running buffer, separation voltage, and sample injection time, were investigated extensively. Under the optimal conditions, a successful practical application to the determination of Lr and Cistanche samples confirmed the validity and practicability of this method. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. 76 FR 3524 - Final Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-20

    ... proof Flood Insurance Study and FIRM available at the address cited below for each community. The BFEs...-Long Run Road. Duck Run (backwater effects from Scioto Approximately 547 feet +535 Unincorporated Areas of River). upstream of Duck Run- Scioto County. Otway Road. Just downstream of +535 McDermott Pond...

  15. Spatial analysis of soil erosion and sediment fluxes: a paired watershed study of two Rappahannock River tributaries, Stafford County, Virginia.

    PubMed

    Ricker, Matthew C; Odhiambo, Ben K; Church, Joseph M

    2008-05-01

    Soil erosion is a serious problem in areas with expanding construction, agricultural production, and improper storm water management. It is important to understand the major processes affecting sediment delivery to surficial water bodies in order to tailor effective mitigation and outreach activities. This study analyzes how naturally occurring and anthropogenic influences, such as urbanization and soil disturbance on steep slopes, are reflected in the amount of soil erosion and sediment delivery within sub-watershed-sized areas. In this study, two sub-watersheds of the Rappahannock River, Horsepen Run and Little Falls Run, were analyzed using the Revised Universal Soil Loss Equation (RUSLE) and a sediment delivery ratio (SDR) to estimate annual sediment flux rates. The RUSLE/SDR analyses for Horsepen Run and Little Falls Run predicted 298 Mg/y and 234 Mg/y, respectively, but nearly identical per-unit-area sediment flux rates of 0.15 Mg/ha/y and 0.18 Mg/ha/y. Suspended sediment sampling indicated greater amounts of sediment in Little Falls Run, which is most likely due to anthropogenic influences. Field analyses also suggest that all-terrain vehicle crossings represent the majority of sediment flux derived from forested areas of Horsepen Run. The combined RUSLE/SDR and field sampling data indicate that small-scale anthropogenic disturbances (ATV trails and construction sites) play a major role in overall sediment flux rates for both basins and that these sites must be properly accounted for when evaluating sediment flux rates at a sub-watershed scale.
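
    The RUSLE/SDR workflow described above can be sketched in a few lines. All factor values below are hypothetical placeholders for illustration, not the study's GIS-derived inputs.

```python
# A minimal sketch of a RUSLE/SDR sediment-flux estimate. Factor values
# are hypothetical, not the study's GIS-derived inputs.

def rusle_gross_erosion(R, K, LS, C, P):
    """Annual gross soil loss A (Mg/ha/y): A = R * K * LS * C * P."""
    return R * K * LS * C * P

def sediment_flux(A, sdr, area_ha):
    """Watershed sediment flux (Mg/y): gross erosion reduced by a
    sediment delivery ratio (SDR), summed over the contributing area."""
    return A * sdr * area_ha

# Illustrative numbers for a small forested sub-watershed
A = rusle_gross_erosion(R=300, K=0.03, LS=1.2, C=0.05, P=1.0)  # Mg/ha/y
annual_flux = sediment_flux(A, sdr=0.3, area_ha=1600)          # Mg/y
per_unit_area = annual_flux / 1600                             # Mg/ha/y delivered
```

    Dividing the watershed-scale flux by drainage area, as in the last line, is what makes the two basins' very different totals comparable on a per-hectare basis.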

  16. Electrochemical disinfection of repeatedly recycled blackwater in a free-standing, additive-free toilet.

    PubMed

    Hawkins, Brian T; Sellgren, Katelyn L; Klem, Ethan J D; Piascik, Jeffrey R; Stoner, Brian R

    2017-11-01

    Decentralized, energy-efficient wastewater treatment technologies enabling water reuse are needed to sustainably address sanitation needs in water- and energy-scarce environments. Here, we describe the effects of repeated recycling of disinfected blackwater (as flush liquid) on the energy required to achieve full disinfection with an electrochemical process in a prototype toilet system. The recycled liquid rapidly reached a steady state with total solids reliably ranging between 0.50 and 0.65% and conductivity between 20 and 23 mS/cm through many flush cycles over 15 weeks. The increase in accumulated solids was associated with increased energy demand and wide variation in the free chlorine contact time required to achieve complete disinfection. Further studies on the system at steady state revealed that running at higher voltage modestly improves energy efficiency, and established running parameters that reliably achieve disinfection at fixed run times. These results will guide prototype testing in the field.
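
    Two simple quantities underlie results like these: the free-chlorine exposure (concentration times contact time) and the electrical energy of a treatment run at fixed run time. A minimal sketch with illustrative numbers, not the prototype's measurements:

```python
# Sketch of the disinfection-exposure and energy bookkeeping implied by the
# abstract. All numbers are illustrative assumptions, not measured values.

def ct_value(free_chlorine_mg_per_L, contact_time_min):
    """Disinfection exposure ("CT") in mg·min/L."""
    return free_chlorine_mg_per_L * contact_time_min

def energy_per_run_Wh(voltage_V, current_A, run_time_min):
    """Electrical energy of one electrochemical treatment run."""
    return voltage_V * current_A * run_time_min / 60.0

ct = ct_value(free_chlorine_mg_per_L=10.0, contact_time_min=30.0)
energy = energy_per_run_Wh(voltage_V=3.0, current_A=5.0, run_time_min=30.0)
```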

  17. Digital Camera Control for Faster Inspection

    NASA Technical Reports Server (NTRS)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  18. Risk-Based Remediation Approach for Cs-137 Contaminated Sediment/Soils at the Savannah River Site (SRS) Lower Three Runs Tail (U) - 13348 - SRNS-RP-2012-00546

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Candice; Bergren, Christopher; Blas, Susan

    Lower Three Runs is a large blackwater stream that runs through the eastern and southern portion of the Savannah River Site. The Lower Three Runs watershed includes two SRS facility areas: P Area (P Reactor) and R Area (R Reactor) that provided effluent discharges to Lower Three Runs. During reactor operations, effluent discharges were well above natural (pre-industrial) or present day stream discharges. The watershed contains a 2,500-acre mainstream impoundment (PAR Pond), several smaller pre-cooler ponds, and a canal system that connects the pre-cooler ponds and discharges surface water to PAR Pond. From the PAR Pond dam, Lower Three Runs flows approximately 36 kilometers braiding through bottom-land/flood-plain forests before it enters the Savannah River. About eight kilometers downstream from the PAR Pond dam, the SRS boundary narrows (termed the Lower Three Runs tail) providing a limited buffer of DOE property for the Lower Three Runs stream and associated flood-plain. Previous screening characterization efforts revealed Cs-137 contamination in the sediment/soils of the flood-plain. As a part of the American Recovery and Reinvestment Act stimulus package, a comprehensive characterization effort was executed on the sediment/soils of the Lower Three Runs tail flood-plain providing a comprehensive look at the contaminant signature of the area. As a follow-up to that characterization, a regulatory decision Core Team, comprised of members of the South Carolina Department of Health and Environmental Control, Environmental Protection Agency - Region IV, and DOE, conducted negotiations on a risk-based approach to address the level of contamination found in the tail flood-plain as an early action that provided a long-term solution to exposure scenarios.
For evaluation purposes, the adolescent trespasser was selected as the most likely human receptor for the Lower Three Runs tail portion because of the natural attractiveness of the area for recreational activities (i.e., hunting, fishing, hiking, etc.) and access from public property. Exposure of the adolescent trespasser to Cs-137 contaminated sediment/soil at concentrations greater than 23.7 picocuries per gram has been calculated to result in an unacceptable cancer risk (> 1 × 10⁻⁴). Comparison of the 2009 characterization sampling results with the benchmark concentration of 23.7 pCi/g identified elevated risk levels along three sampling areas in the Lower Three Runs tail portion. On January 5, 2012, the Core Team agreed that a Removal Action in the Lower Three Runs tail was to be conducted for the identified soil/sediment locations in the three identified areas that exceed the 1 × 10⁻⁴ risk (23.7 pCi/g) for the adolescent trespasser receptor. The addition of Land Use Controls following the Removal Action was appropriate to protect human health and the environment. A systematic screening matrix was initiated at the identified hot spots (i.e., sampling points with Cs-137 activities greater than 23.7 pCi/g) to identify the limits of the excavation area. Sediment/soil within the defined removal areas would be excavated to the depth necessary to achieve the cleanup goal and disposed of in a CERCLA Off-Site Rule approved disposal facility. It was agreed that this removal action would adequately reduce the volume of available Cs-137 in the Lower Three Runs tail; consequently, residual Cs-137 would decay over time, reducing the amount of Cs-137 available in the tail and curtailing risk. The Land Use Controls consist of installation of an additional seven miles of fencing at major road crossings, utility easements, and areas that showed a higher probability of access.
In addition, signs were placed along the entire SRS perimeter of the Lower Three Runs tail approximately every 200 feet. Sign posts included both a No Trespassing sign and a Contaminant Warning sign. The project initiated a subcontract for both the removal action and the installation of fencing and signs on May 1, 2012. All field activities were completed by July 26, 2012. The project excavated and disposed of over 2,700 cubic yards of contaminated sediment/soil, erected approximately seven miles of fence, and placed over 2,000 signs, demonstrating DOE's commitment to protect human health and act as a good neighbor to residents in the area. (authors)

  19. The Run-Up of Subduction Zones

    NASA Astrophysics Data System (ADS)

    Riquelme, S.; Bravo, F. J.; Fuentes, M.; Matias, M.; Medina, M.

    2016-12-01

    Large earthquakes in subduction zones can produce tsunamis that cause destruction and fatalities. Run-up is a geophysical parameter that quantifies the extent of inundation and whether critical facilities or populations are exposed. Here we use the coupling of certain subduction regions, measured by different techniques (potency and GPS observations), to define areas where large earthquakes can occur. Taking the Slab 1.0 model from the United States Geological Survey (USGS), we define the geometry of each area, including its tsunamigenic potential. Using stochastic earthquake sources for each area at its maximum tsunamigenic potential, we calculate the numerical and analytical run-up for each case. We then perform a statistical analysis and calculate the envelope for both methods. Furthermore, we build a risk index using the slope closest to the shore in a piecewise linear approach (the last-slope criterion) and the outputs from tsunami modeling. Results show that some areas are prone to higher run-up than others, depending on the size of the earthquake, geometrical constraints of the source, the tectonic setting, and the last coastal slope. Based on these results, zones with a low risk index can be identified to define escape routes or secure coastal areas for tsunami early warning and for urban planning purposes when detailed data are available.
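
    A risk index of the kind described combines a modeled run-up envelope with the last coastal slope. The sketch below is a hypothetical formulation for illustration only; the weighting and normalization are assumptions, not the authors' actual index.

```python
# Hypothetical sketch of a tsunami risk index combining a modeled run-up
# envelope with the "last slope" of the coast. Normalization and weights
# are illustrative assumptions, not the paper's formulation.

def risk_index(runup_envelope_m, last_slope, max_runup_m=30.0):
    """Return a 0-1 index: higher modeled run-up and a gentler final
    coastal slope (which lets water travel further inland) raise risk."""
    runup_term = min(runup_envelope_m / max_runup_m, 1.0)
    slope_term = 1.0 / (1.0 + last_slope * 100)  # gentle slope -> near 1
    return runup_term * slope_term

steep_coast = risk_index(runup_envelope_m=10.0, last_slope=0.10)   # 10% slope
gentle_coast = risk_index(runup_envelope_m=10.0, last_slope=0.01)  # 1% slope
```

    For the same modeled run-up, the gently sloping coast scores higher, matching the intuition that a flat shore exposes more area to inundation.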

  20. Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.

    PubMed

    Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A

    2017-04-01

    Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites. However, these methods require a comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified, but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 Cytochrome P450-mediated metabolites by using liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for the human plasma, and it entailed a single method for sample preparation, enabling quick processing of the samples followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% for the lower limit of quantification and <14.3% for the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single and simple method for the sample preparation followed by an LC-MS method with a short run time. Therefore, this analytical method provides a useful method for both clinical and research purposes.
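
    The within- and between-run imprecision figures quoted above are conventionally expressed as a coefficient of variation over replicate QC measurements. A minimal sketch with hypothetical replicate values, not the study's data:

```python
# Within-run imprecision check as used in assay validation: CV% of
# replicate QC measurements. The replicate values are hypothetical.
import statistics

def cv_percent(values):
    """CV% = 100 * sample SD / mean of replicate measurements."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

lloq_replicates = [9.8, 10.3, 9.5, 10.6, 10.1]  # hypothetical, ng/mL
assert cv_percent(lloq_replicates) < 20  # LLOQ acceptance criterion: <20%
```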

  1. Space suit bioenergetics: framework and analysis of unsuited and suited activity.

    PubMed

    Carr, Christopher E; Newman, Dava J

    2007-11-01

    Metabolic costs limit the duration and intensity of extravehicular activity (EVA), an essential component of future human missions to the Moon and Mars. Energetics Framework: We present a framework for comparison of energetics data across and between studies. This framework, applied to locomotion, differentiates between muscle efficiency and energy recovery, two concepts often confused in the literature. The human run-walk transition in Earth gravity occurs at the point for which energy recovery is approximately the same for walking and running, suggesting a possible role for recovery in gait transitions. Muscular Energetics: Muscle physiology limits the overall efficiency by which chemical energy is converted through metabolism to useful work. Unsuited Locomotion: Walking and running use different methods of energy storage and release. These differences contribute to the relative changes in the metabolic cost of walking and running as gravity is varied, with the metabolic cost of locomoting at a given velocity changing in proportion to gravity for running and less than in proportion for walking. Space Suits: Major factors affecting the energetic cost of suited movement include suit pressurization, gravity, velocity, surface slope, and space suit configuration. Apollo lunar surface EVA traverse metabolic rates, while unexpectedly low, were higher than those of other activity categories. The Lunar Roving Vehicle facilitated even lower metabolic rates, and thus longer-duration EVAs. Muscles and tendons act like springs during running; similarly, longitudinal pressure forces in gas pressure space suits allow spring-like storage and release of energy when suits are self-supporting.
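
    The energy recovery the framework distinguishes from muscle efficiency is commonly quantified (following Cavagna-style analyses) as the fraction of kinetic and potential energy exchanged within a stride rather than supplied anew by muscle. A minimal sketch with hypothetical per-stride work values:

```python
# Percentage energy recovery, the quantity the framework separates from
# muscle efficiency. Per-stride work values (J) are hypothetical.

def percent_recovery(w_forward, w_vertical, w_external):
    """R = 100 * (Wf + Wv - Wext) / (Wf + Wv), where Wext is the external
    work actually performed and Wf/Wv are summed forward and vertical work.
    High R means pendulum-like exchange; low R means little exchange."""
    return 100 * (w_forward + w_vertical - w_external) / (w_forward + w_vertical)

walking = percent_recovery(w_forward=30.0, w_vertical=25.0, w_external=22.0)
running = percent_recovery(w_forward=30.0, w_vertical=25.0, w_external=50.0)
```

    With these illustrative numbers walking recovers far more energy than running, consistent with walking's pendulum-like exchange versus running's spring-like storage in muscle and tendon.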

  2. Emulation for probabilistic weather forecasting

    NASA Astrophysics Data System (ADS)

    Cornford, Dan; Barillec, Remi

    2010-05-01

    Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. 
We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs', which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space; rather, it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods, sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data-assimilation-like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
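
    The core of the approach is Gaussian-process regression on simulator runs. The sketch below is a minimal illustration: the kernel hyperparameters are fixed for brevity (the abstract infers them by maximum likelihood or Bayesian methods), and the "simulator" is a toy stand-in function.

```python
# Minimal Gaussian-process emulator: fit a GP with a squared-exponential
# covariance to a few "simulator runs", then predict cheaply at new inputs.
# Hyperparameters are fixed here; in practice they are inferred from data.
import numpy as np

def sq_exp_kernel(xa, xb, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = xa[:, None] - xb[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean of the GP at x_test (interpolates the training runs)."""
    K = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = sq_exp_kernel(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

# Design points where the "expensive simulator" was actually run
x_design = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_design = np.sin(x_design)  # toy stand-in for simulator output

mean = gp_predict(x_design, y_design, np.array([1.5]))
```

    Once trained, calls to `gp_predict` cost a small linear solve instead of a full simulator run, which is what makes Monte Carlo uncertainty analysis through the emulator affordable.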

  3. Bridging the scales in atmospheric composition simulations using a nudging technique

    NASA Astrophysics Data System (ADS)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires describing processes across a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is of interest. Describing all processes at all scales within the same numerical implementation is not feasible with limited computer resources; therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields, remapped onto the coarse grid, are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas are computed for O3 and PM. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields.
Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of runs A and B. Propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between the coarse-resolution simulations (with and without nudging) and the fine-resolution simulation.
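
    The nudging step itself is Newtonian relaxation toward the remapped high-resolution field inside a masked region. The sketch below uses an illustrative grid, mask, relaxation timescale, and values; BOLCHEM's actual implementation details are not shown.

```python
# Sketch of a nudging (Newtonian relaxation) step: inside the nudging
# region, the coarse-grid field is relaxed toward the remapped
# high-resolution field with timescale tau. All values are illustrative.

def nudge(coarse, target, mask, dt, tau):
    """One step: c += (dt / tau) * (target - c) wherever mask is True."""
    return [
        c + (dt / tau) * (t - c) if m else c
        for c, t, m in zip(coarse, target, mask)
    ]

coarse = [40.0, 50.0, 60.0]   # e.g. O3 on three coarse-grid cells
target = [40.0, 70.0, 60.0]   # remapped high-resolution field
mask   = [False, True, False] # only the middle ("Po Valley") cell is nudged

stepped = nudge(coarse, target, mask, dt=600.0, tau=3600.0)
# Only the masked cell moves: 50 + (600/3600) * (70 - 50) = 53.33...
```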

  4. 40 CFR 63.4362 - How do I determine the add-on control device emission destruction or removal efficiency?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... applicable, during each test run. (b) Measure the volatile organic matter concentration as carbon at the... limit, only the outlet volatile organic matter concentration must be determined. The outlet volatile organic matter concentration is determined as the average of the three test runs. (1) Use Method 25 if the...

  5. 40 CFR 63.4362 - How do I determine the add-on control device emission destruction or removal efficiency?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... applicable, during each test run. (b) Measure the volatile organic matter concentration as carbon at the... limit, only the outlet volatile organic matter concentration must be determined. The outlet volatile organic matter concentration is determined as the average of the three test runs. (1) Use Method 25 if the...

  6. 40 CFR 63.4362 - How do I determine the add-on control device emission destruction or removal efficiency?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... applicable, during each test run. (b) Measure the volatile organic matter concentration as carbon at the... limit, only the outlet volatile organic matter concentration must be determined. The outlet volatile organic matter concentration is determined as the average of the three test runs. (1) Use Method 25 if the...

  7. 40 CFR 63.4362 - How do I determine the add-on control device emission destruction or removal efficiency?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... applicable, during each test run. (b) Measure the volatile organic matter concentration as carbon at the... limit, only the outlet volatile organic matter concentration must be determined. The outlet volatile organic matter concentration is determined as the average of the three test runs. (1) Use Method 25 if the...

  8. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    NASA Technical Reports Server (NTRS)

    Springer, P.

    1993-01-01

    This paper discusses the method by which the Cascade-Correlation algorithm was parallelized so that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.

  9. Adapting NEMO for use as the UK operational storm surge forecasting model

    NASA Astrophysics Data System (ADS)

    Furner, Rachel; Williams, Jane; Horsburgh, Kevin; Saulter, Andrew

    2016-04-01

    The United Kingdom is an area vulnerable to damage from storm surges, particularly the East Coast, which suffered losses estimated at over £1 billion during the North Sea surge event of the 5th and 6th of December 2013. Accurate forecasting of storm surge events for this region is crucial to enable government agencies to assess the risk of overtopping of coastal defences so they can respond appropriately, minimising risk to life and infrastructure. There has been an operational storm surge forecast service for this region since 1978, using a numerical model developed by the National Oceanography Centre (NOC) and run at the UK Met Office. This is also implemented as part of an ensemble prediction system, using perturbed atmospheric forcing to produce an ensemble surge forecast. In order to ensure efficient use of future supercomputer developments and to create synergy with existing operational coastal ocean models, the Met Office and NOC have begun a joint project transitioning the storm surge forecast system from the current CS3X code base to a configuration based on the Nucleus for European Modelling of the Ocean (NEMO). This work involves both adapting NEMO to add functionality, such as allowing the drying out of ocean cells, and changes allowing NEMO to run efficiently as a two-dimensional, barotropic model. As the ensemble surge forecast system is run with 12 members 4 times a day, computational efficiency is of high importance. Upon completion this project will enable interesting scientific comparisons to be made between a NEMO-based surge model and the full three-dimensional baroclinic NEMO-based models currently run within the Met Office, facilitating assessment of the impact of baroclinic processes and vertical resolution on sea surface height forecasts.
Moving to a NEMO code base will also allow many future developments to be more easily used within the storm surge model, owing to the wide range of options that currently exist within NEMO or are planned for future NEMO releases, such as data assimilation and surge-wave coupling. Assessment of the tidal performance of the NEMO-surge configuration and comparison to the existing operational CS3X model has been carried out. Evaluation of the models focuses on performance relative to the UK Class A tide gauge network, a dataset that was established following the devastating flood of 1953 and is managed by the British Oceanographic Data Centre (BODC) based at NOC. Trials of the NEMO model in tide-only mode have illustrated the importance of a well-specified bathymetry and, for the 7 km scale model, a secondary sensitivity to the bed friction coefficient and the specification of the coastline. Preliminary results will also be presented from model runs with atmospheric (wind stress and pressure at mean sea level) forcing.

  10. 1. "X15 RUN UP AREA 230." A somewhat blurred, very ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. "X-15 RUN UP AREA 230." A somewhat blurred, very low altitude low oblique view to the northwest. This view predates construction of observation bunkers. Photo no. "14,696 58 A-AFFTC 17 NOV 58." - Edwards Air Force Base, X-15 Engine Test Complex, Rogers Dry Lake, east of runway between North Base & South Base, Boron, Kern County, CA

  11. The effects of running in place in a limited area with abdominal drawing-in maneuvers on abdominal muscle thickness in chronic low back pain patients.

    PubMed

    Gong, Wontae

    2016-11-21

    Previous studies indicate that core stabilization exercises accompanied by abdominal drawing-in maneuvers increase the thickness of the transversus abdominis muscle. The purpose of this study was to measure abdominal muscle thickness after training consisting of running in place in a limited area combined with the abdominal drawing-in maneuver. The study divided subjects into two groups: a training group (M = 2, F = 13) and a control group (M = 2, F = 13). The training group performed three sets of running in place in a limited area with abdominal drawing-in maneuvers, three times a week for six weeks. Abdominal muscle thicknesses were measured using ultrasonography. Comparing the training group's abdominal muscle thickness before and after the study, statistically significant increases were found in all of the external obliquus abdominis, internal obliquus abdominis, and transversus abdominis; the thicknesses of the external and internal obliquus abdominis increased most markedly. Running in place in a limited area accompanied by abdominal drawing-in maneuvers increased the thickness of the deep abdominal muscles that are the basis of trunk stabilization.

  12. National Dam Inspection Program. Laurel Run Dam. NDI ID Number PA-00380. DER ID Number 35-6, Pennsylvania Gas and Water Company. Susquehanna River Basin, Laurel Run, Lackawanna County, Pennsylvania Phase I Inspection Report,

    DTIC Science & Technology

    1980-04-01

    Design and Construction History: Laurel Run Dam was constructed in 1594 by Martin Cawley, a contractor from Archbald.

  13. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is applied to large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
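    The core of such a scheme can be illustrated in one dimension. The sketch below is a minimal structured analogue, not the paper's actual model: a first-order Godunov-type update with a Rusanov flux and a simple depth-threshold wet/dry fix. The threshold value and the dam-break test case are assumptions chosen for illustration.

```python
import numpy as np

g = 9.81
DRY = 1e-6  # wet/dry depth threshold (assumed value)

def rusanov_flux(hL, huL, hR, huR):
    """Rusanov (local Lax-Friedrichs) flux for the 1-D shallow water equations."""
    def phys(h, hu):
        u = hu / h if h > DRY else 0.0
        return np.array([hu, hu * u + 0.5 * g * h * h])
    uL = huL / hL if hL > DRY else 0.0
    uR = huR / hR if hR > DRY else 0.0
    smax = max(abs(uL) + np.sqrt(g * max(hL, 0.0)),
               abs(uR) + np.sqrt(g * max(hR, 0.0)))
    qL = np.array([hL, huL])
    qR = np.array([hR, huR])
    return 0.5 * (phys(hL, huL) + phys(hR, huR)) - 0.5 * smax * (qR - qL)

def step(h, hu, dx, dt):
    """One explicit Godunov-type update with a simple wet/dry fix."""
    hn, hun = h.copy(), hu.copy()
    for i in range(len(h) - 1):                     # loop over cell interfaces
        F = rusanov_flux(h[i], hu[i], h[i + 1], hu[i + 1])
        hn[i] -= dt / dx * F[0]
        hun[i] -= dt / dx * F[1]
        hn[i + 1] += dt / dx * F[0]
        hun[i + 1] += dt / dx * F[1]
    dry = hn < DRY                                  # zero (near-)dry cells for stability
    hn[dry] = 0.0
    hun[dry] = 0.0
    return hn, hun

# dam break over a partly dry bed: water in the left half only
h = np.where(np.arange(100) < 50, 1.0, 0.0)
hu = np.zeros(100)
for _ in range(40):
    h, hu = step(h, hu, dx=0.1, dt=0.005)
```

    After 40 steps the flood wave has advanced into the initially dry half while depths stay non-negative, which is the behavior the wet/dry treatment is meant to guarantee.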

  14. Quantum partial search for uneven distribution of multiple target items

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Korepin, Vladimir

    2018-06-01

    The quantum partial search algorithm is an approximate search: it aims to find a target block (one that contains target items) and runs a little faster than a full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks containing different numbers of target items). The algorithm we describe can locate one of the target blocks. The efficiency of the algorithm is measured by the number of queries to the oracle, and we optimize the algorithm to improve this efficiency. By a perturbation method, we find that the algorithm runs fastest when the target items are evenly distributed in the database.
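    For reference, the full Grover iteration that partial search builds on can be simulated directly with a classical state vector. The sketch below is a plain multi-target Grover search, not the partial-search variant; the database size, target positions, and query count are illustrative choices.

```python
import numpy as np

def grover_success_prob(N, targets, iters):
    """State-vector simulation of Grover's algorithm; returns the probability
    of measuring any of the target items after `iters` oracle queries."""
    psi = np.full(N, 1.0 / np.sqrt(N))   # start in the uniform superposition
    mask = np.zeros(N, dtype=bool)
    mask[targets] = True
    for _ in range(iters):
        psi[mask] *= -1.0                # oracle: flip the sign of target amplitudes
        psi = 2.0 * psi.mean() - psi     # diffusion: inversion about the mean
    return float(np.sum(psi[mask] ** 2))

N, k = 1024, 4                                    # database size, number of targets
opt = int(np.round(np.pi / 4.0 * np.sqrt(N / k))) # ~optimal number of queries
p = grover_success_prob(N, targets=[3, 100, 200, 999], iters=opt)
```

    With N = 1024 and k = 4 the optimal query count is about 13, and the success probability after those iterations is close to 1, matching the sin²((2j+1)θ) amplification formula.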

  15. A numerical study on combustion process in a small compression ignition engine run dual-fuel mode (diesel-biogas)

    NASA Astrophysics Data System (ADS)

    Ambarita, H.; Widodo, T. I.; Nasution, D. M.

    2017-01-01

    In order to reduce the fossil fuel consumption of compression ignition (CI) engines, which are widely used in transportation and heavy machinery, such engines can be operated in dual-fuel mode (diesel-biogas). However, the literature shows that the thermal efficiency is then lower due to incomplete combustion. In order to increase the efficiency, the combustion process in the combustion chamber needs to be explored. Here, a commercial CFD code is used to explore the combustion process of a small CI engine run in dual-fuel mode (diesel-biogas). The turbulent governing equations are solved using the finite volume method. A simulation of the compression and expansion strokes at an engine speed of 1000 rpm and a load of 2500 W has been carried out. Pressure and temperature distributions and streamlines are plotted. The simulation results show that at an engine power of 732.27 W the thermal efficiency is 9.05%. The experimental and simulation results show good agreement. The method developed in this study can be used to investigate the combustion process of CI engines run in dual-fuel mode.
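    The brake thermal efficiency quoted above is brake power divided by the chemical energy rate supplied by both fuels. The sketch below illustrates only the form of that calculation; the heating values are typical textbook figures and the fuel mass-flow rates are assumed round numbers, not the paper's measured data.

```python
# Illustrative fuel properties (typical textbook values, not measured data)
LHV_DIESEL = 42.5e6   # J/kg, lower heating value of diesel
LHV_BIOGAS = 17.0e6   # J/kg, lower heating value of ~60% CH4 biogas

def brake_thermal_efficiency(brake_power_w, mdot_diesel_kgs, mdot_biogas_kgs):
    """Brake power divided by the total chemical energy rate of both fuels."""
    fuel_energy_rate = mdot_diesel_kgs * LHV_DIESEL + mdot_biogas_kgs * LHV_BIOGAS
    return brake_power_w / fuel_energy_rate

# assumed flow rates chosen only to illustrate the calculation
eta = brake_thermal_efficiency(732.27, mdot_diesel_kgs=8.0e-5, mdot_biogas_kgs=2.8e-4)
```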

  16. Design of an ultraprecision computerized numerical control chemical mechanical polishing machine and its implementation

    NASA Astrophysics Data System (ADS)

    Zhang, Chupeng; Zhao, Huiying; Zhu, Xueliang; Zhao, Shijie; Jiang, Chunye

    2018-01-01

    The chemical mechanical polishing (CMP) is a key process during the machining route of plane optics. To improve the polishing efficiency and accuracy, a CMP model and machine tool were developed. Based on the Preston equation and the axial run-out error measurement results of the m circles on the tin plate, a CMP model that could simulate the material removal at any point on the workpiece was presented. An analysis of the model indicated that lower axial run-out error led to lower material removal but better polishing efficiency and accuracy. Based on this conclusion, the CMP machine was designed, and the ultraprecision gas hydrostatic guideway and rotary table as well as the Siemens 840Dsl numerical control system were incorporated in the CMP machine. To verify the design principles of the machine, a series of detection and machining experiments were conducted. The LK-G5000 laser sensor was employed for detecting the straightness error of the gas hydrostatic guideway and the axial run-out error of the gas hydrostatic rotary table. A 300-mm-diameter optic was chosen for the surface profile machining experiments performed to determine the CMP efficiency and accuracy.
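    The Preston equation underlying the model is dh/dt = K_p · p · v. A minimal sketch of how axial run-out can be folded into it as a contact-pressure modulation follows; the coefficient, pressure, and modulation model are illustrative assumptions, not the paper's calibrated values.

```python
K_P = 1.0e-13   # Preston coefficient, m^2/N (assumed)
P0 = 1.0e4      # nominal contact pressure, Pa (assumed)

def preston_rate(pressure, speed):
    """Preston equation: material removal rate dh/dt = K_p * p * v."""
    return K_P * pressure * speed

def removal_band(r, omega, runout_frac):
    """Removal-rate extremes at radius r when axial run-out modulates the
    contact pressure by +/- runout_frac (a simple modeling assumption)."""
    v = omega * r                        # relative pad speed at radius r
    lo = preston_rate(P0 * (1.0 - runout_frac), v)
    hi = preston_rate(P0 * (1.0 + runout_frac), v)
    return lo, hi

# a smaller axial run-out gives a narrower removal band, i.e. more uniform polishing
lo1, hi1 = removal_band(r=0.10, omega=10.0, runout_frac=0.02)
lo2, hi2 = removal_band(r=0.10, omega=10.0, runout_frac=0.20)
```

    The width of the removal band scales directly with the run-out fraction, which is consistent with the paper's conclusion that lower axial run-out error improves polishing accuracy.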

  17. On the feasibility of monitoring carbon monoxide in the lower troposphere from a constellation of northern hemisphere geostationary satellites: Global scale assimilation experiments (Part II)

    NASA Astrophysics Data System (ADS)

    Barré, Jérôme; Edwards, David; Worden, Helen; Arellano, Avelino; Gaubert, Benjamin; Da Silva, Arlindo; Lahoz, William; Anderson, Jeffrey

    2016-09-01

    This paper describes the second phase of an Observing System Simulation Experiment (OSSE) that utilizes the synthetic measurements from a constellation of satellites measuring atmospheric composition from geostationary (GEO) Earth orbit presented in part I of the study. Our OSSE is focused on carbon monoxide observations over North America, East Asia and Europe where most of the anthropogenic sources are located. Here we assess the impact of a potential GEO constellation on constraining northern hemisphere (NH) carbon monoxide (CO) using data assimilation. We show how cloud cover affects the GEO constellation data density with the largest cloud cover (i.e., lowest data density) occurring during Asian summer. We compare the modeled state of the atmosphere (Control Run), before CO data assimilation, with the known "true" state of the atmosphere (Nature Run) and show that our setup provides realistic atmospheric CO fields and emission budgets. Overall, the Control Run underestimates CO concentrations in the northern hemisphere, especially in areas close to CO sources. Assimilation experiments show that constraining CO close to the main anthropogenic sources significantly reduces errors in NH CO compared to the Control Run. We assess the changes in error reduction when only single satellite instruments are available as compared to the full constellation. We find large differences in how measurements for each continental scale observation system affect the hemispherical improvement in long-range transport patterns, especially due to seasonal cloud cover. A GEO constellation will provide the most efficient constraint on NH CO during winter when CO lifetime is longer and increments from data assimilation associated with source regions are advected further around the globe.

  18. On the Feasibility of Monitoring Carbon Monoxide in the Lower Troposphere from a Constellation of Northern Hemisphere Geostationary Satellites: Global Scale Assimilation Experiments (Part II)

    NASA Technical Reports Server (NTRS)

    Barre, Jerome; Edwards, David; Worden, Helen; Arellano, Avelino; Gaubert, Benjamin; Da Silva, Arlindo; Lahoz, William; Anderson, Jeffrey

    2016-01-01

    This paper describes the second phase of an Observing System Simulation Experiment (OSSE) that utilizes the synthetic measurements from a constellation of satellites measuring atmospheric composition from geostationary (GEO) Earth orbit presented in part I of the study. Our OSSE is focused on carbon monoxide observations over North America, East Asia and Europe where most of the anthropogenic sources are located. Here we assess the impact of a potential GEO constellation on constraining northern hemisphere (NH) carbon monoxide (CO) using data assimilation. We show how cloud cover affects the GEO constellation data density with the largest cloud cover (i.e., lowest data density) occurring during Asian summer. We compare the modeled state of the atmosphere (Control Run), before CO data assimilation, with the known 'true' state of the atmosphere (Nature Run) and show that our setup provides realistic atmospheric CO fields and emission budgets. Overall, the Control Run underestimates CO concentrations in the northern hemisphere, especially in areas close to CO sources. Assimilation experiments show that constraining CO close to the main anthropogenic sources significantly reduces errors in NH CO compared to the Control Run. We assess the changes in error reduction when only single satellite instruments are available as compared to the full constellation. We find large differences in how measurements for each continental scale observation system affect the hemispherical improvement in long-range transport patterns, especially due to seasonal cloud cover. A GEO constellation will provide the most efficient constraint on NH CO during winter when CO lifetime is longer and increments from data assimilation associated with source regions are advected further around the globe.

  19. Physiological differences between cycling and running: lessons from triathletes.

    PubMed

    Millet, Gregoire P; Vleck, V E; Bentley, D J

    2009-01-01

    The purpose of this review was to provide a synopsis of the literature concerning the physiological differences between cycling and running. By comparing physiological variables such as maximal oxygen consumption (VO2max), anaerobic threshold (AT), heart rate, economy or delta efficiency measured in cycling and running in triathletes, runners or cyclists, this review aims to identify the effects of exercise modality on the underlying mechanisms (ventilatory responses, blood flow, muscle oxidative capacity, peripheral innervation and neuromuscular fatigue) of adaptation. The majority of studies indicate that runners achieve a higher VO2max on the treadmill, whereas cyclists can achieve a VO2max value in cycle ergometry similar to that in treadmill running. Hence, VO2max is specific to the exercise modality. In addition, the muscles adapt specifically to a given exercise task over a period of time, resulting in an improvement in submaximal physiological variables such as the ventilatory threshold, in some cases without a change in VO2max. However, this effect is probably larger in cycling than in running. At the same time, skill, through its influence on motor unit recruitment patterns, is an important determinant of the anaerobic threshold in cycling. Furthermore, it is likely that there is more physiological training transfer from running to cycling than vice versa. In triathletes, there is generally no difference in VO2max measured in cycle ergometry and treadmill running. The data concerning the anaerobic threshold in cycling and running in triathletes are conflicting, likely due to a combination of actual training load and prior training history in each discipline. The mechanisms surrounding the differences in the AT together with VO2max in cycling and running are not well understood but are probably due to the relative adaptation of cardiac output influencing VO2max, and to the recruitment of muscle mass, in combination with the oxidative capacity of this mass, influencing the AT. Several other physiological differences between cycling and running are addressed: heart rate differs between the two activities at both maximal and submaximal intensities. Delta efficiency is higher in running. Ventilation is more impaired in cycling than in running. It has also been shown that pedalling cadence affects the metabolic responses during cycling and during a subsequent running bout; however, the optimal cadence is still debated. Central fatigue and the decrease in maximal strength are more pronounced after prolonged running than after prolonged cycling.

  20. Automated Array Assembly, Phase 2

    NASA Technical Reports Server (NTRS)

    Carbajal, B. G.

    1979-01-01

    The solar cell module process development activities in the areas of surface preparation are presented. Process step development was carried out on texture etching, including the evolution of a conceptual process model for the texturing process; on plasma etching; and on diffusion studies that focused on doped polymer diffusion sources. Cell processing was carried out to test process steps, and a simplified diode solar cell process was developed. Cell processing runs were also made to fabricate square cells to populate sample minimodules. Module fabrication featured the demonstration of a porcelainized steel/glass structure that should exceed the 20 year life goal of the low cost silicon array program. High efficiency cell development covered the tandem junction cell (TJC) and a modification of the TJC called the front surface field cell. Cell efficiencies in excess of 16 percent at AM1 have been attained with only modest fill factors. A transistor-like model was proposed that fits the cell performance and provides a guideline for future improvements in cell performance.

  1. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
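    The payoff of LTS can be estimated from element sizes alone: under the CFL condition a global step is dictated by the smallest element, while LTS lets only the refined region substep. The back-of-the-envelope cost model below is an illustration of that argument, not the paper's measured speedup.

```python
def lts_speedup(sizes):
    """Estimated cost ratio of global time stepping vs. a two-level LTS scheme.
    CFL ties the stable step to element size, so a global step makes every
    element pay for the smallest one; with LTS only small elements substep."""
    h_min, h_max = min(sizes), max(sizes)
    p = int(round(h_max / h_min))            # substeps needed by the smallest elements
    n = len(sizes)
    n_small = sum(1 for h in sizes if h < h_max / 2.0)
    cost_global = n * p                      # every element advances at dt/p
    cost_lts = (n - n_small) + n_small * p   # coarse elements take one step of dt
    return cost_global / cost_lts

# a mesh where 1% of the elements are 100x smaller than the rest
sizes = [1.0] * 990 + [0.01] * 10
speedup = lts_speedup(sizes)
```

    For this mesh the model predicts a roughly 50x reduction in element updates, which is why strong element-size contrasts are exactly the regime where LTS pays off.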

  2. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
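    The iterative selection can be approximated with a greedy farthest-point rule in standardized environmental space: each round adds the candidate most dissimilar to all sites chosen so far. The sketch below is a simplification of the paper's MaxEnt-based procedure, run on synthetic stand-in data.

```python
import numpy as np

def select_dissimilar_sites(env, k):
    """Greedily pick k mutually dissimilar sites in standardized environmental
    space: each round adds the candidate farthest from every site chosen so far
    (a simplification of the paper's MaxEnt-based procedure)."""
    z = (env - env.mean(axis=0)) / env.std(axis=0)   # standardize each factor
    chosen = [0]                                     # seed with an arbitrary site
    for _ in range(k - 1):
        # distance from every candidate to its nearest already-chosen site
        d = np.linalg.norm(z[:, None, :] - z[chosen][None, :, :], axis=2).min(axis=1)
        chosen.append(int(np.argmax(d)))             # most dissimilar candidate
    return chosen

# 500 candidate sites x 4 environmental factors (synthetic stand-ins for
# temperature, precipitation, elevation, vegetation)
rng = np.random.default_rng(0)
env = rng.normal(size=(500, 4))
sites = select_dissimilar_sites(env, k=8)
```

    Already-chosen sites have zero distance to themselves, so the rule never re-selects a site and the k picks spread across the environmental envelope.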

  3. Experimental investigation on phase change materials as heating element for non-electric neonatal incubator

    NASA Astrophysics Data System (ADS)

    Matahari, Rho Natta; Putra, Nandy; Ariantara, Bambang; Amin, Muhammad; Prawiro, Erwin

    2017-02-01

    A high number of preterm births is one of the issues in improving health standards. Efforts to help premature babies are hampered by the high cost of NICU care in hospitals. In addition, the uneven distribution of electricity to remote areas makes it hard to operate incubators. Using the phase change material (PCM) beeswax as the heating element of a non-electric incubator is an alternative option for saving premature babies. The objective of this experiment is to determine the most efficient mass of beeswax, according to the Indonesian National Standard, to achieve the longest running time at the ideal incubator temperature. The experiment was performed using a prototype incubator that utilizes natural convection in the heating process; fins are used to accelerate heat distribution in the incubator. The results showed that the most efficient mass of PCM is 3 kg, which yields 2.45 hours of running time while maintaining the incubator temperature in the range of 32-36 °C.
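    The holding time scales with the PCM's stored latent heat: t ≈ m·L / Q_loss. A rough energy-balance sketch follows, with an assumed beeswax latent heat and an assumed incubator heat-loss rate, not the study's measurements.

```python
LATENT_HEAT_BEESWAX = 150e3   # J/kg, within the published ~140-180 kJ/kg range
HEAT_LOSS = 50.0              # W, assumed steady heat loss from the incubator cabin

def holding_time_hours(pcm_mass_kg):
    """Time the PCM can supply the incubator's heat loss from latent heat alone."""
    return pcm_mass_kg * LATENT_HEAT_BEESWAX / HEAT_LOSS / 3600.0

t3 = holding_time_hours(3.0)   # the study's most efficient PCM mass
```

    Under these assumed values, 3 kg of beeswax sustains the incubator for about 2.5 hours, the same order as the 2.45 hours reported.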

  4. Regional scale landslide risk assessment with a dynamic physical model - development, application and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Luna, Byron Quan; Vidar Vangelsten, Bjørn; Liu, Zhongqiang; Eidsvig, Unni; Nadim, Farrokh

    2013-04-01

    Landslide risk must be assessed at the appropriate scale in order to allow effective risk management. At the moment, few deterministic models can perform all the computations required for a complete landslide risk assessment at a regional scale. This arises from the difficulty of precisely defining the location and volume of the released mass, and from the inability of models to compute the displacement of a large number of individual initiation areas (computationally exhaustive). This paper presents a medium-scale, dynamic physical model for rapid mass movements in mountainous and volcanic areas. The deterministic nature of the approach makes it applicable to other sites, since it considers the frictional equilibrium conditions for the initiation process, the rheological resistance of the displaced flow for the run-out process, and a fragility curve that links intensity to economic loss for each building. The model takes into account the triggering effect of an earthquake, intense rainfall, and a combination of both (spatial and temporal). The run-out module treats the flow as a 2-D continuum medium, solving the equations of mass balance and momentum conservation. The model is embedded in an open-source geographical information system (GIS) environment; it is computationally efficient and transparent (understandable and comprehensible) for the end-user. The model was applied to a virtual region, assessing landslide hazard, vulnerability and risk. A Monte Carlo simulation scheme was applied to quantify, propagate and communicate the effects of uncertainty in input parameters on the final results. In this technique, the input distributions are recreated through sampling, and the failure criteria are calculated for each stochastic realisation of the site properties. The model is able to identify the released volumes of the critical slopes and the areas threatened by the run-out intensity. The final outcome is an estimate of individual building damage and total economic risk. The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreement No 265138 New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe (MATRIX).
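    The Monte Carlo scheme can be sketched with a deliberately simplified stability criterion: sample the uncertain soil parameters, evaluate a factor of safety for each realisation, and count failures. The infinite-slope formula and all parameter distributions below are illustrative assumptions, not the model's actual initiation module.

```python
import numpy as np

def factor_of_safety(c, phi_deg, gamma, slope_deg, depth):
    """Infinite-slope factor of safety (a textbook simplification; the paper's
    frictional-equilibrium initiation model is richer than this)."""
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    resisting = c + gamma * depth * np.cos(beta) ** 2 * np.tan(phi)
    driving = gamma * depth * np.sin(beta) * np.cos(beta)
    return resisting / driving

# Monte Carlo propagation: sample uncertain soil parameters, count failures
rng = np.random.default_rng(42)
n = 100_000
c = rng.normal(5e3, 1e3, n).clip(min=0.0)    # cohesion, Pa
phi = rng.normal(30.0, 3.0, n)               # friction angle, degrees
fs = factor_of_safety(c, phi, gamma=18e3, slope_deg=35.0, depth=2.0)
p_fail = float(np.mean(fs < 1.0))            # probability of slope failure
```

    The fraction of realisations with FS below one is the failure probability that would then feed the run-out and fragility stages of the risk chain.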

  5. Cram

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, T.

    2014-08-29

    Large-scale systems like Sequoia allow running small numbers of very large (1M+ process) jobs, but their resource managers and schedulers do not allow large numbers of small (4, 8, 16, etc.) process jobs to run efficiently. Cram is a tool that allows users to launch many small MPI jobs within one large partition, and to overcome the limitations of current resource management software for large ensembles of jobs.

  6. Brain Activation Patterns at Exhaustion in Rats That Differ in Inherent Exercise Capacity

    PubMed Central

    Foley, Teresa E.; Brooks, Leah R.; Gilligan, Lori J.; Burghardt, Paul R.; Koch, Lauren G.; Britton, Steven L.; Fleshner, Monika

    2012-01-01

    In order to further understand the genetic basis for variation in inherent (untrained) exercise capacity, we examined the brains of 32 male rats selectively bred for high or low running capacity (HCR and LCR, respectively). The aim was to characterize the activation patterns of brain regions potentially involved in differences in inherent running capacity between HCR and LCR. Using quantitative in situ hybridization techniques, we measured messenger ribonucleic acid (mRNA) levels of c-Fos, a marker of neuronal activation, in the brains of HCR and LCR rats after a single bout of acute treadmill running (7.5–15 minutes, 15° slope, 10 m/min) or after treadmill running to exhaustion (15–51 minutes, 15° slope, initial velocity 10 m/min). During verification of trait differences, HCR rats ran six times farther and three times longer prior to exhaustion than LCR rats. Running to exhaustion significantly increased c-Fos mRNA activation of several brain areas in HCR, but LCR failed to show significant elevations of c-Fos mRNA at exhaustion in the majority of areas examined compared to acutely run controls. Results from these studies suggest that there are differences in central c-Fos mRNA expression, and potential brain activation patterns, between HCR and LCR rats during treadmill running to exhaustion and these differences could be involved in the variation in inherent running capacity between lines. PMID:23028992

  7. 2. X15 RUN UP AREA (Jan 59). A sharp, higher ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. X-15 RUN UP AREA (Jan 59). A sharp, higher altitude low oblique aerial view to the north, showing runway, at far left; X-15 Engine Test Complex in the center. This view predates construction of observation bunkers. - Edwards Air Force Base, X-15 Engine Test Complex, Rogers Dry Lake, east of runway between North Base & South Base, Boron, Kern County, CA

  8. Effect of Turbine Axial Nozzle-Wheel Clearance on Performance of Mark 25 Torpedo Power Plant

    NASA Technical Reports Server (NTRS)

    Hoyt, Jack W.; Kottas, Harry

    1948-01-01

    Investigations were made of the turbine from a Mark 25 torpedo to determine the performance of the unit with three different turbine nozzles at various axial nozzle-wheel clearances. Turbine efficiency with a reamed nondivergent nozzle that uses the axial clearance space for gas expansion was little affected by increasing the axial running clearance from 0.030 to 0.150 inch. Turbine efficiency with cast nozzles that expanded the gas inside the nozzle passage was found to be sensitive to increased axial nozzle-wheel clearance. A cast nozzle giving a turbine brake efficiency of 0.525 at an axial running clearance of 0.035 inch gave a brake efficiency of 0.475 when the clearance was increased to 0.095 inch for the same inlet-gas conditions and blade-jet speed ratio. If the basis for computing the isentropic power available to the turbine is the temperature inside the nozzle rather than the temperature in the inlet-gas pipe, an increase in turbine efficiency of about 0.01 is indicated.

  9. Changes in running pattern due to fatigue and cognitive load in orienteering.

    PubMed

    Millet, Guillaume Y; Divert, Caroline; Banizette, Marion; Morin, Jean-Benoit

    2010-01-01

    The aim of this study was to examine the influence of fatigue on running biomechanics in normal running, in normal running with a cognitive task, and in running while map reading. Nineteen international and less experienced orienteers performed a fatiguing running exercise of duration and intensity similar to a classic distance orienteering race on an instrumented treadmill while performing mental arithmetic, an orienteering simulation, and control running at regular intervals. Two-way repeated-measures analysis of variance did not reveal any significant difference between mental arithmetic and control running for any of the kinematic and kinetic parameters analysed eight times over the fatiguing protocol. However, these parameters were systematically different between the orienteering simulation and the other two conditions (mental arithmetic and control running). The adaptations in orienteering simulation running were significantly more pronounced in the elite group when step frequency, peak vertical ground reaction force, vertical stiffness, and maximal downward displacement of the centre of mass during contact were considered. The effects of fatigue on running biomechanics depended on whether the orienteers read their map or ran normally. It is concluded that adding a cognitive load does not modify running patterns. Therefore, all changes in running pattern observed during the orienteering simulation, particularly in elite orienteers, are the result of adaptations to enable efficient map reading and/or potentially prevent injuries. Finally, running patterns are not affected to the same extent by fatigue when a map reading task is added.

  10. Solar cells and modules from dendritic web silicon

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.; Rohatgi, A.; Seman, E. J.; Davis, J. R.; Rai-Choudhury, P.; Gallagher, B. D.

    1980-01-01

    Some of the noteworthy features of the processes developed in the fabrication of solar cell modules are the handling of long lengths of web, the use of cost effective dip coating of photoresist and antireflection coatings, selective electroplating of the grid pattern and ultrasonic bonding of the cell interconnect. Data on the cells are obtained by means of dark I-V analysis and deep level transient spectroscopy. A histogram of over 100 dendritic web solar cells fabricated in a number of runs using different web crystals shows an average efficiency of over 13%, with some efficiencies running above 15%. Lower cell efficiency is generally associated with low minority carrier lifetime due to recombination centers sometimes present in the bulk silicon. A cost analysis of the process sequence using a 25 MW production line indicates a selling price of $0.75/peak watt in 1986. It is concluded that the efficiency of dendritic web cells approaches that of float zone silicon cells, reduced somewhat by the lower bulk lifetime of the former.

  11. Infrastructure performance of irrigation canal to irrigation efficiency of irrigation area of Candi Limo in Mojokerto District

    NASA Astrophysics Data System (ADS)

    Kisnanto, S.; Hadiani, R. R. R.; Ikhsan, C.

    2018-03-01

    Performance is a measure of an infrastructure's success in delivering the benefits corresponding to its design. Debit (discharge) efficiency is the ratio of outflow discharge to inflow discharge. Irrigation canal performance is part of the overall performance of an irrigation area: the greater the canal performance, the better the canal meets its planned benefits, so the relationship between canal performance and debit efficiency needs to be examined. In the field, however, the performance value of an irrigation canal is not always proportional to its debit efficiency. This study was therefore conducted to describe the relationship between canal performance and canal debit efficiency. The study was conducted in the Candi Limo Irrigation Area in Mojokerto District, under the authority of Pemerintahan Provinsi Jawa Timur. The primary and secondary canals were surveyed, and their physical condition forms the material of this study: canal performance was assessed from the physical condition in the field, while inflow and outflow discharge measurements provided the data for calculating debit efficiency. The instruments used included a current meter for discharge measurements (as a substitute where measuring structures in the field were damaged), a tape measure, and a camera. Permen PU No. 32 was used to determine canal performance values, while the efficiency analysis computed the ratio of outflow to inflow discharge. Data processing consisted of measuring and calculating canal performance, calculating debit efficiency, and plotting the relationship between performance and debit efficiency for each canal. 
    The expected results are that the performance and debit efficiency values of the primary canal and of secondary canals 1 through 5 each lie in the range of 0 to 100%. The performance and debit efficiency of a canal may be directly or inversely proportional, and the magnitudes can vary randomly; the tendency is shown by graphing the relationship between performance and debit efficiency for each canal segment studied.
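    The debit efficiency itself is a one-line ratio of outflow to inflow discharge. A minimal sketch follows, with illustrative discharges rather than the study's survey data.

```python
def debit_efficiency(inflow_m3s, outflow_m3s):
    """Conveyance ("debit") efficiency of a canal reach: outflow discharge
    divided by inflow discharge, expressed as a percentage."""
    return 100.0 * outflow_m3s / inflow_m3s

# illustrative discharges (m^3/s), not survey data from the study
segments = {"primary": (2.40, 2.04), "secondary_1": (1.10, 0.88)}
eff = {name: debit_efficiency(q_in, q_out) for name, (q_in, q_out) in segments.items()}
```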

  12. Barefoot running: biomechanics and implications for running injuries.

    PubMed

    Altman, Allison R; Davis, Irene S

    2012-01-01

    Despite the technological developments in modern running footwear, up to 79% of runners today get injured in a given year. As we evolved barefoot, examining this mode of running is insightful. Barefoot running encourages a forefoot strike pattern that is associated with a reduction in impact loading and stride length. Studies have shown a reduction in injuries to shod forefoot strikers as compared with rearfoot strikers. In addition to a forefoot strike pattern, barefoot running also affords the runner increased sensory feedback from the foot-ground contact, as well as increased energy storage in the arch. Minimal footwear is being used to mimic barefoot running, but it is not clear whether it truly does. The purpose of this article is to review current and past research on shod and barefoot/minimal footwear running and their implications for running injuries. Clearly more research is needed, and areas for future study are suggested.

  13. Concurrent approach for evolving compact decision rule sets

    NASA Astrophysics Data System (ADS)

    Marmelstein, Robert E.; Hammack, Lonnie P.; Lamont, Gary B.

    1999-02-01

    The induction of decision rules from data is important to many disciplines, including artificial intelligence and pattern recognition. To improve the state of the art in this area, we introduced the genetic rule and classifier construction environment (GRaCCE). It was previously shown that GRaCCE consistently evolved decision rule sets from data which were significantly more compact than those produced by other methods (such as decision tree algorithms). The primary disadvantage of GRaCCE, however, is its relatively poor run-time execution performance. In this paper, a concurrent version of the GRaCCE architecture is introduced, which improves the efficiency of the original algorithm. A prototype of the algorithm is tested on an in-house parallel processor configuration and the results are discussed.

  14. A method for diagnosing surface parameters using geostationary satellite imagery and a boundary-layer model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Polansky, A. C.

    1982-01-01

    A method for diagnosing surface parameters on a regional scale via geosynchronous satellite imagery is presented. Moisture availability, thermal inertia, atmospheric heat flux, and total evaporation are determined from three infrared images obtained from the Geostationary Operational Environmental Satellite (GOES). Three GOES images (early morning, midafternoon, and night) are obtained from computer tape. Two temperature-difference images are then created. The boundary-layer model is run, and its output is inverted via cubic regression equations. The satellite imagery is efficiently converted into output-variable fields. All computations are executed on a PDP 11/34 minicomputer. Output fields can be produced within one hour of the availability of aligned satellite subimages of a target area.
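The inversion of the boundary-layer model output "via cubic regression equations" can be illustrated with a toy forward relation; the coefficients and variable ranges below are made up for illustration, not the thesis's actual regression equations:

```python
import numpy as np

# Hypothetical forward-model samples: moisture availability (input) vs.
# simulated day-night temperature difference (output); values illustrative.
moisture = np.linspace(0.1, 0.9, 20)
temp_diff = 18.0 - 14.0 * moisture + 3.0 * moisture**2 - 1.0 * moisture**3

# Fit the inverse mapping (temperature difference -> moisture) as a cubic,
# so satellite-derived temperature differences convert directly to moisture.
coeffs = np.polyfit(temp_diff, moisture, deg=3)
invert = np.poly1d(coeffs)

estimated_moisture = invert(12.0)  # moisture implied by a 12 K difference
```

Fitting the inverse once, offline, is what makes the per-image conversion cheap enough for a minicomputer-class workflow.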

  15. Content-based image retrieval on mobile devices

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Abdullah, Shafaq; Kiranyaz, Serkan; Gabbouj, Moncef

    2005-03-01

Content-based image retrieval possesses tremendous potential for exploration and utilization, for researchers and industry alike, due to its promising results. Expeditious retrieval of desired images requires indexing of the content in large-scale databases along with extraction of low-level features based on the content of these images. With the recent advances in wireless communication technology and the availability of multimedia-capable phones, it has become vital to enable query operations on image databases and to retrieve results based on image content. In this paper we present a content-based image retrieval system for mobile platforms, providing the capability of content-based query to any mobile device that supports the Java platform. The system consists of a lightweight client application running on a Java-enabled device and a server containing a servlet running inside a Java-enabled web server. The server responds to an image query using efficient native code on the selected image database. The client application, running on a mobile phone, is able to initiate a query request, which is handled by the servlet to find the closest match to the queried image. The retrieved results are transmitted over the mobile network and the images are displayed on the mobile phone. We conclude that such a system serves as a basis for content-based information retrieval on wireless devices and needs to cope with factors such as the constraints of hand-held devices and the reduced network bandwidth available in mobile environments.
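As a sketch of the kind of low-level feature matching such a server might perform: a per-channel color histogram compared by L1 distance is assumed here, since the record does not specify the system's actual descriptors.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Low-level feature: per-channel intensity histogram, L1-normalized."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def closest_match(query, database):
    """Index of the database image whose histogram is nearest the query's."""
    q = color_histogram(query)
    dists = [np.abs(q - color_histogram(img)).sum() for img in database]
    return int(np.argmin(dists))

# Hypothetical database of small RGB images.
rng = np.random.default_rng(0)
db = [rng.integers(0, 256, (32, 32, 3)) for _ in range(5)]
idx = closest_match(db[2], db)  # querying with a stored image returns itself
```

In the paper's architecture this matching would run server-side in native code; only the query image and the ranked results cross the mobile link.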

  16. Evaluation of nonlinear structural dynamic responses using a fast-running spring-mass formulation

    NASA Astrophysics Data System (ADS)

    Benjamin, A. S.; Altman, B. S.; Gruda, J. D.

    In today's world, accurate finite-element simulations of large nonlinear systems may require meshes composed of hundreds of thousands of degrees of freedom. Even with today's fast computers and the promise of ever-faster ones in the future, central processing unit (CPU) expenditures for such problems could be measured in days. Many contemporary engineering problems, such as those found in risk assessment, probabilistic structural analysis, and structural design optimization, cannot tolerate the cost or turnaround time for such CPU-intensive analyses, because these applications require a large number of cases to be run with different inputs. For many risk assessment applications, analysts would prefer running times to be measurable in minutes. There is therefore a need for approximation methods which can solve such problems far more efficiently than the very detailed methods and yet maintain an acceptable degree of accuracy. For this purpose, we have been working on two methods of approximation: neural networks and spring-mass models. This paper presents our work and results to date for spring-mass modeling and analysis, since we are further along in this area than in the neural network formulation. It describes the physical and numerical models contained in a code we developed called STRESS, which stands for 'Spring-mass Transient Response Evaluation for structural Systems'. The paper also presents results for a demonstration problem, and compares these with results obtained for the same problem using PRONTO3D, a state-of-the-art finite element code which was also developed at Sandia.

  17. Electrochemical disinfection of repeatedly recycled blackwater in a free‐standing, additive‐free toilet

    PubMed Central

    Sellgren, Katelyn L.; Klem, Ethan J. D.; Piascik, Jeffrey R.; Stoner, Brian R.

    2017-01-01

Decentralized, energy-efficient wastewater treatment technologies enabling water reuse are needed to sustainably address sanitation needs in water- and energy-scarce environments. Here, we describe the effects of repeated recycling of disinfected blackwater (as flush liquid) on the energy required to achieve full disinfection with an electrochemical process in a prototype toilet system. The recycled liquid rapidly reached a steady state, with total solids reliably ranging between 0.50 and 0.65% and conductivity between 20 and 23 mS/cm through many flush cycles over 15 weeks. The increase in accumulated solids was associated with increased energy demand and wide variation in the free chlorine contact time required to achieve complete disinfection. Further studies on the system at steady state revealed that running at higher voltage modestly improves energy efficiency, and established running parameters that reliably achieve disinfection at fixed run times. These results will guide prototype testing in the field. PMID:29242713

  18. Rigorous RG Algorithms and Area Laws for Low Energy Eigenstates in 1D

    NASA Astrophysics Data System (ADS)

    Arad, Itai; Landau, Zeph; Vazirani, Umesh; Vidick, Thomas

    2017-11-01

One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance (Landau et al. in Nat Phys, 2015) gave a polynomial time algorithm to compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unsolved, including whether there exist efficient algorithms when the ground space is degenerate (and of polynomial dimension in the system size), or for the polynomially many lowest energy states, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm, based on a rigorously justified RG type transformation, for finding low energy states for 1D Hamiltonians acting on a chain of n particles. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly(n)-degenerate ground spaces and an n^{O(log n)} algorithm for the poly(n) lowest energy states (under a mild density condition). For these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states for frustration-free Hamiltonians the running time is Õ(n·M(n)), where M(n) is the time required to multiply two n × n matrices.

  19. A Newton-Krylov solver for fast spin-up of online ocean tracers

    NASA Astrophysics Data System (ADS)

    Lindsay, Keith

    2017-01-01

    We present a Newton-Krylov based solver to efficiently spin up tracers in an online ocean model. We demonstrate that the solver converges, that tracer simulations initialized with the solution from the solver have small drift, and that the solver takes orders of magnitude less computational time than the brute force spin-up approach. To demonstrate the application of the solver, we use it to efficiently spin up the tracer ideal age with respect to the circulation from different time intervals in a long physics run. We then evaluate how the spun-up ideal age tracer depends on the duration of the physics run, i.e., on how equilibrated the circulation is.
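The spin-up problem the abstract describes is a fixed-point problem: find the tracer state that one model year maps to itself. A minimal sketch of the Newton approach under stated assumptions: the "annual cycle" here is a tiny hypothetical linear map standing in for the ocean model, and the Jacobian is built by finite differences, whereas a Krylov method would use only Jacobian-vector products.

```python
import numpy as np

def annual_cycle(x):
    """Stand-in for one model year of tracer transport: a contracting
    linear map toward a source pattern (the real operator is the model)."""
    A = np.array([[0.6, 0.2],
                  [0.1, 0.7]])
    source = np.array([1.0, 0.5])
    return A @ x + source

def spin_up_newton(phi, x0, tol=1e-10, max_iter=20):
    """Find the equilibrium x* with phi(x*) = x* by Newton iteration on
    F(x) = phi(x) - x, avoiding the brute-force repeated-integration spin-up."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    eps = 1e-7
    for _ in range(max_iter):
        f = phi(x) - x
        if np.linalg.norm(f) < tol:
            break
        # Finite-difference Jacobian of F, one column per perturbed input.
        J = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = eps
            J[:, j] = (phi(x + e) - (x + e) - f) / eps
        x = x - np.linalg.solve(J, f)
    return x

equilibrium = spin_up_newton(annual_cycle, np.zeros(2))
```

For this contracting map the brute-force approach would need many repeated applications of `annual_cycle`; Newton reaches the equilibrium in a handful of model-year evaluations, which is the source of the "orders of magnitude" saving the paper reports.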

  20. A Survey of Runners' Attitudes Toward and Experiences With Minimally Shod Running.

    PubMed

    Cohler, Marissa H; Casey, Ellen

    2015-08-01

To investigate the characteristics, perceptions, motivating factors, experiences, and injury rates of runners who practice minimally shod running. Design: survey, delivered as a web-based questionnaire. Participants: five hundred sixty-six members of the Chicago Area Runner's Association, who were e-mailed a link to a 31-question online survey. Questions covered demographic information, use of minimalist-style running shoes (MSRS), injury rates, and change in pain. Outcome measures: use of MSRS, occurrence or improvement of injury/pain, regions of injury/pain, and reasons for or against using MSRS. One hundred seventy-five respondents (31%) had practiced minimally shod running, and the most common motivating factor was to decrease injuries and/or pain. Fifty-one respondents (29%) suffered an injury or pain while wearing MSRS, with the most common body part involved being the foot. Fifty-four respondents (31%) had an injury that improved after adopting minimally shod running; the most common area involved was the knee. One hundred twenty respondents (69%) were still using MSRS. Of those who stopped using MSRS, the main reason was development of an injury or pain. The most common reason that respondents had not tried minimally shod running was a fear of developing an injury. This survey-based study demonstrated that the use of MSRS is common, largely as the result of a perception that they may reduce injuries or pain. Reductions and occurrences of injury/pain with minimally shod running were reported in approximately equal numbers. The most common site of reported injury/pain reduction was the knee, whereas the most common reported site of injury/pain occurrence was the foot. Fear of developing pain or injury is the most common reason runners are reluctant to try minimally shod running. Copyright © 2015 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  1. Simulation of LHC events on a million threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2015-12-01

    Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
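The record mentions coupling event generation and unweighting into a single job. The unweighting step itself is a simple accept-reject pass over weighted events; a minimal sketch with hypothetical event weights, not Alpgen's actual event format:

```python
import random

random.seed(42)

# Hypothetical output of the generation phase: (event_id, weight) pairs.
events = [(i, random.uniform(0.0, 5.0)) for i in range(10_000)]
w_max = max(w for _, w in events)

# Accept-reject unweighting: keep each event with probability w / w_max, so
# the surviving sample is distributed as if every event had unit weight.
unweighted = [e for e, w in events if random.random() < w / w_max]
```

Running this pass in the same job as generation, as the paper does, avoids writing the full weighted sample to the filesystem between phases.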

  2. Influence of running velocity on vertical, leg and joint stiffness : modelling and recommendations for future research.

    PubMed

    Brughelli, Matt; Cronin, John

    2008-01-01

    Human running can be modelled as either a spring-mass model or multiple springs in series. A force is required to stretch or compress the spring, and thus stiffness, the variable of interest in this paper, can be calculated from the ratio of this force to the change in spring length. Given the link between force and length change, muscle stiffness and mechanical stiffness have been areas of interest to researchers, clinicians, and strength and conditioning practitioners for many years. This review focuses on mechanical stiffness, and in particular, vertical, leg and joint stiffness, since these are the only stiffness types that have been directly calculated during human running. It has been established that as running velocity increases from slow-to-moderate values, leg stiffness remains constant while both vertical stiffness and joint stiffness increase. However, no studies have calculated vertical, leg or joint stiffness over a range of slow-to-moderate values to maximum values in an athletic population. Therefore, the effects of faster running velocities on stiffness are relatively unexplored. Furthermore, no experimental research has examined the effects of training on vertical, leg or joint stiffness and the subsequent effects on running performance. Various methods of training (Olympic style weightlifting, heavy resistance training, plyometrics, eccentric strength training) have shown to be effective at improving running performance. However, the effects of these training methods on vertical, leg and joint stiffness are unknown. As a result, the true importance of stiffness to running performance remains unexplored, and the best practice for changing stiffness to optimize running performance is speculative at best. It is our hope that a better understanding of stiffness, and the influence of running speed on stiffness, will lead to greater interest and an increase in experimental research in this area.
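The stiffness definition in the opening sentences reduces to a one-line ratio of force to length change; a minimal sketch of vertical stiffness with illustrative values (the numbers are not data from the review):

```python
# Vertical stiffness: ratio of peak vertical ground reaction force to the
# peak vertical displacement of the centre of mass during stance.
peak_force_n = 2400.0      # peak vertical GRF (N), illustrative
com_displacement_m = 0.06  # peak downward CoM displacement (m), illustrative

k_vert = peak_force_n / com_displacement_m  # stiffness in N/m
```

Leg and joint stiffness follow the same force-over-deformation form, substituting leg-length change and joint-moment/joint-angle change respectively.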

  3. Multiview 3D sensing and analysis for high quality point cloud reconstruction

    NASA Astrophysics Data System (ADS)

    Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard

    2018-04-01

Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial, and the acquisition of high-quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high-fidelity 3D point cloud generation using low-cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real time, exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
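The first filter in the pipeline, Radius Outlier Removal, can be sketched directly; the radius and neighbor-count thresholds below are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def radius_outlier_removal(points, radius=0.1, min_neighbors=3):
    """Drop points that have fewer than `min_neighbors` other points within
    `radius`. Brute-force O(n^2) distances; real pipelines use a k-d tree."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbor_counts = (d < radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbor_counts >= min_neighbors]

# Hypothetical scan: a dense surface patch plus a few isolated stray returns.
rng = np.random.default_rng(1)
surface = rng.normal(0.0, 0.02, size=(50, 3))
noise = rng.uniform(5.0, 6.0, size=(4, 3))
filtered = radius_outlier_removal(np.vstack([surface, noise]))
```

Points on a real surface have many close neighbors, while sensor noise floats alone, which is why this density test separates the two.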

  4. Efficiency of SparkJet

    NASA Technical Reports Server (NTRS)

    Golbabaei-Asl, M.; Knight, D.; Wilkinson, S.

    2013-01-01

The thermal efficiency of a SparkJet is evaluated by measuring the impulse response of a pendulum subject to a single spark discharge. The SparkJet is attached to the end of a pendulum. A laser displacement sensor is used to measure the displacement of the pendulum upon discharge. The pendulum motion is a function of the fraction of the discharge energy that is channeled into the heating of the gas (i.e., increasing the translational-rotational temperature). A theoretical perfect gas model is used to estimate the portion of the energy from the heated gas that results in pendulum displacement equivalent to that observed in the experiment. Earlier results from multiple runs for different capacitances of C = 3, 5, 10, 20, and 40 µF demonstrate that the thermal efficiency decreases with higher capacitive discharges. In the current paper, results from additional run cases have been included and confirm the previous results.

  5. Effect of 29 days of simulated microgravity on maximal oxygen consumption and fat-free mass of rats

    NASA Technical Reports Server (NTRS)

    Woodman, Christopher R.; Stump, Craig S.; Stump, Jane A.; Rahman, Zia; Tipton, Charles M.

    1991-01-01

The effects of a 29-day exposure to simulated microgravity on maximal oxygen consumption, fat-free mass (FFM) and the mechanical efficiency of running were investigated in rats randomly assigned to one of three regimens: head-down suspension (HDS) at 45 deg, horizontal suspension (HS), or cage control (CC). Before suspension and on days 7, 14, 21, and 28, five exercise performance tests were carried out, with measurements related to maximal oxygen consumption, treadmill run time, and mechanical efficiency. It was found that the maximal oxygen consumption of both the HDS and HS groups decreased significantly at day 7, after which it remained depressed in the HDS rats while the HS rats returned to presuspension values. Apparent mechanical efficiency in the HDS and HS groups decreased by 22-35 percent during the experimental period, and FFM decreased significantly.

  6. Small axial compressor technology, volume 1

    NASA Technical Reports Server (NTRS)

    Holman, F. F.; Kidwell, J. R.; Ware, T. C.

    1976-01-01

    A scaled single-stage, highly-loaded, axial-flow transonic compressor was tested at speeds from 70 to 110% design equivalent speed to evaluate the effects of scaling compromises and the individual and combined effects of rotor tip running clearance and rotor shroud casing treatment on the overall and blade element performance. At design speed and 1% tip clearance the stage demonstrated an efficiency of 83.2% at 96.4% design flow and a pressure ratio of 1.865. Casing treatment increased design speed surge margin 2.0 points to 12.8%. Overall performance was essentially unchanged. An increase in rotor running clearance to 2.2%, with smooth casing, reduced design speed peak efficiency 5.7 points, flow by 7.4%, pressure ratio to 1.740, and surge margin to 5.4%. Reinstalling casing treatment regained 3.5 points in design speed peak efficiency, 4.7% flow, increased pressure ratio to 1.800 and surge margin to 8.7%.

  7. Increasing thermal efficiency of solar flat plate collectors

    NASA Astrophysics Data System (ADS)

    Pona, J.

    A study of methods to increase the efficiency of heat transfer in flat plate solar collectors is presented. In order to increase the heat transfer from the absorber plate to the working fluid inside the tubes, turbulent flow was induced by installing baffles within the tubes. The installation of the baffles resulted in a 7 to 12% increase in collector efficiency. Experiments were run on both 1 sq ft and 2 sq ft collectors each fitted with either slotted baffles or tubular baffles. A computer program was run comparing the baffled collector to the standard collector. The results obtained from the computer show that the baffled collectors have a 2.7% increase in life cycle cost (LCC) savings and a 3.6% increase in net cash flow for use in domestic hot water systems, and even greater increases when used in solar heating systems.

  8. Run-time implementation issues for real-time embedded Ada

    NASA Technical Reports Server (NTRS)

    Maule, Ruth A.

    1986-01-01

A motivating factor in the development of Ada as the Department of Defense standard language was the high cost of embedded system software development. It was with embedded system requirements in mind that many of the features of the language were incorporated. Yet it is the designers of embedded systems who seem to comprise the majority of the Ada community dissatisfied with the language. There are a variety of reasons for this dissatisfaction, but many seem to be related in some way to the Ada run-time support system. Some of the areas in which the inconsistencies were found to have the greatest impact on performance from the standpoint of real-time systems are presented. In particular, a large part of the duties of the tasking supervisor is subject to the design decisions of the implementer. These include scheduling, rendezvous, delay processing, and task activation and termination. Some of the more general issues presented include time and space efficiencies, generic expansions, memory management, pragmas, and tracing features. As validated compilers become available for bare computer targets, it is important for a designer to be aware that, at least for many real-time issues, all validated Ada compilers are not created equal.

  9. The role of the estrogen receptor α in the medial preoptic area in sexual incentive motivation, proceptivity and receptivity, anxiety, and wheel running in female rats.

    PubMed

    Spiteri, Thierry; Ogawa, Sonoko; Musatov, Sergei; Pfaff, Donald W; Agmo, Anders

    2012-04-21

Ovariectomized females were given an infusion in the medial preoptic area (MPOA) of a viral vector carrying either a shRNA directed against the estrogen receptor α (ERα) or luciferase. The females were subjected to a test for sexual incentive motivation immediately followed by a test for receptivity and proceptive behaviors. Two weeks later they were tested in the light/dark choice procedure, and after another 2 weeks they were subjected to a test in a brightly lit open field. Finally, the females were given free access to a running wheel for 88 h. The females were treated with estradiol benzoate (EB), 18 or 1.5 μg/kg, in randomized order 52 h before each test except the running wheel. In that experiment, they were given EB 48 h after introduction into the wheel cage. They were given progesterone, 1 mg/rat, about 4 h before all tests except the running wheel. The shRNA reduced the number of ERα by 83%. Females with few ERα in the MPOA showed an increased lordosis quotient after the 1.5 μg/kg dose of EB. There was no effect on proceptive behaviors or on rejections. When given the 18 μg/kg EB dose, there was no difference between females with few preoptic ERα and controls. In the test for sexual incentive motivation, females with few preoptic ERα approached the castrated male incentive more than controls, regardless of EB dose. They also moved a shorter distance. In the light/dark choice test as well as in the open field, females with few ERα in the MPOA showed signs of reduced fear/anxiety, since they spent more time in the light part of the dark/light box and in the center of the open field. Finally, the data from the running wheel showed that females with few preoptic ERα failed to show enhanced activity after treatment with EB. These data show that the preoptic ERα inhibits lordosis in females with an intermediate level of receptivity while it fails to do so in fully receptive females. The ERα in the MPOA seems to be necessary for selective approach to a sexual incentive. Finally, activation of this receptor appears to have anxiogenic effects in the procedures employed here. A hypothesis for how all these actions of the preoptic ERα contribute to efficient reproductive behavior is outlined. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. EXPERIMENTAL EVALUATION OF FUEL OIL ADDITIVES FOR REDUCING EMISSIONS AND INCREASING EFFICIENCY OF BOILERS

    EPA Science Inventory

    The report gives results of an evaluation of the effectiveness of combustion-type fuel oil additives to reduce emissions and increase efficiency in a 50-bhp (500 kw) commercial oil-fired packaged boiler. Most additive evaluation runs were made during continuous firing, constant-l...

  11. 40 CFR 60.745 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... determination of the efficiency of a fixed-bed carbon adsorption system with a common exhaust stack for all the... separate runs, each coinciding with one or more complete system rotations through the adsorption cycles of... efficiency of a fixed-bed carbon adsorption system with individual exhaust stacks for each adsorber vessel...

  12. 40 CFR 60.745 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determination of the efficiency of a fixed-bed carbon adsorption system with a common exhaust stack for all the... separate runs, each coinciding with one or more complete system rotations through the adsorption cycles of... efficiency of a fixed-bed carbon adsorption system with individual exhaust stacks for each adsorber vessel...

  13. Simulation of ozone production in a complex circulation region using nested grids

    NASA Astrophysics Data System (ADS)

    Taghavi, M.; Cautenet, S.; Foret, G.

    2004-06-01

During the ESCOMPTE precampaign (summer 2000, over Southern France), a 3-day period of intensive observation (IOP0), associated with ozone peaks, has been simulated. The comprehensive RAMS model, version 4.3, coupled on-line with a chemical module including 29 species, is used to follow the chemistry of the polluted zone. This efficient but time-consuming method can be used because the code is installed on a parallel computer, the SGI 3800. Two runs are performed: run 1 with a single grid and run 2 with two nested grids. The simulated fields of ozone, carbon monoxide, nitrogen oxides and sulfur dioxide are compared with aircraft and surface station measurements. The 2-grid run performs substantially better than the run with one grid because the former takes the outer pollutants into account. This on-line method helps to satisfactorily retrieve the chemical species redistribution and to explain the impact of dynamics on this redistribution.

  14. Memoized Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Yang, Guowei; Pasareanu, Corina S.; Khurshid, Sarfraz

    2012-01-01

    This paper introduces memoized symbolic execution (Memoise), a novel approach for more efficient application of forward symbolic execution, which is a well-studied technique for systematic exploration of program behaviors based on bounded execution paths. Our key insight is that application of symbolic execution often requires several successive runs of the technique on largely similar underlying problems, e.g., running it once to check a program to find a bug, fixing the bug, and running it again to check the modified program. Memoise introduces a trie-based data structure that stores the key elements of a run of symbolic execution. Maintenance of the trie during successive runs allows re-use of previously computed results of symbolic execution without the need for re-computing them as is traditionally done. Experiments using our prototype embodiment of Memoise show the benefits it holds in various standard scenarios of using symbolic execution, e.g., with iterative deepening of exploration depth, to perform regression analysis, or to enhance coverage.
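The trie described above can be sketched as follows; the branch-history keys and stored results are hypothetical stand-ins, not Memoise's actual data layout:

```python
class TrieNode:
    """Node in a trie keyed by the branch decisions taken along a path."""
    def __init__(self):
        self.children = {}
        self.result = None

class PathMemo:
    """Memoizes the outcome of exploring an execution path, keyed by its
    branch history, so a later run over an unchanged path can skip it."""
    def __init__(self):
        self.root = TrieNode()

    def lookup(self, path):
        """Return the cached result for `path`, or None if never explored."""
        node = self.root
        for choice in path:
            if choice not in node.children:
                return None
            node = node.children[choice]
        return node.result

    def store(self, path, result):
        """Record the result of exploring `path`, creating nodes as needed."""
        node = self.root
        for choice in path:
            node = node.children.setdefault(choice, TrieNode())
        node.result = result

memo = PathMemo()
memo.store(("b1:true", "b2:false"), "no-bug")
cached = memo.lookup(("b1:true", "b2:false"))  # reused on the next run
missing = memo.lookup(("b1:false",))           # must still be explored
```

Because paths share prefixes, the trie also makes it cheap to invalidate only the subtree below a modified branch when the program changes.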

  15. Improving clinical laboratory efficiency: a time-motion evaluation of the Abbott m2000 RealTime and Roche COBAS AmpliPrep/COBAS TaqMan PCR systems for the simultaneous quantitation of HIV-1 RNA and HCV RNA.

    PubMed

    Amendola, Alessandra; Coen, Sabrina; Belladonna, Stefano; Pulvirenti, F Renato; Clemens, John M; Capobianchi, M Rosaria

    2011-08-01

Diagnostic laboratories need automation that facilitates efficient processing and workflow management to meet today's challenges of expanding services and reducing cost while maintaining the highest levels of quality. The processing efficiency of two commercially available automated systems for quantifying HIV-1 and HCV RNA, the Abbott m2000 system and the Roche COBAS AmpliPrep/COBAS TaqMan 96 (docked) system (CAP/CTM), was evaluated in a mid/high-throughput workflow laboratory using a representative daily workload of 24 HCV and 72 HIV samples. Three test scenarios were evaluated: A) one run with four batches on the CAP/CTM system, B) two runs on the Abbott m2000, and C) one run using the Abbott m2000 maxCycle feature (maxCycle) for co-processing these assays. Cycle times for processing, throughput and hands-on time were evaluated. Overall processing cycle time was 10.3, 9.1 and 7.6 h for Scenarios A), B) and C), respectively. Total hands-on time for each scenario was, in order, 100.0 (A), 90.3 (B) and 61.4 min (C). The interface of an automated analyzer to the laboratory workflow, notably system setup for samples and reagents and clean-up functions, is as important as the automation capability of the analyzer for its overall impact on processing efficiency and operator hands-on time.

  16. Efficient ensemble forecasting of marine ecology with clustered 1D models and statistical lateral exchange: application to the Red Sea

    NASA Astrophysics Data System (ADS)

    Dreano, Denis; Tsiaras, Kostas; Triantafyllou, George; Hoteit, Ibrahim

    2017-07-01

Forecasting the state of large marine ecosystems is important for many economic and public health applications. However, advanced three-dimensional (3D) ecosystem models, such as the European Regional Seas Ecosystem Model (ERSEM), are computationally expensive, especially when implemented within an ensemble data assimilation system requiring several parallel integrations. As an alternative to 3D ecological forecasting systems, we propose to implement a set of regional one-dimensional (1D) water-column ecological models that run at a fraction of the computational cost. The 1D model domains are determined using a Gaussian mixture model (GMM)-based clustering method and satellite chlorophyll-a (Chl-a) data. Regionally averaged Chl-a data is assimilated into the 1D models using the singular evolutive interpolated Kalman (SEIK) filter. To laterally exchange information between subregions and improve the forecasting skills, we introduce a new correction step to the assimilation scheme, in which we assimilate a statistical forecast of future Chl-a observations based on information from neighbouring regions. We apply this approach to the Red Sea and show that the assimilative 1D ecological models can forecast surface Chl-a concentration with high accuracy. The statistical assimilation step further improves the forecasting skill by as much as 50%. This general approach of clustering large marine areas and running several interacting 1D ecological models is very flexible. It allows many combinations of clustering, filtering and regression techniques to be used and can be applied to build efficient forecasting systems in other large marine ecosystems.
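The GMM-based clustering that defines the 1D model domains can be sketched in one dimension; this toy EM fit on hypothetical Chl-a values is an illustration of the technique, not the paper's spatial clustering of satellite fields:

```python
import numpy as np

def gmm_1d_em(x, iters=50):
    """Minimal EM for a two-component 1D Gaussian mixture; returns a
    component label per sample. Initialization is deterministic (min/max)."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.full(2, x.std() + 1e-6)
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    return dens.argmax(axis=1)

# Hypothetical surface Chl-a values (mg/m^3): an oligotrophic regime and a
# more productive one, mimicking two provinces of a basin.
rng = np.random.default_rng(3)
chl = np.concatenate([rng.normal(0.1, 0.02, 200), rng.normal(1.0, 0.1, 200)])
labels = gmm_1d_em(chl)
```

Each resulting cluster would then be served by its own 1D water-column model, with the SEIK filter assimilating the regionally averaged observations.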

  17. Investigating fluvial pattern and delta-planform geometry based on varying intervals of flood and interflood

    NASA Astrophysics Data System (ADS)

    Rambo, J. E.; Kim, W.; Miller, K.

    2017-12-01

Physical modeling of a delta's evolution can represent how changing the intervals of flood and interflood alters a delta's fluvial pattern and geometry. Here we present a set of six experimental runs in which sediment and water were discharged at constant rates over each experiment. During the "flood" period, both sediment and water were discharged, at rates of 0.25 cm3/s and 15 ml/s respectively, and during the "interflood" period, only water was discharged, at 7.5 ml/s. The flood periods were run for only 30 minutes in total to keep the volume of sediment constant. Run 0 did not have an interflood period and therefore ran with constant sediment and water discharge for the duration of the experiment. The other five runs had either 5-, 10-, or 15-min intervals of flood with 5-, 10-, or 15-min intervals of interflood. The experimental results show that Run 0 had the smallest topset area. This is due to a lack of the surface reworking that takes place during interflood periods. Run 1 had 15-minute intervals of flood and 15-minute intervals of interflood, and it had the largest topset area. Additionally, the experiments that had longer intervals of interflood than flood produced more elongated delta geometries. Wetted-fraction color maps were also created to plot channel locations during each run. The maps show that the runs with longer interflood durations had channels occurring predominantly down the middle with stronger incisions; these runs produced deltas with more elongated geometries. When the interflood duration was even longer, however, strong channels started to occur at multiple locations. This increased interflood period allowed the entire surface of the delta to be reworked, thus reducing the downstream slope and allowing channels to be more mobile laterally. Physical modeling of a delta allows us to predict a delta's resulting geometry given a set of conditions. This insight is especially needed given that deltas are home to many populations of people and a habitat for various other species.

  18. The social space in the making of identity(case : Pekan Labuhan, Medan, Indonesia)

    NASA Astrophysics Data System (ADS)

    Siagian, Morida

    2018-03-01

    Social space is a relational space, manifested by the existence of communities, which becomes the identity of an area. The district of Pekan Labuhan has a long history of inter-ethnic relations. Ethnic Malays, as the hereditary indigenous population, have carried on daily life in this place. The appeal of the Deli River as a harbour area drew ethnic Chinese, who arrived later, settled, and ran business activities in this place. The aim of this research is to explain the process by which ethnic Malays and Chinese intermingled in the old city to create social spaces. These social spaces became the reason the community survives and carries on its life here. Through qualitative research methods, social space can be articulated and described from the relationships between ethnic Malays as indigenous people and ethnic Chinese as newcomers. The space becomes a power through which the two ethnicities struggle for and defend their identities in the area.

  19. DC grid for home applications

    NASA Astrophysics Data System (ADS)

    Elangovan, D.; Archana, R.; Jayadeep, V. J.; Nithin, M.; Arunkumar, G.

    2017-11-01

    More than fifty percent of the Indian population does not have access to electricity in daily life. The distance between power generating stations and distribution centres is one of the main reasons for the lack of electrification in rural and remote areas. Herein lies the importance of decentralized power generation through renewable energy resources. In the present world, electricity is predominantly supplied as alternating current, but most day-to-day devices, such as LED lamps, computers and electric vehicles, run on DC power. By supplying DC to these loads directly, the number of power conversion stages is reduced and overall system efficiency increases. Replacing the existing AC network with DC is a humongous task, but with power electronic techniques this project intends to implement a DC grid at the household level in remote and rural areas. The proposed system was designed and simulated successfully for various loads amounting to 250 W through appropriate power electronic converters. Maximum utilization of renewable sources for domestic and commercial applications was achieved with the proposed DC topology.
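    The efficiency argument can be made concrete with a toy calculation (the stage efficiencies below are hypothetical illustrations, not figures from the paper): the overall efficiency of cascaded converters is the product of the per-stage efficiencies, so removing a conversion stage raises it.

```python
def chain_efficiency(stage_efficiencies):
    """Overall efficiency of cascaded power-conversion stages:
    the product of the per-stage efficiencies."""
    eta = 1.0
    for e in stage_efficiencies:
        eta *= e
    return eta

# Hypothetical stage efficiencies (illustration only):
ac_path = chain_efficiency([0.95, 0.90])  # e.g. rectifier + point-of-load stage
dc_path = chain_efficiency([0.95])        # DC bus feeding the load via one stage
```

    With these assumed numbers the two-stage AC path delivers 85.5% of the input while the single-stage DC path delivers 95%, which is the sense in which fewer conversion stages raise system efficiency.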

  20. New Schemes for Improved Opto-Electronic Oscillator

    NASA Technical Reports Server (NTRS)

    Maleki, Lute; Yao, Steve; Ji, Yu; Ilchenko, Vladimir

    2000-01-01

    The Opto-Electronic Oscillator (OEO) has already demonstrated superior spectral purity as a source of microwave and millimeter-wave reference signals. Experimental results have produced performance characterized by phase noise as low as -50 dBc/Hz at 10 Hz offset and -140 dBc/Hz for a 10 GHz oscillator. This performance is significant because it was produced by a free-running oscillator. Since the noise in an OEO is independent of the oscillation frequency, the same performance may also be obtained at higher frequencies. Recent work in our laboratory has focused on three areas: 1) realization of a compact OEO based on semiconductor lasers and modulators, 2) reduction of the close-to-carrier noise of the OEO originating from the 1/f noise of the amplifier, and 3) miniaturization of the OEO. In this paper we report on progress made in these areas and describe future plans to increase the performance and efficiency of the OEO.

  1. Influence of exercise duration on cardiorespiratory responses, energy cost and tissue oxygenation within a 6 hour treadmill run.

    PubMed

    Kerhervé, Hugo A; McLean, Scott; Birkenhead, Karen; Parr, David; Solomon, Colin

    2017-01-01

    The physiological mechanisms for alterations in oxygen utilization (V̇O2) and the energy cost of running (Cr) during prolonged running are not completely understood, and could be linked with alterations in muscle and cerebral tissue oxygenation. Eight trained ultramarathon runners (three women; mean ± SD; age 37 ± 7 yr; maximum V̇O2 60 ± 15 mL min-1 kg-1) completed a 6 hr treadmill run (6TR), which consisted of four modules, including periods of moderate (3 min at 10 km h-1, 10-CR) and heavy exercise intensities (6 min at 70% of maximum V̇O2, HILL), separated by three 100 min periods of self-paced running (SP). We measured V̇O2, minute ventilation (V̇E), ventilatory efficiency, respiratory exchange ratio (RER), Cr, and muscle and cerebral tissue saturation index (TSI) during the modules, and heart rate (HR) and perceived exertion (RPE) during the modules and SP. Participants ran 58.3 ± 10.5 km during 6TR. Speed decreased and HR and RPE increased during SP. Across the modules, HR and V̇O2 increased (10-CR), and RER decreased (10-CR and HILL). There were no significant changes in V̇E, ventilatory efficiency, Cr, TSI and RPE across the modules. In the context of positive pacing (decreasing speed), increased cardiac drift and perceived exertion over the 6TR, we observed increased RER and increased HR at moderate and heavy exercise intensity, increased V̇O2 at moderate intensity, and no effect of exercise duration on ventilatory efficiency, energy cost of running and tissue oxygenation.
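    For context, the energy cost of running Cr named above is commonly computed from V̇O2, RER and speed. A minimal sketch using the standard Lusk-type caloric equivalent of oxygen (an assumption for illustration; the paper's exact computation is not given in the abstract):

```python
def energy_cost_of_running(vo2_ml_kg_min, rer, speed_km_h):
    """One common way to compute Cr in J/(kg*m): convert VO2 to energy using
    an RER-dependent caloric equivalent (Lusk: 3.815 + 1.232*RER kcal per
    litre O2, assumed here) and divide by running speed. Sketch only."""
    joules_per_ml_o2 = (3.815 + 1.232 * rer) * 4.186  # kcal/L O2 -> J/mL O2
    speed_m_min = speed_km_h * 1000.0 / 60.0
    return vo2_ml_kg_min * joules_per_ml_o2 / speed_m_min
```

    Plausible trained-runner values (e.g. 35 mL kg-1 min-1 at 10 km h-1 with RER 0.9) give Cr of roughly 4.3 J kg-1 m-1, within the usual physiological range.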

  2. Intra-dance variation among waggle runs and the design of efficient protocols for honey bee dance decoding.

    PubMed

    Couvillon, Margaret J; Riddell Pearce, Fiona C; Harris-Jones, Elisabeth L; Kuepfer, Amanda M; Mackenzie-Smith, Samantha J; Rozario, Laura A; Schürch, Roger; Ratnieks, Francis L W

    2012-05-15

    Noise is universal in information transfer. In animal communication, this presents a challenge not only for intended signal receivers, but also to biologists studying the system. In honey bees, a forager communicates to nestmates the location of an important resource via the waggle dance. This vibrational signal is composed of repeating units (waggle runs) that are then averaged by nestmates to derive a single vector. Manual dance decoding is a powerful tool for studying bee foraging ecology, although the process is time-consuming: a forager may repeat the waggle run from 1 to more than 100 times within a dance. It is impractical to decode all of these to obtain the vector; however, intra-dance waggle runs vary, so it is important to decode enough to obtain a good average. Here we examine the variation among waggle runs made by foraging bees to devise a method of dance decoding. The first and last waggle runs within a dance are significantly more variable than the middle runs. There was no trend in variation for the middle waggle runs. We recommend that any four consecutive waggle runs, not including the first and last runs, may be decoded, and we show that this methodology is suitable by demonstrating the goodness of fit between the decoded vectors from our subsamples and the vectors from the entire dances.
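    The decoding recommendation can be sketched in code (an illustration, not the authors' software). Each waggle run is reduced to an (angle, duration) pair; angles must be averaged circularly, durations arithmetically:

```python
import math

def decode_dance(waggle_runs, n=4):
    """Average n consecutive mid-dance waggle runs, skipping the first and
    last runs as recommended above. Each run is an (angle_rad, duration_s)
    pair. (Illustrative sketch of the protocol.)"""
    if len(waggle_runs) < n + 2:
        raise ValueError("dance too short to skip first/last and decode n runs")
    middle = waggle_runs[1:-1][:n]  # any n consecutive middle runs
    mean_angle = math.atan2(sum(math.sin(a) for a, _ in middle),
                            sum(math.cos(a) for a, _ in middle))
    mean_duration = sum(d for _, d in middle) / len(middle)
    return mean_angle, mean_duration
```

    Duration encodes distance and angle encodes direction, so the four-run average approximates the whole-dance vector while avoiding the noisier first and last runs.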

  3. Intra-dance variation among waggle runs and the design of efficient protocols for honey bee dance decoding

    PubMed Central

    Couvillon, Margaret J.; Riddell Pearce, Fiona C.; Harris-Jones, Elisabeth L.; Kuepfer, Amanda M.; Mackenzie-Smith, Samantha J.; Rozario, Laura A.; Schürch, Roger; Ratnieks, Francis L. W.

    2012-01-01

    Noise is universal in information transfer. In animal communication, this presents a challenge not only for intended signal receivers, but also to biologists studying the system. In honey bees, a forager communicates to nestmates the location of an important resource via the waggle dance. This vibrational signal is composed of repeating units (waggle runs) that are then averaged by nestmates to derive a single vector. Manual dance decoding is a powerful tool for studying bee foraging ecology, although the process is time-consuming: a forager may repeat the waggle run from 1 to more than 100 times within a dance. It is impractical to decode all of these to obtain the vector; however, intra-dance waggle runs vary, so it is important to decode enough to obtain a good average. Here we examine the variation among waggle runs made by foraging bees to devise a method of dance decoding. The first and last waggle runs within a dance are significantly more variable than the middle runs. There was no trend in variation for the middle waggle runs. We recommend that any four consecutive waggle runs, not including the first and last runs, may be decoded, and we show that this methodology is suitable by demonstrating the goodness of fit between the decoded vectors from our subsamples and the vectors from the entire dances. PMID:23213438

  4. Current and Future Applications of Machine Learning for the US Army

    DTIC Science & Technology

    2018-04-13

    designing from the unwieldy application of the first principles of flight controls, aerodynamics, blade propulsion, and so on, the designers turned...when the number of features runs into millions can become challenging. To overcome these issues, regularization techniques have been developed which...and compiled to run efficiently on either CPU or GPU architectures. 5) Keras is a library that contains numerous implementations of commonly used

  5. Improving the Efficiency of Free Energy Calculations in the Amber Molecular Dynamics Package.

    PubMed

    Kaus, Joseph W; Pierce, Levi T; Walker, Ross C; McCammont, J Andrew

    2013-09-10

    Alchemical transformations are widely used methods to calculate free energies. Amber has traditionally included support for alchemical transformations as part of the sander molecular dynamics (MD) engine. Here we describe the implementation of a more efficient approach to alchemical transformations in the Amber MD package. Specifically, we have implemented this new approach within the more computationally efficient and scalable pmemd MD engine that is included with the Amber MD package. The majority of the gain in efficiency comes from the improved design of the calculation, which includes better parallel scaling and a reduction in the calculation of redundant terms. This new implementation is able to reproduce results from equivalent simulations run with the existing functionality, but at 2.5 times greater computational efficiency. This new implementation is also able to run softcore simulations at the λ end states, making direct calculation of free energies more accurate compared with the extrapolation required in the existing implementation. The updated alchemical transformation functionality will be included in the next major release of Amber (scheduled for release in Q1 2014) and will be available at http://ambermd.org, under the Amber license.
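    For readers unfamiliar with the λ end-state issue: alchemical codes typically use a soft-core potential so that the interaction energy stays finite as a particle is decoupled. A generic Beutler-style soft-core Lennard-Jones is sketched below (this is the general textbook form, not necessarily pmemd's exact expression):

```python
def softcore_lj(r, lam, epsilon=1.0, sigma=1.0, alpha=0.5):
    """Beutler-style soft-core Lennard-Jones:
    U = 4*eps*lam * [1/(alpha*(1-lam) + (r/sigma)**6)**2
                     - 1/(alpha*(1-lam) + (r/sigma)**6)].
    At lam = 1 this reduces to the ordinary LJ potential; at lam = 0 the
    particle is fully decoupled; at intermediate lam the energy remains
    finite even at r = 0, which is what allows stable end-state sampling."""
    denom = alpha * (1.0 - lam) + (r / sigma) ** 6
    return 4.0 * epsilon * lam * (denom ** -2 - denom ** -1)
```

    For example, softcore_lj(2 ** (1 / 6), 1.0) recovers the plain LJ minimum of -epsilon, while softcore_lj(0.0, 0.5) is finite rather than divergent.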

  6. Improving the Efficiency of Free Energy Calculations in the Amber Molecular Dynamics Package

    PubMed Central

    Pierce, Levi T.; Walker, Ross C.; McCammont, J. Andrew

    2013-01-01

    Alchemical transformations are widely used methods to calculate free energies. Amber has traditionally included support for alchemical transformations as part of the sander molecular dynamics (MD) engine. Here we describe the implementation of a more efficient approach to alchemical transformations in the Amber MD package. Specifically, we have implemented this new approach within the more computationally efficient and scalable pmemd MD engine that is included with the Amber MD package. The majority of the gain in efficiency comes from the improved design of the calculation, which includes better parallel scaling and a reduction in the calculation of redundant terms. This new implementation is able to reproduce results from equivalent simulations run with the existing functionality, but at 2.5 times greater computational efficiency. This new implementation is also able to run softcore simulations at the λ end states, making direct calculation of free energies more accurate compared with the extrapolation required in the existing implementation. The updated alchemical transformation functionality will be included in the next major release of Amber (scheduled for release in Q1 2014) and will be available at http://ambermd.org, under the Amber license. PMID:24185531

  7. The concept of surgical operating list 'efficiency': a formula to describe the term.

    PubMed

    Pandit, J J; Westbury, S; Pandit, M

    2007-09-01

    While numerous reports have sought ways of improving the efficiency of surgical operating lists, none has defined 'efficiency'. We describe a formula that defines efficiency as incorporating three elements: maximising utilisation, minimising over-running and minimising cancellations on a list. We applied this formula to hypothetical (but realistic) scenarios, and our formula yielded plausible descriptions of these. We also applied the formula to 16 consecutive elective surgical lists from three gynaecology teams (two at a university hospital and one at a non-university hospital). Again, the formula gave useful insights into problems faced by the teams in improving their performance, and it also guided possible solutions. The formula confirmed that a team that schedules cases according to the predicted durations of the operations listed (i.e. the non-university hospital team) suffered fewer cancellations (median 5% vs 8% and 13%) and fewer list over-runs (6% vs 38% and 50%), and performed considerably more efficiently (90% vs 79% and 72%; p = 0.038) than teams that did not do so (i.e. those from the university hospital). We suggest that surgical list performance is more completely described by our formula for efficiency than it is by other conventional measures such as list utilisation or cancellation rate alone.
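    The abstract names the three elements of the formula without reproducing it. A hedged sketch of one plausible score that rewards utilisation and penalises over-runs and cancellations (an illustrative form only, not necessarily the authors' published formula):

```python
def list_efficiency(utilisation, overrun_fraction, cancellation_fraction):
    """Illustrative operating-list efficiency score combining the three
    elements named in the abstract. All inputs are fractions of the
    scheduled list (0-1); utilisation above 1.0 is capped so that
    over-running cannot masquerade as high utilisation."""
    score = (min(utilisation, 1.0) - overrun_fraction) * (1.0 - cancellation_fraction)
    return max(score, 0.0)
```

    With arbitrary inputs such as full utilisation, a 6% over-run and 5% cancellations, the score is 0.893; a heavily over-running or cancellation-prone list scores much lower, which is the qualitative behaviour the abstract describes.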

  8. Variation in annual run-off in the Rocky Mountain region: Chapter A in Contributions to the hydrology of the United States, 1923-1924

    USGS Publications Warehouse

    Follansbee, Robert

    1925-01-01

    Records of run-off in the Rocky Mountain States since the nineties and for a few stations since the eighties afford a means of studying the variation in the annual run-off in this region. The data presented in this report show that the variation in annual run-off differs in different areas in the Rocky Mountain region, owing to the differences in the sources of the precipitation in these areas. Except in the drainage basins of streams in northern Montana the year of lowest run-off shown by the records was 1902, when the run-off at one station was only 36 per cent of the mean run-off for the periods covered by the several records available. The percentage variation of run-off for streams in different parts of Colorado is less for any one year than that for streams in the mountain region as a whole, and for streams in the same major drainage basin the annual variation is markedly similar. The influence of topography upon variation in annual run-off for streams in Colorado is marked, the streams that rise in the central mountain region having a smaller range in variation than the streams that rise on the eastern or western edges of the central mountain mass. The streams that rise on the plains just east of the mountains have a greater variation than those of any of the mountain groups. The ratio of any 10-year mean to the mean for the entire period covered by the records ranges from 72 to 133 per cent. For the South Platte, Arkansas, and Rio Grande the run-off during the nineties was below the normal, but since about 1903 it has been above normal. For the Cache la Poudre low-water periods occurred during the eighties and from 1905 to 1922, but during the nineties the run-off was above the normal.

  9. Physiological and biomechanical adaptations to the cycle to run transition in Olympic triathlon: review and practical recommendations for training.

    PubMed

    Millet, G P; Vleck, V E

    2000-10-01

    Current knowledge of the physiological, biomechanical, and sensory effects of the cycle to run transition in the Olympic triathlon (1.5 km, 10 km, 40 km) is reviewed and implications for the training of junior and elite triathletes are discussed. Triathlon running elicits hyperventilation, increased heart rate, decreased pulmonary compliance, and exercise induced hypoxaemia. This may be due to exercise intensity, ventilatory muscle fatigue, dehydration, muscle fibre damage, a shift in metabolism towards fat oxidation, and depleted glycogen stores after a 40 km cycle. The energy cost (CR) of running during the cycle to run transition is also increased over that of control running. The increase in CR varies from 1.6% to 11.6% and is a reflection of triathlete ability level. This increase may be partly related to kinematic alterations, but research suggests that most biomechanical parameters are unchanged. A more forward leaning trunk inclination is the most significant observation reported. Running pattern, and thus running economy, could also be influenced by sensorimotor perturbations related to the change in posture. Technical skill in the transition area is obviously very important. The conditions under which the preceding cycling section is performed-that is, steady state or stochastic power output, drafting or non-drafting-are likely to influence the speed of adjustment to transition. The extent to which a decrease in the average 10 km running speed occurs during competition must be investigated further. It is clear that the higher the athlete is placed in the field at the end of the bike section, the greater the importance to their finishing position of both a quick transition area time and optimal adjustment to the physiological demands of the cycle to run transition. 
The need for, and current methods of, training to prepare junior and elite triathletes for a better transition are critically reviewed in light of the effects of sequential cycle to run exercise.

  10. Comparisons of spawning areas and times for two runs of chinook salmon (Oncorhynchus tshawytscha) in the Kenai River, Alaska

    USGS Publications Warehouse

    Burger, C.V.; Wilmot, R.L.; Wangaard, D.B.

    1985-01-01

    From 1979 to 1982, 188 chinook salmon (Oncorhynchus tshawytscha) were tagged with radio transmitters to locate spawning areas in the glacial Kenai River, southcentral Alaska. Results confirmed that an early run entered the river in May and June and spawned in tributaries, and a late run entered the river from late June through August and spawned in the main stem. Spawning peaked during August in tributaries influenced by lakes, but during July in other tributaries. Lakes may have increased fall and winter temperatures of downstream waters, enabling successful reproduction for later spawning fish within these tributaries. This hypothesis assumes that hatching and emergence can be completed in a shorter time in lake-influenced waters. The time of upstream migration and spawning (mid- to late August) of the late run is unique among chinook stocks in Cook Inlet. This behavior may have developed only because two large lakes (Kenai and Skilak) directly influence the main-stem Kenai River. If run timing is genetically controlled, and if the various components of the two runs are isolated stocks that have adapted to predictable stream temperatures, there are implications for stock transplantation programs and for any activities of man that alter stream temperatures.

  11. 40 CFR 63.4361 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determine the mass fraction of TVH liquid input from each regulated material used in the web coating.../printing or dyeing/finishing operation during the capture efficiency test run, kg. TVHi = Mass fraction of... enclosure or building enclosure. The liquid-to-uncaptured-gas protocol compares the mass of liquid TVH in...

  12. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-02-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM LPJ-GUESS to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this end, we applied the recently developed method GAPPARD (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by a factor of approximately 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method, despite some limitations regarding extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed, high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
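    The GAPPARD averaging step described above can be sketched as follows (a simplified illustration under an assumed exponential patch-age distribution implied by the disturbance frequency; not the published implementation):

```python
import math

def gappard_mean(output_by_age, disturbance_freq):
    """Weight the outputs of a single undisturbed simulation run by an
    exponential patch-age distribution p(a) ~ exp(-f*a), where f is the
    disturbance frequency, and return the weighted mean output.
    output_by_age: (patch_age_years, output_of_undisturbed_run) pairs."""
    weights = [math.exp(-disturbance_freq * a) for a, _ in output_by_age]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, output_by_age)) / total
```

    The point of the method is visible in the weighting: young (recently disturbed) patches dominate the landscape mean when disturbance is frequent, so one undisturbed run plus the age distribution replaces many replicate patch simulations.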

  13. New approach to calibrating bed load samplers

    USGS Publications Warehouse

    Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.

    1985-01-01

    Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.
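    A minimal sketch of a distribution-matching calibration of the kind described (an assumed reading of the method, not the authors' procedure): pair equal-rank quantiles of the sampled and measured rate distributions, then correct individual sampled rates through the resulting curve.

```python
def quantile_calibration(sampled, measured):
    """Build a calibration curve by matching probability distributions:
    sort both series and pair equal-rank quantiles. Returns a list of
    (sampled_rate, measured_rate) points."""
    xs, ys = sorted(sampled), sorted(measured)
    n = min(len(xs), len(ys))
    if n == 1:
        return [(xs[0], ys[0])]
    pick = lambda s: [s[round(i * (len(s) - 1) / (n - 1))] for i in range(n)]
    return list(zip(pick(xs), pick(ys)))

def correct_rate(sampled_rate, curve):
    """Correct one sampled rate by piecewise-linear interpolation."""
    if sampled_rate <= curve[0][0]:
        return curve[0][1]
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= sampled_rate <= x1:
            return y0 + (sampled_rate - x0) / (x1 - x0) * (y1 - y0)
    return curve[-1][1]
```

    This reflects why the abstract stresses curves rather than a single efficiency: the correction can differ at low and high transport rates.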

  14. Free-running InGaAs single photon detector with 1 dark count per second at 10% efficiency

    NASA Astrophysics Data System (ADS)

    Korzh, B.; Walenta, N.; Lunghi, T.; Gisin, N.; Zbinden, H.

    2014-02-01

    We present a free-running single photon detector for telecom wavelengths based on a negative feedback avalanche photodiode (NFAD). A dark count rate as low as 1 cps was obtained at a detection efficiency of 10%, with an afterpulse probability of 2.2% for 20 μs of deadtime. This was achieved by using an active hold-off circuit and cooling the NFAD with a free-piston stirling cooler down to temperatures of -110 °C. We integrated two detectors into a practical, 625 MHz clocked quantum key distribution system. Stable, real-time key distribution in the presence of 30 dB channel loss was possible, yielding a secret key rate of 350 bps.

  15. Novel Long Stroke Reciprocating Compressor for Energy Efficient Jaggery Making

    NASA Astrophysics Data System (ADS)

    Rane, M. V.; Uphade, D. B.

    2017-08-01

    A Novel Long Stroke Reciprocating Compressor is analysed for jaggery making that avoids burning bagasse to concentrate juice. The heat of the evaporated water vapour, along with a small compressor work input, is recycled to boil the juice. The condensate formed during heating of the juice is pure water, as an oil-less compressor is used. Compressor superheat is suppressed by passing the superheated vapours through the condensate. This limits the heating-surface temperature and avoids caramelization of sugar, thereby improving the quality of the jaggery and eliminating the need for chemicals for colour improvement. The stroke-to-bore ratio is 0.6 to 1.2 in conventional reciprocating drives. A long stroke in reciprocating compressors enhances heat dissipation to the surroundings by providing a large surface area and increases isentropic efficiency by reducing the compressor outlet temperature. A longer stroke increases inlet and exit valve operation timings, which reduces inertial effects substantially, allowing the use of sturdier valves. This enables handling liquid along with vapour in the compressor, thereby suppressing superheat and reducing compressor power input. A longer stroke also increases the stroke-to-clearance ratio, which increases volumetric efficiency and the ability of the compressor to compress efficiently through higher pressure ratios. Stress-strain simulation of the gear drive was performed in SolidWorks. The Long Stroke Reciprocating Compressor was developed at the Heat Pump Laboratory, with stroke/bore of 292 mm/32 mm, and was operated and tested successfully at different speeds for operational stability of components. The theoretical volumetric efficiency is 93.9% at a pressure ratio of 2.0. Specific energy consumption is 108.3 kWhe/m3 of separated water, considering free-run power.
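    The claim that a longer stroke (and hence a smaller clearance ratio) raises volumetric efficiency follows from the classical clearance-volume model. A sketch with an assumed polytropic exponent (illustrative model, not the authors' analysis):

```python
def volumetric_efficiency(clearance_ratio, pressure_ratio, n=1.3):
    """Classical clearance-volume model: eta_v = 1 - c*(Pr**(1/n) - 1),
    where c is the clearance-to-swept-volume ratio, Pr the pressure ratio,
    and n an assumed polytropic exponent. A longer stroke lowers c and so
    raises eta_v."""
    return 1.0 - clearance_ratio * (pressure_ratio ** (1.0 / n) - 1.0)

# Hypothetical: halving the clearance ratio at pressure ratio 2.0
high_clearance = volumetric_efficiency(0.08, 2.0)  # ~0.94
low_clearance = volumetric_efficiency(0.04, 2.0)   # ~0.97
```

    The model also shows why high-pressure-ratio operation benefits most: the clearance penalty term grows with Pr, so reducing c matters more there.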

  16. Physiological demands of running during long distance runs and triathlons.

    PubMed

    Hausswirth, C; Lehénaff, D

    2001-01-01

    The aim of this review article is to identify the main metabolic factors which have an influence on the energy cost of running (Cr) during prolonged exercise runs and triathlons. This article proposes a physiological comparison of these 2 exercises and the relationship between running economy and performance. Many terms are used as the equivalent of 'running economy' such as 'oxygen cost', 'metabolic cost', 'energy cost of running', and 'oxygen consumption'. It has been suggested that these expressions may be defined by the rate of oxygen uptake (VO2) at a steady state (i.e. between 60 to 90% of maximal VO2) at a submaximal running speed. Endurance events such as triathlon or marathon running are known to modify biological constants of athletes and should have an influence on their running efficiency. The Cr appears to contribute to the variation found in distance running performance among runners of homogeneous level. This has been shown to be important in sports performance, especially in events like long distance running. In addition, many factors are known or hypothesised to influence Cr such as environmental conditions, participant specificity, and metabolic modifications (e.g. training status, fatigue). The decrease in running economy during a triathlon and/or a marathon could be largely linked to physiological factors such as the enhancement of core temperature and a lack of fluid balance. Moreover, the increase in circulating free fatty acids and glycerol at the end of these long exercise durations bear witness to the decrease in Cr values. The combination of these factors alters the Cr during exercise and hence could modify the athlete's performance in triathlons or a prolonged run.

  17. Magnetically Separable MoS₂/Fe₃O₄/nZVI Nanocomposites for the Treatment of Wastewater Containing Cr(VI) and 4-Chlorophenol.

    PubMed

    Lu, Haijiao; Wang, Jingkang; Hao, Hongxun; Wang, Ting

    2017-09-30

    With a large specific surface area, high reactivity, and excellent adsorption properties, nano zerovalent iron (nZVI) can degrade a wide variety of contaminants in wastewater. However, aggregation, oxidation, and separation issues greatly impede its wide application. In this study, MoS₂/Fe₃O₄/nZVI nanocomposites were successfully synthesized by a facile step-by-step approach to overcome these problems. MoS₂ nanosheets (MNs) acted as an efficient support for nZVI and enriched the organic pollutants nearby, leading to an enhanced removal efficiency. Fe₃O₄ nanoparticles (NPs) could not only suppress the agglomeration and restacking of MNs, but also facilitate easy separation and recovery of the nanocomposites. The synergistic effect between MNs and Fe₃O₄ NPs effectively enhanced the reactivity and efficiency of nZVI. In the system, Cr(VI) was reduced to Cr(III) by nZVI in the nanocomposites, and the Fe²⁺ produced in the process was combined with H₂O₂ to further remove 4-chlorophenol (4-CP) through a Fenton reaction. Furthermore, the nanocomposites could be easily separated from wastewater by a magnet and reused for at least five consecutive runs, revealing good reusability. The results demonstrate that the novel nanocomposites are highly efficient and promising for the simultaneous removal of Cr(VI) and 4-CP in wastewater.

  18. Land Use Cover Changes and Run-Off Potential of Cipunten Agung Watershed, Banten

    NASA Astrophysics Data System (ADS)

    Karima, A.; Kaswanto, R. L.

    2017-10-01

    Changes in landscape form, such as land use cover changes (LUCC), in the Cipunten Agung watershed were identified for the periods 1995, 2005, and 2015. In general, land utilization in Cipunten Agung is classified into protected and cultivated regions. In 2011, the total protected area was 885.80 ha, or 22.54% of the watershed area. These conditions affected the community positively through development, but negatively through water-quantity problems in Cipunten Agung such as flooding, run-off, and erosion. Therefore, the purpose of this research is to analyze the impacts of LUCC on run-off potential in the Cipunten Agung watershed. A supervised classification method and the Soil Conservation Services (Qscs) approach were combined to determine an optimal solution for reducing the rate of LUCC. Cipunten Agung watershed imagery was classified into five classes: water bodies, forest, cultivated trees, settlement, and paddy fields. The results show that the areas of cultivated trees and paddy fields are larger than the others in the midstream, and settlement is denser downstream, particularly in riparian landscapes. Conversion to paddy field occurred most often in the two periods 1995 to 2005 and 2005 to 2015, covering 530.92 ha and 388.17 ha respectively. The Qscs calculation for 1995 to 2015 was affected by the land use cover composition in each year, expressed through the Curve Number (CN). High rainfall in 1995 generated a high potential run-off volume. Moreover, the curve number increased toward 100, indicating that the potential run-off volume increases along with LUCC in each year; the values are 70.95, 72.47, and 72.81.
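    The SCS curve-number relation underlying the Qscs approach is standard and makes the CN trend concrete; a minimal sketch (the 100 mm storm depth is a hypothetical input, not a figure from the study):

```python
def scs_runoff(rainfall_mm, curve_number):
    """Standard SCS-CN direct run-off depth (metric form):
    S = 25400/CN - 254 (mm) and Q = (P - 0.2*S)**2 / (P + 0.8*S)
    for P > 0.2*S, else Q = 0."""
    s = 25400.0 / curve_number - 254.0
    ia = 0.2 * s  # initial abstraction
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm + 0.8 * s)

# The CN values reported above, applied to a hypothetical 100 mm storm:
for cn in (70.95, 72.47, 72.81):
    print(cn, round(scs_runoff(100.0, cn), 1))
```

    Even the small CN increase from 70.95 to 72.81 raises the run-off depth of the same storm by several millimetres, which is the mechanism by which LUCC increases run-off potential.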

  19. Consequence assessment of large rock slope failures in Norway

    NASA Astrophysics Data System (ADS)

    Oppikofer, Thierry; Hermanns, Reginald L.; Horton, Pascal; Sandøy, Gro; Roberts, Nicholas J.; Jaboyedoff, Michel; Böhme, Martina; Yugsi Molina, Freddy X.

    2014-05-01

    Steep glacially carved valleys and fjords in Norway are prone to many landslide types, including large rockslides, rockfalls, and debris flows. Large rockslides and their secondary effects (rockslide-triggered displacement waves, inundation behind landslide dams and outburst floods from failure of landslide dams) pose a significant hazard to the population living in the valleys and along the fjord shorelines. The Geological Survey of Norway performs systematic mapping of unstable rock slopes in Norway and has detected more than 230 unstable slopes with significant postglacial deformation. This large number necessitates prioritisation of follow-up activities, such as more detailed investigations, periodic displacement measurements, continuous monitoring and early-warning systems. Prioritisation is achieved through a hazard and risk classification system, which has been developed by a panel of international and Norwegian experts (www.ngu.no/en-gb/hm/Publications/Reports/2012/2012-029). The risk classification system combines a qualitative hazard assessment with a consequence assessment focusing on potential life losses. The hazard assessment is based on a series of nine geomorphological, engineering geological and structural criteria, as well as displacement rates, past events and other signs of activity. We present a method for consequence assessment comprising four main steps: 1. computation of the volume of the unstable rock slope; 2. run-out assessment based on the volume-dependent angle of reach (Fahrböschung) or detailed numerical run-out modelling; 3. assessment of possible displacement wave propagation and run-up based on empirical relations or modelling in 2D or 3D; and 4. estimation of the number of persons exposed to rock avalanches or displacement waves.
Volume computation of an unstable rock slope is based on the sloping local base level technique, which uses a digital elevation model to create a second-order curved surface spanning the mapped extent of the unstable rock slope. This surface represents the possible basal sliding surface of the unstable rock slope. The elevation difference between this surface and the topographic surface gives an estimate of the volume of the unstable rock slope. A tool has been developed for the present study to adapt the curvature parameters of the computed surface to local geological and structural conditions. The obtained volume is then used to define the angle of reach of a possible rock avalanche from the unstable rock slope using empirically derived angle of reach vs. volume relations. Run-out area is calculated using FlowR; the software is widely used for run-out assessment of debris flows and is adapted here for the assessment of rock avalanches, including their potential to ascend opposing slopes. Under certain conditions, more sophisticated and complex numerical run-out models are also used. For rock avalanches with the potential to reach a fjord or a lake, the propagation and run-up area of triggered displacement waves is assessed. Empirical relations of wave run-up height as a function of rock avalanche volume and distance from the impact location are derived from a national and international inventory of landslide-triggered displacement waves. These empirical relations are used in first-level hazard assessment and, where necessary, followed by 2D or 3D displacement wave modelling. Finally, the population exposed in the rock avalanche run-out area and in the run-up area of a possible displacement wave is assessed, taking into account different population groups: inhabitants, persons in critical infrastructure (hospitals and other emergency services), persons in schools and kindergartens, persons at work or in shops, tourists, persons on ferries and so on. 
Exposure levels are defined for each population group and vulnerability values are set for the rock avalanche run-out area (100%) and the run-up area of a possible displacement wave (70%). Finally, the total number of persons within the hazard area is calculated taking into account exposure and vulnerability. The method for consequence assessment is currently being tested in several case studies in Norway and will thereafter be applied to all unstable rock slopes in the country to assess their risk level. Follow-up activities (detailed investigations, periodic displacement measurements or continuous monitoring and early-warning systems) can then be prioritized based on risk level, using a standard approach for the whole of Norway.
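The exposure step described in this record reduces to a weighted head count: persons per group, scaled by how often they are present and by the stated vulnerability values (100% in the run-out area, 70% in the wave run-up area). A minimal sketch of that calculation; the population groups and exposure fractions below are hypothetical, not from the study:

```python
# Hedged sketch of the final consequence-assessment step: total exposed
# persons weighted by exposure fraction and by the stated vulnerability
# values. All population figures below are invented for illustration.

VULNERABILITY = {"runout": 1.0, "wave_runup": 0.7}

def exposed_persons(groups):
    """groups: iterable of (persons, exposure_fraction, hazard_zone) tuples."""
    total = 0.0
    for persons, exposure, zone in groups:
        total += persons * exposure * VULNERABILITY[zone]
    return total

# Hypothetical example: 120 inhabitants always present in the run-out area,
# 40 tourists present half the time in the wave run-up area.
groups = [(120, 1.0, "runout"), (40, 0.5, "wave_runup")]
print(exposed_persons(groups))  # 120*1.0*1.0 + 40*0.5*0.7, i.e. about 134
```

The per-group decomposition mirrors the record's list of population groups (inhabitants, tourists, persons on ferries, and so on), each of which would get its own exposure level.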

  20. Field programmable gate arrays-based number plate binarization and adjustment for automatic number plate recognition systems

    NASA Astrophysics Data System (ADS)

    Zhai, Xiaojun; Bensaali, Faycal; Sotudeh, Reza

    2013-01-01

    Number plate (NP) binarization and adjustment are important preprocessing stages in automatic number plate recognition (ANPR) systems and are used to link the number plate localization (NPL) and character segmentation stages. Successfully linking these two stages will improve the performance of the entire ANPR system. We present two optimized low-complexity NP binarization and adjustment algorithms. Efficient area/speed architectures based on the proposed algorithms are also presented and have been successfully implemented and tested using the Mentor Graphics RC240 FPGA development board, which together require only 9% of the available on-chip resources of a Virtex-4 FPGA, run with a maximum frequency of 95.8 MHz and are capable of processing one image in 0.07 to 0.17 ms.

  1. Dynamic configuration management of a multi-standard and multi-mode reconfigurable multi-ASIP architecture for turbo decoding

    NASA Astrophysics Data System (ADS)

    Lapotre, Vianney; Gogniat, Guy; Baghdadi, Amer; Diguet, Jean-Philippe

    2017-12-01

    The multiplication of connected devices goes along with a large variety of applications and traffic types with diverse requirements. Accompanying this evolution in connectivity, recent years have seen considerable evolution of wireless communication standards in the domains of mobile telephone networks, local/wide wireless area networks, and Digital Video Broadcasting (DVB). In this context, intensive research has been conducted to provide flexible turbo decoders targeting high throughput, multi-mode and multi-standard operation, and power efficiency. However, flexible turbo decoder implementations have rarely considered dynamic reconfiguration in this context, which requires high-speed configuration switching. Starting from this assessment, this paper proposes the first solution that allows frame-by-frame run-time configuration management of a multi-processor turbo decoder without compromising decoding performance.

  2. Publishing bioethics and bioethics--reflections on academic publishing by a journal editor.

    PubMed

    Schüklenk, Udo

    2011-02-01

    This article by one of the Editors of Bioethics, published in the 25th anniversary issue of the journal, describes some of the revolutionary changes academic publishing has undergone during the last decades. Many humanities journals went from typically small print runs, counted in the hundreds, to on-line availability in thousands of university libraries worldwide. Article up-take by our subscribers can now be measured efficiently. The implications of these and other changes to academic publishing are discussed. Important ethical challenges need to be addressed in areas such as the enforcement of plagiarism-related policies, the so-called 'impact factor' and its impact on academic integrity, and the question of whether on-line only publishing can currently guarantee the integrity of academic publishing histories. © 2010 Blackwell Publishing Ltd.

  3. 19. TRAVELING CRANE ATOP SUPERSTRUCTURE, FROM RUN LINE DECK. Looking ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    19. TRAVELING CRANE ATOP SUPERSTRUCTURE, FROM RUN LINE DECK. Looking up to north northeast. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Test Stand 1-A, Test Area 1-120, north end of Jupiter Boulevard, Boron, Kern County, CA

  4. Adaptive genetic markers discriminate migratory runs of Chinook salmon (Oncorhynchus tshawytscha) amid continued gene flow

    PubMed Central

    O'Malley, Kathleen G; Jacobson, Dave P; Kurth, Ryon; Dill, Allen J; Banks, Michael A

    2013-01-01

    Neutral genetic markers are routinely used to define distinct units within species that warrant discrete management. Human-induced changes to gene flow however may reduce the power of such an approach. We tested the efficiency of adaptive versus neutral genetic markers in differentiating temporally divergent migratory runs of Chinook salmon (Oncorhynchus tshawytscha) amid high gene flow owing to artificial propagation and habitat alteration. We compared seven putative migration timing genes to ten microsatellite loci in delineating three migratory groups of Chinook in the Feather River, CA: offspring of fall-run hatchery broodstock that returned as adults to freshwater in fall (fall run), spring-run offspring that returned in spring (spring run), and fall-run offspring that returned in spring (FRS). We found evidence for significant differentiation between the fall and federally listed threatened spring groups based on divergence at three circadian clock genes (OtsClock1b, OmyFbxw11, and Omy1009UW), but not neutral markers. We thus demonstrate the importance of genetic marker choice in resolving complex life history types. These findings directly impact conservation management strategies and add to previous evidence from Pacific and Atlantic salmon indicating that circadian clock genes influence migration timing. PMID:24478800

  5. Mechanics and energetics of human locomotion on sand.

    PubMed

    Lejeune, T M; Willems, P A; Heglund, N C

    1998-07-01

    Moving about in nature often involves walking or running on a soft yielding substratum such as sand, which has a profound effect on the mechanics and energetics of locomotion. Force platform and cinematographic analyses were used to determine the mechanical work performed by human subjects during walking and running on sand and on a hard surface. Oxygen consumption was used to determine the energetic cost of walking and running under the same conditions. Walking on sand requires 1.6-2.5 times more mechanical work than does walking on a hard surface at the same speed. In contrast, running on sand requires only 1.15 times more mechanical work than does running on a hard surface at the same speed. Walking on sand requires 2.1-2.7 times more energy expenditure than does walking on a hard surface at the same speed; while running on sand requires 1.6 times more energy expenditure than does running on a hard surface. The increase in energy cost is due primarily to two effects: the mechanical work done on the sand, and a decrease in the efficiency of positive work done by the muscles and tendons.

  6. The mechanics and energetics of human walking and running: a joint level perspective.

    PubMed

    Farris, Dominic James; Sawicki, Gregory S

    2012-01-07

    Humans walk and run at a range of speeds. While steady locomotion at a given speed requires no net mechanical work, moving faster does demand both more positive and negative mechanical work per stride. Is this increased demand met by increasing power output at all lower limb joints or just some of them? Does running rely on different joints for power output than walking? How does this contribute to the metabolic cost of locomotion? This study examined the effects of walking and running speed on lower limb joint mechanics and metabolic cost of transport in humans. Kinematic and kinetic data for 10 participants were collected for a range of walking (0.75, 1.25, 1.75, 2.0 m s(-1)) and running (2.0, 2.25, 2.75, 3.25 m s(-1)) speeds. Net metabolic power was measured by indirect calorimetry. Within each gait, there was no difference in the proportion of power contributed by each joint (hip, knee, ankle) to total power across speeds. Changing from walking to running resulted in a significant (p = 0.02) shift in power production from the hip to the ankle which may explain the higher efficiency of running at speeds above 2.0 m s(-1) and shed light on a potential mechanism behind the walk-run transition.

  7. [The functional sport shoe parameter "torsion" within running shoe research--a literature review].

    PubMed

    Michel, F I; Kälin, X; Metzger, A; Westphal, K; Schweizer, F; Campe, S; Segesser, B

    2009-12-01

    In the sport shoe field, torsion describes the twisting and decoupling of the rear-, mid- and forefoot along the longitudinal axis of the foot. Studies have shown that running shoes restrict the torsion of the foot and thus increase its pronation. Based on these findings, it is recommended to design running shoes that allow the natural freedom of movement of the foot. The first torsion concept was introduced to the market by adidas(R) in 1989. Apart from that first market introduction, only one epidemiological study has been conducted in the running shoe field; it was intended to investigate the occurrence of Achilles tendon problems among athletes running in the new adidas Torsion(R) shoes. Studies quantifying the optimal range of torsionability with respect to reducing injury incidence are still missing. Newer studies reveal that the torsion criterion plays only a secondary role in the buying decision; moreover, athletes are not able to perceive torsionability as a discrete functional parameter. It should be noted that several working groups are engaged intensively in the detailed analysis of foot movement based on kinematic multi-segment models. However, scientific as well as popular-scientific contributions show that the original idea of the torsion concept is still not completely understood; hence, the "inverse" characteristic is postulated. The present literature review leads to the conclusion that the functional characteristics of the torsion concept are not fully implemented in the running shoe field. This implies the need for scientific studies that investigate the relevance of a functional torsion concept for injury prevention, based on basic and applied research. In addition, biomechanical studies should systematically analyse the mechanisms and effects of torsion-relevant technologies and systems.

  8. The efficiency coefficient of the rat heart and muscular system after physical training and hypokinesia

    NASA Technical Reports Server (NTRS)

    Alyukhin, Y. S.; Davydov, A. F.

    1982-01-01

    The efficiency of an isolated heart did not change after prolonged physical training of rats for an extreme load. The increase in oxygen consumption by the entire organism in 'uphill' running as compared to the resting level in the trained rats was 14% lower than in the control animals. Prolonged hypokinesia of the rats did not elicit a change in the efficiency of the isolated heart.

  9. Differences in plantar loading between training shoes and racing flats at a self-selected running speed.

    PubMed

    Wiegerinck, Johannes I; Boyd, Jennifer; Yoder, Jordan C; Abbey, Alicia N; Nunley, James A; Queen, Robin M

    2009-04-01

    The purpose of this study was to examine the difference in plantar loading between two different running shoe types. We hypothesized that a higher maximum force, peak pressure, and contact area would exist beneath the entire foot while running in a racing flat when compared to a training shoe. 37 athletes (17 male and 20 female) were recruited for this study. Subjects had no history of lower extremity injuries in the past six months, no history of foot or ankle surgery within the past 3 years, and no history of metatarsal stress fractures. Subjects had to be physically active and run at least 10 miles per week. Each subject ran on a 10 m runway 7 times wearing two different running shoe types, the Nike Air Pegasus (training shoe) and the Nike Air Zoom Katana IV (racing flat). A Pedar-X in-shoe pressure measurement system sampling at 50 Hz was used to collect plantar pressure data. Peak pressure, maximum force, and contact area beneath eight different anatomical regions of the foot as well as beneath the total foot were obtained. The results of this study demonstrated a significant difference between training shoes and racing flats in terms of peak pressure, maximum force, and contact area. The significant differences measured between the two shoes can be of importance when examining the influence of shoe type on the occurrence of stress fractures in runners.

  10. Exploring the Effect of Climate Perturbations on Water Availability for Renewable Energy Development in the Indian Wells Valley, California

    NASA Astrophysics Data System (ADS)

    Rey, David M.

    Energy and water are connected through the water-use cycle (e.g. obtaining, transporting, and treating water) and thermoelectric energy generation, which converts heat to electricity via steam-driven turbines. As the United States implements more renewable energy technologies, quantifying the relationships between energy, water, and land-surface impacts of these implementations will provide policy makers the strengths and weaknesses of different renewable energy options. In this study, a MODFLOW model of the Indian Wells Valley (IWV), in California, was developed to capture the water, energy, and land-surface impacts of potential proposed 1) solar, 2) wind, and 3) biofuel implementations. The model was calibrated to pre-existing groundwater head data from 1985 to present to develop a baseline model before running two-year predictive scenarios for photovoltaic (PV), concentrating solar power (CSP), wind, and biofuel implementations. Additionally, the baseline model was perturbed by decreasing mountain front recharge values by 5%, 10%, and 15%, simulating potential future system perturbations under a changing climate. These potential future conditions were used to re-run each implementation scenario. Implementation scenarios were developed based on population, typical energy use per person, existing land-use and land-cover type within the IWV, and previously published values for water use, surface-area use, and energy-generation potential for each renewable fuel type. The results indicate that the quantity of water needed, localized drawdown from pumping water to meet implementation demands, and generation efficiency are strongly controlled by the fuel type, as well as the energy generating technology and thermoelectric technologies implemented. 
Specifically, PV and wind-turbine (WT) implementations required less than 1% of the estimated annual aquifer recharge, while technologies such as biofuels and CSP, which rely on thermoelectric generation, ranged from 3% to 20%. As modeled groundwater elevations declined in the IWV, the net generation (i.e. energy produced - energy used) of each renewable energy implementation decreased due a higher energy cost for pumping groundwater. The loss in efficiency was minimal for PV and wind solutions, with maximum changes in the drawdown being less than 10 m; however, for CSP and biofuel implementations drawdowns over 50 m were observed at the pumping well, resulting in electrical generation efficiency losses between 4% and 50% over a two-year period. It was concluded that PV would be the best balance between water and land-use for the IWV, or other groundwater dependent Basin and Range settings. In areas with limited water resources but abundant available land for implementation, WT solutions would have the smallest hydrologic impact. The impact of renewable scenarios was highly variable across and within differing fuel types, with the potential for larger negative impacts under a changing climate in areas with no perennial surface water.
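The efficiency loss with drawdown follows from the standard hydraulic lift relation P = ρ·g·Q·h / η: pumping power scales linearly with total dynamic head, so a 50 m increase in lift directly inflates the energy debited from a scenario's net generation. A minimal sketch of that textbook relation with illustrative numbers, not the study's MODFLOW model:

```python
# Hedged sketch: hydraulic pumping power P = rho * g * Q * h / eta.
# All numbers below are illustrative, not taken from the IWV model.

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def pumping_power_kw(flow_m3s, lift_m, pump_efficiency=0.7):
    """Electrical power (kW) needed to lift groundwater against total head."""
    return RHO * G * flow_m3s * lift_m / pump_efficiency / 1000.0

base = pumping_power_kw(0.05, 100.0)   # 100 m static lift
drawn = pumping_power_kw(0.05, 150.0)  # an extra 50 m of drawdown at the well
print(f"pumping energy rises by {100 * (drawn / base - 1):.0f}%")
```

Because power is linear in head, the 50 m of extra drawdown in this illustrative case raises the pumping energy by 50%, which is the mechanism behind the net-generation losses the record describes for the CSP and biofuel scenarios.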

  11. Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy

    PubMed Central

    Ackermann, Marko; van den Bogert, Antonie J.

    2012-01-01

    The investigation of gait strategies at low gravity environments gained momentum recently as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to the study of either walking or running, the locomotion modes preferred under Earth's gravity. In order to investigate the gait strategies likely to be favored at low gravity a series of predictive, computational simulations of gait are performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy is utilized allowing for multiple simulations. The results reveal skipping as more efficient and less fatiguing than walking or running and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. The results are expected to serve as a background to the design of experimental investigations of gait under simulated low gravity. PMID:22365845

  12. Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy.

    PubMed

    Ackermann, Marko; van den Bogert, Antonie J

    2012-04-30

    The investigation of gait strategies at low gravity environments gained momentum recently as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to the study of either walking or running, the locomotion modes preferred under Earth's gravity. In order to investigate the gait strategies likely to be favored at low gravity a series of predictive, computational simulations of gait are performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy is utilized allowing for multiple simulations. The results reveal skipping as more efficient and less fatiguing than walking or running and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. The results are expected to serve as a background to the design of experimental investigations of gait under simulated low gravity. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hill, F. E.; Skutch, M. E.; Driggers, J. M.; Hopkins, R. H.

    1980-01-01

    A barrier crucible design which consistently maintains melt stability over long periods of time was successfully tested and used in long growth runs. The pellet feeder for melt replenishment was operated continuously for growth runs of up to 17 hours. The liquid level sensor comprising a laser/sensor system was operated, performed well, and met the requirements for maintaining liquid level height during growth and melt replenishment. An automated feedback loop connecting the feed mechanism and the liquid level sensing system was designed, constructed, and operated successfully for 3.5 hours, demonstrating the feasibility of semi-automated dendritic web growth. The sensitivity of the cost of sheet to variations in capital equipment cost and recycling dendrites was calculated, and it was shown that these factors have relatively little impact on sheet cost. Dendrites from web which had gone all the way through the solar cell fabrication process, when melted and grown into web, produce crystals which show no degradation in cell efficiency. Material quality remains high, and cells made from web grown at the start, during, and at the end of a run from a replenished melt show comparable efficiencies.

  14. Performance of a Block Structured, Hierarchical Adaptive MeshRefinement Code on the 64k Node IBM BlueGene/L Computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.

    2005-04-25

    We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.
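The weak- and strong-scaling behavior quoted in this record is conventionally reduced to parallel-efficiency numbers. A minimal sketch of the two standard definitions, with invented timings rather than Raptor's measurements:

```python
# Hedged sketch of the standard scaling metrics behind statements like
# "excellent weak and strong scaling". All timings below are invented.

def strong_scaling_efficiency(t1, tn, n):
    """Fixed total problem size: ideal time on n nodes is t1/n."""
    return t1 / (n * tn)

def weak_scaling_efficiency(t1, tn):
    """Problem size grows with node count: ideal time stays equal to t1."""
    return t1 / tn

# Invented example: 100 s on 1 node, 1.7 s on 64 nodes (strong scaling),
# 104 s on 64 nodes with 64x the work (weak scaling).
print(strong_scaling_efficiency(100.0, 1.7, 64))  # ~0.92
print(weak_scaling_efficiency(100.0, 104.0))      # ~0.96
```

Efficiencies near 1.0 under both definitions are what "excellent weak and strong scaling" means in practice; AMR runs typically score lower because regridding and load imbalance add communication that the single-level mode avoids.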

  15. High power disk lasers: advances and applications

    NASA Astrophysics Data System (ADS)

    Havrilla, David; Holzer, Marco

    2011-02-01

    Though the genesis of the disk laser concept dates to the early 90's, the disk laser continues to demonstrate the flexibility and certain future of a breakthrough technology. On-going increases in power per disk, and improvements in beam quality and efficiency, continue to validate the genius of the disk laser concept. As of today, the disk principle has not reached any fundamental limits regarding output power per disk or beam quality, and offers numerous advantages over other high power resonator concepts, especially over monolithic architectures. With well over 1000 high power disk laser installations, the disk laser has proven to be a robust and reliable industrial tool. With advancements in running cost, investment cost and footprint, manufacturers continue to implement disk laser technology with more vigor than ever. This paper will explain important details of the TruDisk laser series and process-relevant features of the system, such as pump diode arrangement, resonator design and integrated beam guidance. In addition, advances in applications in the thick sheet area and very cost-efficient, high-productivity applications like remote welding, remote cutting and cutting of thin sheets will be discussed.

  16. Applying cost accounting to operating room staffing in otolaryngology: time-driven activity-based costing and outpatient adenotonsillectomy.

    PubMed

    Balakrishnan, Karthik; Goico, Brian; Arjmand, Ellis M

    2015-04-01

    (1) To describe the application of a detailed cost-accounting method (time-driven activity-based costing) to operating room personnel costs, avoiding the proxy use of hospital and provider charges. (2) To model potential cost efficiencies using different staffing models with the case study of outpatient adenotonsillectomy. Prospective cost analysis case study. Tertiary pediatric hospital. All otolaryngology providers and otolaryngology operating room staff at our institution. Time-driven activity-based costing demonstrated precise per-case and per-minute calculation of personnel costs. We identified several areas of unused personnel capacity in a basic staffing model. Per-case personnel costs decreased by 23.2% by allowing a surgeon to run 2 operating rooms, despite doubling all other staff. Further cost reductions up to a total of 26.4% were predicted with additional staffing rearrangements. Time-driven activity-based costing allows detailed understanding of not only personnel costs but also how personnel time is used. This in turn allows testing of alternative staffing models to decrease unused personnel capacity and increase efficiency. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
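Time-driven activity-based costing rests on two estimates per resource: a capacity cost rate (cost of capacity supplied divided by practical capacity in minutes) and the minutes each case consumes. A minimal sketch of that arithmetic with hypothetical salaries and times, not the study's data:

```python
# Hedged sketch of time-driven activity-based costing (TDABC).
# All dollar amounts and minutes below are hypothetical.

def capacity_cost_rate(cost_per_period, practical_minutes_per_period):
    """Cost per minute of making a resource (e.g. a surgeon) available."""
    return cost_per_period / practical_minutes_per_period

def case_personnel_cost(staff):
    """staff: list of (cost_rate_per_minute, minutes_used_on_case) pairs."""
    return sum(rate * minutes for rate, minutes in staff)

# Hypothetical: a surgeon costing $480,000 per period with 80,000 practical
# minutes ($6/min) spends 30 min on a case; two nurses at $1/min spend 45 min.
surgeon_rate = capacity_cost_rate(480_000, 80_000)
cost = case_personnel_cost([(surgeon_rate, 30), (1.0, 45), (1.0, 45)])
print(cost)  # 6.0*30 + 45 + 45 = 270.0
```

Comparing the minutes actually billed to cases against each resource's practical capacity is what exposes the "unused personnel capacity" the record refers to, and re-running the sum under a two-room surgeon schedule is how alternative staffing models are tested.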

  17. Nanocrystalline Hierarchical ZSM-5: An Efficient Catalyst for the Alkylation of Phenol with Cyclohexene.

    PubMed

    Radhika, N P; Selvin, Rosilda; Kakkar, Rita; Roselin, L Selva

    2018-08-01

    In this paper, the authors report the synthesis of nanocrystalline hierarchical zeolite ZSM-5 and its application as a heterogeneous catalyst in the alkylation of phenol with cyclohexene. The catalyst was synthesized by a vacuum-concentration coupled hydrothermal technique in the presence of two templates. This synthetic route successfully introduced pores of higher hierarchy into the zeolite ZSM-5 structure. Hierarchical ZSM-5 effectively catalysed the industrially important reaction of cyclohexene with phenol. We ascribe the high efficiency of the catalyst to conducive structural features such as nanoscale size, high surface area, the presence of a hierarchy of pores, and the existence of Lewis sites along with Brønsted acid sites. The effects of various reaction parameters, such as duration, catalyst amount, reactant mole ratio and temperature, were assessed. Under optimum reaction conditions, the catalyst showed up to 65% selectivity towards the major product, cyclohexyl phenyl ether. There was no discernible decline in percent conversion or selectivity even when the catalyst was re-used for up to four runs. Kinetic studies were performed through regression analysis and a mechanistic route based on the LHHW model was suggested.

  18. Physiological and biomechanical adaptations to the cycle to run transition in Olympic triathlon: review and practical recommendations for training

    PubMed Central

    Millet, G.; Vleck, V.

    2000-01-01

    Current knowledge of the physiological, biomechanical, and sensory effects of the cycle to run transition in the Olympic triathlon (1.5 km, 10 km, 40 km) is reviewed and implications for the training of junior and elite triathletes are discussed. Triathlon running elicits hyperventilation, increased heart rate, decreased pulmonary compliance, and exercise induced hypoxaemia. This may be due to exercise intensity, ventilatory muscle fatigue, dehydration, muscle fibre damage, a shift in metabolism towards fat oxidation, and depleted glycogen stores after a 40 km cycle. The energy cost (CR) of running during the cycle to run transition is also increased over that of control running. The increase in CR varies from 1.6% to 11.6% and is a reflection of triathlete ability level. This increase may be partly related to kinematic alterations, but research suggests that most biomechanical parameters are unchanged. A more forward leaning trunk inclination is the most significant observation reported. Running pattern, and thus running economy, could also be influenced by sensorimotor perturbations related to the change in posture. Technical skill in the transition area is obviously very important. The conditions under which the preceding cycling section is performed—that is, steady state or stochastic power output, drafting or non-drafting—are likely to influence the speed of adjustment to transition. The extent to which a decrease in the average 10 km running speed occurs during competition must be investigated further. It is clear that the higher the athlete is placed in the field at the end of the bike section, the greater the importance to their finishing position of both a quick transition area time and optimal adjustment to the physiological demands of the cycle to run transition. 
The need for, and current methods of, training to prepare junior and elite triathletes for a better transition are critically reviewed in light of the effects of sequential cycle to run exercise. Key Words: triathlon; cycle to run transition; training; performance PMID:11049151

  19. Water flow statistics: SRP creeks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lower, M.W.

    1982-08-26

    For a number of environmental studies it is necessary to know the water flow rates and variations in the SRP streams. The objective of this memorandum is to pull together and present a number of statistical analyses for Upper Three Runs Creek, Four Mile Creek and Lower Three Runs Creek. The data base covers 8 USGS stream gage stations for the years 1972 - 1981. The average flow rates over a ten-year period along Upper Three Runs Creek were determined to be 114 cfs at US Route 278, 193 cfs at Road C, and 265 cfs at Road A. Along Four Mile Creek the average flow rates over a ten-year period doubled from 9 cfs prior to F-Area discharges to 18 cfs prior to cooling water discharges from C-Area Reactor. Finally, average flow rates along Lower Three Runs Creek over a ten-year period tripled from 32 cfs at Par Pond to 96 cfs near Snelling, South Carolina. 1 figure, 9 tables.

  20. An ultra-low-power RF transceiver for WBANs in medical applications

    NASA Astrophysics Data System (ADS)

    Qi, Zhang; Xiaofei, Kuang; Nanjian, Wu

    2011-06-01

    A 2.4 GHz ultra-low-power RF transceiver with a 900 MHz auxiliary wake-up link for wireless body area networks (WBANs) in medical applications is presented. The RF transceiver with an asymmetric architecture is proposed to achieve high energy efficiency according to the asymmetric communication in WBANs. The transceiver consists of a main receiver (RX) with an ultra-low-power free-running ring oscillator and a high speed main transmitter (TX) with fast lock-in PLL. A passive wake-up receiver (WuRx) for wake-up function with a high power conversion efficiency (PCE) CMOS rectifier is designed to offer the sensor node the capability of work-on-demand with zero standby power. The chip is implemented in a 0.18 μm CMOS process. Its core area is 1.6 mm2. The main RX achieves a sensitivity of -55 dBm at a 100 kbps OOK data rate while consuming just 210 μA current from the 1 V power supply. The main TX achieves +3 dBm output power with a 4 Mbps/500 kbps/200 kbps data rate for OOK/4 FSK/2 FSK modulation and dissipates 3.25 mA/6.5 mA/6.5 mA current from a 1.8 V power supply. The minimum detectable RF input energy for the wake-up RX is -15 dBm and the PCE is more than 25%.

  1. Validated HPLC method for determination of sennosides A and B in senna tablets.

    PubMed

    Sun, Shao Wen; Su, Hsiu Ting

    2002-07-31

    This study developed an efficient and reliable ion-pair liquid chromatographic method for quantitation of sennosides A and B in commercial senna tablets. Separation was conducted on a Hypersil C18 column (250 x 4.6 mm, 5 microm) at a temperature of 40 degrees C, using a mixture of 0.1 M acetate buffer (pH 6.0) and acetonitrile (70:30, v/v) containing 5 mM tetrahexylammonium bromide as mobile phase. Sennosides A and B were completely separated from other constituents within 14 min. The developed method was validated. Both run-to-run repeatability (n=10) and day-to-day reproducibility (n=3) of peak area were below 0.4% RSD. Linearity of peak area was tested in the range 30-70 microg/ml (r>0.9997). Accuracy was assessed by recovery; the recoveries for sennosides A and B were 101.73+/-1.30% and 101.81+/-2.18% (n=3 x 6), respectively. Robustness of the analytical method was tested using a three-level Plackett-Burman design in which 11 factors were assessed with 23 experiments. Eight factors (column, concentration of ion-pair reagent, % of organic modifier (acetonitrile), buffer pH, column temperature, flow rate, time constant and detection wavelength) were investigated in a specified range above and below the nominal method conditions. It was found that: (1) column and % acetonitrile significantly affected resolution and retention time, (2) column, % acetonitrile, column temperature, flow rate and time constant significantly affected the plate number of sennoside A, and (3) column and time constant significantly affected the tailing factor.

  2. Alteration of functional connectivity during real-time fMRI regulation of PCC

    NASA Astrophysics Data System (ADS)

    Zhang, Gaoyan; Yao, Li; Long, Zhiying

    2012-03-01

    Real-time functional magnetic resonance imaging (rtfMRI) can be used to train subjects to selectively control the activity of a specific brain area, so as to affect activation in the target region and even to improve cognition and behavior. So far, whether brain activity in the posterior cingulate cortex (PCC) can be regulated by rtfMRI has not been reported. In the present study, we aimed to investigate whether real-time regulation of activity in the PCC can change the functional connectivity between the PCC and other brain regions. A total of 12 subjects underwent two training runs, each lasting 782 s. During training, subjects were instructed to down-regulate activity in the PCC by imagining right-hand finger movement in the sequence 4-2-3-1-3-4-2 during the task, and to relax as much as possible during rest. To control for any effects induced by repeated practice, another 12 subjects in a control group received the same experimental procedure and instructions but no feedback during training. Experimental results show that increased functional connectivity of the PCC with the medial frontal cortex (MFC) was observed in both groups during the two training runs. However, the PCC of the experimental group was correlated with larger areas in the MFC than that of the control group. Because a positive correlation between task performance and MFC-to-PCC connectivity has been demonstrated previously, we infer that the stronger connectivity between the PCC and MFC in the experimental group may indicate that the group with neurofeedback can regulate the PCC more efficiently than the control group without neurofeedback.
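    Functional connectivity of the kind measured here is commonly quantified as the Pearson correlation between two regions' BOLD time series; a minimal stdlib sketch with synthetic (not experimental) signals:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# hypothetical BOLD samples for two regions of interest
pcc = [0.1, 0.4, 0.3, 0.8, 0.6, 0.9, 0.7, 1.0]
mfc = [0.2, 0.5, 0.2, 0.7, 0.5, 1.0, 0.6, 0.9]
fc = pearson(pcc, mfc)  # ~0.94: strongly coupled time series
```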

  3. Technology Tools for the Tough Tasks: Plug in for Great Outcomes

    ERIC Educational Resources Information Center

    Simon, Fran

    2012-01-01

    There are a lot of easy-to-use online tools that can help teachers and administrators with the tough tasks involved in running efficient, responsive, and intentional programs. The efficiencies offered through these systems allow busy educators to spend less time managing information and more time doing the work that matters the most--working with…

  4. 40 CFR 63.4964 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to 40 CFR part 51 to determine the mass fraction, kg TVH per kg material, of TVH liquid input from... the coating operation during the capture efficiency test run, lb. TVHi = Mass fraction of TVH in... temporary total enclosure or building enclosure. The liquid-to-uncaptured-gas protocol compares the mass of...
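    Across these CFR excerpts, the liquid-to-uncaptured-gas protocol is a mass balance: capture efficiency is the TVH put into the coating operation as liquid, minus the TVH measured escaping the enclosure uncaptured, divided by the liquid TVH input. The sketch below illustrates that arithmetic only; it is not the regulation's prescribed equation or notation, and the quantities are hypothetical.

```python
def capture_efficiency_pct(materials, tvh_uncaptured_kg):
    """Liquid-to-uncaptured-gas mass balance (illustrative only).

    materials: (mass_used_kg, tvh_mass_fraction) for each coating,
    thinner, or cleaning material used during the test run.
    """
    tvh_input = sum(mass * frac for mass, frac in materials)
    return 100.0 * (tvh_input - tvh_uncaptured_kg) / tvh_input

# hypothetical run: 100 kg coating at 40% TVH, 20 kg thinner at 100% TVH,
# and 6 kg of TVH measured escaping the total enclosure uncaptured
ce = capture_efficiency_pct([(100.0, 0.40), (20.0, 1.0)], 6.0)  # 90.0
```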

  5. 40 CFR 63.3965 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... mass fraction of TVH liquid input from each coating, thinner and/or other additive, and cleaning... efficiency test run, kg. TVHi = Mass fraction of TVH in coating, thinner and/or other additive, or cleaning...-uncaptured-gas protocol compares the mass of liquid TVH in materials used in the coating operation to the...

  6. 40 CFR 63.3544 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... mass fraction of TVH liquid input from each coating and thinner used in the coating operation during... materials used in the coating operation during the capture efficiency test run, kg. TVHi = Mass fraction of... protocol compares the mass of liquid TVH in materials used in the coating operation to the mass of TVH...

  7. 40 CFR 63.4964 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to 40 CFR part 51 to determine the mass fraction, kg TVH per kg material, of TVH liquid input from... the coating operation during the capture efficiency test run, lb. TVHi = Mass fraction of TVH in... temporary total enclosure or building enclosure. The liquid-to-uncaptured-gas protocol compares the mass of...

  8. 40 CFR 63.3965 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... mass fraction of TVH liquid input from each coating, thinner and/or other additive, and cleaning... efficiency test run, kg. TVHi = Mass fraction of TVH in coating, thinner and/or other additive, or cleaning...-uncaptured-gas protocol compares the mass of liquid TVH in materials used in the coating operation to the...

  9. 40 CFR 63.4165 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to 40 CFR part 51 to determine the mass fraction of TVH liquid input from each coating, thinner, and... operation during the capture efficiency test run, kg. TVHi = mass fraction of TVH in coating, thinner, or... temporary total enclosure or building enclosure. The liquid-to-uncaptured-gas protocol compares the mass of...

  10. 40 CFR 63.4964 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determine the mass fraction, kg TVH per kg material, of TVH liquid input from each coating, thinner, and... capture efficiency test run, lb. TVHi = Mass fraction of TVH in coating, thinner, or cleaning material, i... enclosure. The liquid-to-uncaptured-gas protocol compares the mass of liquid TVH in materials used in the...

  11. 40 CFR 63.3544 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... mass fraction of TVH liquid input from each coating and thinner used in the coating operation during... materials used in the coating operation during the capture efficiency test run, kg. TVHi = Mass fraction of... protocol compares the mass of liquid TVH in materials used in the coating operation to the mass of TVH...

  12. 40 CFR 63.4165 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to 40 CFR part 51 to determine the mass fraction of TVH liquid input from each coating, thinner, and... operation during the capture efficiency test run, kg. TVHi = mass fraction of TVH in coating, thinner, or... temporary total enclosure or building enclosure. The liquid-to-uncaptured-gas protocol compares the mass of...

  13. 40 CFR 63.3544 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... mass fraction of TVH liquid input from each coating and thinner used in the coating operation during... materials used in the coating operation during the capture efficiency test run, kg. TVHi = Mass fraction of... protocol compares the mass of liquid TVH in materials used in the coating operation to the mass of TVH...

  14. 40 CFR 63.4964 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... to 40 CFR part 51 to determine the mass fraction, kg TVH per kg material, of TVH liquid input from... the coating operation during the capture efficiency test run, lb. TVHi = Mass fraction of TVH in... temporary total enclosure or building enclosure. The liquid-to-uncaptured-gas protocol compares the mass of...

  15. Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas

    2017-04-01

    We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present the results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, fine-meshed grid: first, with the same numerical formulation as the AMR code, and second, with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained whilst using only a fraction of the grid cells. Performance-wise, the AMR run was cheaper than the fixed and isotropic grid run with a similar numerical formulation. However, for this specific case, the dedicated code outperformed both aforementioned runs.
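    The refine-where-the-solution-varies idea behind AMR can be illustrated in one dimension; the gradient-jump criterion below is a toy stand-in for the error estimators used in production codes such as the one benchmarked here.

```python
def refine(cells, tol):
    """One refinement pass over a 1D grid (coarsening omitted).

    cells: list of (x_left, width, value). A cell is split in two when
    the jump to its right-hand neighbour exceeds tol, concentrating
    resolution where the field varies fastest.
    """
    out = []
    for i, (x, w, v) in enumerate(cells):
        nb = cells[i + 1][2] if i + 1 < len(cells) else v
        if abs(nb - v) > tol:
            out += [(x, w / 2, v), (x + w / 2, w / 2, (v + nb) / 2)]
        else:
            out.append((x, w, v))
    return out

# a step profile: only the cell adjacent to the jump is refined
grid = [(float(i), 1.0, 0.0 if i < 2 else 1.0) for i in range(4)]
fine = refine(grid, 0.5)  # 4 cells -> 5 cells
```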

  16. Analyzing Long-run Relationship between Energy Consumption and Economic Growth in the Kingdom of Bahrain

    NASA Astrophysics Data System (ADS)

    Naser, Hanan

    2017-11-01

    Since the relation between energy consumption and economic growth is important for designing effective energy policies that will promote economic growth, this study investigates the short-run dynamics and causality among energy consumption, CO2 emissions, oil prices and economic growth in the Kingdom of Bahrain. To do so, annual data covering the period from 1960 to 2015 are used. The empirical work tests for unit roots, tests for a co-integration relationship using the Johansen (1988) approach, and then estimates both long- and short-run dynamics using the vector error correction model (VECM). Results indicate that there is a long-run relationship between the suggested variables. Since economic growth has predictive power for estimating the energy demand of the Kingdom of Bahrain, it is recommended that the government of Bahrain and policy designers focus on energy efficiency strategies and a carbon emissions reduction policy in the long run, without impeding economic growth, in order to move towards sustainability.
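    A full Johansen/VECM analysis needs an econometrics package, but the error-correction logic it estimates can be sketched from scratch: first regress the long-run relation, then regress period-to-period changes on the lagged disequilibrium. The data below are simulated with made-up parameters, not Bahrain's actual series.

```python
import random

def ols(y, x):
    """Closed-form least-squares slope and intercept of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

random.seed(0)
# cointegrated pair: GDP g and energy e share a random-walk trend
trend, g, e = 0.0, [], []
for _ in range(300):
    trend += random.gauss(0, 1)
    g.append(trend + random.gauss(0, 0.2))
    e.append(0.8 * trend + random.gauss(0, 0.2))

# step 1: long-run relation e = a + b*g; residual z is the disequilibrium
b, a = ols(e, g)
z = [ei - (a + b * gi) for ei, gi in zip(e, g)]

# step 2: error-correction regression d(e_t) = gamma * z_{t-1} + ...
de = [e[t] - e[t - 1] for t in range(1, len(e))]
gamma, _ = ols(de, z[:-1])  # gamma < 0: deviations get corrected
```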

  17. An innovative integrated oxidation ditch with vertical circle (IODVC) for wastewater treatment.

    PubMed

    Xia, Shi-bin; Liu, Jun-xin

    2004-01-01

    The oxidation ditch process is economic and efficient for wastewater treatment, but its application is limited in cases where land is costly because of the large land area required. An innovative integrated oxidation ditch with vertical circle (IODVC) system was developed to treat domestic and industrial wastewater with the aim of saving land area. The new system consists of a single channel divided by a plate into two ditches (a top one and a bottom one), a brush, and an innovative integral clarifier. Unlike the horizontal circulation of the conventional oxidation ditch, the flow in the IODVC system recycles from the top zone to the bottom zone in a vertical circle as the brush runs; as a result, the IODVC reduces the land area required by about 50% compared with a conventional oxidation ditch with an intrachannel clarifier. The innovative integral clarifier is effective for separating liquid and solids, and is preferably positioned at the end of the ditch opposite the brush. It does not affect the hydrodynamic characteristics of the mixed liquor in the ditch, and the sludge returns automatically to the bottom ditch without any pump. In this study, experiments on domestic and dye wastewater treatment were carried out at bench scale and full scale, respectively. Results clearly showed that the IODVC efficiently removed pollutants from the wastewaters: average COD removals for domestic and dye wastewater treatment were 95% and 90%, respectively. The IODVC process may thus provide a cost-effective way for full-scale dye wastewater treatment.

  18. SAM 2.1—A computer program for plotting and formatting surveying data for estimating peak discharges by the slope-area method

    USGS Publications Warehouse

    Hortness, J.E.

    2004-01-01

    The U.S. Geological Survey (USGS) measures discharge in streams using several methods. However, measurement of peak discharges is often impossible or impractical due to difficult access, the inherent danger of making measurements during flood events, and the timing often associated with flood events. Thus, many peak discharge values are calculated after the fact by use of indirect methods. The most common indirect method for estimating peak discharges in streams is the slope-area method. This, like other indirect methods, requires measuring the flood profile through detailed surveys. Processing the survey data for efficient entry into computer streamflow models can be time consuming; SAM 2.1 is a program designed to expedite that process. The SAM 2.1 computer program is designed to be run in the field on a portable computer. The program processes digital surveying data obtained from an electronic surveying instrument during slope-area measurements. After all measurements have been completed, the program generates files to be input into the SAC (Slope-Area Computation program; Fulford, 1994) or HEC-RAS (Hydrologic Engineering Center-River Analysis System; Brunner, 2001) computer streamflow models so that an estimate of the peak discharge can be calculated.
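    The slope-area method ultimately rests on a uniform-flow formula applied to the surveyed cross-sections; SAC and HEC-RAS implement the full multi-section procedure, but the core relation is Manning's equation, sketched here for a single section with hypothetical geometry and an illustrative roughness coefficient.

```python
def manning_discharge(area_m2, wetted_perimeter_m, slope, n):
    """Q = (1/n) * A * R^(2/3) * sqrt(S), with R = A / P (SI units)."""
    r = area_m2 / wetted_perimeter_m          # hydraulic radius
    return (1.0 / n) * area_m2 * r ** (2.0 / 3.0) * slope ** 0.5

# hypothetical section: 20 m^2 flow area, 12 m wetted perimeter,
# water-surface slope 0.001, Manning's n = 0.035
q = manning_discharge(20.0, 12.0, 0.001, 0.035)  # ~25.4 m^3/s
```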

  19. A visual tracking method based on improved online multiple instance learning

    NASA Astrophysics Data System (ADS)

    He, Xianhui; Wei, Yuxing

    2016-09-01

    Visual tracking is an active research topic in the field of computer vision and has been well studied in the last decades. The method based on multiple instance learning (MIL) was recently introduced into the tracking task, and it handles the template drift problem well. However, the MIL method has relatively poor running efficiency and accuracy, because its strong-classifier update strategy is complicated and the speed of the classifier update does not always match the change in the targets' appearance. In this paper, we present a novel online effective MIL (EMIL) tracker. A new update strategy for the strong classifier is proposed to improve the running efficiency of the MIL method. In addition, to improve the tracking accuracy and stability of the MIL method, a new dynamic mechanism for renewing the learning rate of the classifier and a variable search window are proposed. Experimental results show that our method performs well in complex scenes, with strong stability and high efficiency.

  20. A time-domain finite element boundary integral approach for elastic wave scattering

    NASA Astrophysics Data System (ADS)

    Shi, F.; Lowe, M. J. S.; Skelton, E. A.; Craster, R. V.

    2018-04-01

    The response of complex scatterers, such as rough or branched cracks, to incident elastic waves is required in many areas of industrial importance such as those in non-destructive evaluation and related fields; we develop an approach to generate accurate and rapid simulations. To achieve this we develop, in the time domain, an implementation to efficiently couple the finite element (FE) method within a small local region, and the boundary integral (BI) globally. The FE explicit scheme is run in a local box to compute the surface displacement of the scatterer, by giving forcing signals to excitation nodes, which can lie on the scatterer itself. The required input forces on the excitation nodes are obtained with a reformulated FE equation, according to the incident displacement field. The surface displacements computed by the local FE are then projected, through time-domain BI formulae, to calculate the scattering signals with different modes. This new method yields huge improvements in the efficiency of FE simulations for scattering from complex scatterers. We present results using different shapes and boundary conditions, all simulated using this approach in both 2D and 3D, and then compare with full FE models and theoretical solutions to demonstrate the efficiency and accuracy of this numerical approach.

  1. Dark-field microscopic image stitching method for surface defects evaluation of large fine optics.

    PubMed

    Liu, Dong; Wang, Shitong; Cao, Pin; Li, Lu; Cheng, Zhongtao; Gao, Xin; Yang, Yongying

    2013-03-11

    One of the challenges in surface defects evaluation of large fine optics is to detect defects of microns on surfaces of tens or hundreds of millimeters. Sub-aperture scanning and stitching is considered to be a practical and efficient method. But since there are usually few defects on large-aperture fine optics, resulting in no defects or only one run-through line feature in many sub-aperture images, traditional stitching methods encounter a mismatch problem. In this paper, a feature-based multi-cycle image stitching algorithm is proposed to solve the problem. The overlapping areas of sub-apertures are categorized based on the features they contain. Different types of overlapping areas are then stitched in different cycles with different methods. The stitching trace is changed to follow the one determined by the features. The whole stitching procedure is a region-growing-like process: sub-aperture blocks grow bigger after each cycle, and finally the full-aperture image is obtained. A comparison experiment shows that the proposed method is very suitable for stitching sub-apertures whose overlapping areas contain very little feature information, and it stitches the dark-field microscopic sub-aperture images very well.

  2. Generation, propagation and run-up of tsunamis due to the Chicxulub impact event

    NASA Astrophysics Data System (ADS)

    Weisz, R.; Wuennenmann, K.; Bahlburg, H.

    2003-04-01

    The Chicxulub impact event can be investigated on (1) local, (2) regional and (3) global scales. Our investigations focus on the regional scale, especially on the influence of the impact-generated tsunami waves on the coast around the Gulf of Mexico. During an impact two types of tsunamis are generated. The first wave is known as the "rim wave" and is generated in front of the ejecta curtain. The second one is linked to the late modification stage of the impact and results from the collapsing cavity of water. We designate this wave as the "collapse wave". The "rim wave" and "collapse wave" are able to propagate over long distances without a significant loss of wave amplitude. Corresponding to their amplitudes, the waves have a potentially large influence on coastal areas. Run-up distance and run-up height can be used as parameters for describing this influence. We are utilizing a multimaterial hydrocode (SALE) to simulate the generation of tsunami waves. The propagation of the waves is based on non-linear shallow water theory, because tsunami waves are defined to be long waves. The position of the coast line varies according to the tsunami run-up and is implemented with open boundary conditions. We show with our investigations (1) the generation of tsunami waves due to shallow water impacts, (2) wave damping during propagation, and (3) the influence of the "rim wave" and the "collapse wave" on the coastal areas. Here, we present our first results from numerical modeling of tsunami waves owing to a Chicxulub-sized impactor. The characteristics of the "rim wave" depend on the size of the bolide and the water depth. However, the amplitude and velocity of the "collapse wave" are determined only by the water depth in the impact area. The numerical modeling of the tsunami propagation and run-up is calculated along a section from the impact point towards the west and shows moderate damping of both waves as well as the run-up on the coastal area.
As a first approximation, the bathymetric data used in the wave propagation and run-up correspond to a linearized bathymetry of the recent Gulf of Mexico. The linearized bathymetry allows us to study the influence of the bathymetry on wave propagation and run-up. Additionally, we give preliminary results of the implementation of the two-dimensional propagation and run-up model for arbitrary bathymetries. The two-dimensional wave propagation model will enable us to assess more realistically the influence of the impact-related tsunamis on the coasts around the Gulf of Mexico due to the Chicxulub impact event.

  3. 75 FR 81957 - Proposed Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-29

    ... in meters (MSL) Effective Modified Levy County, Florida, and Incorporated Areas Bronson North Ditch... nearest 0.1 meter. ** BFEs to be changed include the listed downstream and upstream BFEs, and include BFEs... upstream of Elliots Run Road. Unnamed Tributary to Shoup Run..... Approximately 400 feet None +1139...

  4. How to keep the Grid full and working with ATLAS production and physics jobs

    NASA Astrophysics Data System (ADS)

    Pacheco Pagés, A.; Barreiro Megino, F. H.; Cameron, D.; Fassi, F.; Filipcic, A.; Di Girolamo, A.; González de la Hoz, S.; Glushkov, I.; Maeno, T.; Walker, R.; Yang, W.; ATLAS Collaboration

    2017-10-01

    The ATLAS production system provides the infrastructure to process millions of events collected during LHC Run 1 and the first two years of Run 2 using grid, cloud and high-performance computing resources. In this contribution we address the strategies and improvements that have been implemented in the production system to achieve optimal performance and the highest efficiency of the available resources from an operational perspective. We focus on the recent developments.

  5. Application of the Booth-Kautzmann method for the determination of N-2 packing leakage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burkhart, D.M.; Milton, J.W.; Fawcett, S.T.

    1995-06-01

    To accurately determine turbine cycle heat rate, leakage past the N-2 steam seal packing must be determined on turbines with both HP and IP turbines contained within a common high-pressure casing. N-2 packing leakage can be determined by the Booth-Kautzmann Method with instrumentation commonly used to determine HP and IP turbine efficiency. The only additional requirements are changes to the main steam and/or hot reheat steam conditions. This paper discusses actual test results using the Booth-Kautzmann test procedure on three natural-gas-fired units. The test results demonstrate the added advantage of having at least three N-2 test runs, the stability requirements for repeatable test runs, and the test procedures used to determine leakage results. The sensitivity of the assumed N-2 enthalpy is also addressed. Utilizing Martin's Formula with a series of N-2 leakage test runs is shown to serve as a leakage prediction tool and a packing clearance approximation tool. It is concluded that the Booth-Kautzmann Method for determination of N-2 packing leakage should be utilized whenever HP and IP turbine efficiency is determined. The two or three additional hours invested in the test runs are well worth the information gained on the performance of the N-2 packing.

  6. Neuromuscular factors associated with decline in long-distance running performance in master athletes.

    PubMed

    Brisswalter, Jeanick; Nosaka, Kazunori

    2013-01-01

    This review focuses on neuromuscular factors that may affect endurance performance in master athletes. During the last decade, due to the rapid increase in the number of master or veteran participants in endurance sporting competitions, many studies attempted to identify metabolic factors associated with the decrease in endurance, especially long-distance running performance with ageing, focusing on decreases in maximal oxygen consumption. However, neuromuscular factors have been less studied despite the well-known phenomena of strength loss with ageing. For master athletes to perform better in long-distance running events, it is important to reduce muscle fatigue and/or muscle damage, to improve locomotion efficiency and to facilitate recovery. To date, no consensus exists that regular endurance training is beneficial for improving locomotion efficiency, reducing muscle fatigue and muscle damage, and enhancing recovery capacity in master athletes. Some recent studies seem to indicate that master athletes have similar muscle damage to young athletes, but they require a longer recovery time after a long-distance running event. Further analyses of these parameters in master athletes require more experimental and practical interest from researchers and coaches. In particular, more attention should be directed towards the capacity to maintain muscle function with training and the role of neuromuscular factors in long-distance performance decline with ageing using a more cellular and molecular approach.

  7. SARANA: language, compiler and run-time system support for spatially aware and resource-aware mobile computing.

    PubMed

    Hari, Pradip; Ko, Kevin; Koukoumidis, Emmanouil; Kremer, Ulrich; Martonosi, Margaret; Ottoni, Desiree; Peh, Li-Shiuan; Zhang, Pei

    2008-10-28

    Increasingly, spatial awareness plays a central role in many distributed and mobile computing applications. Spatially aware applications rely on information about the geographical position of compute devices and their supported services in order to support novel functionality. While many spatial application drivers already exist in mobile and distributed computing, very little systems research has explored how best to program these applications, to express their spatial and temporal constraints, and to allow efficient implementations on highly dynamic real-world platforms. This paper proposes the SARANA system architecture, which includes language and run-time system support for spatially aware and resource-aware applications. SARANA allows users to express spatial regions of interest, as well as trade-offs between quality of result (QoR), latency and cost. The goal is to produce applications that use resources efficiently and that can be run on diverse resource-constrained platforms ranging from laptops to personal digital assistants and to smart phones. SARANA's run-time system manages QoR and cost trade-offs dynamically by tracking resource availability and locations, brokering usage/pricing agreements and migrating programs to nodes accordingly. A resource cost model permeates the SARANA system layers, permitting users to express their resource needs and QoR expectations in units that make sense to them. Although we are still early in the system development, initial versions have been demonstrated on a nine-node system prototype.

  8. Contract Service for School Maintenance

    ERIC Educational Resources Information Center

    Modern Schools, 1976

    1976-01-01

    Preventive maintenance can extend useful equipment life in a school building and keep systems running more efficiently. Points to consider before selecting a comprehensive energy management package are listed. (Author/MLF)

  9. ANTP Protocol Suite Software Implementation Architecture in Python

    DTIC Science & Technology

    2011-06-03

    ...a popular platform of networking programming, an area in which C has traditionally dominated. ...visualisation of the running system. For example, using the Google Maps API, the main logging web page can show all the running nodes in the system. By...communication between AeroNP and AeroRP and runs on the operating system as a daemon. Furthermore, it creates an API interface to manage the communication between

  10. Free-running InGaAs single photon detector with 1 dark count per second at 10% efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korzh, B., E-mail: Boris.Korzh@unige.ch; Walenta, N.; Lunghi, T.

    We present a free-running single photon detector for telecom wavelengths based on a negative feedback avalanche photodiode (NFAD). A dark count rate as low as 1 cps was obtained at a detection efficiency of 10%, with an afterpulse probability of 2.2% for 20 μs of deadtime. This was achieved by using an active hold-off circuit and cooling the NFAD with a free-piston stirling cooler down to temperatures of −110 °C. We integrated two detectors into a practical, 625 MHz clocked quantum key distribution system. Stable, real-time key distribution in the presence of 30 dB channel loss was possible, yielding a secret key rate of 350 bps.

  11. MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program

    NASA Astrophysics Data System (ADS)

    Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.

    2018-02-01

    We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in C++ using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR in terms of parallelization speedup and computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors used in the parallel computation increases.
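    The two benchmark metrics are the standard ones: speedup is serial time over parallel time, and efficiency is speedup divided by processor count. A short sketch with hypothetical timings:

```python
def speedup(t_serial, t_parallel):
    """How many times faster the parallel run completed."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Fraction of ideal linear speedup actually achieved."""
    return speedup(t_serial, t_parallel) / n_procs

# hypothetical: 1000 s serially, 150 s on 8 processors
s = speedup(1000.0, 150.0)        # ~6.7x
e = efficiency(1000.0, 150.0, 8)  # ~0.83, i.e. sub-linear scaling
```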

  12. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
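    The dispatch pattern described here, a pool of worker threads farming independent model runs out to remote hosts, can be sketched with Python's standard thread pool; the per-cell function below is a stand-in for the SOAP call to a remote model host.

```python
from concurrent.futures import ThreadPoolExecutor

def run_model(cell):
    """Stand-in for one remote run of the 1-D model for one grid cell."""
    return cell, cell ** 2  # hypothetical per-cell result

cells = range(10)
# up to 7 workers, mirroring the seven-host network in the article
with ThreadPoolExecutor(max_workers=7) as pool:
    results = dict(pool.map(run_model, cells))
```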

  13. Local corticosteroid injection in iliotibial band friction syndrome in runners: a randomised controlled trial

    PubMed Central

    Gunter, P; Schwellnus, M; Fuller, P

    2004-01-01

    Objective: To establish whether a local injection of methylprednisolone acetate (40 mg) is effective in decreasing pain during running in runners with recent onset (less than two weeks) iliotibial band friction syndrome (ITBFS). Methods: Eighteen runners with at least grade 2 ITBFS underwent baseline investigations including a treadmill running test during which pain was recorded on a visual analogue scale every minute. The runners were then randomly assigned to either the experimental (EXP; nine) or a placebo control (CON; nine) group. The EXP group was infiltrated in the area where the iliotibial band crosses the lateral femoral condyle with 40 mg methylprednisolone acetate mixed with a short acting local anaesthetic, and the CON group with short acting local anaesthetic only. The same laboratory based running test was repeated after seven and 14 days. The main measure of outcome was total pain during running (calculated as the area under the pain versus time graph for each running test). Results: There was a tendency (p = 0.07) for a greater decrease in total pain (mean (SEM)) during the treadmill running in the EXP group than the CON group tests from day 0 (EXP = 222 (71), CON = 197 (31)) to day 7 (EXP = 140 (87), CON = 178 (76)), but there was a significant decrease in total pain during running (p = 0.01) from day 7 (EXP = 140 (87), CON = 178 (76)) to day 14 (EXP = 103 (89), CON = 157 (109)) in the EXP group compared with the CON group. Conclusion: Local corticosteroid infiltration effectively decreases pain during running in the first two weeks of treatment in patients with recent onset ITBFS. PMID:15155424

  14. Scalable load balancing for massively parallel distributed Monte Carlo particle transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, M. J.; Brantley, P. S.; Joy, K. I.

    2013-07-01

    In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log N), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory.
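The iterated pair-wise balancing idea can be illustrated with a classic dimension-exchange sketch on a hypercube of 2^k simulated processors (a simplified stand-in, not the paper's algorithm): in round d, each processor averages its workload with the partner whose rank differs in bit d, and after log2(N) rounds every processor holds the global mean.

```python
import math

def pairwise_balance(loads):
    """Dimension-exchange load averaging; len(loads) must be a power of two.
    Each round pairs processor i with i XOR (1 << d) and splits their load."""
    n = len(loads)
    loads = list(loads)
    for d in range(int(math.log2(n))):
        bit = 1 << d
        for i in range(n):
            j = i ^ bit
            if i < j:  # touch each pair once per round
                avg = 0.5 * (loads[i] + loads[j])
                loads[i] = loads[j] = avg
    return loads

balanced = pairwise_balance([12.0, 0.0, 7.0, 1.0, 9.0, 3.0, 0.0, 8.0])
```

Only log2(N) rounds of purely local exchanges are needed, which is the source of the O(log N) run time claimed above.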

  15. Performance of a supercharged direct-injection stratified-charge rotary combustion engine

    NASA Technical Reports Server (NTRS)

    Bartrand, Timothy A.; Willis, Edward A.

    1990-01-01

    A zero-dimensional thermodynamic performance computer model for direct-injection stratified-charge rotary combustion engines was modified and run for a single rotor supercharged engine. Operating conditions for the computer runs were a single boost pressure and a matrix of speeds, loads and engine materials. A representative engine map is presented showing the predicted range of efficient operation. After discussion of the engine map, a number of engine features are analyzed individually. These features are: heat transfer and the influence insulating materials have on engine performance and exhaust energy; intake manifold pressure oscillations and interactions with the combustion chamber; and performance losses and seal friction. Finally, code running times and convergence data are presented.

  16. The health sector reforms and the efficiency of public hospitals in Turkey: provincial markets.

    PubMed

    Sulku, Seher Nur

    2012-10-01

    Turkey initiated the 'Health Transformation Programme' (HTP) in 2003 to align its health care system with those of the European Union and OECD countries. This study investigates the impact of these reforms on the efficiency of public hospitals. Our study contributes to the existing literature with a comprehensive analysis of the health system in a developing country. We employ data envelopment analysis and the Malmquist index to compare the years before and after the reform. Our analyses compare the performance of public hospitals serving provincial markets. We investigate how the inputs (number of beds, number of primary care physicians, and number of specialists) are used to produce the outputs (inpatient discharges, outpatient visits, and surgical operations). As performance indicators, the death rate, hospital bed occupancy rate, and average length of stay are considered. The HTP was generally successful in boosting productivity, owing to advancements in technology and technical efficiency, but in the socio-economically disadvantaged provinces productivity gains were not achieved. The average technical efficiency gains took place because of significantly improved scale efficiencies, while average pure technical efficiency improved only slightly. Lastly, the hospital performance indicators did not improve in the short run. It appears that the expected benefits from the health reforms in Turkey have been only partially achieved in the short run.
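Once DEA efficiency scores are in hand, the Malmquist index used in such studies decomposes productivity change into efficiency change and technical change with simple arithmetic; the scores below are invented for illustration, not taken from the Turkish data:

```python
import math

def malmquist(e_t_t, e_t_t1, e_t1_t, e_t1_t1):
    """Malmquist productivity index from four DEA efficiency scores,
    where e_a_b = efficiency of period-b data against the period-a frontier.
    Returns (index, efficiency change, technical change)."""
    eff_change = e_t1_t1 / e_t_t
    tech_change = math.sqrt((e_t_t1 / e_t1_t1) * (e_t_t / e_t1_t))
    return eff_change * tech_change, eff_change, tech_change

# Hypothetical hospital: slightly more efficient after the reform
mi, ec, tc = malmquist(e_t_t=0.80, e_t_t1=0.95, e_t1_t=0.75, e_t1_t1=0.85)
# mi > 1 indicates productivity growth between the two periods
```

The product `ec * tc` equals the usual geometric-mean form sqrt[(e_t_t1/e_t_t) * (e_t1_t1/e_t1_t)], which is a quick consistency check on any implementation.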

  17. Effect of Resting-State fNIRS Scanning Duration on Functional Brain Connectivity and Graph Theory Metrics of Brain Network.

    PubMed

    Geng, Shujie; Liu, Xiangyu; Biswal, Bharat B; Niu, Haijing

    2017-01-01

    As an emerging brain imaging technique, functional near infrared spectroscopy (fNIRS) has attracted widespread attention for advancing resting-state functional connectivity (FC) and graph theoretical analyses of brain networks. However, it remains largely unknown how the duration of fNIRS signal scanning relates to stable and reproducible functional brain network features. To answer this question, we collected resting-state fNIRS signals (10-min duration, two runs) from 18 participants and then truncated the hemodynamic time series into 30-s time bins that ranged from 1 to 10 min. Measures of nodal efficiency, nodal betweenness, network local efficiency, global efficiency, and clustering coefficient were computed for each subject at each fNIRS signal acquisition duration. Analyses of stability and between-run reproducibility were performed to identify the optimal time length for each measure. We found that the FC, nodal efficiency, and nodal betweenness stabilized and were reproducible after 1 min of fNIRS signal acquisition. The network clustering coefficient and the local and global efficiencies also stabilized after 1 min, but the local and global efficiencies were reproducible only after 5 min of acquisition. These quantitative results provide direct evidence for choosing the resting-state fNIRS scanning duration needed for stable functional brain connectivity and topological metrics of brain networks.
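Global efficiency, one of the metrics tracked above, is the average inverse shortest-path length over all node pairs; a minimal stdlib sketch for an unweighted graph (the toy adjacency list is illustrative, not fNIRS data):

```python
from collections import deque

def bfs_distances(adj, src):
    """Unweighted shortest-path lengths from `src` via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    """Mean of 1/d(i, j) over all ordered node pairs (0 if unreachable)."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for u in nodes:
        d = bfs_distances(adj, u)
        total += sum(1.0 / d[v] for v in nodes if v != u and v in d)
    return total / (n * (n - 1))

# A 3-node path graph 0-1-2: pairs at distance 1, 1, and 2 give E_glob = 5/6
path = {0: [1], 1: [0, 2], 2: [1]}
```

On a fully connected graph every pair is at distance 1, so the measure reaches its maximum of 1.0.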

  18. Water Development, Allocation, and Institutions: A Role for Integrated Tools

    NASA Astrophysics Data System (ADS)

    Ward, F. A.

    2008-12-01

    Many parts of the world suffer from inadequate water infrastructure, inefficient water allocation, and weak water institutions. Each of these three challenges compounds the burdens imposed by inadequacies associated with the other two. Weak water infrastructure makes it hard to allocate water efficiently and undermines tracking of water rights and use, which blocks effective functioning of water institutions. Inefficient water allocation makes it harder to secure resources to develop new water infrastructure. Poorly developed water institutions undermine the security of water rights, which damages incentives to develop water infrastructure or use water efficiently. This paper reports on the development of a prototype basin scale economic optimization, in which existing water supplies are allocated more efficiently in the short run to provide resources for more efficient long-run water infrastructure development. Preliminary results provide the basis for designing water administrative proposals, building effective water infrastructure, increasing farm income, and meeting transboundary delivery commitments. The application is to the Kabul River Basin in Afghanistan, where food security has been compromised by a history of drought, war, damaged irrigation infrastructure, lack of reservoir storage, inefficient water allocation, and weak water institutions. Results illustrate increases in economic efficiency achievable when development programs simultaneously address interdependencies in water allocation, development, and institutions.

  19. Micro Climate Simulation in new Town 'Hashtgerd'

    NASA Astrophysics Data System (ADS)

    Sodoudi, S.; Langer, I.; Cubasch, U.

    2012-04-01

    One of the objectives of the climatological part of the project Young Cities 'Developing Energy-Efficient Urban Fabric in the Tehran-Karaj Region' is to simulate the micro climate (at 1 m resolution) in a 35 ha area of the new town Hashtgerd, located 65 km from the mega city Tehran. The project aims to develop, implement, and evaluate building and planning schemes and technologies that allow planning and building sustainable, energy-efficient, and climate-sensitive mass housing settlements in arid and semi-arid regions ("energy-efficient fabric"). Climate-sensitive form also means designing and planning for climate change and its related effects on Hashtgerd New Town. By configuring buildings and open spaces according to solar radiation, wind, and vegetation, a climate-sensitive urban form can create outdoor thermal comfort. To simulate the climate on small spatial scales, the micro-scale climate model ENVI-met has been used for the 35 ha area. The Eulerian model ENVI-met provides information about the influence of architecture and buildings, as well as vegetation and green areas, on the micro climate at resolutions down to 1 m. ENVI-met has been run with topography data, climate data downscaled with a neuro-fuzzy method, meteorological measurements, building heights, and different vegetation variants (low and high numbers of trees). For the optimized urban design and planning of the 35 ha area, the microclimate results show that vegetation changes the microclimate in the streets:
    • 2 m air temperature decreases by about 2 K
    • relative humidity increases by about 10%
    • soil temperature decreases by about 3 K
    • wind speed decreases by about 60%
    The style of the buildings allows free movement of air, which is of high importance for the fresh air supply. Increasing the green areas within the 35 ha site reduces the heat island effect through cooling by vegetation and the increase in air humidity caused by tree evaporation.

  20. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental tool in numerous image processing and remote sensing applications. For example, unsupervised clustering is often used to obtain vegetation maps of an area of interest. This approach is useful when reliable training data are either scarce or expensive, and when relatively little a priori information about the data is available. Unsupervised clustering methods play a significant role in the pursuit of unsupervised classification. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points (or samples) in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute a set of cluster centers in d-space. Although there is no specific optimization criterion, the algorithm is similar in spirit to the well-known k-means clustering method, in which the objective is to minimize the average squared distance of each point to its nearest center, called the average distortion. One significant feature of ISOCLUS over k-means is that clusters may be merged or split, so the final number of clusters may differ from the number k supplied as part of the input. The algorithm is described later in this paper. The ISOCLUS algorithm can run very slowly, particularly on large data sets. Given its wide use in remote sensing, its efficient computation is an important goal. We have developed a fast implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration of the k-means algorithm, the filtering algorithm, by Kanungo et al. They showed that, by storing the data in a kd-tree, it is possible to significantly reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm.
    For technical reasons, which are explained later, it is necessary to make a minor modification to the ISOCLUS specification. We provide empirical evidence, on both synthetic and Landsat image data sets, that our algorithm's performance is essentially the same as that of ISOCLUS, but with significantly lower running times. We show that our algorithm runs 3 to 30 times faster than a straightforward implementation of ISOCLUS. Our adaptation of the filtering algorithm involves the efficient computation of a number of cluster statistics that are needed for ISOCLUS but not for k-means.
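The split/merge behavior that distinguishes ISOCLUS/ISODATA from plain k-means can be sketched in one dimension; the thresholds and data below are illustrative, and this is a bare ISODATA-style step, not the paper's kd-tree-filtered implementation:

```python
import statistics

def assign(points, centers):
    """Group 1-D points by nearest center, dropping empty clusters."""
    groups = [[] for _ in centers]
    for p in points:
        i = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
        groups[i].append(p)
    return [g for g in groups if g]

def isodata_step(points, centers, split_std=2.0, merge_dist=1.0):
    groups = assign(points, centers)
    centers = [statistics.mean(g) for g in groups]
    # split clusters whose spread exceeds the threshold
    out = []
    for c, g in zip(centers, groups):
        if len(g) > 3 and statistics.pstdev(g) > split_std:
            out += [c - split_std, c + split_std]
        else:
            out.append(c)
    # merge centers that ended up too close together
    out.sort()
    merged = [out[0]]
    for c in out[1:]:
        if c - merged[-1] < merge_dist:
            merged[-1] = 0.5 * (merged[-1] + c)
        else:
            merged.append(c)
    return merged

pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
centers = [5.0]
for _ in range(3):
    centers = isodata_step(pts, centers)
# the single seed center splits and settles near the two true groups
```

Starting from one seed, the wide initial cluster is split and the centers converge to the two point groups, so the final cluster count differs from the initial k, which is exactly the property the abstract highlights.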

  1. Efficiently Sorting Zoo-Mesh Data Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, R; Max, N; Silva, C

    The authors describe the SXMPVO algorithm for computing a visibility ordering of zoo-meshed polyhedra. In practice the algorithm runs in linear time, and the visibility ordering it produces is exact.

  2. What Research Tells the Coach About Distance Running.

    ERIC Educational Resources Information Center

    Costill, David L.

    This booklet is designed to make available research findings concerning distance running, with interpretations for practical application, and to point out areas of needed research. Chapter 1, "Describing the Distance Runner," considers the following aspects in relation to the distance runner: a) anatomical characteristics, b) aging, c) strength…

  3. 76 FR 42658 - Endangered and Threatened Species: Authorizing Release of a Nonessential Experimental Population...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Experimental Population of Upper Columbia Spring-Run Chinook Salmon in the Okanogan River Basin Under the... nonessential experimental population of Upper Columbia (UC) spring-run Chinook salmon (Oncorhynchus tshawytscha... Act (ESA) of 1973, as amended. The geographic boundaries of the experimental population area would...

  4. Chasing Personal Meaning: Pedagogical Lessons through Luis Rodriguez's "Always Running"

    ERIC Educational Resources Information Center

    Theisen-Homer, Victoria

    2014-01-01

    In this autobiographical narrative, the author recounts her experiences teaching the novel "Always Running" by Luis Rodriguez with her English classes at a high school in a gang-heavy area. When she first started teaching, this teacher struggled to engage students. One particularly disruptive student requested to read "Always…

  5. Cross-Sectional Data for Selected Reaches of the Chattahoochee River within the Chattahoochee River National Recreation Area, Georgia, 2004

    USGS Publications Warehouse

    Dalton, Melinda S.

    2006-01-01

    This report presents hydrologic data for selected reaches of the Chattahoochee River within the Chattahoochee River National Recreation Area (CRNRA). Data about transect location, width, depth, and velocity of flow for selected reaches of the river are presented in tabular form. The tables contain measurements collected from shoal and run habitats identified as critical sites for the CRNRA. In shoal habitats, measurements were collected while wading using a digital flowmeter and laser range finder. In run habitats, measurements were collected using acoustic Doppler current profiling. Fifty-three transects were established in six reaches throughout the CRNRA; 24 in shoal habitat, 26 in run habitat, and 3 in pool habitat. Illustrations in this report contain information about study area location, hydrology, transect locations, and cross-sectional information. A study area location figure is followed by figures identifying locations of transects within each individual reach. Cross-sectional information is presented for each transect, by reach, in a series of graphs. The data presented herein can be used to complete preliminary habitat assessments for the Chattahoochee River within the CRNRA. These preliminary assessments can be used to identify reaches of concern for future impacts associated with continual development in the Metropolitan Atlanta area and potential water allocation agreements between Georgia, Florida, and Alabama.

  6. Analysis of large system black box verification test data

    NASA Technical Reports Server (NTRS)

    Clapp, Kenneth C.; Iyer, Ravishankar Krishnan

    1993-01-01

    Issues regarding black box verification of large systems are explored. The study begins by collecting data from several testing teams. An integrated database containing test, fault, repair, and source file information is generated. Intuitive effectiveness measures are generated using conventional black box testing results analysis methods. Conventional analysis methods indicate that the testing was effective in the sense that as more tests were run, more faults were found. Average behavior and individual data points are analyzed. The data are categorized, and average behavior shows a very wide variation in the number of tests run and in pass rates (pass rates ranged from 71 percent to 98 percent). The 'white box' data contained in the integrated database are studied in detail. Conservative measures of effectiveness are discussed. Testing efficiency (ratio of repairs to number of tests) is measured at 3 percent, fault record effectiveness (ratio of repairs to fault records) is measured at 55 percent, and test script redundancy (ratio of number of failed tests to minimum number of tests needed to find the faults) ranges from 4.2 to 15.8. Error-prone source files and subsystems are identified. A correlational mapping of test functional area to product subsystem is completed. A new adaptive testing process based on real-time generation of the integrated database is proposed.

  7. Performance overview of the Euclid infrared focal plane detector subsystems

    NASA Astrophysics Data System (ADS)

    Waczynski, A.; Barbier, R.; Cagiano, S.; Chen, J.; Cheung, S.; Cho, H.; Cillis, A.; Clémens, J.-C.; Dawson, O.; Delo, G.; Farris, M.; Feizi, A.; Foltz, R.; Hickey, M.; Holmes, W.; Hwang, T.; Israelsson, U.; Jhabvala, M.; Kahle, D.; Kan, Em.; Kan, Er.; Loose, M.; Lotkin, G.; Miko, L.; Nguyen, L.; Piquette, E.; Powers, T.; Pravdo, S.; Runkle, A.; Seiffert, M.; Strada, P.; Tucker, C.; Turck, K.; Wang, F.; Weber, C.; Williams, J.

    2016-07-01

    In support of the European Space Agency (ESA) Euclid mission, NASA is responsible for the evaluation of the H2RG mercury cadmium telluride (MCT) detectors and electronics assemblies fabricated by Teledyne Imaging Systems. The detector evaluation is performed in the detector characterization laboratory (DCL) at the NASA Goddard Space Flight Center (GSFC) in close collaboration with engineers and scientists from the Jet Propulsion Laboratory (JPL) and the Euclid project. The Euclid near infrared spectrometer and imaging photometer (NISP) will perform large area optical and spectroscopic sky surveys in the 0.9-2.02 μm infrared (IR) region. The NISP instrument will contain sixteen detector arrays, each coupled to a Teledyne SIDECAR application specific integrated circuit (ASIC). The focal plane will operate at 100 K, and the SIDECAR ASIC will be in close proximity operating at a slightly higher temperature of 137 K. This paper describes the test configuration, performance tests, and results of the latest engineering run, also known as pilot run 3 (PR3), consisting of four H2RG detectors operating simultaneously. Performance data will be presented on: noise, spectral quantum efficiency, dark current, persistence, pixel yield, pixel-to-pixel uniformity, linearity, inter-pixel crosstalk, full well and dynamic range, power dissipation, thermal response, and unit cell input sensitivity.

  8. PNNL streamlines energy-guzzling computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, Mary T.; Marquez, Andres

    In a room the size of a garage, two rows of six-foot-tall racks holding supercomputer hard drives sit back-to-back. Thin tubes and wires snake off the hard drives, slithering into the corners. Stepping between the rows, a rush of heat whips around you -- the air from fans blowing off processing heat. But walk farther in, between the next racks of hard drives, and the temperature drops noticeably. These drives are being cooled by a non-conducting liquid that runs right over the hardworking processors. The liquid carries the heat away in tubes, saving the air a few degrees. This is the Energy Smart Data Center at Pacific Northwest National Laboratory. The bigger, faster, and meatier supercomputers get, the more energy they consume. PNNL's Andres Marquez has developed this test bed to learn how to train the behemoths in energy efficiency. The work will help supercomputers perform better as well. Processors have to keep cool or suffer from "thermal throttling," says Marquez. "That's the performance threshold where the computer is too hot to run well. That threshold is an industry secret." The center at EMSL, DOE's national scientific user facility at PNNL, harbors several ways of experimenting with energy usage. For example, the room's air conditioning is isolated from the rest of EMSL -- pipes running beneath the floor carry temperature-controlled water through heat exchangers to cooling towers outside. "We can test whether it's more energy efficient to cool directly on the processing chips or out in the water tower," says Marquez. The hard drives feed energy and temperature data to a network server running specially designed software that controls and monitors the data center. To test the center's limits, the team runs the processors flat out -- not only on carefully controlled test programs in the Energy Smart computers, but also on real world software from other EMSL research, such as regional weather forecasting models.
    Marquez's group is also developing "power aware computing", where the computer programs themselves perform calculations more energy efficiently. Maybe once computers get smart about energy, they'll have tips for their users.

  9. 29 CFR 1918.92 - Illumination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Walking, working, and climbing areas. Walking, working, and climbing areas shall be illuminated. Unless... contact with drafts, running gear, and other moving equipment. (4) Portable cargo lights furnished by the...

  10. Older Runners Retain Youthful Running Economy despite Biomechanical Differences.

    PubMed

    Beck, Owen N; Kipp, Shalaya; Roby, Jaclyn M; Grabowski, Alena M; Kram, Rodger; Ortega, Justus D

    2016-04-01

    Sixty-five years of age typically marks the onset of impaired walking economy. However, running economy has not been assessed beyond the age of 65 yr. Furthermore, a critical determinant of running economy is the spring-like storage and return of elastic energy from the leg during stance, which is related to leg stiffness. Therefore, we investigated whether runners older than 65 yr retain youthful running economy and/or leg stiffness across running speeds. Fifteen young and 15 older runners ran on a force-instrumented treadmill at 2.01, 2.46, and 2.91 m·s⁻¹. We measured their rates of metabolic energy consumption (i.e., metabolic power), ground reaction forces, and stride kinematics. There were only small differences in running economy between young and older runners across the range of speeds. Statistically, the older runners consumed 2% to 9% less metabolic energy than the young runners across speeds (P = 0.012). Also, the leg stiffness of older runners was 10% to 20% lower than that of young runners across the range of speeds (P = 0.002), and in contrast to the younger runners, the leg stiffness of older runners decreased with speed (P < 0.001). Runners beyond 65 yr of age maintain youthful running economy despite biomechanical differences. It may be that vigorous exercise, such as running, prevents the age-related deterioration of muscular efficiency and, therefore, may make everyday activities easier.
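Leg stiffness in gait studies is commonly estimated with the spring-mass model of McMahon and Cheng (peak vertical force divided by peak leg compression); the sketch below uses that standard model with made-up gait values, and is not necessarily the exact method of this paper:

```python
import math

def leg_stiffness(f_peak, leg_len, speed, contact_time, com_drop):
    """Spring-mass leg stiffness k = F_peak / delta_L, where leg compression
    delta_L combines the center-of-mass drop with the geometric shortening of
    a leg sweeping through half the ground-contact length."""
    half_angle = math.asin(speed * contact_time / (2.0 * leg_len))
    delta_l = com_drop + leg_len * (1.0 - math.cos(half_angle))
    return f_peak / delta_l

# Illustrative values: 1500 N peak force, 1.0 m leg, 2.91 m/s speed,
# 0.25 s contact time, 0.05 m center-of-mass drop
k = leg_stiffness(1500.0, 1.0, 2.91, 0.25, 0.05)
```

With these numbers k lands in the low tens of kN/m, the typical order of magnitude reported for running humans.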

  11. An approach to combining parallel and cross-over trials with and without run-in periods using individual patient data.

    PubMed

    Tvete, Ingunn F; Olsen, Inge C; Fagerland, Morten W; Meland, Nils; Aldrin, Magne; Smerud, Knut T; Holden, Lars

    2012-04-01

    In active run-in trials, where patients may be excluded after a run-in period based on their response to the treatment, it is implicitly assumed that patients have individual treatment effects. If individual patient data are available, active run-in trials can be modelled using patient-specific random effects. With more than one trial on the same medication available, one can obtain a more precise overall treatment effect estimate. We present a model for joint analysis of a two-sequence, four-period cross-over trial (AABB/BBAA) and a three-sequence, two-period active run-in trial (AB/AA/A), where the aim is to investigate the effect of a new treatment for patients with pain due to osteoarthritis. Our approach enables us to separately estimate the direct treatment effect for all patients, for the patients excluded after the active run-in trial prior to randomisation, and for the patients who completed the active run-in trial. A similar model approach can be used to analyse other types of run-in trials, but this depends on the data and type of other trials available. We assume equality of the various carry-over effects over time. The proposed approach is flexible and can be modified to handle other designs. Our results should be encouraging for those responsible for planning cost-efficient clinical development programmes.

  12. Cross-Layer Modeling Framework for Energy-Efficient Resilience

    DTIC Science & Technology

    2014-04-01

    functional block diagram of the software architecture of PEARL, which stands for Power Efficient and Resilient Embedded Processing with Real-Time ... DVFS). The goal of the run-time manager is to minimize power consumption, while maintaining system resilience targets (on average) and meeting real-time performance targets. The integrated performance, power and resilience models are nothing but the analytical modeling toolkit described in

  13. 40 CFR 63.4165 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of appendix M to 40 CFR part 51 to determine the mass fraction of TVH liquid input from each coating... materials used in the coating operation during the capture efficiency test run, kg. TVHi = mass fraction of... compares the mass of liquid TVH in materials used in the coating operation, to the mass of TVH emissions...

  14. 40 CFR 63.4165 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of appendix M to 40 CFR part 51 to determine the mass fraction of TVH liquid input from each coating... materials used in the coating operation during the capture efficiency test run, kg. TVHi = mass fraction of... compares the mass of liquid TVH in materials used in the coating operation, to the mass of TVH emissions...

  15. 40 CFR 63.4361 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Method 204A or 204F of appendix M to 40 CFR part 51 to determine the mass fraction of TVH liquid input... the capture efficiency test run, kg. TVHi = Mass fraction of TVH in regulated material, i, that is... protocol compares the mass of liquid TVH in regulated materials applied in the web coating/printing or...

  16. 40 CFR 63.4361 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Method 204A or 204F of appendix M to 40 CFR part 51 to determine the mass fraction of TVH liquid input... the capture efficiency test run, kg. TVHi = Mass fraction of TVH in regulated material, i, that is... protocol compares the mass of liquid TVH in regulated materials applied in the web coating/printing or...

  17. 45 CFR 1310.17 - Driver and bus monitor training.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... safe and efficient manner; (2) safely run a fixed route, including loading and unloading children... first aid in case of injury; (4) handle emergency situations, including vehicle evacuation procedures...

  18. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
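A bare-bones coordinate hit-and-run walk over a polytope {x : Ax <= b} fits in a few lines; this sketch deliberately omits CHRR's crucial rounding preprocessing and any convergence diagnostics, so it only shows the coordinate-direction walk itself, not the authors' implementation:

```python
import random

def coordinate_hit_and_run(A, b, x0, n_samples, burn_in=100, seed=0):
    """Sample points from {x : A @ x <= b} by repeatedly picking a coordinate
    axis and drawing uniformly from the feasible segment along that axis.
    Without a rounding step, mixing is slow on elongated (anisotropic) sets."""
    rng = random.Random(seed)
    x = list(x0)
    dim = len(x)
    samples = []
    for step in range(burn_in + n_samples):
        k = rng.randrange(dim)
        lo, hi = float('-inf'), float('inf')
        for row, bi in zip(A, b):
            a = row[k]
            rest = sum(row[j] * x[j] for j in range(dim) if j != k)
            if a > 0:
                hi = min(hi, (bi - rest) / a)
            elif a < 0:
                lo = max(lo, (bi - rest) / a)
        x[k] = rng.uniform(lo, hi)  # uniform draw on the feasible segment
        if step >= burn_in:
            samples.append(list(x))
    return samples

# Unit square as the polytope: x <= 1, -x <= 0, y <= 1, -y <= 0
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
pts = coordinate_hit_and_run(A, b, [0.5, 0.5], 1000)
```

On this well-rounded box the empirical mean sits near the center; on a long thin polytope the same walk would need far more steps, which is the problem the rounding step in CHRR addresses.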

  19. Quantitative measurement of pass-by noise radiated by vehicles running at high speeds

    NASA Astrophysics Data System (ADS)

    Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin

    2011-03-01

    It has been a challenge to accurately locate and quantify the pass-by noise sources radiated by running vehicles. In our current work, a system composed of a microphone array is developed for this purpose. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise, and wind noise of a vehicle running at different speeds are successfully identified by this method.
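The frequency shift that such a time-domain method must undo follows the textbook moving-source Doppler relation; a hedged sketch with illustrative numbers, not the paper's measurement system:

```python
import math

def doppler_shifted_frequency(f_source, speed, angle_rad, c=343.0):
    """Frequency heard by a stationary microphone from a source moving at
    `speed` m/s, where `angle_rad` is the angle between the source velocity
    and the source-to-microphone line (0 = approaching head-on)."""
    return f_source / (1.0 - (speed / c) * math.cos(angle_rad))

# A 1 kHz source on a vehicle at 30 m/s (~108 km/h)
f_approach = doppler_shifted_frequency(1000.0, 30.0, 0.0)      # pitch rises
f_recede = doppler_shifted_frequency(1000.0, 30.0, math.pi)    # pitch falls
```

De-Dopplerization amounts to inverting this mapping sample by sample, evaluating each microphone signal at the sound's emission time rather than its reception time.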

  20. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE PAGES

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...

    2017-01-31

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
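
    The hit-and-run step at the heart of CHRR is simple: from the current point, pick a coordinate direction, find the feasible chord through the polytope along it, and jump to a uniformly random point on that chord. A minimal sketch for a bounded polytope {x : Ax <= b}, omitting the rounding preprocessing that the paper shows is crucial for anisotropic flux sets (names and the plain-list representation are illustrative):

```python
import random

def coordinate_hit_and_run(A, b, x0, n_samples, burn_in=100):
    """Coordinate hit-and-run over the bounded polytope {x : A @ x <= b}.

    Each step picks a random coordinate axis, computes the feasible chord
    through the current point along that axis, and jumps to a uniformly
    random point on the chord.
    """
    x = list(x0)                    # must be strictly feasible
    d = len(x)
    samples = []
    for step in range(burn_in + n_samples):
        i = random.randrange(d)
        lo, hi = float("-inf"), float("inf")
        for a_row, b_k in zip(A, b):
            # constraint a_row . (x + t*e_i) <= b_k restricts step t
            slack = b_k - sum(a_j * x_j for a_j, x_j in zip(a_row, x))
            coef = a_row[i]
            if coef > 1e-12:
                hi = min(hi, slack / coef)
            elif coef < -1e-12:
                lo = max(lo, slack / coef)
        x[i] += random.uniform(lo, hi)
        if step >= burn_in:
            samples.append(list(x))
    return samples
```

    On a well-rounded set this walk converges to the uniform distribution; without the rounding step, convergence on strongly anisotropic genome-scale models is impractically slow, which is the paper's point.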

  1. Study on the Effect of a Cogeneration System Capacity on its CO2 Emissions

    NASA Astrophysics Data System (ADS)

    Fonseca, J. G. S., Jr.; Asano, Hitoshi; Fujii, Terushige; Hirasawa, Shigeki

    With the global warming problem worsening and the subsequent implementation of the Kyoto Protocol, CO2 emissions are becoming an important factor when verifying the usability of cogeneration systems. Considering this, the purpose of this work is to study the effect of the capacity of a cogeneration system on its CO2 emissions under two kinds of operation strategies: one focused on exergetic efficiency and another on running cost. The system meets the demand pattern typical of a hospital in Japan, operating during one year with an average heat-to-power ratio of 1.3. The main equipment of the cogeneration system comprises a gas turbine with waste heat boiler, a main boiler and an auxiliary steam turbine. Each piece of equipment was characterized with partial-load models, and the turbine efficiencies at full load changed according to the system capacity. It was also assumed that any surplus electricity generated could be sold. The main results showed that, for any of the capacities simulated, an exergetic efficiency-focused operational strategy always resulted in a higher CO2 emissions reduction than the running cost-focused strategy. Furthermore, the amount of reduction in emissions decreased as the system capacity decreased, reaching a value of 1.6% when the system capacity was 33% of the maximum electricity demand with a heat-to-power ratio of 4.1. When the system operated focused on running cost, the economic savings increased with the capacity and reached 42% for a system capacity of 80% of maximum electricity demand with a heat-to-power ratio of 2.3. In such conditions, however, there was an increase in emissions of 8.5%. For the same capacity, an exergetic efficiency operation strategy presented the best balance between cost and emissions, generating economic savings of 29% with a decrease in CO2 emissions of 7.1%. These results show the importance of an exergy-focused operational strategy and also indicate that lower capacities result in smaller gains in both CO2 emissions and running cost reduction.

  2. Regional Climate Simulation with a Variable Resolution Stretched Grid GCM: The Regional Down-Scaling Effects

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Suarez, Max; Sawyer, William; Govindaraju, Ravi C.

    1999-01-01

    The results obtained with the variable-resolution stretched grid (SG) GEOS GCM (Goddard Earth Observing System General Circulation Model) are discussed, with emphasis on the regional down-scaling effects and their dependence on the stretched grid design and parameters. A variable-resolution SG-GCM and SG-DAS using a global stretched grid with fine resolution over an area of interest is a viable new approach to regional and subregional climate studies and applications. The stretched grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested grid approach introduced a decade ago as a pioneering step in regional climate modeling. The GEOS SG-GCM is used for simulations of the anomalous U.S. climate events of the 1988 drought and 1993 flood, with enhanced regional resolution. The height, low-level jet, precipitation and other diagnostic patterns are successfully simulated and show efficient down-scaling over the area of interest, the U.S. An imitation of the nested grid approach is performed using the developed SG-DAS (Data Assimilation System) that incorporates the SG-GCM. The SG-DAS is run while withholding data over the area of interest. The design imitates the nested grid framework with boundary conditions provided from analyses. No boundary-condition buffer is needed in this case, due to the global domain of integration used for the SG-GCM and SG-DAS. Experiments based on the newly developed versions of the GEOS SG-GCM and SG-DAS, with finer 0.5 degree (and higher) regional resolution, are briefly discussed, and the major aspects of parallelization of the SG-GCM code are outlined. The key objectives of the study are: 1) obtaining efficient down-scaling over the area of interest with fine and very fine resolution; 2) providing consistent interactions between regional and global scales, including the consistent representation of regional energy and water balances; and 3) providing high computational efficiency for future SG-GCM and SG-DAS versions using parallel codes.

  3. Influence of short-term unweighing and reloading on running kinetics and muscle activity.

    PubMed

    Sainton, Patrick; Nicol, Caroline; Cabri, Jan; Barthelemy-Montfort, Joëlle; Berton, Eric; Chavet, Pascale

    2015-05-01

    In running, body weight reduction is reported to result in decreased lower limb muscle activity with no change in the global activation pattern (Liebenberg et al. in J Sports Sci 29:207-214). Our study examined the acute effects on running mechanics and lower limb muscle activity of short-term unweighing and reloading conditions while running on a treadmill with a lower body positive pressure (LBPP) device. Eleven healthy males performed two randomized running series of 9 min at preferred speed. Each series included three successive running conditions of 3 min [at 100 % body weight (BW), 60 or 80 % BW, and 100 % BW]. Vertical ground reaction force and center of mass accelerations were analyzed together with surface EMG activity recorded from six major muscles of the left lower limb for the first and last 30 s of each running condition. Effort sensation and mean heart rate were also recorded. In both running series, the unloaded running pattern was characterized by a lower step frequency (due to increased flight time with no change in contact time), lower impact and active force peaks, and also by reduced loading rate and push-off impulse. Amplitude of muscle activity overall decreased, but pre-contact and braking phase extensor muscle activity did not change, whereas it was reduced during the subsequent push-off phase. The combined neuro-mechanical changes suggest that LBPP technology provides runners with an efficient support during the stride. The after-effects recorded after reloading highlight the fact that 3 min of unweighing may be sufficient for updating the running pattern.

  4. The LHCb Run Control

    NASA Astrophysics Data System (ADS)

    Alessio, F.; Barandela, M. C.; Callot, O.; Duval, P.-Y.; Franek, B.; Frank, M.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Neufeld, N.; Sambade, A.; Schwemmer, R.; Somogyi, P.

    2010-04-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control, will be presented.

  5. Nuclear shell model code CRUNCHER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resler, D.A.; Grimes, S.M.

    1988-05-01

    A new nuclear shell model code CRUNCHER, patterned after the code VLADIMIR, has been developed. While CRUNCHER and VLADIMIR employ the techniques of an uncoupled basis and the Lanczos process, improvements in the new code allow it to handle much larger problems than the previous code and to perform them more efficiently. Tests involving a moderately sized calculation indicate that CRUNCHER running on a SUN 3/260 workstation requires approximately one-half the central processing unit (CPU) time required by VLADIMIR running on a CRAY-1 supercomputer.
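
    The Lanczos process mentioned above is what lets shell-model codes work in very large uncoupled bases: it reduces a huge symmetric Hamiltonian matrix to a small tridiagonal matrix whose extreme eigenvalues converge rapidly to those of the full matrix. A minimal sketch without the reorthogonalization a production code would need (the matvec interface and names are illustrative, not CRUNCHER's internals):

```python
import numpy as np

def lanczos_eigvals(matvec, n, k, seed=0):
    """Return the eigenvalues of the k x k Lanczos tridiagonal matrix.

    matvec(v) applies the symmetric n x n operator to a vector v; the
    operator itself is never formed, which is what makes huge bases
    tractable.
    """
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = matvec(q) - beta * q_prev
        alpha = float(q @ w)
        w = w - alpha * q
        beta = float(np.linalg.norm(w))
        alphas.append(alpha)
        if beta < 1e-12:          # invariant subspace found: stop early
            break
        betas.append(beta)
        q_prev, q = q, w / beta
    T = np.diag(alphas)
    off = betas[: len(alphas) - 1]
    if off:
        T = T + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(T)
```

    In practice only k on the order of tens of iterations is needed for the lowest few nuclear levels, regardless of the basis dimension.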

  6. CCC7-119 Reactive Molecular Dynamics Simulations of Hot Spot Growth in Shocked Energetic Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Aidan P.

    2015-03-01

    The purpose of this work is to understand how defects control initiation in energetic materials used in stockpile components. Sequoia gives us the core count needed to run very large-scale simulations of up to 10 million atoms. Using an OpenMP-threaded implementation of the ReaxFF package in LAMMPS, we have been able to obtain good parallel efficiency running on 16k nodes of Sequoia, with one hardware thread per core.

  7. Development of an Experimental Test Section for Forcing Unsteady Flow in a Linear Compressor Cascade using Circular Rods

    DTIC Science & Technology

    2003-03-01

    ...to be moved while the tunnel was running, reducing the need for tunnel shut-down and allowing for thermal equilibrium to be maintained during the high ... rather quickly. However, for the high speed runs, the tunnel heats up greatly, so data cannot be taken until the tunnel reaches thermal steady-state ... January 1992. 10. Wilson, David G. and Korakianitis, Theodosius. The Design of High-Efficiency Turbomachinery and Gas Turbines, 317-322. Upper Saddle

  8. Feasibility of Developing a Human Simulator for CBRN IPE Testing

    DTIC Science & Technology

    2007-08-01

    ...side to side, calisthenic arm movements, running in place, pumping a tire pump, and walking in place. For testing high efficiency (HE) PAPRs, the head ... not be appropriate for mouth movement to cause abnormal bulges or depressions in the simulator's cheek. The arms should be able to mimic calisthenic ... Exercises FIT TEST Exercise NIOSH NIOSH HE PAPR OSHA LRPL Head: Up/Down x - x x Head: Side/Side x - x x Calisthenic Arm Movements x - - - Running in

  9. Affordance Templates for Shared Robot Control

    NASA Technical Reports Server (NTRS)

    Hart, Stephen; Dinh, Paul; Hambuchen, Kim

    2014-01-01

    This paper introduces the Affordance Template framework used to supervise task behaviors on the NASA-JSC Valkyrie robot at the 2013 DARPA Robotics Challenge (DRC) Trials. This framework provides graphical interfaces to human supervisors that are adjustable based on the run-time environmental context (e.g., size, location, and shape of objects that the robot must interact with, etc.). Additional improvements, described below, inject degrees of autonomy into instantiations of affordance templates at run-time in order to enable efficient human supervision of the robot for accomplishing tasks.

  10. Near-IR imaging of erbium laser ablation with a water spray

    NASA Astrophysics Data System (ADS)

    Darling, Cynthia L.; Maffei, Marie E.; Fried, William A.; Fried, Daniel

    2008-02-01

    Near-IR (NIR) imaging can be used to view the formation of ablation craters during laser ablation, since the enamel of the tooth is almost completely transparent near 1310 nm [1]. Laser ablation craters can be monitored under varying irradiation conditions to assess peripheral thermal and transient-stress-induced damage, measure the rate and efficiency of ablation, and provide insight into the ablation mechanism. There are fundamental differences in the mechanism of enamel ablation with erbium lasers versus carbon dioxide laser systems due to the nature of the primary absorber, and it is necessary to have water present on the tooth surface for efficient ablation at erbium laser wavelengths. In this study, sound human tooth sections of approximately 2-3 mm thickness were irradiated by free-running and Q-switched Er:YAG and Er:YSGG lasers under varying conditions with and without a water spray. The incision area in the interior of each sample was imaged using a tungsten-halogen lamp with a band-pass filter centered at 1310 nm, combined with an InGaAs area camera and a NIR zoom microscope. Obvious differences in crater evolution were observed between CO2 and erbium lasers. Ablation stalled after a few laser pulses without a water spray, as anticipated. Efficient ablation was re-initiated by resuming the water spray. Micro-fractures were continuously produced, apparently driven along prism lines, during multi-pulse ablation. These fractures or fissures appeared to merge as the crater evolved to form the leading edge of the ablation crater. These observations support the proposed thermo-mechanical mechanism of erbium laser ablation, involving the strong mechanical forces generated by selective absorption by water.

  11. The segmentation of bones in pelvic CT images based on extraction of key frames.

    PubMed

    Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen

    2018-05-22

    Bone segmentation is important in computed tomography (CT) imaging of the pelvis, as it assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and the normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, a physician's judgment is needed; the proposed methodology is therefore semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (excluding the time for manual marking). The proposed method achieves accurate, fast, and efficient segmentation of pelvic CT image sequences. The segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, they also offer more accurate data for medical image registration, recognition and 3D reconstruction.
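
    Of the three key-frame criteria named above, the normalized correlation coefficient is the easiest to make concrete: a frame becomes a new key frame when it no longer correlates strongly with the previous key frame. A sketch (the threshold value and the greedy selection rule are illustrative, not taken from the paper):

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient of two equal-size frames."""
    a = np.array(a, dtype=float).ravel()   # copy so inputs are not mutated
    b = np.array(b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 1.0

def select_key_frames(frames, threshold=0.95):
    """Keep a frame when it correlates poorly with the last key frame,
    i.e., when the imaged anatomy has changed noticeably."""
    keys = [0]
    for i in range(1, len(frames)):
        if ncc(frames[keys[-1]], frames[i]) < threshold:
            keys.append(i)
    return keys
```

    Because NCC is computed on mean-subtracted, normalized intensities, it is insensitive to uniform brightness shifts between slices.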

  12. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-07-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect the shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method, despite some limitations with respect to extreme climatic events. For the first time, it allowed area-wide, detailed, high-resolution LPJ-GUESS simulation results to be obtained for a large part of the Alpine region.
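
    The core of GAPPARD as summarized above is to replace explicit stochastic disturbances with a weighted average of a single undisturbed run over a patch-age distribution fixed by the disturbance frequency. A discretized sketch assuming an exponential age distribution (the paper's exact weighting may differ):

```python
import math

def gappard_mean(series_by_age, mean_disturbance_interval):
    """Average an undisturbed simulation output over patch ages.

    With stand-replacing disturbances at mean interval T, patch ages are
    approximately exponentially distributed, p(a) ~ exp(-a / T): young
    post-disturbance patches are common, old ones rare.
    series_by_age[a] is the model output a years after a disturbance.
    """
    T = float(mean_disturbance_interval)
    weights = [math.exp(-age / T) for age in range(len(series_by_age))]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, series_by_age)) / total
```

    Because only one undisturbed run is needed instead of many stochastic replicates, this averaging is where a large speed-up of the kind reported above can come from.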

  13. Solar energy plant as a complement to a conventional heating system: Measurement of the storage and consumption of solar energy

    NASA Astrophysics Data System (ADS)

    Doering, E.; Lippe, W.

    1982-08-01

    The technical and economic performances of a complementary solar heating installation for a new swimming pool added to a two-floor dwelling were examined after measurements were taken over a period of 12 months and analyzed. In particular, the heat absorption and utilization were measured and modifications were carried out to improve pipe insulation and regulation of mixer valve motor running and volume flow. The collector system efficiency was evaluated at 15.4%, the proportion of solar energy of the total consumption being 6.1%. The solar plant and the measuring instruments are described and recommendations are made for improved design and performance, including enlargement of the collector surface area, further modification of the regulation system, utilization of temperature stratification in the storage tanks and avoiding mutual overshadowing of the collectors.

  14. Parallel programming of industrial applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, M; Koniges, A; Simon, H

    1998-07-21

    In the introductory material, we overview the typical MPP environment for real application computing and the special tools available, such as parallel debuggers and performance analyzers. Next, we draw from a series of real applications codes and discuss the specific challenges and problems that are encountered in parallelizing these individual applications. The application areas drawn from include biomedical sciences, materials processing and design, plasma and fluid dynamics, and others. We show how it was possible to get a particular application to run efficiently and what steps were necessary. Finally, we end with a summary of the lessons learned from these applications and predictions for the future of industrial parallel computing. This tutorial is based on material from a forthcoming book entitled "Industrial Strength Parallel Computing," to be published by Morgan Kaufmann Publishers (ISBN l-55860-54).

  15. Mobile Phone Terminal

    NASA Technical Reports Server (NTRS)

    1978-01-01

    In the photo, an employee of a real estate firm is contacting his office by means of HICOM, an advanced central terminal for mobile telephones. Developed by the Orlando Division of Martin Marietta Aerospace, Orlando, Florida, and manufactured by Harris Corporation's RF Division, Rochester, N.Y., HICOM upgrades service to users, provides better system management to telephone companies, and makes more efficient use of available mobile telephone channels through a computerized central control terminal. The real estate man, for example, was able to dial his office and he could also have direct-dialed a long distance number. Mobile phones in most areas not yet served by HICOM require an operator's assistance for both local and long distance calls. HICOM improves system management by automatically recording information on all calls for accurate billing, running continual performance checks on its own operation, and reporting any malfunctions to a central office.

  16. Big data challenges for large radio arrays

    NASA Astrophysics Data System (ADS)

    Jones, D. L.; Wagstaff, K.; Thompson, D. R.; D'Addario, L.; Navarro, R.; Mattmann, C.; Majid, W.; Lazio, J.; Preston, J.; Rebbapragada, U.

    2012-03-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields. The Jet Propulsion Laboratory is developing technologies to address big data issues, with an emphasis on three areas: 1) lower-power digital processing architectures to make high-volume data generation operationally affordable, 2) data-adaptive machine learning algorithms for real-time analysis (or "data triage") of large data volumes, and 3) scalable data archive systems that allow efficient data mining and remote user code to run locally where the data are stored.

  17. Design of a lamella settler for biomass recycling in continuous ethanol fermentation process.

    PubMed

    Tabera, J; Iznaola, M A

    1989-04-20

    The design and application of a settler in a continuous fermentation process with yeast recycle were studied. A compact lamella-type settler was chosen to avoid the large volumes associated with conventional settling tanks. A rationale for the design method is presented. The sedimentation area was determined by classical batch settling rate tests and a sedimentation capacity calculation. Limitations on the residence time of the microorganisms in the settler, rather than sludge thickening considerations, formed the basis for the volume calculation. Fermentation rate tests with yeast after different sedimentation periods were carried out to define a suitable residence time. Continuous cell-recycle fermentation runs, performed with the old and new sedimentation devices, show that the lamella settler improves biomass recycling efficiency, enabling the process to operate at higher sugar concentrations and faster dilution rates.
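
    The sedimentation-area step above follows the classical surface-loading criterion: a particle is captured when the overflow rate Q/A does not exceed its settling velocity, and each inclined lamella plate contributes its horizontally projected area, which is how the lamella pack saves volume. A sketch with illustrative numbers (not the paper's design values):

```python
import math

def required_settling_area(flow_rate_m3_s, settling_velocity_m_s):
    """Surface-loading criterion: a particle is captured when the
    overflow rate Q/A does not exceed its settling velocity."""
    return flow_rate_m3_s / settling_velocity_m_s

def lamella_plate_count(required_area_m2, plate_area_m2, inclination_deg):
    """Each inclined plate contributes its horizontally projected area."""
    projected = plate_area_m2 * math.cos(math.radians(inclination_deg))
    return math.ceil(required_area_m2 / projected)
```

    For example, 0.01 m3/s of broth with a design settling velocity of 5 x 10-4 m/s calls for 20 m2 of projected area; 1 m2 plates inclined at 60 degrees would then number about 40.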

  18. Cleaning Physical Education Areas.

    ERIC Educational Resources Information Center

    Griffin, William R.

    1999-01-01

    Discusses techniques to help create clean and inviting school locker rooms. Daily, weekly or monthly, biannual, and annual cleaning strategies for locker room showers are highlighted as are the specialized maintenance needs for aerobic and dance areas, running tracks, and weight training areas. (GR)

  19. Geology and log responses of the Rose Run sandstone in Randolph Township, Portage County, Ohio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moyer, C.C.

    1996-09-01

    Approximately 75 wells have penetrated the Cambrian Rose Run sandstone in Randolph Township, Portage County, Ohio, about half of which should produce well beyond economic payout. Only one deep test (to the Rose Run or deeper) was drilled in this township prior to 1990. Two separate and distinct Rose Run producing fields exist in the township; the western field is predominantly gas-productive and the eastern field is predominantly oil-productive. Both fields are on the north side of the Akron-Suffield Fault Zone, which is part of a regional cross-strike structural discontinuity extending from the Pittsburgh, Pennsylvania area northwestward to Lake Erie. This feature exhibits control over Berea, Oriskany, Newburg, Clinton, and Rose Run production.

  20. Biological and environmental determinants of 12-minute run performance in youth.

    PubMed

    Freitas, Duarte; Maia, José; Stasinopoulos, Mikis; Gouveia, Élvio Rúbio; Antunes, António M; Thomis, Martine; Lefevre, Johan; Claessens, Albrecht; Hedeker, Donald; Malina, Robert M

    2017-11-01

    The 12-minute run is a commonly used indicator of cardiorespiratory fitness in youth. Variation in growth and maturity status as potential correlates of test performance has not been systematically addressed. The aim of this study was to evaluate biological and environmental determinants of 12-minute run performance in Portuguese youth aged 7-17 years. Mixed-longitudinal samples of 187 boys and 142 girls were surveyed in 1996, 1997 and 1998. The 12-minute run was the indicator of cardiorespiratory fitness. Height, body mass and five skinfolds were measured, and skeletal maturity was assessed. Physical activity, socioeconomic status and area of residence were obtained with a questionnaire. Multi-level modelling was used for the analysis. Chronological age and the sum of five skinfolds were significant predictors of 12-minute run performance. Older boys and girls ran longer distances than younger peers, while high levels of subcutaneous fat were associated with shorter running distances. Rural boys were more proficient in the 12-minute run than urban peers. Skeletal maturity, height, body mass index, physical activity and socioeconomic status were not significant predictors of 12-minute run performance. Age and the sum of skinfolds in both sexes, and rural residence in boys, are significant predictors of 12-minute run performance in Portuguese youth.

  1. Relationship of pyrogenic polycyclic aromatic hydrocarbons contamination among environmental solid media.

    PubMed

    Kim, Dong Won; Kim, Seung Kyu; Lee, Dong Soo

    2009-06-01

    This study compared the contamination levels and compositional characteristics of PAHs in soil, suspended solids (SS) and sediment to understand the cross-media characteristics among the three solid media and their ecological risk implications, with the aim of supporting more integrated management of environmental quality objectives and ecological risk in these media. The study area included urban (metropolis and industrial zone), suburban and rural sites. Seasonal samples were concurrently collected from surface soils, surface waters (dissolved and SS phases separately) and sediments. The emission estimates and source-characterizing PAH indices consistently indicated that the PAHs were from pyrogenic sources. The level of total PAHs in soil declined along the wind direction from the urban areas to the rural areas. The sorption power of soil appeared distinctly different between the urban and rural areas. The contamination levels and PAH profiles in soil and sediment were closely related to each other, while no such correlation was observed between SS and sediment or between SS and soil. Comparisons of the observed partitioning coefficients with three different partitioning equilibrium models strongly suggested that PAHs in water undergo partitioning among the dissolved phase, dissolved organic matter, and the organic and soot carbons in SS, which might account for the level and profile of PAHs in SS not being correlated with those in soil or sediment. The observed results suggested that PAHs of pyrogenic origin entered soil, sediment and water by atmospheric deposition and subsequent cross-media transfers. The results also showed that sediments were principally contaminated with PAHs delivered via surface run-off from soil, although in the urban areas the run-off influence appeared less immediate than in the rural areas. Environmental quality objectives for PAHs in soil and sediment should therefore be set in a coherent manner, and efforts to protect sediment quality should take soil quality into consideration, particularly where the river bottom sediment is renewed periodically with eroded soil due to heavy rain and/or a large river regime coefficient. In spite of the differences in PAH profiles among the three solid media, BaP commonly presented the greatest TEQ, suggesting that strict regulation of BaP is necessary to efficiently and substantially minimize the total risk of environmental PAHs.

  2. Field Application of Modified In Situ Soil Flushing in Combination with Air Sparging at a Military Site Polluted by Diesel and Gasoline in Korea

    PubMed Central

    Lee, Hwan; Lee, Yoonjin; Kim, Jaeyoung; Kim, Choltae

    2014-01-01

    In this study, the full-scale operation of soil flushing combined with air sparging to improve the removal efficiency of petroleum at depths of less than 7 m at a military site in Korea was evaluated. The target area was polluted by multiple gasoline and diesel fuel sources. The soil was composed of heterogeneous layers of granules, sand, silt and clay. The operating factors were systematically assessed using a column test and a pilot study before running the full-scale process at the site. The discharged TPH and BTEX (benzene, toluene, ethylbenzene, and xylenes) concentrations in the water were highest at 20 min and at a rate of 350 L/min, which was selected as the air flow rate for the full-scale operation in the pilot air sparging test. The surfactant-aided condition was 1.4 times more efficient than the non-surfactant condition in the serial operations of modified soil flushing followed by air sparging. The hydraulic conductivity (3.13 × 10⁻³ cm/s) increased 4.7 times after the serial operation of both processes relative to the existing condition (6.61 × 10⁻⁴ cm/s). The removal efficiencies of TPH were 52.8%, 57.4%, and 61.8% for the soil layers at 6 to 7, 7 to 8 and 8 to 9 m, respectively. Therefore, TPH removal was improved at depths of less than 7 m by using this modified remediation system. The removal efficiencies for the areas with TPH and BTEX concentrations of more than 500 and 80 mg/kg were 55.5% and 92.9%, respectively, at a pore volume of 2.9. The total TPH and BTEX mass removed during the full-scale operation was 5109 and 752 kg, respectively. PMID:25166919
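
    The removal efficiencies quoted above are the standard before/after ratio; a trivial helper makes the arithmetic explicit (the concentration values in the usage note are illustrative, not the site's measurements):

```python
def removal_efficiency(initial, final):
    """Percentage of contaminant removed between two measurements
    (concentrations or masses, in the same units)."""
    return 100.0 * (initial - final) / initial
```

    For instance, a drop from 500 to 222.5 mg/kg corresponds to a 55.5% removal efficiency.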

  3. Analysis of the Water Resources on Baseflow River Basin in Jeju Island, Korea

    NASA Astrophysics Data System (ADS)

    Yang, S.-K.; Jung, W.-Y.; Kang, M.-S.

    2012-04-01

    Jeju Island is a volcanic island located at the southernmost point of Korea and receives the heaviest rainfall in the country, but owing to hydrological and geological characteristics that differ from those of inland areas, most of its streams are dry, and the island relies on groundwater for its water resources. In some streams, however, spring water is discharged near the downstream end, ahead of the final discharge point, and maintains the flow of the stream; these streams have been developed as water supply sources in the past, but detailed observation and analysis studies remain inadequate. This study uses an ADCP (Acoustic Doppler Current Profiler) to regularly observe the discharge of baseflow streams, and the water resources of the base-discharge basins of Jeju Island were analyzed using the SWAT (Soil & Water Assessment Tool) model. This detailed water resource analysis, combining modeling with high-precision site observation, is expected to become the foundation for efficient use and security of water resources against future climate change.

  4. New Parallel Algorithms for Landscape Evolution Model

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize because the drainage area must be computed for each node, which requires a huge amount of communication when run in parallel. To overcome this difficulty, we developed two parallel algorithms for an LEM with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that both are adequate for large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
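    The per-node drainage-area computation that makes parallelization hard can be sketched serially: each node drains to a single receiver (an outlet drains to itself), and accumulated areas are passed downstream in upstream-first (topological) order. This is a generic illustration of the concept, not the authors' deal.II implementation:

    ```python
    # Sketch (not the authors' code): serial drainage-area accumulation over a
    # stream net. rcv[i] is the single receiver of node i; a node that drains
    # to itself is an outlet. Nodes are processed upstream-first (Kahn-style),
    # so every donor is summed before its receiver.

    def drainage_area(rcv, cell_area):
        n = len(rcv)
        # Count donors of each node to drive the topological sweep.
        ndonors = [0] * n
        for i in range(n):
            if rcv[i] != i:
                ndonors[rcv[i]] += 1
        area = list(cell_area)
        stack = [i for i in range(n) if ndonors[i] == 0]  # channel heads
        while stack:
            i = stack.pop()
            r = rcv[i]
            if r != i:
                area[r] += area[i]       # pass accumulated area downstream
                ndonors[r] -= 1
                if ndonors[r] == 0:
                    stack.append(r)
        return area

    # Tiny example: 0 -> 2, 1 -> 2, 2 -> 3 (outlet), unit cell areas.
    print(drainage_area([2, 2, 3, 3], [1.0, 1.0, 1.0, 1.0]))  # [1.0, 1.0, 3.0, 4.0]
    ```

    In the parallel setting, exactly this donor-to-receiver accumulation is what must cross process boundaries, which is why the paper's partitioning by catchment reduces communication.
    
    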

  5. Magnetically Separable MoS2/Fe3O4/nZVI Nanocomposites for the Treatment of Wastewater Containing Cr(VI) and 4-Chlorophenol

    PubMed Central

    Wang, Jingkang; Wang, Ting

    2017-01-01

    With a large specific surface area, high reactivity, and excellent adsorption properties, nano zerovalent iron (nZVI) can degrade a wide variety of contaminants in wastewater. However, aggregation, oxidation, and separation issues greatly impede its wide application. In this study, MoS2/Fe3O4/nZVI nanocomposites were successfully synthesized by a facile step-by-step approach to overcome these problems. MoS2 nanosheets (MNs) acted as an efficient support for nZVI and enriched the organic pollutants nearby, leading to an enhanced removal efficiency. Fe3O4 nanoparticles (NPs) could not only suppress the agglomeration and restacking of MNs, but also facilitate easy separation and recovery of the nanocomposites. The synergistic effect between MNs and Fe3O4 NPs effectively enhanced the reactivity and efficiency of nZVI. In the system, Cr(VI) was reduced to Cr(III) by nZVI in the nanocomposites, and Fe2+ produced in the process was combined with H2O2 to further remove 4-Chlorophenol (4-CP) through a Fenton reaction. Furthermore, the nanocomposites could be easily separated from wastewater by a magnet and be reused for at least five consecutive runs, revealing good reusability. The results demonstrate that the novel nanocomposites are highly efficient and promising for the simultaneous removal of Cr(VI) and 4-CP in wastewater. PMID:28973986

  6. Consequences of an uncertain mass mortality regime triggered by climate variability on giant clam population management in the Pacific Ocean.

    PubMed

    Van Wynsberge, Simon; Andréfouët, Serge; Gaertner-Mazouni, Nabila; Remoissenet, Georges

    2018-02-01

    Despite actions to sustainably manage tropical Pacific Ocean reef fisheries, managers have faced failures and frustrations because of unpredicted mass mortality events triggered by climate variability. The consequences of these events for the long-term population dynamics of living resources need to be better understood for better management decisions. Here, we use a spatially explicit population model of the giant clam (Tridacna maxima) to compare the efficiency of several management strategies under various scenarios of natural mortality, including mass mortality due to climatic anomalies. The model was parameterized with in situ estimates of growth, mortality and fishing effort, and was validated against historical and new in situ surveys of giant clam stocks in two French Polynesia lagoons. Long-run projections (100 years) suggested that the best management strategy was a decrease in fishing pressure through quota implementation, regardless of the mortality regime considered. In contrast, increasing the minimum legal catch size and closing areas to fishing were less efficient. When high mortality occurred due to climate variability, the efficiency of all management scenarios decreased markedly. Simulating El Niño Southern Oscillation events by adding temporal autocorrelation to natural mortality rates increased the natural variability of stocks and further decreased the efficiency of management. These results highlight the difficulties that managers of small Pacific islands can expect in the future in the face of global warming, climate anomalies and new mass mortalities. Copyright © 2017 Elsevier Inc. All rights reserved.
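    The temporal-autocorrelation device described above can be sketched as a lag-1 autoregressive (AR(1)) anomaly added to a baseline mortality rate; the parameter values below are illustrative, not the study's calibration:

    ```python
    import random

    def ar1_mortality(m_base, rho, sigma, years, seed=1):
        """Annual natural mortality with AR(1) temporal autocorrelation:
        anomaly x_t = rho * x_{t-1} + N(0, sigma); rates clipped to [0, 1].
        Parameter values are illustrative, not the study's calibration."""
        rng = random.Random(seed)
        rates, x = [], 0.0
        for _ in range(years):
            x = rho * x + rng.gauss(0.0, sigma)      # persistent climate anomaly
            rates.append(min(1.0, max(0.0, m_base + x)))
        return rates

    rates = ar1_mortality(m_base=0.2, rho=0.8, sigma=0.05, years=100)
    ```

    With rho near 1, bad years cluster into multi-year mortality episodes, which is what degrades stock stability and management efficiency in the simulations.
    
    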

  7. Efficient degradation of rhodamine B using modified graphite felt gas diffusion electrode by electro-Fenton process.

    PubMed

    Tian, Jiangnan; Olajuyin, Ayobami Matthew; Mu, Tingzhen; Yang, Maohua; Xing, Jianmin

    2016-06-01

    The electro-Fenton (EF) treatment of a 0.1 M rhodamine B (RhB) solution was studied with different graphite cathode materials, and graphite felt (GF) was selected as a promising material for further investigation. The degradation performances of a gas diffusion electrode (GDE) and graphite felt (GF) were then compared, and the GDE was confirmed to be more efficient in RhB removal. Operational parameters such as Fe(2+) dosage and current density were optimized, and a comparison among different modification methods (polytetrafluoroethylene-carbon black (PTFE-CB), polytetrafluoroethylene-carbon nanotube (PTFE-CNT), electrodeposition-CB, and electrodeposition-CNT) showed 98.49 % RhB removal by the PTFE-CB-modified cathode in 0.05 M Na2SO4 at a current density of 50 A/m(2) and an air flow rate of 1 L/min after 20 min. After modification with PTFE-CB, the cathode's mineralization efficiency and mineralization current efficiency were also markedly better than those of the pristine one. Cyclic voltammetry, SEM imaging, contact angle and BET surface area measurements demonstrated stronger current responses and higher hydrophilicity of the GF after modification. The biochemical oxygen demand/chemical oxygen demand ratio (BOD5/COD) increased from 0.049 to 0.331 after 90 min of treatment, suggesting the solution had become biodegradable, and the modified cathode was confirmed to be stable over ten cycles of reuse. Finally, a degradation pathway for RhB was proposed.

  8. High efficient waste-to-energy in Amsterdam: getting ready for the next steps.

    PubMed

    Murer, Martin J; Spliethoff, Hartmut; de Waal, Chantal M W; Wilpshaar, Saskia; Berkhout, Bart; van Berlo, Marcel A J; Gohlke, Oliver; Martin, Johannes J E

    2011-10-01

    Waste-to-energy (WtE) plants are traditionally designed for clean and economical disposal of waste. Design for output, on the other hand, was the guideline in designing the HRC (HoogRendement Centrale) block of Afval Energie Bedrijf Amsterdam. Since commissioning of the plant in 2007, operation has continuously improved. In December 2010, the block's one-year running-average subsidy efficiency exceeded 30% for the first time. The plant can increase its efficiency even further by raising the steam temperature to 480°C. In addition, the plant throughput can be increased by 10% to reduce the total cost of ownership. Taking these steps requires good preparation in areas such as the change in heat transfer in the boiler and the resulting higher temperature upstream of the superheaters. A solution was found by combining measured data with a computational fluid dynamics (CFD) model. Suction and acoustic pyrometers are used to obtain a clear picture of the temperature distribution in the first boiler pass. With the help of the CFD model, the change in heat transfer and the vertical temperature distribution were predicted. For the increased load, the temperature rises by 100°C; this implies a higher heat transfer in the first and second boiler passes. Even though the new block was designed beyond the state of the art in waste-to-energy technology, margins remain for pushing energy efficiency and economy even further.

  9. Dynamically linking economic models to ecological condition for coastal zone management: Application to sustainable tourism planning.

    PubMed

    Dvarskas, Anthony

    2017-03-01

    While the development of the tourism industry can bring economic benefits to an area, it is important to consider the long-run impact of the industry on a given location. Particularly when the tourism industry relies upon a certain ecological state, those weighing different development options need to consider the long-run impacts of increased tourist numbers upon measures of ecological condition. This paper presents one approach for linking a model of recreational visitor behavior with an ecological model that estimates the impact of the increased visitors upon the environment. Two simulations were run for the model using initial parameters available from survey data and water quality data for beach locations in Croatia. Results suggest that the resilience of a given tourist location to the changes brought by increasing tourism numbers is important in determining its long-run sustainability. Further work should investigate additional model components, including the tourism industry, refinement of the relationships assumed by the model, and application of the proposed model in additional areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
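    The dynamic linkage idea can be sketched as a pair of coupled difference equations in which visitor numbers erode an ecological-condition index that in turn scales visitor demand. This is a toy illustration only; all parameter values and functional forms below are hypothetical, not the paper's calibrated model:

    ```python
    def simulate(years, q0=1.0, v0=1000.0, growth=0.05, impact=2e-5, recovery=0.1):
        """Toy coupled model: visitors v erode an ecological-condition index q,
        and q in turn scales visitor demand. All parameters are hypothetical."""
        q, v, traj = q0, v0, []
        for _ in range(years):
            q = max(0.0, q + recovery * (q0 - q) - impact * v)  # pressure vs. resilience
            v = v * (1.0 + growth) * (q / q0)                   # demand tracks condition
            traj.append((q, v))
        return traj

    history = simulate(100)  # condition settles below pristine as visitor numbers grow
    ```

    The `recovery` term plays the role of the resilience highlighted in the abstract: the larger it is relative to `impact * v`, the closer the long-run equilibrium stays to the pristine state.
    
    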

  10. Multi-metric measurement of personal exposure to ultrafine particles in selected urban microenvironments

    NASA Astrophysics Data System (ADS)

    Spinazzè, Andrea; Cattaneo, Andrea; Scocca, Damiano R.; Bonzini, Matteo; Cavallo, Domenico M.

    2015-06-01

    At the beginning of the study, our hypothesis was that visiting certain microenvironments (MEs) is one of the most important determinants of personal exposure to ultrafine particles (UFP) and that moving between microenvironments significantly differentiates exposure. The overall aim of this study is to perform relevant exposure measurements to extend our knowledge of environmental exposure to UFP in urban environments. The UFP concentrations in different urban MEs were measured by personal monitoring in repeated sampling campaigns along a fixed route. The measurement runs were performed over one-week periods at different times of day (AM: 08.00-10.30; PM: 16.00-18.30) and repeated in different periods of the year (winter, spring, summer, and autumn) for a total of 56 runs (>110 h). Measurements included on-line monitoring of the UFP particle number concentration (PNC), mean diameter (mean-d) and lung-deposited surface area (LDSA). Additionally, the PNC and particle mass concentration (PMC) profiles for quasi-ultrafine particles (QUFP; PM0.25) were estimated. Significant differences in the PNC, PMC, mean diameter and surface area were observed between seasons, as well as between different times of the day and days of the week. In addition, the UFP concentrations differed between MEs, each with characteristic mean-diameter and surface-area concentrations. In general, the mean particle diameters showed an inverse relationship with the PNC, while the LDSA showed the opposite behaviour. Appreciable differences among all MEs and monitoring periods were observed; the concentration patterns and variations seemed related to the typical sources of urban pollutants (traffic), proximity to sources and time of day. The highest exposures were observed while walking or biking along high-trafficked routes and while using public buses. The UFP exposure levels in modern cars, equipped with high-efficiency filters and in air recirculation mode, were significantly lower.

  11. hybrid\\scriptsize{{MANTIS}}: a CPU-GPU Monte Carlo method for modeling indirect x-ray detectors with columnar scintillators

    NASA Astrophysics Data System (ADS)

    Sharma, Diksha; Badal, Andreu; Badano, Aldo

    2012-04-01

    The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, such as on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only on a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors.
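    The load-balancing idea can be sketched as a greedy dispatcher that hands each batch of optical showers to whichever compute resource frees up first. This is a schematic only (hybridMANTIS itself is CUDA/C code); the batch sizes and throughput rates below are hypothetical:

    ```python
    import heapq

    def balance(batches, rates):
        """Greedy dynamic dispatch sketch (not the hybridMANTIS scheduler):
        each batch of optical showers goes to whichever worker (e.g. GPU vs.
        CPU core) becomes free first. batches: batch sizes; rates: showers/s
        per worker."""
        free_at = [(0.0, w) for w in range(len(rates))]  # (time free, worker) min-heap
        heapq.heapify(free_at)
        assigned = [[] for _ in rates]
        for batch in batches:
            t, w = heapq.heappop(free_at)                # earliest-free worker
            assigned[w].append(batch)
            heapq.heappush(free_at, (t + batch / rates[w], w))
        makespan = max(t for t, _ in free_at)            # when the last worker finishes
        return assigned, makespan

    # A worker 3x faster than the other absorbs 3 of 4 equal batches.
    assigned, makespan = balance([3, 3, 3, 3], [3.0, 1.0])
    ```

    Dispatching to the earliest-free worker is what keeps a fast GPU and slower CPU cores all busy without a static split of the workload.
    
    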

  12. BASINS Framework and Features

    EPA Pesticide Factsheets

    BASINS enables users to efficiently access nationwide environmental databases and local user-specified datasets, apply assessment and planning tools, and run a variety of proven nonpoint loading and water quality models within a single GIS format.

  13. Fault Injection Campaign for a Fault Tolerant Duplex Framework

    NASA Technical Reports Server (NTRS)

    Sacco, Gian Franco; Ferraro, Robert D.; von llmen, Paul; Rennels, Dave A.

    2007-01-01

    Fault tolerance is an efficient approach for avoiding or reducing the damage of a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is a software framework developed by the UCLA group [1, 2] that uses a fault-tolerant approach and allows two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas and restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecraft, where the fault rate produced by cosmic rays is not very high.
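    The duplex scheme can be sketched as compare-and-restart logic: run two replicas, accept only agreeing results, and restart the pair on disagreement. This is a schematic of the idea in a single process, not the UCLA framework's distributed implementation; the fault-injection demo and all names below are invented:

    ```python
    def duplex_run(task, inputs, max_restarts=3):
        """Duplex-style execution sketch (schematic only, not the UCLA Duplex
        Framework): run two replicas of the same computation, accept the
        result when they agree, restart the pair on disagreement."""
        for _ in range(max_restarts + 1):
            a = task(inputs)      # replica 1
            b = task(inputs)      # replica 2
            if a == b:
                return a          # monitor sees consistent results
            # Inconsistency detected: restart both replicas and try again.
        raise RuntimeError("replicas never agreed")

    # Tiny fault-injection demo: the very first replica execution is corrupted.
    calls = {"n": 0}
    def flaky_square(x):
        calls["n"] += 1
        return x * x + (999 if calls["n"] == 1 else 0)  # injected transient fault

    result = duplex_run(flaky_square, 5)  # first pair disagrees, second pair agrees
    ```

    Note that duplex comparison detects a transient fault but cannot tell which replica was wrong; restarting both, as here, is the usual recovery for transient (e.g. cosmic-ray-induced) faults.
    
    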

  14. Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing

    NASA Technical Reports Server (NTRS)

    Fricker, David M.

    1997-01-01

    The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. The tests also explored the effects of scaling the simulation with the number of processors in addition to distributing a constant-size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with Ethernet and ATM networking, plus a shared-memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared-memory machine is fastest and ATM networking provides acceptable performance. The limitations of Ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
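    For reference, the parallel efficiency such benchmarks report is conventionally the speed-up divided by the processor count, E(p) = T(1)/(p * T(p)). A small helper, with hypothetical timings rather than the ALLSPD-3D benchmark numbers:

    ```python
    def parallel_efficiency(t_serial, timings):
        """Strong-scaling speed-up S(p) = T(1)/T(p) and efficiency
        E(p) = S(p)/p from wall-clock timings {processors: runtime}."""
        return {p: (t_serial / tp, t_serial / (p * tp))
                for p, tp in sorted(timings.items())}

    # Hypothetical timings in seconds (not the ALLSPD-3D results).
    result = parallel_efficiency(100.0, {2: 55.0, 4: 31.25, 8: 20.0})
    ```

    Falling E(p) with growing p is exactly the network-bound behavior the abstract describes: communication cost grows relative to per-processor work.
    
    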

  15. Microbial air quality and bacterial surface contamination in ambulances during patient services.

    PubMed

    Luksamijarulkul, Pipat; Pipitsangjan, Sirikun

    2015-03-01

    We sought to assess microbial air quality and bacterial surface contamination on medical instruments and the surrounding areas during 30 ambulance runs. We performed a cross-sectional study of 106 air samples collected from 30 ambulances before patient services and 212 air samples collected during patient services to assess the bacterial and fungal counts at the two time points. Additionally, 226 surface swab samples were collected from medical instrument surfaces and the surrounding areas before and after ambulance runs. Groups or genus of isolated bacteria and fungi were preliminarily identified by Gram stain and lactophenol cotton blue. Data were analyzed using descriptive statistics, t-test, and Pearson's correlation coefficient, with a p-value of less than 0.050 considered significant. The mean and standard deviation of bacterial and fungal counts at the start of ambulance runs were 318±485cfu/m(3) and 522±581cfu/m(3), respectively. Bacterial counts during patient services were 468±607cfu/m(3) and fungal counts were 656±612cfu/m(3). Mean bacterial and fungal counts during patient services were significantly higher than those at the start of ambulance runs, p=0.005 and p=0.030, respectively. For surface contamination, the overall bacterial counts before and after patient services were 0.8±0.7cfu/cm(2) and 1.3±1.1cfu/cm(2), respectively (p<0.001). The predominant isolated bacteria and fungi were Staphylococcus spp. and Aspergillus spp., respectively. Additionally, there was a significantly positive correlation between bacterial (r=0.3, p<0.010) and fungal counts (r=0.2, p=0.020) in air samples and bacterial counts on medical instruments and allocated areas. This study revealed high microbial contamination (bacterial and fungal) in ambulance air during services and higher bacterial contamination on medical instrument surfaces and allocated areas after ambulance services compared to the start of ambulance runs. Additionally, bacterial and fungal counts in ambulance air showed a significantly positive correlation with the bacterial surface contamination on medical instruments and allocated areas. Further studies should be conducted to determine the optimal intervention to reduce microbial contamination in the ambulance environment.

  16. Changes in plantar loading based on shoe type and sex during a jump-landing task.

    PubMed

    Debiasio, Justin C; Russell, Mary E; Butler, Robert J; Nunley, James A; Queen, Robin M

    2013-01-01

    Metatarsal stress fractures are common in cleated-sport athletes. Previous authors have shown that plantar loading varies with footwear, sex, and the athletic task. To examine the effects of shoe type and sex on plantar loading in the medial midfoot (MMF), lateral midfoot (LMF), medial forefoot (MFF), middle forefoot (MidFF), and lateral forefoot (LFF) during a jump-landing task. Crossover study. Laboratory. Twenty-seven recreational athletes (14 men, 13 women) with no history of lower extremity injury in the last 6 months and no history of foot or ankle surgery. The athletes completed 7 jumping trials while wearing bladed-cleat, turf-cleat, and running shoes. Maximum force, contact area, contact time, and the force-time integral were analyzed in each foot region. We calculated 2 × 3 analyses of variance (α = .05) to identify shoe-condition and sex differences. We found no shoe × sex interactions, but the MMF, LMF, MFF, and LFF force-time integrals were greater in men (P < .03). The MMF maximum force was less with the bladed-cleat shoes (P = .02). Total foot and MidFF maximum force was less with the running shoes (P < .01). The MFF and LFF maximum forces were different among all shoe conditions (P < .01). Total foot contact area was less in the bladed-cleat shoes (P = .01). The MMF contact area was greatest in the running shoes (P < .01). The LFF contact area was less in the running shoes (P = .03). The MFF and LFF force-time integrals were greater with the bladed-cleat shoes (P < .01). The MidFF force-time integral was less in the running shoes (P < .01). Independent of shoe, men and women loaded the foot differently during a jump landing. The bladed cleat increased forefoot loading, which may increase the risk for forefoot injury. The type of shoe should be considered when choosing footwear for athletes returning to activity after metatarsal stress fractures.

  17. An enhanced Ada run-time system for real-time embedded processors

    NASA Technical Reports Server (NTRS)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.
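    The "efficient and precise periodic task execution" supported by the run-time rests on drift-free release-time arithmetic: the next release is computed from the previous release, not from "now", so per-cycle jitter does not accumulate. A minimal sketch (illustrative only; the actual system is an Ada run-time, and the names `run_periodic` and `period_s` are invented here):

    ```python
    import time

    def run_periodic(task, period_s, cycles):
        """Drift-free periodic dispatch sketch: release times advance by a
        fixed period from the first release, so jitter in one cycle does not
        accumulate into later cycles."""
        release = time.monotonic()
        for _ in range(cycles):
            task()
            release += period_s                  # absolute next release time
            delay = release - time.monotonic()
            if delay > 0:
                time.sleep(delay)                # on overrun, start the next cycle at once

    ticks = []
    run_periodic(lambda: ticks.append(time.monotonic()), 0.01, 5)
    ```

    Writing `release += period_s` instead of `release = now + period_s` is the whole point: with the latter, each cycle's scheduling latency would be added to every subsequent release time.
    
    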

  18. The kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats: calibration to the indirect immunofluorescence assay and computerized standardization of results through normalization to control values.

    PubMed Central

    Barlough, J E; Jacobson, R H; Downing, D R; Lynch, T J; Scott, F W

    1987-01-01

    The computer-assisted, kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats was calibrated to the conventional indirect immunofluorescence assay by linear regression analysis and computerized interpolation (generation of "immunofluorescence assay-equivalent" titers). Procedures were developed for normalization and standardization of kinetics-based enzyme-linked immunosorbent assay results through incorporation of five different control sera of predetermined ("expected") titer in daily runs. When used with such sera and with computer assistance, the kinetics-based enzyme-linked immunosorbent assay minimized both within-run and between-run variability while allowing also for efficient data reduction and statistical analysis and reporting of results. PMID:3032390

  19. The kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats: calibration to the indirect immunofluorescence assay and computerized standardization of results through normalization to control values.

    PubMed

    Barlough, J E; Jacobson, R H; Downing, D R; Lynch, T J; Scott, F W

    1987-01-01

    The computer-assisted, kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats was calibrated to the conventional indirect immunofluorescence assay by linear regression analysis and computerized interpolation (generation of "immunofluorescence assay-equivalent" titers). Procedures were developed for normalization and standardization of kinetics-based enzyme-linked immunosorbent assay results through incorporation of five different control sera of predetermined ("expected") titer in daily runs. When used with such sera and with computer assistance, the kinetics-based enzyme-linked immunosorbent assay minimized both within-run and between-run variability while allowing also for efficient data reduction and statistical analysis and reporting of results.
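    The calibration and normalization steps described above amount to simple arithmetic: a least-squares line mapping ELISA kinetic values to (log) IFA titers, and a scaling of each run by the ratio of expected to observed control values. A minimal sketch of that arithmetic; the data points and coefficients below are invented, not the published calibration:

    ```python
    def fit_calibration(elisa, log2_titer):
        """Least-squares line log2(titer) = a + b * elisa_value, fitted on
        control sera of predetermined ("expected") titer. Schematic of the
        calibration idea; the data here are invented."""
        n = len(elisa)
        mx, my = sum(elisa) / n, sum(log2_titer) / n
        b = sum((x - mx) * (y - my) for x, y in zip(elisa, log2_titer)) / \
            sum((x - mx) ** 2 for x in elisa)
        return my - b * mx, b

    def ifa_equivalent_titer(elisa_value, a, b):
        # Interpolate an "immunofluorescence assay-equivalent" titer.
        return 2 ** (a + b * elisa_value)

    def normalize(run_value, run_controls, expected_controls):
        # Scale a sample's kinetic value by the ratio of expected to observed
        # control means, damping between-run variability.
        factor = (sum(expected_controls) / len(expected_controls)) / \
                 (sum(run_controls) / len(run_controls))
        return run_value * factor

    a, b = fit_calibration([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
    ```

    Fitting in log2 space reflects that serial two-fold dilutions make titers geometric rather than linear.
    
    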

  20. Evaluation of the discrete vortex wake cross flow model using vector computers. Part 1: Theory and application

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The objective of the current program was to modify a discrete vortex wake method to efficiently compute the aerodynamic forces and moments on high-fineness-ratio bodies (f approximately 10.0). The approach is to increase computational efficiency by structuring the program to take advantage of new computer vector software and by developing new algorithms where vector software cannot be used efficiently. An efficient program was written and substantial savings achieved. Several test cases were run for fineness ratios up to f = 16.0 and angles of attack up to 50 degrees.

  1. Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter Data

    NASA Astrophysics Data System (ADS)

    Burghgrave, Blake; ATLAS Collaboration

    2017-10-01

    An overview is presented of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012 and 100% in 2015, after the detector maintenance in 2013-2014. Changes to detector status or calibrations are entered into the conditions database (DB) during a brief calibration loop between the end of a run and the beginning of bulk processing of data collected in it. Bulk processed data are reviewed and certified for the ATLAS Good Run List if no problem is detected. Experts maintain the tools used by DQ shifters and the calibration teams during normal operation, and prepare new conditions for data reprocessing and Monte Carlo (MC) production campaigns. Conditions data are stored in 3 databases: Online DB, Offline DB for data and a special DB for Monte Carlo. Database updates can be performed through a custom-made web interface.
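    The quoted DQ efficiency is, in essence, the good-for-physics fraction of recorded data; a toy sketch of that bookkeeping, luminosity-weighted and with hypothetical numbers (the real Good Run List machinery is far more detailed):

    ```python
    def dq_efficiency(lumiblocks):
        """Luminosity-weighted good-for-physics fraction, in percent; a toy
        version of Good Run List accounting with hypothetical numbers.
        lumiblocks: list of (luminosity, is_good) pairs."""
        total = sum(lumi for lumi, _ in lumiblocks)
        good = sum(lumi for lumi, ok in lumiblocks if ok)
        return 100.0 * good / total

    # Hypothetical run: one small luminosity block flagged bad.
    print(dq_efficiency([(10.0, True), (10.0, True), (0.08, False), (9.92, True)]))
    ```
    
    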

  2. 40 CFR 63.3544 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the coating operation during the capture efficiency test run, kg. TVHi = Mass fraction of TVH in... the mass of liquid TVH in materials used in the coating operation to the mass of TVH emissions not... 40 CFR part 51. (2) Use Method 204A or 204F of appendix M to 40 CFR part 51 to determine the mass...

  3. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... approach follows: 4.3A source conducts an initial series of at least three runs. The owner or operator may... Confidence Limit Approaches for Alternative Capture Efficiency Protocols and Test Methods A Appendix A to... to Subpart KK of Part 63—Data Quality Objective and Lower Confidence Limit Approaches for Alternative...

  4. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... approach follows: 4.3A source conducts an initial series of at least three runs. The owner or operator may... Confidence Limit Approaches for Alternative Capture Efficiency Protocols and Test Methods A Appendix A to... to Subpart KK of Part 63—Data Quality Objective and Lower Confidence Limit Approaches for Alternative...

  5. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of the LCL approach follows: 4.3A source conducts an initial series of at least three runs. The owner... Confidence Limit Approaches for Alternative Capture Efficiency Protocols and Test Methods A Appendix A to... to Subpart KK of Part 63—Data Quality Objective and Lower Confidence Limit Approaches for Alternative...

  6. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... approach follows: 4.3A source conducts an initial series of at least three runs. The owner or operator may... Confidence Limit Approaches for Alternative Capture Efficiency Protocols and Test Methods A Appendix A to... to Subpart KK of Part 63—Data Quality Objective and Lower Confidence Limit Approaches for Alternative...

  7. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  8. FY 1993 report on aluminum-nitrate testing at the ETF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodman, M.D.D.; Wise, M.D.

    1993-09-30

    This report summarizes the progress of the Aluminum Nitrate Nonahydrate (ANN) testing program at the F/H-Area Effluent Treatment Facility (ETF) for Fiscal Year 1993. Three tests were conducted in the months of February, April, and September. The tests yielded data that validated earlier conclusions that the addition of ANN to non-routine feed has a positive effect on the performance of ETF's submicron filtration unit. Performance was observed to increase by 30-309%, depending on the season. The data also support SRTC's earlier conclusion that an optimal aluminum concentration exists in the range of 30-40 ppm, and that concentrations above this range begin to retard filtration performance. A rudimentary mathematical model to predict Stage 1 flux was also developed during FY93. The model allowed for a more concise comparison of filter test runs and increased the efficiency of the testing program by allowing shorter test runs to be conducted. It is postulated that the model can be further optimized to include aluminum concentration and time of year as independent variables that determine Stage 1 flux. Such a model should unequivocally prove the merits of pretreating ETF's wastewater with aluminum nitrate. To proceed with the development of the model, further testing is proposed with stringent control of the aluminum concentration in the feed. In order to account for seasonal effects, one test should be conducted each month during Fiscal Year 1994. High Level Waste Engineering requests permission to conduct these test runs according to the following schedule: conduct tests in even-numbered months, beginning with October, with routine influent as it is collected from normal process sewer influents, and conduct tests in odd-numbered months, beginning with November, with non-routine feed from the H-Retention Basin.

  9. PCA and multidimensional visualization techniques united to aid in the bioindication of elements from transplanted Sphagnum palustre moss exposed in the Gdańsk City area.

    PubMed

    Astel, Aleksander; Astel, Karolina; Biziuk, Marek

    2008-01-01

    During the last decades, a technique for assessing atmospheric deposition of heavy elements has been developed, based on the principle that moss samples accumulate elements and airborne particles from rain, melting snow and dry deposition. Despite broad interest in bioindication, work is still ongoing on a standard procedure that would allow comparison of research carried out in different areas. This is why the comparison of living and dry moss of the same species and growth site seems interesting, logical and promising. The most reliable approach appears to be bioindication combined with multivariate statistics and efficient visualization techniques in the interpretation of monitoring data. The aims of this study were: (i) to present the cumulative properties of transplanted Sphagnum palustre moss, differentiating between dry and living biomaterial; (ii) to determine and geographically locate the types of pollution sources responsible for the structure of the monitoring data set; (iii) to visualize the geographical distribution of analytes in the Gdańsk metropolitan area and to identify the high-risk areas that can be targeted for environmental hazards and public health. A six-month air pollution study based on Sphagnum palustre bioindication is presented and a simplified procedure of the experiment is given. The study area was located at the mouth of the Vistula River on the Baltic Sea, in Gdańsk City (Poland). Sphagnum palustre was selected for the research because of its extraordinary morphological properties and the ease with which it can be raised. The capability of dry and living moss to accumulate elements characteristic of anthropogenic and natural sources was shown by application of Principal Component Analysis. The high-risk areas and pollution profiles were detected and visualized using surface maps based on a Kriging algorithm.
The original selection of elements included all those that could be reliably determined by Neutron Activation Analysis in moss samples. Variables were eliminated when element concentrations in moss were below the reported INAA detection limits for most observations, or when a particular element showed no variation. Eighteen elements: Na, Ca, Sc, Fe, Co, Zn, As, Br, Mo, Sb, Ba, La, Ce, Sm, Yb, Lu, Hf, Th, were selected for the research presented. Two runs of PCA were performed since, in the first run, a heavily polluted location (Stogi - 'Sto'), treated as an outlier in terms of the PCA approach, was detected; results were presented in the form of block diagrams and surface maps. As follows from the first-run PCA, the factor layout for both indicators is similar but not identical, owing to differences in the element accumulation mechanisms. Three latent factors ('phosphatic fertilizer plant impact', 'urban impact' and 'marine impact') explain over 89% and 82% of the total variance for dry and living moss, respectively. In the second-run PCA, three latent factors are responsible for the data structure in both moss materials. However, in the case of dry moss these factors explain 85% of the total variance but are rather hard to interpret. Living moss, on the other hand, shows the same pattern as in the first-run PCA; three latent factors explain over 84% of the total variance in this case. The pollution profiles extracted in the PCA of dry moss data differ greatly between the two runs, while no deterioration was found after removal of Stogi from the data set in the case of living moss. Performance of the second-run PCA with Stogi excluded as a heavily polluted location led to the conclusion that living moss has better indication properties than dry moss.
While using moss as a wet and dry deposition sampler it is not possible to calculate deposition values, since the real volume of collected water and dust is hard to estimate owing to a splash effect and the irregular surface. Therefore, accumulation values seem reasonable for moss-based air pollution surveys. Both biomaterials, dry and living Sphagnum palustre, show cumulative properties for the elements of interest. Dry moss holds atmospheric particles very loosely, and these can easily be lost when rainwater rinses through the exposed dry moss material. Living moss may, on the contrary, incorporate the elements into its tissue, thus being less susceptible to rinsing and better reflecting the atmospheric conditions. Despite the differences in element uptake and retention capabilities, dry and living moss reflect characteristic anthropogenic and natural profiles. Visible differences in the impacts' map coverage exist mostly because of the accumulation mechanisms differentiating dry from living moss. However, for each indicator the 'phosphatic fertilizer plant impact' is recognized as the strongest pollution source in the examined region. General types of pollution sources responsible for the structure of the monitoring data set were determined, classified as high-risk/low-risk areas, and visualized in the form of geographic distribution maps. These locations can be targeted for environmental hazards and public health. Chemometric results in the form of easily interpreted surface maps can become a powerful instrument in the hands of decision-makers working in the field of sustainable development.
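    The PCA step described above can be sketched with a plain SVD on standardized concentrations. This is a generic illustration, not the authors' code; the moss data below are synthetic placeholders, not the Gdańsk measurements:

    ```python
    import numpy as np

    def pca(X):
        """PCA via SVD on standardized data.

        X: (n_samples, n_elements) matrix of element concentrations.
        Returns (loadings, scores, explained_variance_ratio).
        """
        Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each element
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        var = s ** 2 / (len(X) - 1)                    # component variances
        return Vt.T, U * s, var / var.sum()

    # Synthetic example: 30 moss samples, 5 elements driven by 2 latent sources
    rng = np.random.default_rng(0)
    sources = rng.normal(size=(30, 2))
    mixing = rng.normal(size=(2, 5))
    X = sources @ mixing + 0.1 * rng.normal(size=(30, 5))

    loadings, scores, evr = pca(X)
    print(evr[:2].sum())   # first two components capture most of the variance
    ```

    Interpreting each retained component as a "latent factor" (e.g. a fertilizer-plant or marine profile) then rests on inspecting which elements load heavily on it.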

  10. In-shoe plantar pressure distribution during running on natural grass and asphalt in recreational runners.

    PubMed

    Tessutti, Vitor; Trombini-Souza, Francis; Ribeiro, Ana Paula; Nunes, Ana Luiza; Sacco, Isabel de Camargo Neves

    2010-01-01

    The type of surface used for running can influence the load that the locomotor apparatus will absorb and the load distribution could be related to the incidence of chronic injuries. As there is no consensus on how the locomotor apparatus adapts to loads originating from running surfaces with different compliance, the objective of this study was to investigate how loads are distributed over the plantar surface while running on natural grass and on a rigid surface--asphalt. Forty-four adult runners with 4+/-3 years of running experience were evaluated while running at 12 km/h for 40 m wearing standardised running shoes and Pedar insoles (Novel). Peak pressure, contact time and contact area were measured in six regions: lateral, central and medial rearfoot, midfoot, lateral and medial forefoot. The surfaces and regions were compared by three ANOVAs (2 x 6). Asphalt and natural grass were statistically different in all variables. Higher peak pressures were observed on asphalt at the central (p<0.001) [grass: 303.8(66.7)kPa; asphalt: 342.3(76.3)kPa] and lateral rearfoot (p<0.001) [grass: 312.7(75.8)kPa; asphalt: 350.9(98.3)kPa] and lateral forefoot (p<0.001) [grass: 221.5(42.9)kPa; asphalt: 245.3(55.5)kPa]. For natural grass, contact time and contact area were significantly greater at the central rearfoot (p<0.001). These results suggest that natural grass may be a surface that provokes lighter loads on the rearfoot and forefoot in recreational runners. Copyright (c) 2008 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  11. Mechanical work and efficiency of 5 + 5 m shuttle running.

    PubMed

    Zamparo, Paola; Pavei, Gaspare; Nardello, Francesca; Bartolini, Davide; Monte, Andrea; Minetti, Alberto E

    2016-10-01

    Acceleration and deceleration phases characterise shuttle running (SR) compared to constant-speed running (CR); mechanical work is thus expected to be larger in the former than in the latter, at the same average speed (v_mean). The aim of this study was to measure total mechanical work (Wtot(+), J kg(-1) m(-1)) during SR as the sum of internal (Wint(+)) and external (Wext(+)) work and to calculate the efficiency of SR. Twenty males were requested to perform shuttle runs over a distance of 5 + 5 m at different speeds (slow, moderate and fast) to record kinematic data. Metabolic data were also recorded (at fast speed only) to calculate energy cost (C, J kg(-1) m(-1)) and mechanical efficiency (eff(+) = Wtot(+) C(-1)) of SR. Work parameters significantly increased with speed (P < 0.001): Wext(+) = 1.388 + 0.337 v_mean; Wint(+) = -1.002 + 0.853 v_mean; Wtot(+) = 1.329 v_mean. At the fastest speed C was 27.4 ± 2.6 J kg(-1) m(-1) (i.e. about 7 times larger than in CR) and eff(+) was 16.2 ± 2.0 %. Wext(+) is larger in SR than in CR (2.5 vs. 1.4 J kg(-1) m(-1) in the range of investigated speeds: 2-3.5 m s(-1)) and Wint(+), at fast speed, is about half of Wtot(+). eff(+) is lower in SR (16 %) than in CR (50-60 % at comparable speeds) and this can be attributed to lower elastic energy reutilization due to the acceleration/deceleration phases over this short shuttle distance.
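    A minimal sketch of the reported relationships, using the regression equations and fast-speed energy cost quoted in the abstract. The fast-speed v_mean of 3.4 m/s is an assumption (within the investigated 2-3.5 m/s range), not a value stated in the text:

    ```python
    # Work terms in J kg^-1 m^-1, v_mean in m/s (regressions from the abstract)
    def w_ext(v_mean):
        """External work."""
        return 1.388 + 0.337 * v_mean

    def w_int(v_mean):
        """Internal work."""
        return -1.002 + 0.853 * v_mean

    def w_tot(v_mean):
        """Total work, reported directly as 1.329 * v_mean."""
        return 1.329 * v_mean

    def efficiency(v_mean, cost):
        """Mechanical efficiency eff+ = W+tot / C."""
        return w_tot(v_mean) / cost

    # Fast-speed example: C = 27.4 J kg^-1 m^-1 (abstract), assumed v_mean = 3.4 m/s
    print(round(efficiency(3.4, 27.4), 3))   # -> 0.165, close to the reported 16.2 %
    ```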

  12. The antidepressant effect of running is associated with increased hippocampal cell proliferation.

    PubMed

    Bjørnebekk, Astrid; Mathé, Aleksander A; Brené, Stefan

    2005-09-01

    A common trait of antidepressant drugs, electroconvulsive treatment and physical exercise is that they relieve depression and up-regulate neurotrophic factors as well as cell proliferation and neurogenesis in the hippocampus. In order to identify possible biological underpinnings of depression and the antidepressant effect of running, we analysed cell proliferation, the level of the neurotrophic factor BDNF in hippocampus and dynorphin in striatum/accumbens in 'depressed' Flinders Sensitive Line (FSL) rats and Flinders Resistant Line (FRL) rats with and without access to running-wheels. The FRL strain exhibited a higher daily running activity than the FSL strain. Wheel-running had an antidepressant effect in the 'depressed' FSL rats, as indicated by the forced swim test. In the hippocampus, cell proliferation was lower in the 'depressed' rats compared to the control FRL rats, but there was no difference in BDNF or dynorphin levels in striatum/accumbens. After 5 wk of running, cell proliferation increased in FSL but not in FRL rats. BDNF and dynorphin mRNA levels were increased in FRL rats but not to the same extent in the FSL rats; thus, increased BDNF and dynorphin levels were correlated with the running activity but not with the antidepressant effect of running. The only parameter that was associated with the basal level of 'depression' and with the antidepressant effect was cell proliferation in the hippocampus. Thus, suppression of cell proliferation in the hippocampus could constitute one of the mechanisms that underlie depression, and physical activity might be an efficient antidepressant.

  13. Older Runners Retain Youthful Running Economy Despite Biomechanical Differences

    PubMed Central

    Beck, Owen N.; Kipp, Shalaya; Roby, Jaclyn M.; Grabowski, Alena M.; Kram, Rodger; Ortega, Justus D.

    2015-01-01

    Purpose Sixty-five years of age typically marks the onset of impaired walking economy. However, running economy has not been assessed beyond the age of 65 years. Furthermore, a critical determinant of running economy is the spring-like storage and return of elastic energy from the leg during stance, which is related to leg stiffness. Therefore, we investigated whether runners over the age of 65 years retain youthful running economy and/or leg stiffness across running speeds. Methods Fifteen young and fifteen older runners ran on a force-instrumented treadmill at 2.01, 2.46, and 2.91 m·s−1. We measured their rates of metabolic energy consumption (i.e. metabolic power), ground reaction forces, and stride kinematics. Results There were only small differences in running economy between young and older runners across the range of speeds. Statistically, the older runners consumed 2–9% less metabolic energy than the young runners across speeds (p=0.012). Also, the leg stiffness of older runners was 10–20% lower than that of young runners across the range of speeds (p=0.002) and, in contrast to the younger runners, the leg stiffness of older runners decreased with speed (p<0.001). Conclusion Runners beyond 65 years of age maintain youthful running economy despite biomechanical differences. It may be that vigorous exercise, such as running, prevents the age-related deterioration of muscular efficiency, and therefore may make everyday activities easier. PMID:26587844

  14. Housing conditions modulate escitalopram effects on antidepressive-like behaviour and brain neurochemistry.

    PubMed

    Bjørnebekk, Astrid; Mathé, Aleksander A; Gruber, Susanne H M; Brené, Stefan

    2008-12-01

    Despite limited understanding of the pathophysiology of depression and the underlying mechanisms mediating antidepressant effects, there are several efficient treatments. The anhedonia symptoms of depression are characterized by decreased motivation and drive and imply possible malfunctioning of the mesolimbic dopamine system, whereas cognitive deficits might reflect decreased plasticity in hippocampus. In female Flinders Sensitive Line (FSL) rats, a model of depression, we compared the effects of three long-term antidepressant treatments: voluntary running, escitalopram, and the combination of both, on antidepressant-like behaviour in the Porsolt swim test (PST), and on regulation of mRNA for dopamine and neuropeptides in striatal dopamine pathways and brain-derived neurotrophic factor (BDNF) in hippocampus. Escitalopram diet attenuated running behaviour in FSL rats but not in non-depressed control rats. In the PST the running group had increased climbing activity (noradrenergic/dopaminergic response), whereas the combination of escitalopram and running-wheel access increased swimming (serotonergic response). Running elevated mRNA for dynorphin in caudate putamen and BDNF in hippocampus. The combined treatment down-regulated D1 receptor and enkephalin mRNA in accumbens. Escitalopram alone did not affect behaviour or mRNA levels. We demonstrate a novel behavioural effect of escitalopram, i.e. attenuation of running in 'depressed' rats. The antidepressant-like effect of escitalopram was dependent on the presence of a running wheel, but not on actual running, indicating that the environment influenced the antidepressant effect of escitalopram. Different patterns of mRNA changes in hippocampus and brain reward pathways and responses in the PST by running and escitalopram suggest that antidepressant-like responses by running and escitalopram are achieved by different mechanisms.

  15. Short-run and long-run effects of unemployment on suicides: does welfare regime matter?

    PubMed

    Gajewski, Pawel; Zhukovska, Kateryna

    2017-12-01

    Disentangling the immediate effects of an unemployment shock from the long-run relationship has a strong theoretical rationale. Different economic and psychological forces are at play in the first moment and after prolonged unemployment. This study suggests a diverse impact of short- and long-run unemployment on suicides in liberal and social-democratic countries. We take a macro-level perspective and simultaneously estimate the short- and long-run relationships between unemployment and suicide, along with the speed of convergence towards the long-run relationship after a shock, in a panel of 10 high-income countries. We also account for unemployment benefit spending, the share of the population aged 15-34, and the crisis effects. In the liberal group of countries, only a long-run impact of unemployment on suicides is found to be significant (P = 0.010). In social-democratic countries, suicides are associated with initial changes in unemployment (P = 0.028), but the positive link fades over time and becomes insignificant in the long run. Further, crisis effects are a much stronger determinant of suicides in social-democratic countries. Once the broad welfare regime is controlled for, changes in unemployment-related spending do not matter for preventing suicides. A generous welfare system seems efficient at preventing unemployment-related suicides in the long run, but societies in social-democratic countries might be less psychologically immune to sudden negative changes in their professional lives compared with people in liberal countries. Accounting for the different short- and long-run effects could thus improve our understanding of the unemployment-suicide link. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  16. The Risks and Benefits of Running Barefoot or in Minimalist Shoes

    PubMed Central

    Perkins, Kyle P.; Hanney, William J.; Rothschild, Carey E.

    2014-01-01

    Context: The popularity of running barefoot or in minimalist shoes has recently increased because of claims of injury prevention, enhanced running efficiency, and improved performance compared with running in shoes. Potential risks and benefits of running barefoot or in minimalist shoes have yet to be clearly defined. Objective: To determine the methodological quality and level of evidence pertaining to the risks and benefits of running barefoot or in minimalist shoes. Data Sources: In September 2013, a comprehensive search of the Ovid MEDLINE, SPORTDiscus, and CINAHL databases was performed by 2 independent reviewers. Study Selection: Included articles were obtained from peer-reviewed journals in the English language with no limit for year of publication. Final inclusion criteria required at least 1 of the following outcome variables: pain, injury rate, running economy, joint forces, running velocity, electromyography, muscle performance, or edema. Study Design: Systematic review. Level of Evidence: Level 3. Data Extraction: Two reviewers appraised each article using the Downs and Black checklist and appraised each for level of evidence. Results: Twenty-three articles met the criteria for this review. Of 27 possible points on the Downs and Black checklist, articles scored between 13 and 19 points, indicating a range of evidence from very limited to moderate. Moderate evidence supports the following biomechanical differences when running barefoot versus in shoes: overall less maximum vertical ground reaction forces, less extension moment and power absorption at the knee, less foot and ankle dorsiflexion at ground contact, less ground contact time, shorter stride length, increased stride frequency, and increased knee flexion at ground contact. Conclusion: Because of lack of high-quality evidence, no definitive conclusions can be drawn regarding specific risks or benefits to running barefoot, shod, or in minimalist shoes. PMID:25364479

  17. The risks and benefits of running barefoot or in minimalist shoes: a systematic review.

    PubMed

    Perkins, Kyle P; Hanney, William J; Rothschild, Carey E

    2014-11-01

    The popularity of running barefoot or in minimalist shoes has recently increased because of claims of injury prevention, enhanced running efficiency, and improved performance compared with running in shoes. Potential risks and benefits of running barefoot or in minimalist shoes have yet to be clearly defined. To determine the methodological quality and level of evidence pertaining to the risks and benefits of running barefoot or in minimalist shoes. In September 2013, a comprehensive search of the Ovid MEDLINE, SPORTDiscus, and CINAHL databases was performed by 2 independent reviewers. Included articles were obtained from peer-reviewed journals in the English language with no limit for year of publication. Final inclusion criteria required at least 1 of the following outcome variables: pain, injury rate, running economy, joint forces, running velocity, electromyography, muscle performance, or edema. Systematic review. Level 3. Two reviewers appraised each article using the Downs and Black checklist and appraised each for level of evidence. Twenty-three articles met the criteria for this review. Of 27 possible points on the Downs and Black checklist, articles scored between 13 and 19 points, indicating a range of evidence from very limited to moderate. Moderate evidence supports the following biomechanical differences when running barefoot versus in shoes: overall less maximum vertical ground reaction forces, less extension moment and power absorption at the knee, less foot and ankle dorsiflexion at ground contact, less ground contact time, shorter stride length, increased stride frequency, and increased knee flexion at ground contact. Because of lack of high-quality evidence, no definitive conclusions can be drawn regarding specific risks or benefits to running barefoot, shod, or in minimalist shoes.

  18. Scalar versus fermionic top partner interpretations of tt̄+ET^miss searches at the LHC

    NASA Astrophysics Data System (ADS)

    Kraml, Sabine; Laa, Ursula; Panizzi, Luca; Prager, Hugo

    2016-11-01

    We assess how different ATLAS and CMS searches for supersymmetry in the tt̄+ET^miss final state at Run 1 of the LHC constrain scenarios with a fermionic top partner and a dark matter candidate. We find that the efficiencies of these searches in all-hadronic, 1-lepton and 2-lepton channels are quite similar for scalar and fermionic top partners. Therefore, in general, efficiency maps for stop-neutralino simplified models can also be applied to fermionic top-partner models, provided the narrow width approximation holds in the latter. Owing to the much higher production cross-sections of heavy top quarks as compared to stops, masses up to m_T ≈ 850 GeV can be excluded from the Run 1 stop searches. Since the simplified-model results published by ATLAS and CMS do not extend to such high masses, we provide our own efficiency maps obtained with CheckMATE and MadAnalysis 5 for these searches. Finally, we also discuss how generic gluino/squark searches in multi-jet final states constrain heavy top partner production.
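    The way an efficiency map is typically applied in such a recast can be sketched as expected yield = cross-section × luminosity × efficiency, compared against the observed upper limit on signal events in the relevant signal region. All numbers below are illustrative placeholders, not values from the paper:

    ```python
    # Recast sketch: a mass point is excluded if the expected signal yield
    # in a signal region exceeds the observed 95% CL upper limit on events.

    def expected_events(sigma_pb, lumi_fb, eff):
        """Expected signal events: cross-section [pb] x luminosity [fb^-1] x efficiency.

        1 pb = 1000 fb, hence the factor of 1000."""
        return sigma_pb * 1000.0 * lumi_fb * eff

    def excluded(sigma_pb, lumi_fb, eff, ul_events):
        """True if the point is excluded by this signal region."""
        return expected_events(sigma_pb, lumi_fb, eff) > ul_events

    # Illustrative: a 0.01 pb signal, 20 fb^-1 of Run 1 data, 5% efficiency
    n = expected_events(0.01, 20.0, 0.05)   # about 10 expected events
    print(n, excluded(0.01, 20.0, 0.05, 5.0))
    ```

    Tools such as CheckMATE and MadAnalysis 5 automate this comparison over many signal regions and pick the most sensitive one.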

  19. Segmentation, dynamic storage, and variable loading on CDC equipment

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.

    1980-01-01

    Techniques for varying the segmented load structure of a program and for varying the dynamic storage allocation, depending upon whether a batch type or interactive type run is desired, are explained and demonstrated. All changes are based on a single data input to the program. The techniques involve: code within the program to suppress scratch pad input/output (I/O) for a batch run or translate the in-core data storage area from blank common to the end-of-code+1 address of a particular segment for an interactive run; automatic editing of the segload directives prior to loading, based upon data input to the program, to vary the structure of the load for interactive and batch runs; and automatic editing of the load map to determine the initial addresses for in-core data storage for an interactive run.

  20. Run Island, Indonesia

    NASA Image and Video Library

    2017-11-28

    In 1667, the Dutch exchanged Run Island (left-most in the image) with the British for Manhattan (renamed from New Amsterdam to New York). Run Island is one of the smallest, and western-most, of the Banda Islands, part of the Malukus, Indonesia. At the time it was the only source of the incredibly valuable spices nutmeg and mace. The image was acquired January 5, 2016, covers an area of 15.7 by 34.8 kilometers, and is located at 4.5 degrees south, 129.7 degrees east. https://photojournal.jpl.nasa.gov/catalog/PIA22133

  1. SAVAGE RUN WILDERNESS, WYOMING.

    USGS Publications Warehouse

    McCallum, M.E.; Kluender, Steven E.

    1984-01-01

    Mineral evaluation and related surveys were conducted in the Savage Run Wilderness in Wyoming and results of these studies indicate probable mineral-resource potential in four areas. Gold and (or) silver mineralization in veins associated with faults was found in two areas; all known occurrences inside the wilderness are very small in size. Slightly anomalous values of platinum, palladium, and nickel were recorded from rock-chip and stream- sediment samples from the southeast portion of the wilderness where layered mafic rocks predominate, and a probable resource potential exists for platinum, palladium, and nickel. An area of sheared rocks in the northeastern corner of the wilderness has a probable resource potential for copper. The nature of the geologic terrane precludes the occurrence of organic fuels.

  2. Study of characteristic of tsunami base on the coastal morphology in north Donggala, Central Sulawesi

    NASA Astrophysics Data System (ADS)

    Rahmadaningsi, W. S. N.; Assegaf, A. H.; Setyonegoro, W.; Paharuddin

    2018-03-01

    The northern arm of Sulawesi has the potential to generate earthquakes and tsunamis owing to the subduction zone in the Sulawesi Sea, making North Donggala an area of active seismicity. One such event was the Toli-Toli earthquake and tsunami of 1996 (M 7.9), which killed 9 people and caused severe damage in Tonggolobibi, Siboang, and Balukang. This earthquake induced a tsunami run-up of 3.4 m and inundation as far as 400 meters inland. The aims of this study are to predict the run-up and inundation area using a numerical model and to determine the characteristics of the tsunami wave on straight, bay and cape coastal morphologies and on different coastal slopes. The data in this research consist of Etopo2 bathymetry data obtained from NOAA (National Oceanic and Atmospheric Administration), the focal mechanism of the Toli-Toli main earthquake of 1 January 1996 from the GCMT (Global Centroid Moment Tensor) catalogue, SRTM (Shuttle Radar Topography Mission) 30 m data, and 1996 land cover data from the Ministry of Environment and Forestry. A single-fault model is used to predict the height of the tsunami run-up and the inundation area along the Donggala coast. Reviewed in terms of coastal morphology, bay-type coastlines show higher run-up than cape-type and straight coastlines. The results also show that the coastal slope has a negative (inverse) correlation with tsunami run-up and inundation area.

  3. Synthesis of Conductive Polymeric Nanocomposites for Applications in Responsive Materials

    NASA Astrophysics Data System (ADS)

    Chavez, Jessica

    The development of next generation "smart" textiles has emerged with significant interest due to the immense demand for high-performance wearable technology. The economic market for wearable technologies is predicted to increase significantly in both volume and value. In the next four years, the wearable technology market will be valued at $34 billion. This large demand has opened up a new research area involving smart wearable devices and conductive fabrics. Many research groups have taken various paths to study and ultimately fabricate wearable devices. Due to the limiting capabilities of conventional conductors, researchers have centered their research on the integration of conductive polymers into textile materials for applications involving responsive materials. Conductive polymers are unique organic molecules that can transfer electrons across their molecular structure owing to their conjugated pi-electron systems. Conductive polymers are favored over conventional conductors because they can be easily manipulated and integrated into flexible material. Two very common conductive polymers are polyaniline (PANI) and polypyrrole (PPY) because of their large favorability in the literature, high conductance values, and environmental stability. Common commercial fibers were coated via the chemical polymerization of PANI or PPY. A series of reactions was done to study the polymerization process of each polymer. The conductive efficiency of each conducting polymer is highly dependent on the type of reactants used, the acidic nature of the reaction, and the temperature of the reaction. The coated commercial fiber nanocomposites produced higher conductivity values when the polymerization reaction was run using ammonium peroxydisulfate (APS) as the oxidizing agent, run in an acidic environment, and run at very low temperatures.
Other factors that improved the overall efficiency of the coated commercial fiber nanocomposites were an increase in polymer concentration and an extension of the reaction time. The overall interaction between the conductive polymer and the commercial fibers showed that the conductive polymer was physically adsorbed to the commercial fiber. This physical adsorption caused a decrease in conductive efficiency as a function of repeated washes, because of the weak intermolecular forces between the conductive polymer and the commercial fiber. This led to the synthesis of conductive films and nanofibers by integrating the conductive polymers directly into a cellulose acetate matrix. The voltage efficiency of the conductive films was lower compared to the coated commercial fiber nanocomposites. However, the conductive material generated greater lux values compared to the coated commercial fiber nanocomposites. These conductive materials can be applied in both the medical field and water filtration. The conductive films can be used to create a sensor-based system that signals when bandages used for wound management need to be changed. The conductive nanofibers can be used in water filtration as a means of electroplating metal ions from contaminated water. Overall, the synthesis of these conductive materials is applicable to responsive materials.

  4. Changes in Track and Field Performance with Chronological Aging.

    ERIC Educational Resources Information Center

    Fung, Lena; Ha, Amy

    1994-01-01

    Examined official records of VIII World Veterans Championships to identify running, jumping, and throwing events whose performance was most affected by age. Found that 400-meter run and long jump were most affected by advancing age among both male and female master athletes whereas, in areas of throws, event most affected was javelin for men and…

  5. The Present Situation, Problems, Countermeasures of Compulsory Education in the Rural Area of Western Region in China

    ERIC Educational Resources Information Center

    Xu, Hui

    2005-01-01

    The present condition of rural education in the western region of China is not optimistic. Existing problems include a lack of investment in education, poor school-running conditions, an oversimplified school-running pattern, and outdated concepts of education. The countermeasures are: firstly, governments at all levels, especially the central government, should increase input to…

  6. 78 FR 3027 - Notice of Temporary Closures of Public Lands in La Paz County, AZ

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-15

    ... (CRIT) Reservation, the closed area runs east along Shea Road, then east into Osborne Wash on the Parker-Swansea Road to the Central Arizona Project (CAP) Canal, then north on the west side of the CAP Canal, crossing the canal on the county-maintained road, running northeast into Mineral Wash Canyon, then...

  7. Run for the Gold: Small Town Fun(ding) Runs.

    ERIC Educational Resources Information Center

    Morris, Judson H., Jr.

    This paper examines the organization, planning, development, and staging of foot races, bicycle races, or multi-event competitions as fund raisers for public or private groups in rural areas. Local businesses often assist in sponsoring such events in order to get free advertising and make money from newcomers drawn to the event. Organizers must be…

  8. Factors affecting the energy cost of level running at submaximal speed.

    PubMed

    Lacour, Jean-René; Bourdin, Muriel

    2015-04-01

Metabolic measurement is still the criterion for investigation of the efficiency of mechanical work and for analysis of endurance performance in running. Metabolic demand may be expressed either as the energy spent per unit distance (energy cost of running, Cr) or as the energy demand at a given running speed (running economy). Systematic studies showed a range of about 20% in cost between runners. Factors affecting Cr include body dimensions: body mass and leg architecture, mostly calcaneal tuberosity length, are responsible for 60-80% of the variability. Children show a higher Cr than adults; higher resting metabolism and a lower leg length/stature ratio are the main putative factors responsible for the difference. Elastic energy storage and reuse also contribute to the variability of Cr. The increase in Cr with increasing running speed, due to the increase in mechanical work, is blunted up to 6-7 m s⁻¹ by the increase in vertical stiffness and the decrease in ground contact time. Fatigue induced by prolonged or intense running is associated with up to 10% increased Cr; the contribution of metabolic and biomechanical factors remains unclear. Women show a Cr similar to that of men of similar body mass, despite differences in gait pattern. The superiority of black African runners is presumably related to their leg architecture and better elastic energy storage and reuse.
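The definition above (energy cost Cr = energy spent per unit distance, i.e. metabolic power divided by speed) can be sketched numerically. This is an illustrative back-of-envelope conversion, not code from the paper; the VO2 value and the ~20.9 kJ per litre O2 caloric equivalent are assumed example figures:

```python
# Illustrative sketch: relating running economy (VO2 at a given speed)
# to the energy cost of running Cr (J per kg per m).
# Cr = metabolic power per kg / running speed.

ENERGY_PER_L_O2 = 20.9e3  # J per litre O2 (approximate caloric equivalent, assumed)

def energy_cost(vo2_ml_kg_min: float, speed_m_s: float) -> float:
    """Energy cost of running in J/kg/m from VO2 (mL O2 kg^-1 min^-1) and speed (m/s)."""
    power_w_per_kg = vo2_ml_kg_min / 1000 / 60 * ENERGY_PER_L_O2  # J s^-1 kg^-1
    return power_w_per_kg / speed_m_s

# A hypothetical runner consuming 45 mL O2/kg/min at 3.33 m/s (12 km/h):
cr = energy_cost(45.0, 3.33)
```

With these assumed inputs the result lands on the order of 4 J kg⁻¹ m⁻¹, the magnitude usually reported for trained runners.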

  9. KERNELHR: A program for estimating animal home ranges

    USGS Publications Warehouse

    Seaman, D.E.; Griffith, B.; Powell, R.A.

    1998-01-01

    Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
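The fixed-kernel estimate that programs like KERNELHR implement can be sketched in a few lines. This is a generic illustration of kernel home-range estimation, not KERNELHR's own code; the bandwidth, grid cell size, and simulated relocations are hypothetical:

```python
import numpy as np

# Minimal sketch of a fixed-kernel utilization distribution (UD) on a grid.
# The 95% home-range area is the smallest set of grid cells containing
# 95% of the estimated density mass.

def fixed_kernel_ud(points, h, cell=1.0, pad=3.0):
    """points: (n, 2) relocations; h: kernel bandwidth. Returns (density, cell area)."""
    pts = np.asarray(points, float)
    lo = pts.min(axis=0) - pad * h
    hi = pts.max(axis=0) + pad * h
    xs = np.arange(lo[0], hi[0], cell)
    ys = np.arange(lo[1], hi[1], cell)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # Sum of bivariate Gaussian kernels centred on each relocation
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    dens = np.exp(-d2 / (2 * h * h)).sum(axis=1)
    dens /= dens.sum()  # normalise to a probability mass per cell
    return dens, cell * cell

def home_range_area(dens, cell_area, isopleth=0.95):
    """Area of the smallest density region holding `isopleth` of the mass."""
    order = np.argsort(dens)[::-1]
    mass = np.cumsum(dens[order])
    n_cells = int(np.searchsorted(mass, isopleth)) + 1
    return n_cells * cell_area

rng = np.random.default_rng(0)
relocs = rng.normal(0.0, 10.0, size=(200, 2))  # simulated animal relocations
dens, a = fixed_kernel_ud(relocs, h=5.0)
area95 = home_range_area(dens, a)
```

Adaptive-kernel variants differ only in letting the bandwidth h vary with local density; the isopleth step is the same idea used for core-area estimation.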

  10. Policy Relevance in Studies of Urban Residential Water Demand

    NASA Astrophysics Data System (ADS)

    Martin, William E.; Thomas, John F.

    1986-12-01

    Precise estimates of demand elasticities for a given area may not be necessary for policy purposes. Given the general nature of the demand for urban water, simple cross-sectional comparisons of prices and quantities in similar areas may be most reliable for policy use. Short-run elasticities give little information for policy purposes. Comparison of well-defined price and quantity data from five cities with similar arid environments suggests a long-run price elasticity for residential water of about -0.5 over a wide range of water prices. The potential for price adjustments to affect use is enormous.
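To make the policy use of the reported long-run elasticity of about -0.5 concrete, here is a back-of-envelope sketch. It assumes a constant-elasticity demand form, Q2/Q1 = (P2/P1)^e, which is an illustrative simplification rather than the paper's estimation model:

```python
# Constant-elasticity demand: fractional change in quantity for a price change,
# using the long-run residential water elasticity of about -0.5 cited above.

def demand_change(price_ratio: float, elasticity: float = -0.5) -> float:
    """Fractional change in quantity demanded for a given price ratio."""
    return price_ratio ** elasticity - 1.0

# Doubling the water price: 2 ** -0.5 - 1, roughly a 29% long-run reduction in use.
change = demand_change(2.0)
```

This is the sense in which "the potential for price adjustments to affect use is enormous": even a modest elasticity compounds into large use reductions for large price changes.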

  11. High-Intensity Running and Plantar-Flexor Fatigability and Plantar-Pressure Distribution in Adolescent Runners

    PubMed Central

    Fourchet, François; Kelly, Luke; Horobeanu, Cosmin; Loepelt, Heiko; Taiar, Redha; Millet, Grégoire

    2015-01-01

    Context: Fatigue-induced alterations in foot mechanics may lead to structural overload and injury. Objectives: To investigate how a high-intensity running exercise to exhaustion modifies ankle plantar-flexor and dorsiflexor strength and fatigability, as well as plantar-pressure distribution in adolescent runners. Design: Controlled laboratory study. Setting: Academy research laboratory. Patients or Other Participants: Eleven male adolescent distance runners (age = 16.9 ± 2.0 years, height = 170.6 ± 10.9 cm, mass = 54.6 ± 8.6 kg) were tested. Intervention(s): All participants performed an exhausting run on a treadmill. An isokinetic plantar-flexor and dorsiflexor maximal-strength test and a fatigue test were performed before and after the exhausting run. Plantar-pressure distribution was assessed at the beginning and end of the exhausting run. Main Outcome Measure(s): We recorded plantar-flexor and dorsiflexor peak torques and calculated the fatigue index. Plantar-pressure measurements were recorded 1 minute after the start of the run and before exhaustion. Plantar variables (ie, mean area, contact time, mean pressure, relative load) were determined for 9 selected regions. Results: Isokinetic peak torques were similar before and after the run in both muscle groups, whereas the fatigue index increased in plantar flexion (28.1%; P = .01) but not in dorsiflexion. For the whole foot, mean pressure decreased from 1 minute to the end (−3.4%; P = .003); however, mean area (9.5%; P = .005) and relative load (7.2%; P = .009) increased under the medial midfoot, and contact time increased under the central forefoot (8.3%; P = .01) and the lesser toes (8.9%; P = .008). Conclusions: Fatigue resistance in the plantar flexors declined after a high-intensity running bout performed by adolescent male distance runners. This phenomenon was associated with increased loading under the medial arch in the fatigued state but without any excessive pronation. PMID:25531143

  12. Distribution, stock composition and timing, and tagging response of wild Chinook Salmon returning to a large, free-flowing river basin

    USGS Publications Warehouse

    Eiler, John H.; Masuda, Michele; Spencer, Ted R.; Driscoll, Richard J.; Schreck, Carl B.

    2014-01-01

Chinook Salmon Oncorhynchus tshawytscha returns to the Yukon River basin have declined dramatically since the late 1990s, and detailed information on the spawning distribution, stock structure, and stock timing is needed to better manage the run and facilitate conservation efforts. A total of 2,860 fish were radio-tagged in the lower basin during 2002–2004 and tracked upriver. Fish traveled to spawning areas throughout the basin, ranging from several hundred to over 3,000 km from the tagging site. Similar distribution patterns were observed across years, suggesting that the major components of the run were identified. Daily and seasonal composition estimates were calculated for the component stocks. The run was dominated by two regional components comprising over 70% of the return. Substantially fewer fish returned to other areas, ranging from 2% to 9% of the return, but their collective contribution was appreciable. Most regional components consisted of several principal stocks and a number of small, spatially isolated populations. Regional and stock composition estimates were similar across years even though differences in run abundance were reported, suggesting that the differences in abundance were not related to regional or stock-specific variability. Run timing was relatively compressed compared with that in rivers in the southern portion of the species’ range. Most stocks passed through the lower river over a 6-week period, ranging in duration from 16 to 38 d. Run timing was similar for middle- and upper-basin stocks, limiting the use of timing information for management. The lower-basin stocks were primarily later-run fish. Although differences were observed, there was general agreement between our composition and timing estimates and those from other assessment projects within the basin, suggesting that the telemetry-based estimates provided a plausible approximation of the return. However, the short duration of the run, complex stock structure, and similar stock timing complicate management of Yukon River returns.

  13. Leptin Suppresses the Rewarding Effects of Running via STAT3 Signaling in Dopamine Neurons.

    PubMed

    Fernandes, Maria Fernanda A; Matthys, Dominique; Hryhorczuk, Cécile; Sharma, Sandeep; Mogra, Shabana; Alquier, Thierry; Fulton, Stephanie

    2015-10-06

The adipose hormone leptin potently influences physical activity. Leptin can decrease locomotion and running, yet the mechanisms involved and the influence of leptin on the rewarding effects of running ("runner's high") are unknown. Leptin receptor (LepR) signaling involves activation of signal transducer and activator of transcription-3 (STAT3), including in dopamine neurons of the ventral tegmental area (VTA) that are essential for reward-relevant behavior. We found that mice lacking STAT3 in dopamine neurons exhibit greater voluntary running, an effect reversed by viral-mediated STAT3 restoration. STAT3 deletion increased the rewarding effects of running whereas intra-VTA leptin blocked it in a STAT3-dependent manner. Finally, STAT3 loss-of-function reduced mesolimbic dopamine overflow and function. Findings suggest that leptin influences the motivational effects of running via LepR-STAT3 modulation of dopamine tone. Falling leptin is hypothesized to increase stamina and the rewarding effects of running as an adaptive means to enhance the pursuit and procurement of food.

  14. Internet-Based Solutions for a Secure and Efficient Seismic Network

    NASA Astrophysics Data System (ADS)

    Bhadha, R.; Black, M.; Bruton, C.; Hauksson, E.; Stubailo, I.; Watkins, M.; Alvarez, M.; Thomas, V.

    2017-12-01

The Southern California Seismic Network (SCSN), operated by Caltech and USGS, leverages modern Internet-based computing technologies to provide timely earthquake early warning for damage reduction, event notification, ShakeMap, and other data products. Here we present recent and ongoing innovations in telemetry, security, cloud computing, virtualization, and data analysis that have allowed us to develop a network that runs securely and efficiently. Earthquake early warning systems must process seismic data within seconds of being recorded, and SCSN maintains a robust and resilient network of more than 350 digital strong motion and broadband seismic stations to achieve this goal. We have continued to improve the path diversity and fault tolerance within our network, and have also developed new tools for latency monitoring and archiving. Cyberattacks are in the news almost daily, and with most of our seismic data streams running over the Internet, it is only a matter of time before SCSN is targeted. To ensure system integrity and availability across our network, we have implemented strong security, including encryption and Virtual Private Networks (VPNs). SCSN operates its own data center at Caltech, but we have also installed real-time servers on Amazon Web Services (AWS), to provide an additional level of redundancy, and eventually to allow full off-site operations continuity for our network. Our AWS systems receive data from Caltech-based import servers and directly from field locations, and are able to process the seismic data, calculate earthquake locations and magnitudes, and distribute earthquake alerts, directly from the cloud. We have also begun a virtualization project at our Caltech data center, allowing us to serve data from Virtual Machines (VMs), making efficient use of high-performance hardware and increasing flexibility and scalability of our data processing systems. Finally, we have developed new monitoring of station average noise levels at most stations. Noise monitoring is effective at identifying anthropogenic noise sources and malfunctioning acquisition equipment. We have built a dynamic display of results with sorting and mapping capabilities that allow us to quickly identify problematic sites and areas with elevated noise.

  15. NPDES Permit for Washington Metropolitan Area Transit Authority (WMATA) Mississippi Avenue Pumping Station

    EPA Pesticide Factsheets

Under National Pollutant Discharge Elimination System permit number DC0000337, the Washington Metropolitan Area Transit Authority (WMATA) is authorized to discharge from a facility to receiving waters named Oxon Run.

  16. Working against gravity: horizontal honeybee waggle runs have greater angular scatter than vertical waggle runs

    PubMed Central

    Couvillon, Margaret J.; Phillipps, Hunter L. F.; Schürch, Roger; Ratnieks, Francis L. W.

    2012-01-01

    The presence of noise in a communication system may be adaptive or may reflect unavoidable constraints. One communication system where these alternatives are debated is the honeybee (Apis mellifera) waggle dance. Successful foragers communicate resource locations to nest-mates by a dance comprising repeated units (waggle runs), which repetitively transmit the same distance and direction vector from the nest. Intra-dance waggle run variation occurs and has been hypothesized as a colony-level adaptation to direct recruits over an area rather than a single location. Alternatively, variation may simply be due to constraints on bees' abilities to orient waggle runs. Here, we ask whether the angle at which the bee dances on vertical comb influences waggle run variation. In particular, we determine whether horizontal dances, where gravity is not aligned with the waggle run orientation, are more variable in their directional component. We analysed 198 dances from foragers visiting natural resources and found support for our prediction. More horizontal dances have greater angular variation than dances performed close to vertical. However, there is no effect of waggle run angle on variation in the duration of waggle runs, which communicates distance. Our results weaken the hypothesis that variation is adaptive and provide novel support for the constraint hypothesis. PMID:22513277

  17. Working against gravity: horizontal honeybee waggle runs have greater angular scatter than vertical waggle runs.

    PubMed

    Couvillon, Margaret J; Phillipps, Hunter L F; Schürch, Roger; Ratnieks, Francis L W

    2012-08-23

    The presence of noise in a communication system may be adaptive or may reflect unavoidable constraints. One communication system where these alternatives are debated is the honeybee (Apis mellifera) waggle dance. Successful foragers communicate resource locations to nest-mates by a dance comprising repeated units (waggle runs), which repetitively transmit the same distance and direction vector from the nest. Intra-dance waggle run variation occurs and has been hypothesized as a colony-level adaptation to direct recruits over an area rather than a single location. Alternatively, variation may simply be due to constraints on bees' abilities to orient waggle runs. Here, we ask whether the angle at which the bee dances on vertical comb influences waggle run variation. In particular, we determine whether horizontal dances, where gravity is not aligned with the waggle run orientation, are more variable in their directional component. We analysed 198 dances from foragers visiting natural resources and found support for our prediction. More horizontal dances have greater angular variation than dances performed close to vertical. However, there is no effect of waggle run angle on variation in the duration of waggle runs, which communicates distance. Our results weaken the hypothesis that variation is adaptive and provide novel support for the constraint hypothesis.
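The "angular scatter" compared in this study is a circular-statistics quantity. As an illustration only (not the authors' analysis code, and with made-up angle samples), the circular standard deviation of a set of waggle-run directions can be computed from the mean resultant length:

```python
import math

# Illustrative circular statistics: angular scatter of waggle-run directions
# measured as the circular standard deviation, derived from the mean
# resultant length R (R = 1 means no scatter at all).

def circular_sd(angles_deg):
    n = len(angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg) / n
    s = sum(math.sin(math.radians(a)) for a in angles_deg) / n
    r = math.hypot(c, s)  # mean resultant length
    return math.degrees(math.sqrt(-2.0 * math.log(r)))

tight = circular_sd([88, 90, 92, 91, 89])    # near-vertical runs, low scatter
loose = circular_sd([80, 95, 100, 70, 105])  # more variable, higher scatter
```

Using the resultant-length form rather than an ordinary standard deviation matters because direction is periodic: 359° and 1° are near neighbours, not far apart.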

  18. Numerical performance evaluation of design modifications on a centrifugal pump impeller running in reverse mode

    NASA Astrophysics Data System (ADS)

    Kassanos, Ioannis; Chrysovergis, Marios; Anagnostopoulos, John; Papantonis, Dimitris; Charalampopoulos, George

    2016-06-01

    In this paper the effect of impeller design variations on the performance of a centrifugal pump running as turbine is presented. Numerical simulations were performed after introducing various modifications in the design for various operating conditions. Specifically, the effects of the inlet edge shape, the meridional channel width, the number of blades and the addition of splitter blades on impeller performance was investigated. The results showed that, an increase in efficiency can be achieved by increasing the number of blades and by introducing splitter blades.

  19. Electricity generating capacity and performance deterioration of a microbial fuel cell fed with beer brewery wastewater.

    PubMed

    Köroğlu, Emre Oğuz; Özkaya, Bestamin; Denktaş, Cenk; Çakmakci, Mehmet

    2014-12-01

This study focused on using beer brewery wastewater (BBW) to evaluate membrane concentrate disposal and production of electricity in microbial fuel cells. In the membrane treatment of BBW, the membrane permeate concentration was 570 ± 30 mg/L, corresponding to a chemical oxygen demand (COD) removal efficiency of 75 ± 5%, and the flux values ranged between 160 and 40 L m⁻² h⁻¹ for all membrane runs. For electricity production from membrane concentrate, the highest current density in the microbial fuel cell (MFC) was observed to be 1950 mA/m² relative to electrode surface area, with 36% COD removal efficiency and 2.48% Coulombic efficiency (CE), using 60% BBW membrane concentrate. The morphologies of the cation exchange membrane and its deterioration in the MFC were studied using a scanning electron microscope (SEM), attenuated total reflection-Fourier transform infrared (ATR-FTIR) spectroscopy, differential scanning calorimetry (DSC), and thermal gravimetric analysis (TGA). A decrease in the thermal stability of the sulfonate (-SO3H) groups was demonstrated and morphological changes were detected in the SEM analysis.

  20. iParking: An Intelligent Indoor Location-Based Smartphone Parking Service

    PubMed Central

    Liu, Jingbin; Chen, Ruizhi; Chen, Yuwei; Pei, Ling; Chen, Liang

    2012-01-01

Indoor positioning technologies have been widely studied with a number of solutions being proposed, yet substantial applications and services are still fairly primitive. Taking advantage of the emerging concept of the connected car, the popularity of smartphones and mobile Internet, and precise indoor locations, this study presents the development of a novel intelligent parking service called iParking. With the iParking service, multiple parties such as users, parking facilities and service providers are connected through the Internet in a distributed architecture. The client software is a light-weight application running on a smartphone, and it works essentially based on a precise indoor positioning solution, which fuses Wireless Local Area Network (WLAN) signals and the measurements of the built-in sensors of the smartphones. The positioning accuracy, availability and reliability of the proposed positioning solution are adequate for facilitating the novel parking service. An iParking prototype has been developed and demonstrated in a real parking environment at a shopping mall. The demonstration showed how the iParking service could improve the parking experience and increase the efficiency of parking facilities. iParking is a novel service in terms of cost and energy efficiency. PMID:23202179

  1. iParking: an intelligent indoor location-based smartphone parking service.

    PubMed

    Liu, Jingbin; Chen, Ruizhi; Chen, Yuwei; Pei, Ling; Chen, Liang

    2012-10-31

Indoor positioning technologies have been widely studied with a number of solutions being proposed, yet substantial applications and services are still fairly primitive. Taking advantage of the emerging concept of the connected car, the popularity of smartphones and mobile Internet, and precise indoor locations, this study presents the development of a novel intelligent parking service called iParking. With the iParking service, multiple parties such as users, parking facilities and service providers are connected through the Internet in a distributed architecture. The client software is a light-weight application running on a smartphone, and it works essentially based on a precise indoor positioning solution, which fuses Wireless Local Area Network (WLAN) signals and the measurements of the built-in sensors of the smartphones. The positioning accuracy, availability and reliability of the proposed positioning solution are adequate for facilitating the novel parking service. An iParking prototype has been developed and demonstrated in a real parking environment at a shopping mall. The demonstration showed how the iParking service could improve the parking experience and increase the efficiency of parking facilities. iParking is a novel service in terms of cost and energy efficiency.
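A common building block for WLAN-based indoor positioning of this kind is radio-map fingerprinting. The sketch below is a generic illustration, not iParking's implementation (which additionally fuses built-in smartphone sensors); the access-point names, RSSI values, and survey points are all invented:

```python
import math

# Toy WLAN fingerprint positioning: a device is located at the surveyed
# point whose recorded received-signal-strength (RSSI) vector is nearest
# to the live scan, in signal space.

fingerprints = {  # surveyed (x, y) -> RSSI per access point, in dBm
    (0.0, 0.0): {"ap1": -40, "ap2": -70, "ap3": -80},
    (5.0, 0.0): {"ap1": -55, "ap2": -50, "ap3": -75},
    (5.0, 5.0): {"ap1": -70, "ap2": -45, "ap3": -55},
}

def locate(scan):
    """Nearest-neighbour match of a live scan against the radio map."""
    def dist(fp):
        shared = set(scan) & set(fp)
        return math.sqrt(sum((scan[k] - fp[k]) ** 2 for k in shared))
    return min(fingerprints, key=lambda pos: dist(fingerprints[pos]))

pos = locate({"ap1": -52, "ap2": -48, "ap3": -72})  # matches the (5.0, 0.0) survey point
```

Real systems extend this with k-nearest-neighbour averaging and sensor fusion (accelerometer, gyroscope) to smooth the trajectory between WLAN fixes.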

  2. Deep-UV-sensitive high-frame-rate backside-illuminated CCD camera developments

    NASA Astrophysics Data System (ADS)

    Dawson, Robin M.; Andreas, Robert; Andrews, James T.; Bhaskaran, Mahalingham; Farkas, Robert; Furst, David; Gershstein, Sergey; Grygon, Mark S.; Levine, Peter A.; Meray, Grazyna M.; O'Neal, Michael; Perna, Steve N.; Proefrock, Donald; Reale, Michael; Soydan, Ramazan; Sudol, Thomas M.; Swain, Pradyumna K.; Tower, John R.; Zanzucchi, Pete

    2002-04-01

New applications for ultra-violet imaging are emerging in the fields of drug discovery and industrial inspection. High throughput is critical for these applications, where millions of drug combinations are analyzed in secondary screenings or high-rate inspection of small feature sizes over large areas is required. Sarnoff demonstrated in 1990 a back-illuminated, 1024 × 1024, 18 µm pixel, split-frame-transfer device running at > 150 frames per second with high sensitivity in the visible spectrum. Sarnoff designed, fabricated and delivered cameras based on these CCDs and is now extending this technology to devices with higher pixel counts and higher frame rates through CCD architectural enhancements. The high sensitivities obtained in the visible spectrum are being pushed into the deep UV to support these new medical and industrial inspection applications. Sarnoff has achieved measured quantum efficiencies > 55% at 193 nm, rising to 65% at 300 nm, and remaining almost constant out to 750 nm. Optimization of the sensitivity is being pursued to tailor the quantum efficiency for particular wavelengths. Characteristics of these high-frame-rate CCDs and cameras will be described and results will be presented demonstrating high UV sensitivity down to 150 nm.

  3. On damage diagnosis for a wind turbine blade using pattern recognition

    NASA Astrophysics Data System (ADS)

    Dervilis, N.; Choi, M.; Taylor, S. G.; Barthorpe, R. J.; Park, G.; Farrar, C. R.; Worden, K.

    2014-03-01

With the increased interest in implementation of wind turbine power plants in remote areas, structural health monitoring (SHM) will be one of the key cards in the efficient establishment of wind turbines in the energy arena. Detection of blade damage at an early stage is a critical problem, as blade failure can lead to a catastrophic outcome for the entire wind turbine system. Experimental measurements from vibration analysis were extracted from a 9 m CX-100 blade by researchers at Los Alamos National Laboratory (LANL) throughout a full-scale fatigue test conducted at the National Renewable Energy Laboratory (NREL) and National Wind Technology Center (NWTC). The blade was harmonically excited at its first natural frequency using a Universal Resonant EXcitation (UREX) system. In the current study, machine learning algorithms based on Artificial Neural Networks (ANNs) are used, including an Auto-Associative Neural Network (AANN) based on a standard ANN form and a novel approach to auto-association with Radial Basis Function (RBF) networks, optimised for fast and efficient runs. This paper introduces such pattern recognition methods into the wind energy field and attempts to address the effectiveness of such methods by combining vibration response data with novelty detection techniques.

  4. High-power disk lasers: advances and applications

    NASA Astrophysics Data System (ADS)

    Havrilla, David; Ryba, Tracey; Holzer, Marco

    2012-03-01

Though the genesis of the disk laser concept dates to the early 1990s, the disk laser continues to demonstrate the flexibility and the certain future of a breakthrough technology. On-going increases in power per disk, and improvements in beam quality and efficiency, continue to validate the genius of the disk laser concept. As of today, the disk principle has not reached any fundamental limits regarding output power per disk or beam quality, and offers numerous advantages over other high power resonator concepts, especially over monolithic architectures. With about 2,000 high-power disk laser installations, and a demand upwards of 1,000 lasers per year, the disk laser has proven to be a robust and reliable industrial tool. With advancements in running cost, investment cost and footprint, manufacturers continue to implement disk laser technology with more vigor than ever. This paper will explain recent advances in disk laser technology and process-relevant features of the laser, like pump diode arrangement, resonator design and integrated beam guidance. In addition, advances in applications in the thick-sheet area and very cost-efficient, high-productivity applications like remote welding, remote cutting and cutting of thin sheets will be discussed.

  5. Evaluation of bioaugmentation efficiency for the treatment of run-off water under tropical conditions: applications to the Derby-Tacaruna canal (Recife/Brazil).

    PubMed

    da Silva, M C L; Nascimento, A M; da Silva, V L; Pons, M N; da Motta, M

    2009-01-01

An evaluation of the efficiency of bacterial biomass augmentation was performed at lab scale for the pollution treatment of the Derby-Tacaruna canal. The canal is located in the central area of Greater Recife, alongside an important urban corridor. The characterization of the canal water under different tidal conditions showed that the actual pollution is organic and inorganic (heavy metals). Degradation experiments were performed on water from the canal and on run-off water polluted by synthetic wastewater, using activated sludge and an industrial bioadditive. Continuous reactors under two different conditions were evaluated: with diffuse aeration and without aeration. The channel reactor was operated under steady-state conditions at a flow rate of 2.5 L h⁻¹ and with an average residence time of 22 h without aeration and 17 h with aeration. The organic matter removal was in the range of 60% for the system inoculated with the bioadditive and 85% with activated sludge. It was concluded that the water of the Derby-Tacaruna canal may be treated by activated sludge without being affected by its salt content, while the bioaugmentation technique was not satisfactory due to inhibition by inorganics.

  6. Greenbuilt Retrofit Test House Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sparn, B.; Hudon, K.; Earle, L.

    2014-06-01

The Greenbuilt house is a 1980s-era house in the Sacramento area that was a prominent part of Sacramento Municipal Utility District's (SMUD) Energy Efficient Remodel Demonstration Program. The house underwent an extensive remodel, aimed at improving overall energy efficiency with a goal of reducing the home's energy use by 50%. NREL researchers performed a number of tests on the major systems touched by the retrofit to ensure they were working as planned. Additionally, SMUD rented the house from Greenbuilt Construction for a year to allow NREL to perform a number of tests on the cooling system and the water heating system. The goal of the space conditioning tests was to find the best ways to cut cooling loads and shift the summer peak. The water heating system, comprised of an add-on heat pump water heater and an integrated collector-storage solar water heater, was operated with a number of different draw profiles to see how varying hot water draw volume and schedule affected the performance of the system as a whole. All the experiments were performed with the house empty, with a simulated occupancy schedule running in the house to mimic the load imposed by real occupants.

  7. Development and validation of a multiplex real-time PCR method to simultaneously detect 47 targets for the identification of genetically modified organisms.

    PubMed

    Cottenet, Geoffrey; Blancpain, Carine; Sonnard, Véronique; Chuah, Poh Fong

    2013-08-01

Considering the increase of the total cultivated land area dedicated to genetically modified organisms (GMO), consumers' perception toward GMO and the need to comply with various local GMO legislations, efficient and accurate analytical methods are needed for their detection and identification. Considered as the gold standard for GMO analysis, the real-time polymerase chain reaction (RTi-PCR) technology was optimised to produce a high-throughput GMO screening method. Based on 24 simultaneous multiplex RTi-PCR assays running on a ready-to-use 384-well plate, this new procedure allows the detection and identification of 47 targets on seven samples in duplicate. To comply with GMO analytical quality requirements, a negative and a positive control were analysed in parallel. In addition, an internal positive control was also included in each reaction well for the detection of potential PCR inhibition. Tested on non-GM materials, on different GM events and on proficiency test samples, the method offered high specificity and sensitivity with an absolute limit of detection between 1 and 16 copies depending on the target. Easy to use, fast and cost efficient, this multiplex approach fits the purpose of GMO testing laboratories.

  8. Production integrated nondestructive testing of composite materials and material compounds - an overview

    NASA Astrophysics Data System (ADS)

    Straß, B.; Conrad, C.; Wolter, B.

    2017-03-01

    Composite materials and material compounds are of increasing importance, because of the steadily rising relevance of resource saving lightweight constructions. Quality assurance with appropriate Nondestructive Testing (NDT) methods is a key aspect for reliable and efficient production. Quality changes have to be detected already in the manufacturing flow in order to take adequate corrective actions. For materials and compounds the classical NDT methods for defectoscopy, like X-ray and Ultrasound (US) are still predominant. Nevertheless, meanwhile fast, contactless NDT methods, like air-borne ultrasound, dynamic thermography and special Eddy-Current techniques are available in order to detect cracks, voids, pores and delaminations but also for characterizing fiber content, distribution and alignment. In Metal-Matrix Composites US back-scattering can be used for this purpose. US run-time measurements allow the detection of thermal stresses at the metal-matrix interface. Another important area is the necessity for NDT in joining. To achieve an optimum material utilization and product safety as well as the best possible production efficiency, there is a need for NDT methods for in-line inspection of the joint quality while joining or immediately afterwards. For this purpose EMAT (Electromagnetic Acoustic Transducer) technique or Acoustic Emission testing can be used.

  9. Possible options to slow down the advancement rate of Tarbela delta.

    PubMed

    Habib-Ur-Rehman; Rehman, Mirza Abdul; Naeem, Usman Ali; Hashmi, Hashim Nisar; Shakir, Abdul Sattar

    2017-12-22

The pivot point of the delta in Tarbela reservoir has reached about 10.6 km from the dam face, which may result in blocking of the tunnels. The Tarbela delta was modeled from 1979 to 2060 using the HEC-6 model. Initially, the model was calibrated for the year 1999 and validated for the years 2000, 2001, 2002, and 2006 using data on sediment concentration, reservoir cross sections (73 range lines), elevation-area-capacity curves, and inflows and outflows from the reservoir. The model was then used to generate future scenarios, i.e., run-1, run-2, and run-3 with pool levels of 428, 442, and 457 m, respectively, till 2060. Results of run-1 and run-2 showed delta advancement choking the tunnels by 2010 and 2030, respectively. Finally, in run-3, the advancement was further delayed, showing that tunnels 1 and 2 will be choked by the year 2050 and the pivot point will reach 6.4 km from the dam face.

  10. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional spaces of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer.
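    The abstract's core loop — many short simulations, a partition of collective-variable space into macrostates, and resampling so effort spreads to newly discovered regions — can be sketched as a toy in Python. This is not the authors' CAS implementation: the one-dimensional random walk standing in for MD, the fixed bin width, and the per-bin walker cap are all invented for illustration.

```python
import random

def propagate(x, steps=10):
    """Toy 'short MD run': a 1-D random walk standing in for a simulation."""
    for _ in range(steps):
        x += random.choice((-0.1, 0.1))
    return x

def macrostate(x, width=0.5):
    """Partition the collective variable into fixed-width bins."""
    return int(x // width)

def cas_like_sampling(n_walkers=8, n_rounds=20, per_bin_target=2):
    walkers = [0.0] * n_walkers
    seen = set()
    for _ in range(n_rounds):
        # run many short simulations (concurrently in CAS; sequentially here)
        walkers = [propagate(x) for x in walkers]
        # group walkers by macrostate (a crude stand-in for clustering)
        bins = {}
        for x in walkers:
            bins.setdefault(macrostate(x), []).append(x)
        seen.update(bins)
        # resample: keep at most per_bin_target walkers per bin, so computational
        # effort is not wasted piling up in already well-sampled macrostates
        walkers = [x for xs in bins.values() for x in xs[:per_bin_target]]
    return seen

random.seed(0)
explored = cas_like_sampling()
print(len(explored))  # number of distinct macrostates visited
```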

  11. High-Efficiency Multiscale Modeling of Cell Deformations in Confined Microenvironments in Microcirculation and Microfluidics

    NASA Astrophysics Data System (ADS)

    Lu, Huijie; Peng, Zhangli

    2017-11-01

    We developed a high-efficiency multiscale modeling method to predict the stress and deformation of cells during the interactions with their microenvironments in microcirculation and microfluidics, including red blood cells (RBCs) and circulating tumor cells (CTCs). More than 1 billion people in the world suffer from RBC diseases. The mechanical properties of RBCs are changed in these diseases due to molecular structure alterations, which is not only important for understanding the disease pathology but also provides an opportunity for diagnostics. On the other hand, the mechanical properties of cancer cells are also altered compared to healthy cells. This can lead to an acquired ability to cross the narrow capillary networks and endothelial gaps, which is crucial for metastasis, the leading cause of cancer mortality. Therefore, it is important to predict the deformation and stress of RBCs and CTCs in the microcirculation. We developed a high-efficiency multiscale model of cell-fluid interaction, passing information from our molecular-scale models to the cell scale to study the effect of molecular mutations. Using our high-efficiency boundary element methods for fluids, we will be able to run 3D simulations on a single CPU within several hours, which will enable us to run extensive parametric studies and optimization.

  12. Increase in Leg Stiffness Reduces Joint Work During Backpack Carriage Running at Slow Velocities.

    PubMed

    Liew, Bernard; Netto, Kevin; Morris, Susan

    2017-10-01

    Optimal tuning of leg stiffness has been associated with better running economy. Running with a load is energetically expensive, which could have a significant impact on athletic performance where backpack carriage is involved. The purpose of this study was to investigate the impact of load magnitude and velocity on leg stiffness. We also explored the relationship between leg stiffness and running joint work. Thirty-one healthy participants ran overground at 3 velocities (3.0, 4.0, 5.0 m·s⁻¹) whilst carrying 3 load magnitudes (0%, 10%, 20% weight). Leg stiffness was derived using the direct kinetic-kinematic method. Joint work data were previously reported in a separate study. Linear models were used to establish relationships between leg stiffness and load magnitude, velocity, and joint work. We found that leg stiffness did not increase with load magnitude. Increased leg stiffness was associated with reduced total joint work at 3.0 m·s⁻¹, but not at faster velocities. The association between leg stiffness and joint work at slower velocities could be due to an optimal covariation between the skeletal and muscular components of leg stiffness and the limb attack angle. When running at a relatively comfortable velocity, greater leg stiffness may reflect a more energy-efficient running pattern.
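    The direct kinetic-kinematic method referenced above combines measured ground-reaction force with limb kinematics; in its simplest spring-mass reading, leg stiffness is peak vertical force divided by peak leg compression. The sketch below uses that simplified form with hypothetical stance-phase numbers, not data from the study.

```python
def leg_stiffness(peak_grf_n, leg_length_m, min_leg_length_m):
    """Leg stiffness as peak ground-reaction force over peak leg compression
    (a simplified spring-mass estimate; the study's direct kinetic-kinematic
    method derives compression from measured limb kinematics)."""
    compression = leg_length_m - min_leg_length_m
    if compression <= 0:
        raise ValueError("leg must compress during stance")
    return peak_grf_n / compression

# Hypothetical stance-phase values for a runner at a slow velocity:
# 1800 N peak force, leg compressing from 0.95 m to 0.83 m.
k = leg_stiffness(peak_grf_n=1800.0, leg_length_m=0.95, min_leg_length_m=0.83)
print(round(k / 1000, 1), "kN/m")  # prints 15.0 kN/m
```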

  13. [Influence of the space layout of a surgical department on use efficiency].

    PubMed

    Weiss, G; von Baer, R; Riedl, S

    2002-02-01

    There is a growing gap between the rapidly increasing diagnostic and therapeutic opportunities and patient demands on one side and the continuously declining hospital budgets on the other. This gap forces hospitals to search for rationalization potential and ways to increase their efficiency. It is well known that the operating theatre unit is one of the most important internal cost factors, and many reorganization projects therefore focus on operating theatres. In Germany, several alternative operating room layouts have been developed in order to reduce running and building costs and to reach a high degree of flexibility in everyday use by means of an improved design. This article analyses the classic operating room layout and four alternatives, and compares their suitability for reaching the promised objectives, especially an economically run business management. Furthermore, preferred layouts for certain types of operations are recommended.

  14. Master of Puppets: Cooperative Multitasking for In Situ Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-01-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.
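    The cooperative-multitasking idea — simulation and analysis taking turns on the same in-memory data instead of round-tripping through disk — can be illustrated with Python generators standing in for coroutines. This is a sketch of the concept only, not Henson's actual coroutine and position-independent-executable machinery.

```python
def simulation(n_steps):
    """Toy simulation: yields each time step's state instead of writing it
    to disk, handing control to the analysis between steps."""
    state = 0.0
    for step in range(n_steps):
        state += step          # stand-in for one integration step
        yield step, state      # cooperative yield point

def analysis(stream):
    """In situ analysis: consumes states while they are still in memory."""
    running_max = float("-inf")
    for step, state in stream:
        running_max = max(running_max, state)
    return running_max

# The two 'codes' interleave on one thread, the generator suspending at
# each yield — the same control-transfer pattern as a coroutine yield.
print(analysis(simulation(5)))  # prints 10.0
```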

  15. Henson v1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-04-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. The developers present a novel design for running multiple codes in situ: using coroutines and position-independent executables they enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. They present Henson, an implementation of their design, and illustrate its versatility by tackling analysis tasks with different computational requirements. Their design differs significantly from existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The presented techniques can also be integrated into other in situ frameworks.

  16. Self-running and self-floating two-dimensional actuator using near-field acoustic levitation

    NASA Astrophysics Data System (ADS)

    Chen, Keyu; Gao, Shiming; Pan, Yayue; Guo, Ping

    2016-09-01

    Non-contact actuators are promising technologies in metrology, machine tools, and hovercars, but have suffered from low energy efficiency, complex design, and low controllability. Here we report a new design of a self-running and self-floating actuator capable of two-dimensional motion with an unlimited travel range. The proposed design exploits near-field acoustic levitation for heavy-object lifting, and coupled resonant vibration to generate acoustic streaming for non-contact motion in designated directions. The device utilizes resonant vibration of the structure for high energy efficiency, and adopts a single piezo element to achieve both levitation and non-contact motion for a compact and simple design. Experiments demonstrate that the proposed actuator can reach a moving speed of 1.65 cm/s or faster and is capable of transporting a total weight of 80 g under 1.2 W power consumption.

  17. Integrating fundamental movement skills in late childhood.

    PubMed

    Gimenez, Roberto; Manoel, Edison de J; de Oliveira, Dalton Lustosa; Dantas, Luiz; Marques, Inara

    2012-04-01

    The study examined how children of different ages integrate fundamental movement skills, such as running and throwing, and whether their developmental status was related to the combination of these skills. Thirty children were divided into three groups (G1 = 6-year-olds, G2 = 9-year-olds, and G3 = 12-year-olds) and filmed performing three tasks: running, overarm throwing, and the combined task. Patterns were identified and described, and the efficiency of integration was calculated as the difference in throwing distance between the two tasks (overarm throwing alone versus the combined task). Differences in integration were related to age: the 6-year-olds were less efficient in combining the two skills than the 9- and 12-year-olds. These differences may be indicative of a phase of integrating fundamental movement skills in the developmental sequence. Developmental status, particularly in throwing, seems to be related to the competence to integrate skills, which suggests that fundamental movement skills may be developmental modules.

  18. Solving Equations of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Lim, Christopher

    2007-01-01

    Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing and analyzing, and developing software for the control of, complex mechanical systems. Darts++ is based on the Spatial-Operator-Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in real-time closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enables the user to configure and interact with simulation objects at run time.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaunak, S.K.; Soni, B.K.

    With research interests shifting away from primarily military or industrial applications to more environmental applications, ocean modelling has become an increasingly popular and exciting area of research. This paper presents a CIPS (Computational Field Simulation) system customized for the solution of oceanographic problems. This system deals primarily with the generation of simple, yet efficient grids for coastal areas. The two primary grid approaches are both structured in methodology. The first is a standard approach, used in such popular grid generation packages as GENIE++, EAGLEVIEW, and TIGER, where the user defines boundaries via points, lines, or curves, varies the distribution of points along these boundaries, and then creates the interior grid. The second approach allows the user to interactively select points on the screen to form the boundary curves and then create the interior grid from these spline curves. The program has been designed with the needs of the ocean modeller in mind, so that results can be obtained in a timely yet elegant manner. The modeller performs four basic steps in using the program. First, he selects a region of interest from a popular database. Then, he creates a grid for that region. Next, he sets up boundary and input conditions and runs a circulation model. Finally, the modeller visualizes the output.

  20. Experimental investigation of a supersonic micro turbine running with hexamethyldisiloxane

    NASA Astrophysics Data System (ADS)

    Weiß, Andreas P.; Hauer, Josef; Popp, Tobias; Preißinger, Markus

    2017-09-01

    Experimentally determined efficiency characteristics of a supersonic micro turbine are discussed in the present paper. The micro turbine is a representative of a "micro-turbine-generator-construction-kit" for ORC small-scale waste heat recovery. The isentropic total-to-static efficiency of the 12 kW turbine reaches an excellent design-point value of 73.4%. Furthermore, its off-design operating behavior is very advantageous for small waste heat recovery plants: the turbine efficiency remains high over a wide range of pressure ratios and rotational speeds.
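    The quoted total-to-static figure follows the standard definition: actual total-enthalpy drop over the ideal drop from the inlet total state to the static exit pressure. The enthalpy values in this sketch are invented solely to illustrate the ratio; they are not measurements from the paper.

```python
def eta_total_to_static(h01, h02, h2s):
    """Isentropic total-to-static efficiency of an expander:
    (actual total-enthalpy drop) / (ideal drop to the exit static pressure)."""
    return (h01 - h02) / (h01 - h2s)

# Hypothetical enthalpies in kJ/kg for an ORC working fluid (illustrative only):
# inlet total 520, exit total 483.3, isentropic exit static 470.
print(round(eta_total_to_static(h01=520.0, h02=483.3, h2s=470.0), 3))  # prints 0.734
```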

  1. Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale

    NASA Astrophysics Data System (ADS)

    González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.

    2017-12-01

    Tsunami hazard assessment is tackled by means of numerical simulations, which give as a result the areas flooded by the tsunami wave inland. These simulations require input data such as a high-resolution topobathymetry of the study area and the earthquake focal mechanism parameters, and their computational cost is still excessive. An important restriction on the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. A traditional alternative consists of applying empirical-analytical formulations to calculate run-up at several coastal profiles (e.g. Synolakis, 1987), combined with numerical simulations offshore that do not include coastal inundation. In this case the numerical simulations are faster, but limitations are added because the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model formed by two models coupled ad hoc for this work: a non-linear shallow water equations (NLSWE) model for the offshore part of the propagation and a Volume of Fluid (VOF) model for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad-hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles have been parameterized using 5 parameters (2 depths and 3 slopes). In addition, tsunami waves have also been parameterized, by their height and period. As an application of the numerical flume methodology, the parameterized coastal profiles and tsunami waves have been combined to build a populated database of run-up calculations; the combinations were computed by numerical simulation in the numerical flume. The result is a tsunami run-up database that considers real profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the run-up of any new tsunami wave to be calculated in a short time by interpolation on the database, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast in a large-scale domain (regional or national scale).
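    The empirical-analytical route mentioned above can be illustrated with the run-up law commonly attributed to Synolakis (1987) for non-breaking solitary waves on a plane beach, R/d = 2.831 √(cot β) (H/d)^(5/4). The formula's form and the sample wave parameters below are quoted from memory and chosen for illustration; check the original reference before any real use.

```python
import math

def synolakis_runup(H_over_d, beach_slope_deg):
    """Dimensionless run-up R/d for a non-breaking solitary wave of offshore
    height H in depth d climbing a plane beach of slope angle beta:
    R/d = 2.831 * sqrt(cot(beta)) * (H/d)**(5/4)."""
    cot_beta = 1.0 / math.tan(math.radians(beach_slope_deg))
    return 2.831 * math.sqrt(cot_beta) * H_over_d ** 1.25

# Hypothetical wave: offshore height 2 % of the local depth, 1-degree beach.
print(round(synolakis_runup(0.02, 1.0), 4))
```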

  2. Arcjet nozzle area ratio effects

    NASA Technical Reports Server (NTRS)

    Curran, Francis M.; Sarmiento, Charles J.; Birkner, Bjorn W.; Kwasny, James

    1990-01-01

    An experimental investigation was conducted to determine the effect of nozzle area ratio on the operating characteristics and performance of a low power dc arcjet thruster. Conical thoriated tungsten nozzle inserts were tested in a modular laboratory arcjet thruster run on hydrogen/nitrogen mixtures simulating the decomposition products of hydrazine. The converging and diverging sides of the inserts had half angles of 30 and 20 degrees, respectively, similar to a flight type unit currently under development. The length of the diverging side was varied to change the area ratio. The nozzle inserts were run over a wide range of specific power. Current, voltage, mass flow rate, and thrust were monitored to provide accurate comparisons between tests. While small differences in performance were observed between the two nozzle inserts, it was determined that for each nozzle insert, arcjet performance improved with increasing nozzle area ratio to the highest area ratio tested and that the losses become very pronounced for area ratios below 50. These trends are somewhat different than those obtained in previous experimental and analytical studies of low Re number nozzles. It appears that arcjet performance can be enhanced via area ratio optimization.
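    The geometric relationship in this record — area ratio set by the length of the diverging cone at the fixed 20-degree half angle — is simple to compute: the exit radius grows linearly with length, and the area ratio is the squared radius ratio. The throat radius and lengths below are hypothetical, not the tested inserts' dimensions.

```python
import math

def conical_area_ratio(throat_radius_mm, diverging_length_mm, half_angle_deg=20.0):
    """Exit-to-throat area ratio of a conical diverging section:
    r_exit = r_throat + L * tan(half_angle), ratio = (r_exit / r_throat)**2."""
    exit_radius = throat_radius_mm + diverging_length_mm * math.tan(math.radians(half_angle_deg))
    return (exit_radius / throat_radius_mm) ** 2

# Hypothetical 0.3 mm throat radius: lengthening the cone raises the area ratio.
for length_mm in (2.0, 4.0, 8.0):
    print(round(conical_area_ratio(0.3, length_mm), 1))
```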

  4. CrocoBLAST: Running BLAST efficiently in the age of next-generation sequencing.

    PubMed

    Tristão Ramos, Ravi José; de Azevedo Martins, Allan Cézar; da Silva Delgado, Gabrielle; Ionescu, Crina-Maria; Ürményi, Turán Peter; Silva, Rosane; Koca, Jaroslav

    2017-11-15

    CrocoBLAST is a tool for dramatically speeding up BLAST+ execution on any computer. Alignments that would take days or weeks with NCBI BLAST+ can be run overnight with CrocoBLAST. Additionally, CrocoBLAST provides features critical for NGS data analysis, including: results identical to those of BLAST+; compatibility with any BLAST+ version; real-time information regarding calculation progress and remaining run time; access to partial alignment results; and queueing, pausing, and resuming BLAST+ calculations without information loss. CrocoBLAST is freely available online, with ample documentation (webchem.ncbr.muni.cz/Platform/App/CrocoBLAST). No installation or user registration is required. CrocoBLAST is implemented in C, while the graphical user interface is implemented in Java. CrocoBLAST is supported under Linux and Windows, and can be run under Mac OS X in a Linux virtual machine. Supplementary data are available at Bioinformatics online.

  5. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.
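    A compact ISODATA-flavoured sketch of the assign/update/split/merge cycle, in Python with NumPy. It uses a brute-force nearest-center search where the paper uses a kd-tree, and the split/merge thresholds and toy data are invented; it illustrates the algorithm family, not the authors' implementation.

```python
import numpy as np

def isodata(points, k_init=2, max_iter=10, split_std=1.0, merge_dist=0.5):
    """Minimal ISODATA-style clustering: k-means assignment, then split
    clusters with large per-axis spread and merge centers that are close."""
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), k_init, replace=False)]
    for _ in range(max_iter):
        # assign each point to its nearest center (kd-tree-accelerated in the paper)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update centers; split clusters whose dispersion along some axis is large
        new_centers = []
        for c in range(len(centers)):
            members = points[labels == c]
            if len(members) == 0:
                continue  # drop empty clusters
            mean, std = members.mean(axis=0), members.std(axis=0)
            if std.max() > split_std and len(members) > 2:
                axis = std.argmax()
                offset = np.zeros(points.shape[1])
                offset[axis] = std[axis]
                new_centers += [mean + offset, mean - offset]
            else:
                new_centers.append(mean)
        # merge any center that lands within merge_dist of an earlier one
        merged = []
        for m in new_centers:
            if all(np.linalg.norm(m - c) > merge_dist for c in merged):
                merged.append(m)
        centers = np.array(merged)
    return centers, labels

# Toy data: two well-separated 2-D blobs.
pts = np.vstack([np.random.default_rng(1).normal(0.0, 0.2, (50, 2)),
                 np.random.default_rng(2).normal(3.0, 0.2, (50, 2))])
centers, labels = isodata(pts)
print(len(centers))  # the split/merge cycle should settle on 2 centers
```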

  6. A Fast Implementation of the Isodata Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Le Moigne, Jacqueline; Mount, David M.; Netanyahu, Nathan S.

    2007-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.

  7. Energy saving through LED in signaling functions for automotive exterior lighting

    NASA Astrophysics Data System (ADS)

    Bony, Alexis; Hamami, Khaled; Tebbe, Frank; Mertens, Jens

    2011-05-01

    Safety considerations have always driven improvements to exterior automotive lighting legal requirements. With the recent adoption of daytime running lamps for passenger cars, the steadily increasing need to reduce vehicle power consumption has led to the introduction of LED-based daytime running lamps. Solutions with incandescent bulbs have also been implemented, as they present price advantages while offering limited design perspectives. In the meantime, technology developments have turned LED sources into ideal candidates for daytime running lamps by increasing their luminous efficacy towards values around 100 lm/W or higher. In this work, taking as an example the new Mercedes-Benz roadster SLK (R172), we present the first single-LED daytime running lamp, with a total power consumption below 5 W per vehicle. After reviewing the legal requirements, the optical and electronic concepts are discussed. Details of the tail lamp LED functions are also discussed, particularly the advantages of realizing the fog lamp with LEDs.

  8. Racial Prejudice and Locational Equilibrium in an Urban Area.

    ERIC Educational Resources Information Center

    Yinger, John

    Racial prejudice is said to influence strongly the locational decisions of households in urban areas. This paper introduces racial prejudice into a model of an urban area and derives several results about residential location. A previously developed long-run model of an urban area adds a locational dimension to a model of the housing market under…

  9. Impacts of Soil and Water Conservation Practices on Crop Yield, Run-off, Soil Loss and Nutrient Loss in Ethiopia: Review and Synthesis.

    PubMed

    Adimassu, Zenebe; Langan, Simon; Johnston, Robyn; Mekuria, Wolde; Amede, Tilahun

    2017-01-01

    Research results published regarding the impact of soil and water conservation practices in the highland areas of Ethiopia have been inconsistent and scattered. In this paper, a detailed review and synthesis was conducted to identify the impacts of soil and water conservation practices on crop yield, surface run-off, soil loss, nutrient loss, and economic viability, as well as to discuss the implications for an integrated approach and for ecosystem services. The review and synthesis showed that most physical soil and water conservation practices, such as soil bunds and stone bunds, were very effective in reducing run-off, soil erosion and nutrient depletion. Despite these positive impacts, the effect of physical soil and water conservation practices on crop yield was negative, mainly due to the reduction of the effective cultivable area by soil/stone bunds. In contrast, most agronomic soil and water conservation practices increase crop yield and reduce run-off and soil losses. This implies that integrating physical with agronomic soil and water conservation practices is essential to increase both provisioning and regulating ecosystem services. Additionally, effective use of the otherwise unutilized land occupied by bunds, by planting multipurpose grasses and trees on them, may offset the yield lost due to the reduction in planting area. If high-value grasses and trees can be grown on this land, farmers can harvest fodder for animals or fuel wood, both in scarce supply in Ethiopia. Growing these grasses and trees can also improve the stability of the bunds and reduce maintenance costs. Economic feasibility analysis also showed that soil and water conservation practices become more economically viable when physical and agronomic practices are integrated.

  10. Climate Local Information over the Mediterranean to Respond User Needs

    NASA Astrophysics Data System (ADS)

    Ruti, P.

    2012-12-01

    CLIM-RUN aims at developing a protocol for applying new methodologies and improved modeling and downscaling tools for the provision of adequate climate information at regional to local scale that is relevant to and usable by different sectors of society (policymakers, industry, cities, etc.). Unlike current approaches, CLIM-RUN will develop a bottom-up protocol directly involving stakeholders early in the process, with the aim of identifying well-defined needs at the regional to local scale. The improved modeling and downscaling tools will then be used to respond optimally to these specific needs. The protocol is assessed by application to relevant case studies involving interdependent sectors, primarily tourism and energy, and natural hazards (wild fires) for representative target areas (mountainous regions, coastal areas, islands). The region of interest for the project is the Greater Mediterranean area, which is particularly important for two reasons. First, the Mediterranean is a recognized climate change hot-spot, i.e. a region particularly sensitive and vulnerable to global warming. Second, while a number of countries in Central and Northern Europe already have well-developed climate service networks in place (e.g. the United Kingdom and Germany), no such network is available in the Mediterranean. CLIM-RUN is thus also intended to provide the seed for the formation of a Mediterranean basin-wide climate service network, which would eventually converge into a pan-European network. The general time horizon of interest for the project is the future period 2010-2050, a time horizon that encompasses the contributions of both inter-decadal variability and greenhouse-forced climate change. In particular, this time horizon places CLIM-RUN within the context of a new emerging area of research, decadal prediction, which provides strong potential for novel research.

  11. Impacts of Soil and Water Conservation Practices on Crop Yield, Run-off, Soil Loss and Nutrient Loss in Ethiopia: Review and Synthesis

    NASA Astrophysics Data System (ADS)

    Adimassu, Zenebe; Langan, Simon; Johnston, Robyn; Mekuria, Wolde; Amede, Tilahun

    2017-01-01

    Research results published regarding the impact of soil and water conservation practices in the highland areas of Ethiopia have been inconsistent and scattered. In this paper, a detailed review and synthesis was conducted to identify the impacts of soil and water conservation practices on crop yield, surface run-off, soil loss, nutrient loss, and economic viability, as well as to discuss the implications for an integrated approach and for ecosystem services. The review and synthesis showed that most physical soil and water conservation practices, such as soil bunds and stone bunds, were very effective in reducing run-off, soil erosion and nutrient depletion. Despite these positive impacts, the effect of physical soil and water conservation practices on crop yield was negative, mainly due to the reduction of the effective cultivable area by soil/stone bunds. In contrast, most agronomic soil and water conservation practices increase crop yield and reduce run-off and soil losses. This implies that integrating physical with agronomic soil and water conservation practices is essential to increase both provisioning and regulating ecosystem services. Additionally, effective use of the otherwise unutilized land occupied by bunds, by planting multipurpose grasses and trees on them, may offset the yield lost due to the reduction in planting area. If high-value grasses and trees can be grown on this land, farmers can harvest fodder for animals or fuel wood, both in scarce supply in Ethiopia. Growing these grasses and trees can also improve the stability of the bunds and reduce maintenance costs. Economic feasibility analysis also showed that soil and water conservation practices become more economically viable when physical and agronomic practices are integrated.

  12. Database usage and performance for the Fermilab Run II experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonham, D.; Box, D.; Gallas, E.

    2004-12-01

    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has represented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab, and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.

  13. The MPGD-based photon detectors for the upgrade of COMPASS RICH-1

    NASA Astrophysics Data System (ADS)

    Alexeev, M.; Azevedo, C. D. R.; Birsa, R.; Bradamante, F.; Bressan, A.; Büchele, M.; Chiosso, M.; Ciliberti, P.; Dalla Torre, S.; Dasgupta, S.; Denisov, O.; Finger, M.; Finger, M.; Fischer, H.; Gobbo, B.; Gregori, M.; Hamar, G.; Herrmann, F.; Levorato, S.; Maggiora, A.; Makke, A.; Martin, A.; Menon, G.; Steiger, K.; Novy, J.; Panzieri, D.; Pereira, F. A. B.; Santos, C. A.; Sbrizzai, G.; Schopferer, S.; Slunecka, M.; Steiger, L.; Sulc, M.; Tessarotto, F.; Veloso, J. F. C. A.

    2017-12-01

    The RICH-1 detector of the COMPASS experiment at the CERN SPS has undergone an important upgrade for the 2016 physics run. Four new photon detectors, based on Micro Pattern Gaseous Detector (MPGD) technology and covering a total active area larger than 1.2 m2, have replaced the previously used MWPC-based photon detectors. The upgrade answers the challenging efficiency and stability requirements of the new phase of the COMPASS spectrometer physics programme. The new detector architecture consists of a hybrid MPGD combination of two Thick Gas Electron Multipliers and a MicroMegas stage. Signals, extracted from the anode pads by capacitive coupling, are read out by analog front-end electronics based on the APV25 chip. The main aspects of the COMPASS RICH-1 photon detector upgrade are presented, focusing on detector design, engineering aspects, mass production, quality assessment, and the assembly challenges of the MPGD components. The status of the detector commissioning is also presented.

  14. Utility perspective on USEPA analytical methods program redirection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, B.; Davis, M.K.; Krasner, S.W.

    1996-11-01

    The Metropolitan Water District of Southern California (Metropolitan) is a public, municipal corporation, created by the State of California, which wholesales supplemental water through 27 member agencies (cities and water districts). Metropolitan serves nearly 16 million people in an area along the coastal plain of Southern California that covers approximately 5200 square miles. Water deliveries have averaged up to 2.5 million acre-feet per year. Metropolitan's Water Quality Laboratory (WQL) conducts compliance monitoring of its source and finished drinking waters for chemical and microbial constituents. The laboratory maintains certification for a large number and variety of analytical procedures. The WQL operates in a 17,000-square-foot facility with state-of-the-art analytical instrumentation. The staff consists of 40 professional chemists and microbiologists whose experience and expertise are extensive and often highly specialized. Staff turnover is very low, and the laboratory is consistently, efficiently, and expertly run.

  15. Reconfigurable intelligent sensors for health monitoring: a case study of pulse oximeter sensor.

    PubMed

    Jovanov, E; Milenkovic, A; Basham, S; Clark, D; Kelley, D

    2004-01-01

    Design of low-cost, miniature, lightweight, ultra low-power, intelligent sensors capable of customization and seamless integration into a body area network for health monitoring applications presents one of the most challenging tasks for system designers. To answer this challenge we propose a reconfigurable intelligent sensor platform featuring a low-power microcontroller, a low-power programmable logic device, a communication interface, and a signal conditioning circuit. The proposed solution promises a cost-effective, flexible platform that allows easy customization, run-time reconfiguration, and energy-efficient computation and communication. The development of a common platform for multiple physical sensors and a repository of both software procedures and soft intellectual property cores for hardware acceleration will increase reuse and alleviate costs of transition to a new generation of sensors. As a case study, we present an implementation of a reconfigurable pulse oximeter sensor.

  16. The study and design of tension controller

    NASA Astrophysics Data System (ADS)

    Jun, G.; Lamei, X.

    2018-02-01

    Tension control is a widely used technology in areas such as textiles, paper, and plastic films. In this article, the release and winding processes of a tension control system are analyzed, a mathematical model of the system is established, and a high-performance tension controller is designed. In the hardware design, an STM32F130 single-chip microcomputer is used as the control core, offering fast execution speed and rich peripherals. In the software design, the μC/OS-II operating system is introduced to improve the efficiency of the microcontroller, enhance the independence of each module, and make development and maintenance more convenient. Taper tension control is adopted in the winding part, which can effectively solve the problem of rolling shrinkage. The results show that the tension controller has a simple structure, easy operation, and stable performance.
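
    The taper tension control mentioned above can be sketched numerically. The article does not give its taper law, so the linear-taper formula and function name below are illustrative assumptions only:

```python
def taper_tension(t0, d0, d, taper):
    """Tension setpoint for a winding roll with linear taper.

    t0    : initial web tension (N) at the core diameter d0
    d0    : core (starting) roll diameter (m)
    d     : current roll diameter (m), d >= d0
    taper : taper ratio in [0, 1]; 0 keeps the tension constant,
            larger values reduce tension more as the roll builds up
    """
    if d < d0:
        raise ValueError("roll diameter cannot shrink below the core diameter")
    # Assumed linear taper law: lowering tension as the roll grows
    # keeps inner layers from being crushed (rolling shrinkage).
    return t0 * (1.0 - taper * (d - d0) / d)
```

A controller would feed the measured roll diameter into such a function each control cycle to obtain the current tension setpoint.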

  17. Automatic finger joint synovitis localization in ultrasound images

    NASA Astrophysics Data System (ADS)

    Nurzynska, Karolina; Smolka, Bogdan

    2016-04-01

    Long-lasting inflammation of the joints results, among other things, in many arthritic diseases. Left untreated, it may affect other organs and the patient's general health. Early detection and prompt medical treatment are therefore of great value. The patient's organs are scanned with high-frequency acoustic waves, which enable visualization of interior body structures in an ultrasound sonography (USG) image. Although the procedure is standardized, different projections produce a variety of possible images, which must be analyzed in a short period of time by a physician using medical atlases as guidance. This work introduces an efficient framework, based on a statistical approach to the finger joint USG image, that enables automatic localization of the skin and bone regions, which are then used to localize the finger joint synovitis area. The processing pipeline performs the task in real time and achieves high accuracy when compared to annotations prepared by an expert.

  18. Automated object-based classification of rain-induced landslides with VHR multispectral images in Madeira Island

    NASA Astrophysics Data System (ADS)

    Heleno, S.; Matias, M.; Pina, P.; Sousa, A. J.

    2015-09-01

    A method for semi-automatic landslide detection, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a Support Vector Machine classifier applied to a GeoEye-1 multispectral image, sensed 3 days after the major damaging landslide event that occurred on Madeira island (20 February 2010), together with a pre-event LIDAR Digital Elevation Model. The testing is developed in a 15 km2 study area, where 95% of the landslide scars are detected by this supervised approach. The classifier performs well in delineating the overall landslide area. In addition, fair results are achieved in separating the source from the run-out landslide areas, although on less-illuminated slopes this discrimination is less effective than on sunnier east-facing slopes.
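
    The classifier in this record is a Support Vector Machine trained on per-object features. As a generic illustration only (not the authors' pipeline, features, or kernel), a linear SVM can be trained with the Pegasos sub-gradient method:

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM with the Pegasos sub-gradient method.

    X : (n, d) feature matrix (e.g. per-object spectral/terrain features)
    y : (n,) labels in {-1, +1} (e.g. landslide vs. background)
    Returns the weight vector w; a bias can be folded in by appending
    a constant-1 column to X.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # decreasing step size
            if y[i] * X[i].dot(w) < 1.0:     # margin violated: hinge-loss step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                            # only shrink (regularization)
                w = (1 - eta * lam) * w
    return w
```

New objects are then classified by the sign of `X.dot(w)`; separating source from run-out areas would amount to a second such classifier or a multi-class extension.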

  19. The University and Manpower Educational Services: An Experimental and Demonstration Project.

    ERIC Educational Resources Information Center

    Williams, J. Earl

    The goal of the Manpower Educational Services Project at the University of Houston was, in the short run, to explore using a university's capability and position in the community to contribute to the understanding and functioning of manpower programs in its geographic area. In the long run, it was hoped that a permanent center could be established…

  20. Running into Trouble: Constructions of Danger and Risk in Girls' Access to Outdoor Space and Physical Activity

    ERIC Educational Resources Information Center

    Clark, Sheryl

    2015-01-01

    This paper considers girls' participation in running and other outdoor physical activities in their local areas in London, UK. The paper is concerned with the operation of risk discourses in and around this participation and looks at the way that such discourses impacted on girls' opportunities to take part in physical activities that required…

  1. Shape prior modeling using sparse representation and online dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N

    2012-01-01

    The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on the fly by approximating a shape instance (usually derived from appearance cues) with a sparse combination of shapes in a training repository. In theory, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. In practice, however, this strategy confronts two limitations. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances the repository contains, the lower the run-time efficiency of SSC. A compact and informative shape dictionary is therefore preferable to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming, and sometimes infeasible, to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts by constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes arrive, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
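
    The block-coordinate descent dictionary update can be sketched in the style of online dictionary learning. This is a generic sketch under assumed accumulated statistics A and B, not the authors' exact implementation:

```python
import numpy as np

def update_dictionary(D, A, B, n_iter=10):
    """One round of block-coordinate descent on the dictionary columns.

    D : (m, k) current dictionary (columns are atoms, e.g. shapes)
    A = sum_i alpha_i alpha_i^T  -- (k, k) accumulated code statistics
    B = sum_i x_i alpha_i^T      -- (m, k) accumulated data/code statistics
    Each column is updated in closed form and renormalised, so the
    dictionary absorbs new training samples without a full rebuild.
    """
    m, k = D.shape
    for _ in range(n_iter):
        for j in range(k):
            if A[j, j] < 1e-12:        # atom unused so far: leave it
                continue
            # exact coordinate update of column j, others held fixed
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            D[:, j] = u / max(1.0, np.linalg.norm(u))
    return D
```

When a new batch of shapes arrives, one would add its contribution to A and B and call `update_dictionary` again, rather than re-running K-SVD from scratch.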

  2. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is therefore essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive union of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. PLHS therefore preserves the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help minimize the total simulation time by running only the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
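
    The Latin hypercube property that PLHS preserves slice by slice can be sketched with a plain one-stage LHS; the slicing logic of full PLHS is more involved and is omitted here:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Draw an n-point Latin hypercube sample in [0, 1)^d.

    Each of the n equal-width strata in every dimension receives
    exactly one point -- the stratification property that PLHS
    maintains as it appends successive slices.
    """
    u = rng.random((n, d))                                   # jitter within strata
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + u) / n

rng = np.random.default_rng(42)
sample = latin_hypercube(8, 3, rng)
```

Flooring `sample * n` recovers, in every dimension, a permutation of the strata indices 0..n-1, confirming one point per stratum.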

  3. Efficient interface for online coupling of capillary electrophoresis with inductively coupled plasma-mass spectrometry and its application in simultaneous speciation analysis of arsenic and selenium.

    PubMed

    Liu, Lihong; Yun, Zhaojun; He, Bin; Jiang, Guibin

    2014-08-19

    A simple and highly efficient online system coupling capillary electrophoresis to inductively coupled plasma-mass spectrometry (CE-ICP-MS) for simultaneous separation and determination of arsenic and selenium compounds was developed. CE was coupled to an ICP-MS system by a sprayer with a novel direct-injection high-efficiency nebulizer (DIHEN) chamber as the interface. Using this interface, six arsenic species, including arsenite (As(III)), arsenate (As(V)), monomethylarsonic acid (MMA), dimethylarsinic acid (DMA), arsenobetaine (AsB), and arsenocholine (AsC), and five selenium species (sodium selenite (Se(IV)), sodium selenate (Se(VI)), selenocysteine (SeCys), selenomethionine (SeMet), and Se-methylselenocysteine (MeSeCys)) were baseline-separated and determined in a single run within 9 min under the optimized conditions. Minimum dead volume, a low and steady sheath-liquid flow, high nebulization efficiency, and high sample transport efficiency were obtained with this interface. Detection limits were in the range of 0.11-0.37 μg/L for the six arsenic compounds (determined as 75As at m/z 75) and 1.33-2.31 μg/L for the five selenium species (determined as 82Se at m/z 82). Repeatability, expressed as the relative standard deviation (RSD, n = 6) of both migration time and peak area, was better than 2.68% for arsenic compounds and 3.28% for selenium compounds, respectively. The proposed method was successfully applied to the determination of arsenic and selenium species in the certified reference material DORM-3 and in water, urine, and fish samples.

  4. MM5 simulations for air quality modeling: An application to a coastal area with complex terrain

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Mi; Princevac, Marko; Mitsutomi, Satoru; Cassmassi, Joe

    A series of modifications was implemented in an MM5 simulation in order to account for wind along the Santa Clarita valley, a north-south running valley north of Los Angeles. Because of the high mountain ranges to the north and east of the Los Angeles Air Basin, the sea breeze entering Los Angeles exits in two directions: one branch moves toward the eastern part of the basin and the other northward toward the Santa Clarita valley. The northward flow, however, had been neither examined thoroughly nor simulated successfully in previous studies. In the present study, we proposed four modifications to trigger the flow separation: (1) increasing drag over the ocean, (2) increasing soil moisture content, (3) selective observational nudging, and (4) one-way nesting for the innermost domain. The Control run overpredicted near-surface wind speed over the ocean and sensible heat flux in an urbanized area, which justifies the first and second modifications above. The Modified run improved near-surface temperature, sensible heat flux, and wind fields, including the southeasterly flow along the Santa Clarita valley. The improved MM5 wind field triggered transport to the Santa Clarita valley, generating a plume elongated from an urban center to the north, which did not exist in the MM5 Control run. In all, the modified MM5 fields yielded better agreement in both CO and O3 simulations, especially in the Santa Clarita area.

  5. Urban stormwater run-off promotes compression of saltmarshes by freshwater plants and mangrove forests.

    PubMed

    Geedicke, Ina; Oldeland, Jens; Leishman, Michelle R

    2018-05-08

    Subtropical and temperate coastal saltmarsh of Australia is listed as an endangered ecological community under the Commonwealth Environment Protection and Biodiversity Conservation Act (EPBC Act). Saltmarshes are under threat from sea level rise, landward migration of mangroves, and in urban regions from habitat loss, input of litter, nutrients, and other contaminants. In urbanised catchments, saltmarsh areas receive nutrient-enriched and pollutant-contaminated run-off, such as heavy metals, through the stormwater system. This study aimed to investigate the impact of urban stormwater on saltmarsh and mangrove species composition and distribution. To test the effect of stormwater run-off in urbanised catchments on saltmarsh communities, we analysed the soil for pollutant elements, salinity and nutrient concentration and recorded vegetation composition at eight sites in the Sydney region, Australia. We found that elevated total nitrogen (>0.4 wt%) and reduced salinity of the soil downslope of stormwater outlets facilitates establishment of exotic plants and might promote migration of mangroves into saltmarshes, resulting in a squeezing effect on the distribution of saltmarsh vegetation. Saltmarsh cover was significantly lower below stormwater outlets and exotic plant cover increased significantly with sediment calcium concentrations above 8840 mg/kg, which are associated with stormwater run-off. However, this effect was found to be strongest in highly industrialised areas compared to residential areas. Understanding the impact of pollutants on coastal wetlands will improve management strategies for the conservation of this important endangered ecological community. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Chronic wheel running affects cocaine-induced c-Fos expression in brain reward areas in rats.

    PubMed

    Zlebnik, Natalie E; Hedges, Valerie L; Carroll, Marilyn E; Meisel, Robert L

    2014-03-15

    Emerging evidence from human and animal studies suggests that exercise is a highly effective treatment for drug addiction. However, most work has been done in behavioral models, and the effects of exercise on the neurobiological substrates of addiction have not been identified. Specifically, it is unknown whether prior exercise exposure alters neuronal activation of brain reward circuitry in response to drugs of abuse. To investigate this hypothesis, rats were given 21 days of daily access to voluntary wheel running in a locked or unlocked running wheel. Subsequently, they were challenged with a saline or cocaine (15 mg/kg, i.p.) injection and sacrificed for c-Fos immunohistochemistry. The c-Fos transcription factor is a measure of cellular activity and was used to quantify cocaine-induced activation of reward-processing areas of the brain: nucleus accumbens (NAc), caudate putamen (CPu), medial prefrontal cortex (mPFC), and orbitofrontal cortex (OFC). The mean fold change in cocaine-induced c-Fos cell counts relative to saline-induced c-Fos cell counts was significantly higher in exercising compared to control rats in the NAc core, dorsomedial and dorsolateral CPu, the prelimbic area, and the OFC, indicating differential cocaine-specific cellular activation of brain reward circuitry between exercising and control animals. These results suggest neurobiological mechanisms by which voluntary wheel running attenuates cocaine-motivated behaviors and provide support for exercise as a novel treatment for drug addiction. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Analysis of Three Compounds in Flos Farfarae by Capillary Electrophoresis with Large-Volume Sample Stacking

    PubMed Central

    Hao, Zeng-Yan; Li, Lu; Huang, Ya-yun

    2017-01-01

    The aim of this study was to develop a method combining online concentration with high-efficiency capillary electrophoresis separation to analyze and detect three compounds (rutin, hyperoside, and chlorogenic acid) in Flos Farfarae. In order to obtain good resolution and enrichment, several parameters were investigated: the choice of running buffer, the pH and concentration of the running buffer, the organic modifier, the temperature, and the separation voltage. The optimized conditions were as follows: a buffer of 40 mM NaH2PO4-40 mM borax-30% v/v methanol (pH 9.0); hydrodynamic sample injection of up to 4 s at 0.5 psi; and 20 kV applied voltage. A diode-array detector was used, with a detection wavelength of 364 nm. Based on peak area, substantial improvements in selectivity and sensitivity were observed, and about 14-, 26-, and 5-fold enrichment of rutin, hyperoside, and chlorogenic acid was achieved, respectively. This method was successfully applied to determine the three compounds in Flos Farfarae. The linear ranges of peak response versus concentration were 20-400 µg/mL, 16.5-330 µg/mL, and 25-500 µg/mL, respectively, with regression coefficients of 0.9998, 0.9999, and 0.9991. PMID:29056967

  8. Mixture experiment methods in the development and optimization of microemulsion formulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furlanetto, Sandra; Cirri, Marzia; Piepel, Gregory F.

    2011-06-25

    Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, improving their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly water-soluble hypoglycaemic agent) as a model drug. First, the region of stable microemulsion (ME) formation was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil, and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of the ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams; however, the mixture experiment approach required far less experimental effort. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., the one with the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1 v/v), 5% oil (Labrafac Hydro), and 17% aqueous (water). The stable region of MEs was identified using mixture experiment methods for the first time.
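
    Enumerating candidate blends for such a constrained mixture design can be sketched as follows. The defining property is that the components sum to 1; the step size and component bounds below are illustrative assumptions, not the constraints used in the glyburide study:

```python
import itertools
import numpy as np

def constrained_mixture_candidates(step, bounds):
    """Enumerate mixture-design candidate points on a simplex lattice.

    The components of every candidate sum to 1 (the defining constraint
    of a mixture experiment); `bounds` gives (lo, hi) for each component.
    """
    n = round(1 / step)
    pts = []
    # enumerate the first len(bounds)-1 components; the last is the remainder
    for combo in itertools.product(range(n + 1), repeat=len(bounds) - 1):
        if sum(combo) > n:
            continue
        frac = [c / n for c in combo] + [(n - sum(combo)) / n]
        if all(lo <= f <= hi for f, (lo, hi) in zip(frac, bounds)):
            pts.append(frac)
    return np.array(pts)

# aqueous, oil, surfactant/cosurfactant -- illustrative constraints only
candidates = constrained_mixture_candidates(
    0.05, [(0.05, 0.30), (0.05, 0.20), (0.60, 0.90)])
```

A design algorithm (e.g. a D-optimal selection) would then pick the 13 runs from such a candidate set; that selection step is omitted here.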

  9. Geohydrologic Investigations and Landscape Characteristics of Areas Contributing Water to Springs, the Current River, and Jacks Fork, Ozark National Scenic Riverways, Missouri

    USGS Publications Warehouse

    Mugel, Douglas N.; Richards, Joseph M.; Schumacher, John G.

    2009-01-01

    The Ozark National Scenic Riverways (ONSR) is a narrow corridor that stretches for approximately 134 miles along the Current River and Jacks Fork in southern Missouri. Most of the water flowing in the Current River and Jacks Fork is discharged to the rivers from springs within the ONSR, and most of the recharge area of these springs is outside the ONSR. This report describes geohydrologic investigations and landscape characteristics of areas contributing water to springs and the Current River and Jacks Fork in the ONSR. The potentiometric-surface map of the study area for 2000-07 shows that the groundwater divide extends beyond the surface-water divide in some places, notably along Logan Creek and the northeastern part of the study area, indicating interbasin transfer of groundwater between surface-water basins. A low hydraulic gradient occurs in much of the upland area west of the Current River associated with areas of high sinkhole density, which indicates the presence of a network of subsurface karst conduits. The results of a low base-flow seepage run indicate that most of the discharge in the Current River and Jacks Fork was from identified springs, and a smaller amount was from tributaries whose discharge probably originated as spring discharge, or from springs or diffuse groundwater discharge in the streambed. Results of a temperature profile conducted on an 85-mile reach of the Current River indicate that the lowest average temperatures were within or downstream from inflows of springs. A mass-balance on heat calculation of the discharge of Bass Rock Spring, a previously undescribed spring, resulted in an estimated discharge of 34.1 cubic feet per second (ft3/s), making it the sixth largest spring in the Current River Basin. The 13 springs in the study area for which recharge areas have been estimated accounted for 82 percent (867 ft3/s of 1,060 ft3/s) of the discharge of the Current River at Big Spring during the 2006 seepage run. 
Including discharge from other springs, the cumulative discharge from springs was over 90 percent of the river discharge at most of the spring locations, and was 92 percent at Big Spring and at the lower end of the ONSR. The discharge from the 1.9-mile long Pulltite Springs Complex measured in the 2006 seepage run was 88 ft3/s. Most of this (77 ft3/s) was from the first approximately 0.25 mi of the Pulltite Springs Complex. It has been estimated that the annual mean discharge from the Current River Springs Complex is 125 ft3/s, based on an apparent discharge of 50 ft3/s during a 1966 U.S. Geological Survey seepage run. However, a reinterpretation of the 1966 seepage run data shows that the discharge from the Current River Springs Complex instead was about 12.6 ft3/s, and the annual mean discharge was estimated to be 32 ft3/s, substantially less than 125 ft3/s. The 2006 seepage run showed a gain of only 12 ft3/s from the combined Round Spring and Current River Springs Complex from the mouth of Sinking Creek to 0.7 mi upstream from Root Hollow. The 2006 temperature profile measurements did not indicate any influx of spring discharge throughout the length of the Current River Springs Complex. The spring recharge areas with the largest number of identified sinkholes are Big Spring, Alley Spring, and Welch Spring. The spring recharge areas with the largest number of sinkholes per square mile of recharge area are Alley Spring, Blue Spring (Jacks Fork), Welch Spring, and Round Spring and the Current River Springs Complex. Using the currently known locations of losing streams, the Big Spring recharge area has the largest number of miles of losing stream, and the Bass Rock Spring recharge area has the largest number of miles of losing stream per unit recharge area. The spring recharge areas with the most open land and the least forested land per unit recharge area are Blue Spring (Jacks Fork), Welch Spring, Montauk Springs, and Alley Spring. 
The spring recharge areas with the least amount
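
    The mass-balance-on-heat estimate mentioned for Bass Rock Spring can be illustrated with a simple two-member temperature mixing calculation. The function name is assumed, and the numbers in the test are invented for illustration, not the report's measurements:

```python
def spring_discharge_from_heat_balance(q_up, t_up, t_down, t_spring):
    """Estimate spring inflow from a heat (temperature) mass balance.

    Mixing the upstream river flow with the spring inflow conserves heat:
        (q_up + q_spring) * t_down = q_up * t_up + q_spring * t_spring
    Solving for q_spring gives the expression returned below.
    q_* are discharges (e.g. ft3/s); t_* are water temperatures.
    """
    if abs(t_spring - t_down) < 1e-12:
        raise ValueError("spring and downstream temperatures must differ")
    return q_up * (t_down - t_up) / (t_spring - t_down)
```

The estimate only works where the spring water is measurably colder (or warmer) than the river, which is why the temperature profile that located the coolest reaches is a natural companion measurement.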

  10. Fox baiting against Echinococcus multilocularis: contrasted achievements among two medium size cities.

    PubMed

    Comte, S; Raton, V; Raoul, F; Hegglin, D; Giraudoux, P; Deplazes, P; Favier, S; Gottschek, D; Umhang, G; Boué, F; Combes, B

    2013-08-01

    In Europe, most cities are currently colonized by red foxes (Vulpes vulpes), which are considered the main definitive host of the zoonotic cestode Echinococcus multilocularis. The risk of transmission to humans is of particular concern where high fox populations overlap with high human populations. The distribution of baits containing praziquantel has successfully reduced the infection pressure in rural areas and in small plots within large cities. The purpose of this study was to assess its efficiency in two medium-size cities (fewer than 100,000 inhabitants) in areas with a high incidence of human alveolar echinococcosis. From August 2006 to March 2009, 14 praziquantel baiting campaigns were run in Annemasse and Pontarlier (Eastern France), each encompassing 33 km2 with a density of 40 baits/km2. Bait consumption appeared to be lower in strictly urban contexts than in suburban areas (78.9% vs. 93.4%) and lower in Annemasse than in Pontarlier (82.2% vs. 89.5%). During the study, the prevalence of E. multilocularis, assessed by EM-ELISA on fox faeces collected in the field in Annemasse, was lower within the treated area than in the rural control area. A "before/during" treatment comparison revealed a significant decrease in spring prevalence from 13.3% to 2.2%. No significant change in prevalence was detected in Pontarlier (stable prevalence: 9.1%), where contamination of the treated area followed the temporal trend observed in the control area. There, greater resilience of the parasite's life cycle, probably due to strong recontamination pressure from outside the treated area, may have counteracted the prophylactic treatment. These contrasting outcomes suggest that the frequency of fox anthelmintic treatment should be adapted to the local situation. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. GRA prospectus: optimizing design and management of protected areas

    USGS Publications Warehouse

    Bernknopf, Richard; Halsing, David

    2001-01-01

    Protected areas comprise one major type of global conservation effort, taking the form of parks, easements, or conservation concessions. Though protected areas are increasing in number and size throughout tropical ecosystems, there is no systematic method for optimally targeting specific local areas for protection, designing the protected area and monitoring it, or guiding follow-up actions to manage it or its surroundings over the long run. Without such a system, conservation projects often cost more than necessary and/or risk protecting ecosystems and biodiversity less efficiently than desired. Correcting these failures requires tools and strategies for improving the placement, design, and long-term management of protected areas. The objective of this project is to develop a set of spatially based analytical tools to improve the selection, design, and management of protected areas. In this project, several conservation concessions will be compared using an economic optimization technique. The forest land use portfolio model is an integrated assessment tool that measures investment in different land uses in a forest. The case studies of individual tropical ecosystems are developed as forest (land) use and preservation portfolios in a geographic information system (GIS). Conservation concessions involve a private organization purchasing development and resource access rights in a certain area and retiring them. Forests are placed into conservation, and those who would otherwise have benefited from extracting resources, or from selling the right to do so, are compensated. Concessions are legal agreements wherein the exact amount and nature of the compensation result from a negotiated agreement between an agent of the conservation community and the local community. Funds are placed in a trust fund, and annual payments are made to local communities and regional/national governments. 
The payments are made pending third-party verification that the forest expanse and quality have been maintained.

  12. Factors affecting running economy in trained distance runners.

    PubMed

    Saunders, Philo U; Pyne, David B; Telford, Richard D; Hawley, John A

    2004-01-01

    Running economy (RE) is typically defined as the energy demand for a given velocity of submaximal running, and is determined by measuring the steady-state consumption of oxygen (VO2) and the respiratory exchange ratio. Taking body mass (BM) into consideration, runners with good RE use less energy, and therefore less oxygen, than runners with poor RE at the same velocity. There is a strong association between RE and distance running performance, with RE being a better predictor of performance than maximal oxygen uptake (VO2max) in elite runners who have a similar VO2max. RE is traditionally measured by running on a treadmill in standard laboratory conditions, and, although this is not the same as overground running, it gives a good indication of how economical a runner is and how RE changes over time. In order to determine whether changes in RE are real or not, careful standardisation of footwear, time of test, and nutritional status is required to limit the typical error of measurement. Under controlled conditions, RE is a stable test capable of detecting relatively small changes elicited by training or other interventions. When tracking RE between or within groups it is important to account for BM. As VO2 during submaximal exercise does not, in general, increase linearly with BM, reporting RE with respect to the 0.75 power of BM has been recommended. A number of physiological and biomechanical factors appear to influence RE in highly trained or elite runners. These include metabolic adaptations within the muscle such as increased mitochondria and oxidative enzymes, the ability of the muscles to store and release elastic energy by increasing the stiffness of the muscles, and more efficient mechanics leading to less energy wasted on braking forces and excessive vertical oscillation. Interventions to improve RE are constantly sought after by athletes, coaches and sport scientists. 
Two interventions that have received recent widespread attention are strength training and altitude training. Strength training allows the muscles to utilise more elastic energy and reduce the amount of energy wasted in braking forces. Altitude exposure enhances discrete metabolic aspects of skeletal muscle, which facilitate more efficient use of oxygen. The importance of RE to successful distance running is well established, and future research should focus on identifying methods to improve RE. Interventions that are easily incorporated into an athlete's training are desirable.
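The recommendation above, reporting RE relative to the 0.75 power of body mass, can be sketched numerically. The function and figures below are illustrative only, not values from the review:

```python
def running_economy(vo2_ml_min: float, body_mass_kg: float, exponent: float = 0.75) -> float:
    """Allometrically scaled running economy: ml O2 per kg^exponent per minute."""
    return vo2_ml_min / body_mass_kg ** exponent

# Two hypothetical runners with identical absolute VO2 at the same submaximal speed:
light = running_economy(3000, 60.0)  # ml/min per kg^0.75
heavy = running_economy(3000, 80.0)
# After allometric scaling, the heavier runner shows the lower (better) oxygen cost.
```

Comparing raw VO2 per kilogram would instead penalise heavier runners, which is the motivation for the 0.75-power convention mentioned in the abstract.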

  13. Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Hongzhang; Williams, Samuel; Jong, Wibe de

In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in "native" mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI + OpenMP hybrid implementations attain up to 65x better performance for the triples part of the CCSD(T) due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6x better performance on Fock matrix constructions when compared with the best MPI implementations running multiple processes per card.

  14. Designing coastal conservation to deliver ecosystem and human well-being benefits.

    PubMed

    Annis, Gust M; Pearsall, Douglas R; Kahl, Katherine J; Washburn, Erika L; May, Christopher A; Franks Taylor, Rachael; Cole, James B; Ewert, David N; Game, Edward T; Doran, Patrick J

    2017-01-01

    Conservation scientists increasingly recognize that incorporating human values into conservation planning increases the chances for success by garnering broader project acceptance. However, methods for defining quantitative targets for the spatial representation of human well-being priorities are less developed. In this study we employ an approach for identifying regionally important human values and establishing specific spatial targets for their representation based on stakeholder outreach. Our primary objective was to develop a spatially-explicit conservation plan that identifies the most efficient locations for conservation actions to meet ecological goals while sustaining or enhancing human well-being values within the coastal and nearshore areas of the western Lake Erie basin (WLEB). We conducted an optimization analysis using 26 features representing ecological and human well-being priorities (13 of each), and included seven cost layers. The influence that including human well-being had on project results was tested by running five scenarios and setting targets for human well-being at different levels in each scenario. The most important areas for conservation to achieve multiple goals are clustered along the coast, reflecting a concentration of existing or potentially restorable coastal wetlands, coastal landbird stopover habitat and terrestrial biodiversity, as well as important recreational activities. Inland important areas tended to cluster around trails and high quality inland landbird stopover habitat. Most concentrated areas of importance also are centered on lands that are already conserved, reflecting the lower costs and higher benefits of enlarging these conserved areas rather than conserving isolated, dispersed areas. Including human well-being features in the analysis only influenced the solution at the highest target levels.

  15. Designing coastal conservation to deliver ecosystem and human well-being benefits

    PubMed Central

    Pearsall, Douglas R.; Kahl, Katherine J.; Washburn, Erika L.; May, Christopher A.; Franks Taylor, Rachael; Cole, James B.; Ewert, David N.; Game, Edward T.; Doran, Patrick J.

    2017-01-01

    Conservation scientists increasingly recognize that incorporating human values into conservation planning increases the chances for success by garnering broader project acceptance. However, methods for defining quantitative targets for the spatial representation of human well-being priorities are less developed. In this study we employ an approach for identifying regionally important human values and establishing specific spatial targets for their representation based on stakeholder outreach. Our primary objective was to develop a spatially-explicit conservation plan that identifies the most efficient locations for conservation actions to meet ecological goals while sustaining or enhancing human well-being values within the coastal and nearshore areas of the western Lake Erie basin (WLEB). We conducted an optimization analysis using 26 features representing ecological and human well-being priorities (13 of each), and included seven cost layers. The influence that including human well-being had on project results was tested by running five scenarios and setting targets for human well-being at different levels in each scenario. The most important areas for conservation to achieve multiple goals are clustered along the coast, reflecting a concentration of existing or potentially restorable coastal wetlands, coastal landbird stopover habitat and terrestrial biodiversity, as well as important recreational activities. Inland important areas tended to cluster around trails and high quality inland landbird stopover habitat. Most concentrated areas of importance also are centered on lands that are already conserved, reflecting the lower costs and higher benefits of enlarging these conserved areas rather than conserving isolated, dispersed areas. Including human well-being features in the analysis only influenced the solution at the highest target levels. PMID:28241018

  16. Organizational capacity needs of consumer-run organizations.

    PubMed

    Wituk, Scott; Vu, Chi C; Brown, Louis D; Meissen, Greg

    2008-05-01

    Consumer-run organizations (CROs) are self-help oriented organizations that are run entirely by consumers (people who use or have used mental health services). The current study utilizes an organizational capacity framework to explore the needs of operating CROs. This framework includes four core capacity areas: (1) technical, (2) management, (3) leadership, and (4) adaptive capacity. An analysis reveals that the greatest organizational needs are related to technical and management capacities. Implications are discussed in terms of strategies and activities that CRO leaders and mental health professionals and administrators can use to strengthen the organizational capacity of CROs in their community.

  17. Water and processes of degradation in the Martian landscape

    NASA Technical Reports Server (NTRS)

    Milton, D. J.

    1973-01-01

    It is shown that erosion has been active on Mars so that many of the surface landforms are products of degradation. Unlike earth, erosion has not been a universal process, but one areally restricted and intermittently active so that a landscape is the product of one or two cycles of erosion and large areas of essentially undisturbed primitive terrain; running water has been the principal agent of degradation. Many features on Mars are most easily explained by assuming running surface water at some time in the past; for a few features, running water is the only possible explanation.

  18. Biomechanics and running economy.

    PubMed

    Anderson, T

    1996-08-01

    Running economy, which has traditionally been measured as the oxygen cost of running at a given velocity, has been accepted as the physiological criterion for 'efficient' performance and has been identified as a critical element of overall distance running performance. There is an intuitive link between running mechanics and energy cost of running, but research to date has not established a clear mechanical profile of an economic runner. It appears that through training, individuals are able to integrate and accommodate their own unique combination of dimensions and mechanical characteristics so that they arrive at a running motion which is most economical for them. Information in the literature suggests that biomechanical factors are likely to contribute to better economy in any runner. A variety of anthropometric dimensions could influence biomechanical effectiveness. These include: average or slightly smaller than average height for men and slightly greater than average height for women; high ponderal index and ectomorphic or ectomesomorphic physique; low percentage body fat; leg morphology which distributes mass closer to the hip joint; narrow pelvis and smaller than average feet. Gait patterns, kinematics and the kinetics of running may also be related to running economy. These factors include: stride length which is freely chosen over considerable running time; low vertical oscillation of body centre of mass; more acute knee angle during swing; less range of motion but greater angular velocity of plantar flexion during toe-off; arm motion of smaller amplitude; low peak ground reaction forces; faster rotation of shoulders in the transverse plane; greater angular excursion of the hips and shoulders about the polar axis in the transverse plane; and effective exploitation of stored elastic energy. 
Other factors which may improve running economy are: lightweight but well-cushioned shoes; more comprehensive training history; and the running surface of intermediate compliance. At the developmental level, this information might be useful in identifying athletes with favourable characteristics for economical distance running. At higher levels of competition, it is likely that 'natural selection' tends to eliminate athletes who failed to either inherit or develop characteristics which favour economy.

  19. Sprint Interval Training Induces A Sexual Dimorphism but does not Improve Peak Bone Mass in Young and Healthy Mice

    PubMed Central

    Koenen, Kathrin; Knepper, Isabell; Klodt, Madlen; Osterberg, Anja; Stratos, Ioannis; Mittlmeier, Thomas; Histing, Tina; Menger, Michael D.; Vollmar, Brigitte; Bruhn, Sven; Müller-Hilke, Brigitte

    2017-01-01

    Elevated peak bone mass in early adulthood reduces the risk for osteoporotic fractures at old age. As sports participation has been correlated with elevated peak bone masses, we aimed to establish a training program that would efficiently stimulate bone accrual in healthy young mice. We combined voluntary treadmill running with sprint interval training modalities that were tailored to the individual performance limits and were of either high or intermediate intensity. Adolescent male and female STR/ort mice underwent 8 weeks of training before the hind legs were analyzed for cortical and trabecular bone parameters and biomechanical strength. Sprint interval training led to increased running speeds, confirming an efficient training. However, males and females responded differently. The males improved their running speeds in response to intermediate intensities only and accrued cortical bone at the expense of mechanical strength. High training intensities induced a significant loss of trabecular bone. The female bones showed neither adverse nor beneficial effects in response to either training intensities. Speculations about the failure to improve geometric alongside mechanical bone properties include the possibility that our training lacked sufficient axial loading, that high cardio-vascular strains adversely affect bone growth and that there are physiological limits to bone accrual. PMID:28303909

  20. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, David H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x1, x2, ..., xn) be a vector of real numbers. x is said to possess an integer relation if there exist integers ai, not all zero, such that a1x1 + a2x2 + ... + anxn = 0. Beginning in 1977 several algorithms (with proofs) have been discovered to recover the ai given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
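PSLQ itself rests on the matrix reduction described in the abstract; purely to illustrate what an integer relation *is* (the definition, not the PSLQ algorithm), here is a naive bounded search. It is exponential in the vector length, which is exactly the inefficiency PSLQ's polynomial iteration bound avoids:

```python
from itertools import product

def find_integer_relation(x, max_coeff=5, tol=1e-9):
    """Brute-force search for integers a (not all zero) with sum(a_i * x_i) ~ 0.

    Exponential in len(x) -- for illustration of the definition only.
    """
    rng = range(-max_coeff, max_coeff + 1)
    for a in product(rng, repeat=len(x)):
        if any(a) and abs(sum(ai * xi for ai, xi in zip(a, x))) < tol:
            return a
    return None

# The golden ratio satisfies phi**2 = phi + 1, so (1, phi, phi**2) has the
# integer relation 1*1 + 1*phi - 1*phi**2 = 0 (up to scalar multiples).
phi = (1 + 5 ** 0.5) / 2
relation = find_integer_relation([1.0, phi, phi * phi])
```

In floating point the relation only holds to roundoff, hence the tolerance; real integer relation detection needs the high-precision arithmetic the abstract discusses.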

  1. A submerged tubular ceramic membrane bioreactor for high strength wastewater treatment.

    PubMed

    Sun, D D; Zeng, J L; Tay, J H

    2003-01-01

    A 4 L submerged tubular ceramic membrane bioreactor (MBR) was applied in laboratory scale to treat 2,400 mg-COD/L high strength wastewater. A prolonged sludge retention time (SRT) of 200 days, in contrast to the conventional SRT of 5 to 15 days, was explored in this study, aiming to reduce substantially the amount of disposed sludge. The MBR system was operated for a period of 142 days in four runs, differentiated by specific oxygen utilization rate (SOUR) and hydraulic retention time (HRT). It was found that the MBR system produced more than 99% suspended solids reduction. Mixed liquor suspended solids (MLSS) was found to be inversely proportional to HRT, and in general higher than the value from a conventional wastewater treatment plant. A chemical oxygen demand (COD) removal efficiency as high as 98% was achieved in Run 1, when SOUR was in the range of 100-200 mg-O/g-MLVSS/hr. Unexpectedly, the COD removal efficiency in Runs 2 to 4 was higher than 92%, on average, where higher HRT and abnormally low SOUR of 20-30 mg-O/g-MLVSS/hr prevailed. It was noted that the ceramic membrane presented significant soluble nutrient rejection when the microbial metabolism of the biological treatment broke down.
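The removal efficiencies quoted are presumably the standard influent/effluent ratio; a minimal sketch under that assumption, using the reported 2,400 mg-COD/L influent and an illustrative effluent value (not a measurement from the study):

```python
def removal_efficiency(influent_mg_l: float, effluent_mg_l: float) -> float:
    """Percent removal: (influent - effluent) / influent * 100."""
    return (influent_mg_l - effluent_mg_l) / influent_mg_l * 100.0

# A 98% COD removal of 2,400 mg-COD/L influent implies roughly 48 mg-COD/L
# remaining in the permeate (illustrative back-calculation):
eff = removal_efficiency(2400.0, 48.0)
```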

  2. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, the cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared Memory based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on GPU is feasible and efficient. Our results also indicate that the new development of GPU architecture benefits the fluid dynamics computing significantly.

  3. Determination of cocaine in postmortem human liver exposed to overdose. Application of an innovative and efficient extraction/clean up procedure and gas chromatography-mass spectrometry analysis.

    PubMed

    Magalhães, Elisângela Jaqueline; Ribeiro de Queiroz, Maria Eliana Lopes; Penido, Marcus Luiz de Oliveira; Paiva, Marco Antônio Ribeiro; Teodoro, Janaína Aparecida Reis; Augusti, Rodinei; Nascentes, Clésia Cristina

    2013-09-27

    A simple and efficient method was developed for the determination of cocaine in post-mortem samples of human liver via solid-liquid extraction with low temperature partitioning (SLE-LTP) and analysis by gas chromatography coupled to mass spectrometry (GC-MS). The extraction procedure was optimized by evaluating the influence of the following variables: pH of the extract, volume and composition of the extractor solvent, and addition of a sorbent material (PSA: primary-secondary amine) and NaCl to clean up and increase the ionic strength of the extract. A bovine liver sample that was free of cocaine was used as a blank for the optimization of the SLE-LTP extraction procedure. The highest recovery was obtained when crushed bovine liver (2 g) was treated with 2 mL of ultrapure water plus 8 mL of acetonitrile at physiological pH (7.4). The results also indicated no need for using PSA and NaCl. The complete analytical procedure was validated for the following figures of merit: selectivity, lower limit of quantification (LLOQ), calibration curve, recovery, precision and accuracy (for within-run and between-run experiments), matrix effect, dilution integrity and stability. The within-run and between-run precision (at four levels) varied from 2.1% to 9.4% and from 4.0% to 17.0%, respectively. A maximum deviation of 11.62% for the within-run and between-run accuracies in relation to the nominal concentrations was observed. Moreover, the LLOQ value for cocaine was 50.0 ng/g, whereas no significant effects were noticed in the assays of dilution integrity and stability. To assess its overall performance, the optimized method was applied to the analysis of eight human liver samples collected from individuals who died due to the abusive consumption of cocaine. Due to the existence of a significant matrix effect, a blank human liver was used to construct a matrix-matched analytical curve. The concentrations of cocaine found in these samples ranged from 333.5 to 5969 ng/g.
Copyright © 2013 Elsevier B.V. All rights reserved.
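Within-run and between-run precision are conventionally reported as coefficients of variation (CV%); a pure-Python sketch of that common convention. The replicate values below are invented for illustration, not the study's data:

```python
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation: sample standard deviation as a percent of the mean."""
    return stdev(values) / mean(values) * 100.0

# Replicates of one QC sample within a single analytical run (ng/g, invented):
within_run = [48.1, 50.3, 49.6, 51.0, 48.8]
# Mean QC results from five separate runs on different days (ng/g, invented):
between_run = [49.5, 52.0, 47.8, 50.9, 46.7]

within_cv = cv_percent(within_run)    # low single digits, like the study's 2.1-9.4% range
between_cv = cv_percent(between_run)  # typically larger, as in the 4.0-17.0% range
```

Between-run variability includes day-to-day effects (calibration, operator, instrument drift) on top of within-run scatter, which is why the reported between-run range is wider.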

  4. Mechanisms for regulating step length while running towards and over an obstacle.

    PubMed

    Larsen, Roxanne J; Jackson, William H; Schmitt, Daniel

    2016-10-01

    The ability to run across uneven terrain with continuous stable movement is critical to the safety and efficiency of a runner. Successful step-to-step stabilization while running may be mediated by minor adjustments to a few key parameters (e.g., leg stiffness, step length, foot strike pattern). However, it is not known to what degree runners in relatively natural settings (e.g., trails, paved road, curbs) use the same strategies across multiple steps. This study investigates how three readily measurable running parameters - step length, foot placement, and foot strike pattern - are adjusted in response to encountering a typical urban obstacle - a sidewalk curb. Thirteen subjects were video-recorded as they ran at self-selected slow and fast paces. Runners targeted a specific distance before the curb for foot placement, and lengthened their step over the curb (p<0.0001) regardless of where the step over the curb was initiated. These strategies of adaptive locomotion disrupt step cycles temporarily, and may increase locomotor cost and muscle loading, but in the end assure dynamic stability and minimize the risk of injury over the duration of a run. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. 33 CFR 3.70-1 - Fourteenth district.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... GUARD AREAS, DISTRICTS, SECTORS, MARINE INSPECTION ZONES, AND CAPTAIN OF THE PORT ZONES Fourteenth Coast... 5° S., 110° W.; the ocean area west and south of a line running from position 51° N., 158° E. to...

  6. The ISOLDE control system

    NASA Astrophysics Data System (ADS)

    Deloose, I.; Pace, A.

    1994-12-01

    The two CERN isotope separators named ISOLDE have been running on the new Personal Computer (PC) based control system since April 1992. The new architecture that makes heavy use of the commercial software and hardware of the PC market has been implemented on the 1700 geographically distributed control channels of the two separators and their experimental area. Eleven MSDOS Intel-based PCs with approximately 80 acquisition and control boards are used to access the equipment and are controlled from three PCs running Microsoft Windows used as consoles through a Novell Local Area Network. This paper describes the interesting solutions found and discusses the reduced programming workload and costs that have been obtained.

  7. High Efficiency Variable Speed Versatile Power Air Conditioning System

    DTIC Science & Technology

    2013-08-08

    Design concept applicable for wide range of HVAC and refrigeration systems • One TXV size can be used for a wide range of cooling capacity...versatility, can run from AC and DC sources • Cooling load adaptive, variable speed • Fully operable up to 140 degrees Fahrenheit

  8. Free-electron laser simulations on the MPP

    NASA Technical Reports Server (NTRS)

    Vonlaven, Scott A.; Liebrock, Lorie M.

    1987-01-01

    Free electron lasers (FELs) are of interest because they provide high power, high efficiency, and broad tunability. FEL simulations can make efficient use of computers of the Massively Parallel Processor (MPP) class because most of the processing consists of applying a simple equation to a set of identical particles. A test version of the KMS Fusion FEL simulation, which resides mainly in the MPP's host computer and only partially in the MPP, has run successfully.

  9. Evaluation of the Efficiency of Liquid Cooling Garments using a Thermal Manikin

    DTIC Science & Technology

    2005-05-01

    temperatures. The software also calculates thermal resistances and evaporative resistances. TM tests were run dry (i.e. no sweating) and wet (i.e...

  10. Streaming fragment assignment for real-time analysis of sequencing experiments

    PubMed Central

    Roberts, Adam; Pachter, Lior

    2013-01-01

    We present eXpress, a software package for highly efficient probabilistic assignment of ambiguously mapping sequenced fragments. eXpress uses a streaming algorithm with linear run time and constant memory use. It can determine abundances of sequenced molecules in real time, and can be applied to ChIP-seq, metagenomics and other large-scale sequencing data. We demonstrate its use on RNA-seq data, showing greater efficiency than other quantification methods. PMID:23160280
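The streaming structure described here, one pass over the fragments with memory independent of their number, can be caricatured as an online proportional assignment: each ambiguous fragment is split across its candidate transcripts in proportion to the current abundance estimates. This toy sketch illustrates only that streaming structure, not eXpress's actual probabilistic model (which also weights candidates by fragment likelihoods); all names are invented:

```python
from collections import defaultdict

def stream_assign(fragments, transcripts):
    """One online pass: memory is constant in the number of fragments."""
    counts = defaultdict(lambda: 1.0)       # pseudocount prior per transcript
    for candidates in fragments:            # candidates: transcripts a fragment maps to
        total = sum(counts[t] for t in candidates)
        for t in candidates:
            counts[t] += counts[t] / total  # fractional assignment, updated as we stream
    norm = sum(counts[t] for t in transcripts)
    return {t: counts[t] / norm for t in transcripts}

# Four fragments: two map uniquely to "A", two ambiguously to "A" or "B".
abundances = stream_assign([["A"], ["A", "B"], ["A"], ["A", "B"]], ["A", "B"])
# The unique fragments pull the ambiguous mass toward "A".
```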

  11. The worldwide costs of marine protected areas

    PubMed Central

    Balmford, Andrew; Gravestock, Pippa; Hockley, Neal; McClean, Colin J.; Roberts, Callum M.

    2004-01-01

    Declines in marine harvests, wildlife, and habitats have prompted calls at both the 2002 World Summit on Sustainable Development and the 2003 World Parks Congress for the establishment of a global system of marine protected areas (MPAs). MPAs that restrict fishing and other human activities conserve habitats and populations and, by exporting biomass, may sustain or increase yields of nearby fisheries. Here we provide an estimate of the costs of a global MPA network, based on a survey of the running costs of 83 MPAs worldwide. Annual running costs per unit area spanned six orders of magnitude, and were higher in MPAs that were smaller, closer to coasts, and in high-cost, developed countries. Models extrapolating these findings suggest that a global MPA network meeting the World Parks Congress target of conserving 20–30% of the world's seas might cost between $5 billion and $19 billion annually to run and would probably create around one million jobs. Although substantial, gross network costs are less than current government expenditures on harmful subsidies to industrial fisheries. They also ignore potential private gains from improved fisheries and tourism and are dwarfed by likely social gains from increasing the sustainability of fisheries and securing vital ecosystem services. PMID:15205483
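The finding that per-area running costs fall as MPA size grows suggests a log-log cost model of the kind such extrapolations commonly use; a least-squares sketch on invented, survey-like data (not the paper's dataset or its fitted model):

```python
import math

def loglog_fit(areas_km2, annual_costs_usd):
    """Least-squares fit of log10(cost) = a + b * log10(area)."""
    xs = [math.log10(v) for v in areas_km2]
    ys = [math.log10(v) for v in annual_costs_usd]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Invented data: total cost rises with area, but cost per km^2 falls.
areas = [1, 10, 100, 1_000, 10_000]
costs = [50_000, 200_000, 700_000, 2_500_000, 9_000_000]
a, b = loglog_fit(areas, costs)  # b < 1 encodes the economy of scale
```

An exponent b below 1 means total cost grows sublinearly with area, reproducing the abstract's observation that smaller MPAs are more expensive per unit area.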

  12. Microbial Air Quality and Bacterial Surface Contamination in Ambulances During Patient Services

    PubMed Central

    Luksamijarulkul, Pipat; Pipitsangjan, Sirikun

    2015-01-01

    Objectives We sought to assess microbial air quality and bacterial surface contamination on medical instruments and the surrounding areas among 30 ambulance runs during service. Methods We performed a cross-sectional study of 106 air samples collected from 30 ambulances before patient services and 212 air samples collected during patient services to assess the bacterial and fungal counts at the two time points. Additionally, 226 surface swab samples were collected from medical instrument surfaces and the surrounding areas before and after ambulance runs. Groups or genus of isolated bacteria and fungi were preliminarily identified by Gram’s stain and lactophenol cotton blue. Data were analyzed using descriptive statistics, t-test, and Pearson’s correlation coefficient with a p-value of less than 0.050 considered significant. Results The mean and standard deviation of bacterial and fungal counts at the start of ambulance runs were 318±485cfu/m3 and 522±581cfu/m3, respectively. Bacterial counts during patient services were 468±607cfu/m3 and fungal counts were 656±612cfu/m3. Mean bacterial and fungal counts during patient services were significantly higher than those at the start of ambulance runs, p=0.005 and p=0.030, respectively. For surface contamination, the overall bacterial counts before and after patient services were 0.8±0.7cfu/cm2 and 1.3±1.1cfu/cm2, respectively (p<0.001). The predominant isolated bacteria and fungi were Staphylococcus spp. and Aspergillus spp., respectively. Additionally, there was a significantly positive correlation between bacterial (r=0.3, p<0.010) and fungal counts (r=0.2, p=0.020) in air samples and bacterial counts on medical instruments and allocated areas. Conclusions This study revealed high microbial contamination (bacterial and fungal) in ambulance air during services and higher bacterial contamination on medical instrument surfaces and allocated areas after ambulance services compared to the start of ambulance runs. 
Additionally, bacterial and fungal counts in ambulance air showed a significantly positive correlation with the bacterial surface contamination on medical instruments and allocated areas. Further studies should be conducted to determine the optimal intervention to reduce microbial contamination in the ambulance environment. PMID:25960835
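The reported r values are Pearson correlation coefficients; a self-contained sketch of the computation on invented paired counts (not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented pairs: airborne bacterial counts (cfu/m3) vs. surface counts (cfu/cm2):
air = [210, 320, 480, 390, 550, 610]
surface = [0.6, 0.9, 1.2, 1.0, 1.5, 1.4]
r = pearson_r(air, surface)  # positive, consistent in sign with the study's r = 0.2-0.3
```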

  13. VIEWDEX: an efficient and easy-to-use software for observer performance studies.

    PubMed

    Håkansson, Markus; Svensson, Sune; Zachrisson, Sara; Svalkvist, Angelica; Båth, Magnus; Månsson, Lars Gunnar

    2010-01-01

    The development of investigation techniques, image processing, workstation monitors, analysing tools etc. within the field of radiology is vast, and the need for efficient tools in the evaluation and optimisation process of image and investigation quality is important. ViewDEX (Viewer for Digital Evaluation of X-ray images) is an image viewer and task manager suitable for research and optimisation tasks in medical imaging. ViewDEX is DICOM compatible and the features of the interface (tasks, image handling and functionality) are general and flexible. The configuration of a study and output (for example, answers given) can be edited in any text editor. ViewDEX is developed in Java and can run from any disc area connected to a computer. It is free to use for non-commercial purposes and can be downloaded from http://www.vgregion.se/sas/viewdex. In the present work, an evaluation of the efficiency of ViewDEX for receiver operating characteristic (ROC) studies, free-response ROC (FROC) studies and visual grading (VG) studies was conducted. For VG studies, the total scoring rate was dependent on the number of criteria per case. A scoring rate of approximately 150 cases/h can be expected for a typical VG study using single images and five anatomical criteria. For ROC and FROC studies using clinical images, the scoring rate was approximately 100 cases/h using single images and approximately 25 cases/h using image stacks (approximately 50 images/case). In conclusion, ViewDEX is an efficient and easy-to-use software for observer performance studies.

  14. Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2011-01-01

    Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1% with our scheduler, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.
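The core requirement, always dispatching the virtual core with the smallest virtual clock, can be sketched with a priority queue. This illustrates simulation time-ordered dispatch only, not the authors' hypervisor-level scheduler; all names and numbers below are invented:

```python
import heapq

def run_time_ordered(vcores, quantum, horizon):
    """Dispatch the virtual core with the smallest virtual clock first.

    vcores: {name: rate}, where rate scales how much virtual time one real
    quantum advances on that core (a slower core burns virtual time faster
    per unit of work, so it is dispatched less often).
    Returns the (virtual_time, core) dispatch order up to the horizon.
    """
    heap = [(0.0, name) for name in sorted(vcores)]
    heapq.heapify(heap)
    order = []
    while heap:
        vtime, name = heapq.heappop(heap)
        if vtime >= horizon:
            continue                      # core done; drop it from the queue
        order.append((vtime, name))
        heapq.heappush(heap, (vtime + quantum * vcores[name], name))
    return order

# Two fast virtual cores and one slow one; dispatches never go backward in
# virtual time, which is the time-ordering property the paper requires.
order = run_time_ordered({"vc0": 1.0, "vc1": 1.0, "vc2": 2.0}, quantum=1.0, horizon=4.0)
```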

  15. Changes in Plantar Loading Based on Shoe Type and Sex During a Jump-Landing Task

    PubMed Central

    DeBiasio, Justin C.; Russell, Mary E.; Butler, Robert J.; Nunley, James A.; Queen, Robin M.

    2013-01-01

    Context: Metatarsal stress fractures are common in cleated-sport athletes. Previous authors have shown that plantar loading varies with footwear, sex, and the athletic task. Objective: To examine the effects of shoe type and sex on plantar loading in the medial midfoot (MMF), lateral midfoot (LMF), medial forefoot (MFF), middle forefoot (MidFF), and lateral forefoot (LFF) during a jump-landing task. Design: Crossover study. Setting: Laboratory. Patients or Other Participants: Twenty-seven recreational athletes (14 men, 13 women) with no history of lower extremity injury in the last 6 months and no history of foot or ankle surgery. Main Outcome Measure(s): The athletes completed 7 jumping trials while wearing bladed-cleat, turf-cleat, and running shoes. Maximum force, contact area, contact time, and the force-time integral were analyzed in each foot region. We calculated 2 × 3 analyses of variance (α = .05) to identify shoe-condition and sex differences. Results: We found no shoe × sex interactions, but the MMF, LMF, MFF, and LFF force-time integrals were greater in men (P < .03). The MMF maximum force was less with the bladed-cleat shoes (P = .02). Total foot and MidFF maximum force was less with the running shoes (P < .01). The MFF and LFF maximum forces were different among all shoe conditions (P < .01). Total foot contact area was less in the bladed-cleat shoes (P = .01). The MMF contact area was greatest in the running shoes (P < .01). The LFF contact area was less in the running shoes (P = .03). The MFF and LFF force-time integrals were greater with the bladed-cleat shoes (P < .01). The MidFF force-time integral was less in the running shoes (P < .01). Conclusions: Independent of shoe, men and women loaded the foot differently during a jump landing. The bladed cleat increased forefoot loading, which may increase the risk for forefoot injury. 
The type of shoe should be considered when choosing footwear for athletes returning to activity after metatarsal stress fractures. PMID:24067149

  16. An Evaluation of the Predictability of Austral Summer Season Precipitation over South America.

    NASA Astrophysics Data System (ADS)

    Misra, Vasubandhu

    2004-03-01

    In this study, the predictability of austral summer seasonal precipitation over South America is investigated using a 12-yr set of 3.5-month range (seasonal) and 17-yr range (continuous multiannual) five-member ensemble integrations of the Center for Ocean-Land-Atmosphere Studies (COLA) atmospheric general circulation model (AGCM). These integrations were performed with prescribed observed sea surface temperature (SST); the skill attained therefore represents an estimate of the upper bound of the skill achievable by the COLA AGCM with predicted SST. The seasonal runs outperform the multiannual integrations in both deterministic and probabilistic skill. The simulation of the January-February-March (JFM) seasonal climatology of precipitation is vastly superior in the seasonal runs, except over the Nordeste region, where the multiannual runs show a marginal improvement. The teleconnection of the ensemble-mean JFM precipitation over tropical South America with global contemporaneous observed sea surface temperature in the seasonal runs conforms more closely to observations than in the multiannual runs. Both sets of runs clearly beat persistence in predicting the interannual precipitation anomalies over the Amazon River basin, Nordeste, the South Atlantic convergence zone, and subtropical South America. However, both types of runs display poorer simulations over the subtropical regions than over the tropical areas of South America. The examination of the probabilistic skill of precipitation supports the conclusion from the deterministic skill analysis that the seasonal runs yield superior simulations to the multiannual runs.

  17. Large Eddy Simulation of a Wind Turbine Airfoil at High Freestream-Flow Angle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-04-13

    A simulation of the airflow over a section of a wind turbine blade, run on the supercomputer Mira at the Argonne Leadership Computing Facility. Simulations like these help identify ways to make turbine blades more efficient.

  18. Rewarding success

    NASA Astrophysics Data System (ADS)

    captainbonkers

    2015-01-01

    In reply to the news article "LED firm rejects Nobel laureate's olive branch" (December 2014 p7, http://ow.ly/EgLs7) about the long-running feud between Shuji Nakamura and his former employer, Nichia, over compensation for patents on efficient blue light-emitting diodes.

  19. Large Eddy Simulation of a Wind Turbine Airfoil at High Freestream-Flow Angle

    ScienceCinema

    None

    2018-02-07

    A simulation of the airflow over a section of a wind turbine blade, run on the supercomputer Mira at the Argonne Leadership Computing Facility. Simulations like these help identify ways to make turbine blades more efficient.

  20. Efficient implementation of the 3D-DDA ray traversal algorithm on GPU and its application in radiation dose calculation.

    PubMed

    Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo

    2012-12-01

    The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto the graphics processing unit (GPU) has brought challenges to retaining the efficiency of this algorithm. In particular, a straightforward implementation of the original 3D-DDA algorithm introduces considerable branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves the performance of C/S dose calculation programs running on the GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) that are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42-2.67x faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on the GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
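
    As a rough illustration of the branch-reduction idea (a sketch in Python for clarity, not the authors' GPU code), the axis-selection conditionals at the heart of 3D-DDA can be replaced by comparisons evaluated as 0/1 integers, so every ray executes the same instruction sequence at each step:

```python
import math

def traverse_3ddda(origin, direction, grid_size):
    """Collect the voxels a ray visits inside a grid, choosing the
    stepping axis with branch-free 0/1 comparison arithmetic instead
    of if/else chains (the divergence-prone part on a GPU)."""
    voxel = [int(math.floor(o)) for o in origin]
    step, t_max, t_delta = [], [], []
    for o, d, v in zip(origin, direction, voxel):
        if d > 0:
            step.append(1); t_max.append((v + 1 - o) / d); t_delta.append(1 / d)
        elif d < 0:
            step.append(-1); t_max.append((v - o) / d); t_delta.append(-1 / d)
        else:
            step.append(0); t_max.append(math.inf); t_delta.append(math.inf)
    visited = []
    while all(0 <= voxel[i] < grid_size[i] for i in range(3)):
        visited.append(tuple(voxel))
        # Branch-free axis selection: comparisons become 0/1 masks.
        x_min = int(t_max[0] <= t_max[1]) & int(t_max[0] <= t_max[2])
        y_min = (1 - x_min) & int(t_max[1] <= t_max[2])
        z_min = (1 - x_min) & (1 - y_min)
        axis = 1 * y_min + 2 * z_min
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return visited
```

    On an actual GPU the same 0/1 masks map onto predicated or arithmetic instructions, so threads in a warp no longer diverge on the axis choice inside the traversal loop.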

  1. Fabrication Development and Flow Testing of Underwater Superhydrophobic Films for Drag Reduction

    DTIC Science & Technology

    2017-03-21

    ...form a large area. Sidewall quality was improved by using a chisel-edge blade. Task 3: Flow testing and characterization. Task 3.1: Develop shear... Before data collection for each run, the water tunnel was run at Re = 1.45x10^7 for 3-5 minutes to de-wet the SHPo sample, but improvement was not...

  2. Clustering and Installing Satellite Nodes

    NASA Astrophysics Data System (ADS)

    Lotts, A. P.

    This note describes basic clustering and the installation of a MicroVax or VaxStation as a Satellite node of an LAVC (Local Area VaxCluster). It will NOT describe Dual Porting of a MicroVax. It assumes that VMS 4.6 is running on the Boot node, that the LAVC key has been applied and that BOOT_CONFIG has been run as described in the LAVC manual.

  3. 33 CFR 3.70-1 - Fourteenth district.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Fourteenth Coast Guard District shall comprise the State of Hawaii; and the Pacific Islands belonging to the United States south of latitude 40° N., and west of a line running from 40° N., 150° W. through latitude 5° S., 110° W.; the ocean area west and south of a line running from position 51° N., 158° E. to...

  4. Wave-induced ripple development in mixed clay-sand substrates

    NASA Astrophysics Data System (ADS)

    Wu, Xuxu; Parsons, Daniel; Baas, Jaco H.; Mouazé, Dominique; McLelland, Stuart; Amoudry, Laurent; Eggenhuisen, Jorris; Cartigny, Matthieu; Ruessink, Gerben

    2016-04-01

    This paper reports on a series of experiments that aim to provide a fuller understanding of ripple development within clay-sand mixture substrates under oscillatory flow conditions. The work was conducted in the Total Environment Simulator at the University of Hull and comprised 6 separate runs, of which 5 were conducted under identical sets of regular waves (an additional run was conducted under irregular waves, but is not discussed in the present paper). The bed content was systematically varied in its composition, ranging from a pure sand bed through to a bed comprising 7.4% clay. A series of state-of-the-art measurements were employed to quantify interactions of near-bed hydrodynamics, sediment transport, and turbulence over rippled beds formed by wave action, during and after each run. The experimental results demonstrate the significant influence of the amount of cohesive clay material in the substrate on ripple evolution under waves. Most importantly, the addition of clay to the bed dramatically slowed the rate of ripple development and evolution. The equilibrium time of each run increased exponentially, from 30 minutes under the control conditions of a pure sand bed to ~350 minutes for the bed with the highest fraction of clay. The paper discusses the slower ripple growth rates with higher cohesive fractions, via an influence on critical shear, but highlights that the final equilibrium size of the ripples is independent of increasing substrate clay fraction. The suspended particulate matter (SPM) concentration indicates that clay particles were suspended and winnowed by wave action. Additionally, laser granulometry of the final substrates verified that ripple crests were composed of pure sand layers that were absent at ripple troughs, reflecting a relatively higher winnowing efficiency at wave ripple crests. The winnowing process and its efficiency are inextricably linked to wave ripple development and evolution. 
    The implications of the results for sediment dynamics in mixed-bed substrates are highlighted and discussed.

  5. Highly coherent free-running dual-comb chip platform.

    PubMed

    Hébert, Nicolas Bourbeau; Lancaster, David G; Michaud-Belleau, Vincent; Chen, George Y; Genest, Jérôme

    2018-04-15

    We characterize the frequency noise performance of a free-running dual-comb source based on an erbium-doped glass chip running two adjacent mode-locked waveguide lasers. This compact laser platform, contained in only a 1.2 L volume, rejects common-mode environmental noise by 20 dB thanks to the proximity of the two laser cavities. Furthermore, it displays a remarkably low mutual frequency noise floor around 10 Hz²/Hz, which is enabled by its large-mode-area waveguides and low Kerr nonlinearity. As a result, it reaches a free-running mutual coherence time of 1 s, since mode-resolved dual-comb spectra are generated even on this time scale. This design greatly simplifies dual-comb interferometers by enabling mode-resolved measurements without any phase lock.

  6. Aspen Modeling of the Bayer Process

    NASA Astrophysics Data System (ADS)

    Langa, J. M.; Russell, T. G.; O'Neill, G. A.; Gacka, P.; Shah, V. B.; Stephenson, J. L.; Snyder, J. G.

    The ASPEN simulator was used to model Alcoa's Pt. Comfort Bayer refinery. All areas of the refinery including the lakes and powerhouse were modeled. Each area model was designed to be run stand alone or integrated with others for a full plant model.

  7. Steinway piano and stained glass clerestory window in lounge area, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Steinway piano and stained glass clerestory window in lounge area, upper deck. Hot water radiators can be seen at base of wall. These run throughout the houseboat. - Houseboat LA DUCHESSE, The Antique Boat Museum, Clayton, Jefferson County, NY

  8. Modeling Trip Duration for Mobile Source Emissions Forecasting

    DOT National Transportation Integrated Search

    2000-08-01

    The distribution of the duration of trips in a metropolitan area is an important input to estimating area-wide running loss emissions, operating mode fractions and vehicle miles of travel (VMT) accumulated on local roads in the region. In the current...

  9. Electricity Market Liberalisation and Flexibility of Conventional Generation to Balance Intermittent Renewable Energy - Is It Possible to Stay Competitive?

    NASA Astrophysics Data System (ADS)

    Linkevics, O.; Ivanova, P.; Balodis, M.

    2016-12-01

    The integration of intermittent generation (solar PV and wind energy) into the power production portfolio, as well as electricity price fluctuations, have changed the operating regime of conventional combined heat and power (CHP) plants: a shift from base-load operation to running in cyclic modes. These cogeneration plants are not adapted to the new running conditions. The flexibility of CHP plants should be improved so that they operate profitably and efficiently, from both technical and fuel-usage points of view. There are different ways to increase the flexibility of power plants. Before any improvements, the situation at the power plant should be evaluated and the weakest points identified. In this publication, such measures are presented using the Riga CHP-2 plant as an example: installation of a heat storage tank, extension of the operating range, and acceleration of start-ups.

  10. Pattern of shoreline spawning by sockeye salmon in a glacially turbid lake: evidence for subpopulation differentiation

    USGS Publications Warehouse

    Burger, C.V.; Finn, J.E.; Holland-Bartels, L.

    1995-01-01

    Alaskan sockeye salmon typically spawn in lake tributaries during summer (early run) and along clear-water lake shorelines and outlet rivers during fall (late run). Production at the glacially turbid Tustumena Lake and its outlet, the Kasilof River (south-central Alaska), was thought to be limited to a single run of sockeye salmon that spawned in the lake's clear-water tributaries. However, up to 40% of the returning sockeye salmon enumerated by sonar as they entered the lake could not be accounted for during lake tributary surveys, which suggested either substantial counting errors or that a large number of fish spawned in the lake itself. Lake shoreline spawning had not been documented in a glacially turbid system. We determined the distribution and pattern of sockeye salmon spawning in the Tustumena Lake system from 1989 to 1991 based on fish collected and radiotagged in the Kasilof River. Spawning areas and time were determined for 324 of 413 sockeye salmon tracked upstream into the lake after release. Of these, 224 fish spawned in tributaries by mid-August and 100 spawned along shoreline areas of the lake during late August. In an additional effort, a distinct late run was discovered that spawned in the Kasilof River at the end of September. Between tributary and shoreline spawners, run and spawning time distributions were significantly different. The number of shoreline spawners was relatively stable and independent of annual escapement levels during the study, which suggests that the shoreline spawning component is distinct and not surplus production from an undifferentiated run. Since Tustumena Lake has been fully deglaciated for only about 2,000 years and is still significantly influenced by glacier meltwater, this diversification of spawning populations is probably a relatively recent and ongoing event.

  11. Impacts of Vegetation and Urban planning on micro climate in Hashtgerd new Town

    NASA Astrophysics Data System (ADS)

    Sodoudi, S.; Langer, I.; Cubasch, U.

    2012-12-01

    One of the objectives of the climatological part of the project Young Cities 'Developing Energy-Efficient Urban Fabric in the Tehran-Karaj Region' is to simulate the micro climate (at 1 m resolution) in a 35 ha area of the new town of Hashtgerd, located 65 km from the megacity of Tehran. The project aims to develop, implement and evaluate building and planning schemes and technologies that allow sustainable, energy-efficient and climate-sensitive mass housing settlements to be planned and built in arid and semi-arid regions ("energy-efficient fabric"). Climate-sensitive form also means designing and planning for climate change and its related effects in Hashtgerd New Town. By configuring buildings and open spaces according to solar radiation, wind and vegetation, climate-sensitive urban form can create outdoor thermal comfort. To simulate the climate on small spatial scales, the Eulerian micro-scale climate model ENVI-met has been used; it gives information about the influence of architecture and buildings, as well as vegetation and green areas, on the micro climate at up to 1 m resolution. ENVI-met has been run with topography, climate data downscaled with a neuro-fuzzy method, meteorological measurements, building heights and different vegetation variants (low and high numbers of trees). For the optimal urban design and planning of the 35 ha area, the micro climate results show that vegetation changes the micro climate in street canyons:
    - 2 m air temperature decreases by about 2 K
    - relative humidity increases by about 10%
    - soil temperature decreases by about 3 K
    - wind speed decreases by about 60%
    The style of the buildings allows free movement of air, which is of high importance for the fresh air supply. The heat island effect of the increased built-up area in the 35 ha site is reduced through cooling by vegetation and the increase in air humidity caused by tree evaporation. Downscaled climate scenarios up to 2100, considering the new urban planning strategies for the 35 ha area, will also be presented.

  12. Impacts of Vegetation and Urban planning on micro climate in Hashtgerd new Town

    NASA Astrophysics Data System (ADS)

    Sodoudi, Sahar; langer, Ines; Cubasch, Ulrich

    2013-04-01

    One of the objectives of the climatological part of the project Young Cities 'Developing Energy-Efficient Urban Fabric in the Tehran-Karaj Region' is to simulate the microclimate (at 1 m resolution) in a 35 ha area of the new town of Hashtgerd, located 65 km from the megacity of Tehran. The project aims to develop, implement and evaluate building and planning schemes and technologies that allow sustainable, energy-efficient and climate-sensitive mass housing settlements to be planned and built in arid and semi-arid regions ("energy-efficient fabric"). Climate-sensitive form also means designing and planning for climate change and its related effects in Hashtgerd New Town. By configuring buildings and open spaces according to solar radiation, wind and vegetation, climate-sensitive urban form can create outdoor thermal comfort. To simulate the climate on small spatial scales, the Eulerian micro-scale climate model ENVI-met has been used; it gives information about the influence of architecture and buildings, as well as vegetation and green areas, on the microclimate at up to 1 m resolution. ENVI-met has been run with topography, climate data downscaled with a neuro-fuzzy method, meteorological measurements, building heights and different vegetation variants (low and high numbers of trees). For the optimal urban design and planning of the 35 ha area, the microclimate results show that vegetation changes the microclimate in street canyons:
    • 2 m air temperature decreases by about 2 K
    • relative humidity increases by about 10%
    • soil temperature decreases by about 3 K
    • wind speed decreases by about 60%
    The style of the buildings allows free movement of air, which is of high importance for the fresh air supply. The heat island effect of the increased built-up area in the 35 ha site is reduced through cooling by vegetation and the increase in air humidity caused by tree evaporation. Downscaled climate scenarios up to 2100, considering the new urban planning strategies for the 35 ha area, will also be presented.

  13. Production experience with the ATLAS Event Service

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Calafiura, P.; Childers, T.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Compute Engine, and a growing number of HPC platforms. After briefly reviewing the concept and the architecture of the Event Service, we will report the status and experience gained in AES commissioning and production operations on supercomputers, and our plans for extending ES application beyond Geant4 simulation to other workflows, such as reconstruction and data analysis.

  14. The influence of the Q-switched and free-running Er:YAG laser beam characteristics on the ablation of root canal dentine

    NASA Astrophysics Data System (ADS)

    Papagiakoumou, Eirini; Papadopoulos, Dimitrios N.; Khabbaz, Marouan G.; Makropoulou, Mersini I.; Serafetinides, Alexander A.

    2004-06-01

    Laser-based dental treatment is attractive to many researchers. Lasers in the 3 μm region, such as the Er:YAG, are especially suitable for endodontic applications. In this study, a pulsed Er:YAG laser operated in free-running and Q-switched modes was used for ablation experiments on root canal dentine. The laser beam was either directly focused on the dental tissue or delivered to it through an infrared fiber. For different spatial beam distributions, energies, numbers of pulses and both modes of laser operation, the quality characteristics (crater shape, ablation efficiency and surface modification) were evaluated using scanning electron microscopy (SEM). The craters produced generally reflect the relevant beam profile. Inhomogeneous spatial beam profiles and short pulse durations result in crack formation and lower tissue removal efficiency, while longer pulse durations cause fusion of hard dentine. Any beam profile modification, due to variations in the laser characteristics and the properties of the specific delivery system, is directly reflected in the ablation crater shape and the tissue removal efficiency. Therefore, the laser parameters, such as fluence, pulse repetition rate and number of pulses, have to be carefully adjusted in relation to the desired result.

  15. Multitasking kernel for the C and Fortran programming languages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, E.D. III

    1984-09-01

    A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.

  16. Active local control of propeller-aircraft run-up noise.

    PubMed

    Hodgson, Murray; Guo, Jingnan; Germain, Pierre

    2003-12-01

    Engine run-ups are part of the regular maintenance schedule at Vancouver International Airport. The noise generated by the run-ups propagates into neighboring communities, disturbing the residents. Active noise control is a potentially cost-effective alternative to passive methods, such as enclosures. Propeller aircraft generate low-frequency tonal noise that is highly compatible with active control. This paper presents a preliminary investigation of the feasibility and effectiveness of controlling run-up noise from propeller aircraft using local active control. Computer simulations for different configurations of multi-channel active-noise-control systems, aimed at reducing run-up noise in adjacent residential areas using a local-control strategy, were performed. These were based on an optimal configuration of a single-channel control system studied previously. The variations of the attenuation and amplification zones with the number of control channels, and with source/control-system geometry, were studied. Here, the aircraft was modeled using one or two sources, with monopole or multipole radiation patterns. Both free-field and half-space conditions were considered: for the configurations studied, results were similar in the two cases. In both cases, large triangular quiet zones, with local attenuations of 10 dB or more, were obtained when nine or more control channels were used. Increases of noise were predicted outside of these areas, but these were minimized as more control channels were employed. By combining predicted attenuations with measured noise spectra, noise levels after implementation of an active control system were estimated.

  17. Steering cell migration by alternating blebs and actin-rich protrusions.

    PubMed

    Diz-Muñoz, Alba; Romanczuk, Pawel; Yu, Weimiao; Bergert, Martin; Ivanovitch, Kenzo; Salbreux, Guillaume; Heisenberg, Carl-Philipp; Paluch, Ewa K

    2016-09-02

    High directional persistence is often assumed to enhance the efficiency of chemotactic migration. Yet, cells in vivo usually display meandering trajectories with relatively low directional persistence, and the control and function of directional persistence during cell migration in three-dimensional environments are poorly understood. Here, we use mesendoderm progenitors migrating during zebrafish gastrulation as a model system to investigate the control of directional persistence during migration in vivo. We show that progenitor cells alternate persistent run phases with tumble phases that result in cell reorientation. Runs are characterized by the formation of directed actin-rich protrusions and tumbles by enhanced blebbing. Increasing the proportion of actin-rich protrusions or blebs leads to longer or shorter run phases, respectively. Importantly, both reducing and increasing run phases result in larger spatial dispersion of the cells, indicative of reduced migration precision. A physical model quantitatively recapitulating the migratory behavior of mesendoderm progenitors indicates that the ratio of tumbling to run times, and thus the specific degree of directional persistence of migration, are critical for optimizing migration precision. Together, our experiments and model provide mechanistic insight into the control of migration directionality for cells moving in three-dimensional environments that combine different protrusion types, whereby the proportion of blebs to actin-rich protrusions determines the directional persistence and precision of movement by regulating the ratio of tumbling to run times.
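
    The tumble-to-run trade-off can be illustrated with a toy two-dimensional run-and-tumble walk (a schematic sketch, not the authors' physical model): walkers move ballistically for a fixed run length, then tumble to a random new heading, and ensemble dispersion grows with run length.

```python
import math
import random

def run_and_tumble(n_steps, run_len, rng):
    """One 2D walker: straight unit steps for `run_len` steps (a run),
    then a tumble that picks a fresh random direction."""
    x = y = 0.0
    theta = rng.uniform(0.0, 2.0 * math.pi)
    for step in range(n_steps):
        if step > 0 and step % run_len == 0:
            theta = rng.uniform(0.0, 2.0 * math.pi)  # tumble
        x += math.cos(theta)
        y += math.sin(theta)
    return x, y

def dispersion(run_len, n_walkers=500, n_steps=400, seed=1):
    """Mean squared displacement of an ensemble of walkers."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        x, y = run_and_tumble(n_steps, run_len, rng)
        msd += x * x + y * y
    return msd / n_walkers
```

    In this free walk, longer runs always mean larger dispersion; in the chemotactic setting described above, reorientation during tumbles corrects heading errors, which is why an intermediate tumbling-to-run ratio optimizes precision.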

  18. Quality and Efficiency Improvement Tools for Every Radiologist.

    PubMed

    Kudla, Alexei U; Brook, Olga R

    2018-06-01

    In an era of value-based medicine, data-driven quality improvement is more important than ever to ensure safe and efficient imaging services. Familiarity with high-value tools enables all radiologists to successfully engage in quality and efficiency improvement. In this article, we review the model for improvement, strategies for measurement, and common practical tools with real-life examples that include Run chart, Control chart (Shewhart chart), Fishbone (Cause-and-Effect or Ishikawa) diagram, Pareto chart, 5 Whys, and Root Cause Analysis. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
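
    As a concrete example of one of these tools (a generic SPC sketch, not taken from the article), the control limits of an individuals (Shewhart I) chart are conventionally placed three estimated standard deviations from the center line, with sigma estimated from the average moving range:

```python
def individuals_chart_limits(samples):
    """Center line and 3-sigma limits for an individuals control chart.

    Sigma is estimated from the average moving range of consecutive
    points; 2.66 is the standard SPC constant 3/d2, with d2 = 1.128
    for subgroups of size 2.
    """
    n = len(samples)
    center = sum(samples) / n
    mr_bar = sum(abs(a - b) for a, b in zip(samples[1:], samples)) / (n - 1)
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar
```

    Points falling outside these limits signal special-cause variation worth investigating, for example a sudden shift in report turnaround times.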

  19. On the use of tower-flux measurements to assess the performance of global ecosystem models

    NASA Astrophysics Data System (ADS)

    El Maayar, M.; Kucharik, C.

    2003-04-01

    Global ecosystem models are important tools for the study of biospheric processes and their responses to environmental changes. Such models typically translate knowledge, gained from local observations, into estimates of regional or even global outcomes of ecosystem processes. A typical test of ecosystem models consists of comparing their output against tower-flux measurements of land surface-atmosphere exchange of heat and mass. To perform such tests, models are typically run using detailed information on soil properties (texture, carbon content,...) and vegetation structure observed at the experimental site (e.g., vegetation height, vegetation phenology, leaf photosynthetic characteristics,...). In global simulations, however, earth's vegetation is typically represented by a limited number of plant functional types (PFT; group of plant species that have similar physiological and ecological characteristics). For each PFT (e.g., temperate broadleaf trees, boreal conifer evergreen trees,...), which can cover a very large area, a set of typical physiological and physical parameters are assigned. Thus, a legitimate question arises: How does the performance of a global ecosystem model run using detailed site-specific parameters compare with the performance of a less detailed global version where generic parameters are attributed to a group of vegetation species forming a PFT? To answer this question, we used a multiyear dataset, measured at two forest sites with contrasting environments, to compare seasonal and interannual variability of surface-atmosphere exchange of water and carbon predicted by the Integrated BIosphere Simulator-Dynamic Global Vegetation Model. Two types of simulations were, thus, performed: a) Detailed runs: observed vegetation characteristics (leaf area index, vegetation height,...) 
    and soil carbon content, in addition to climate and soil type, are specified for the model run; and b) Generic runs: only the observed climate and soil type at the measurement sites are used to run the model. The generic runs were performed for a number of years equal to the current age of the forests, initialized with no vegetation and a soil carbon density of zero.

  20. Effects of Short or Long Warm-up on Intermediate Running Performance.

    PubMed

    van den Tillaar, Roland; Vatten, Tormod; von Heimburg, Erna

    2017-01-01

    van den Tillaar, R, Vatten, T, and von Heimburg, E. Effects of short or long warm-up on intermediate running performance. J Strength Cond Res 31(1): 37-44, 2017-The aim of the study was to compare the effects of a long warm-up (general + specific) and a short warm-up (specific) on intermediate running performance (3-minute run). Thirteen experienced endurance-trained athletes (age 23.2 ± 2.3 years, body mass 79.8 ± 8.2 kg, body height 1.82 ± 0.05 m) conducted 2 types of warm-ups in a crossover design with 1 week in between: a long warm-up (10 minutes, 80% maximal heart rate, and 8 × 60 m sprint with increasing intensity and 1 minute rest in between) and a short warm-up (8 × 60 m sprint with increasing intensity and 1 minute rest in between). Each warm-up was followed by a 3-minute running test on a nonmotorized treadmill. Total running distance, running velocity at each 30 seconds, heart rate, blood lactate concentration, oxygen uptake, and rate of perceived exertion were measured. No significant differences in running performance variables and physiological parameters were found between the 2 warm-up protocols, except for the rate of perceived exertion and heart rate, which were higher after the long warm-up and after the 3-minute running test compared with the short warm-up. It was concluded that a short warm-up is as effective as a long warm-up for intermediate performance. Therefore, athletes can choose for themselves if they want to include a general part in their warm-up routines, even though it would not enhance their running performance more compared with only using a short, specific warm-up. However, to increase efficiency of time for training or competition, these short, specific warm-ups should be performed instead of long warm-ups.

Top