Programmable noise bandwidth reduction by means of digital averaging
NASA Technical Reports Server (NTRS)
Poklemba, John J. (Inventor)
1993-01-01
Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. Because the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate, the noise bandwidth at the input to the detector is reduced: the input to the detector has an improved signal-to-noise ratio as a result of the averaging process, and the rate at which subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
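The averaging-and-decimation idea can be sketched in a few lines. This is a minimal illustration with uniform weights standing in for the stored FIR coefficients, not the patented circuit: averaging n independent noise samples reduces the noise variance (and hence the noise bandwidth at the detector input) by a factor of n.

```python
import numpy as np

def pre_average(samples, n):
    """Average non-overlapping blocks of n input samples, producing one
    output sample per block (decimation by n)."""
    trimmed = samples[: len(samples) // n * n]
    return trimmed.reshape(-1, n).mean(axis=1)

rng = np.random.default_rng(0)
n = 8
signal = 1.0                            # constant symbol value
noise = rng.normal(0.0, 1.0, 80_000)    # unit-variance input noise
out = pre_average(signal + noise, n)

# The output variance is roughly 1/n of the input variance, while the
# signal level is preserved: the SNR at the detector input improves by n.
print(noise.var(), out.var(), out.mean())
```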
Ablation dynamics - from absorption to heat accumulation/ultra-fast laser matter interaction
NASA Astrophysics Data System (ADS)
Kramer, Thorsten; Remund, Stefan; Jäggi, Beat; Schmid, Marc; Neuenschwander, Beat
2018-05-01
Ultra-short laser radiation is used in manifold industrial applications today. Although state-of-the-art laser sources provide an average power of 10-100 W with repetition rates of up to several megahertz, most applications do not benefit from this. On the one hand, the processing speed is limited to some hundred millimeters per second by the dynamics of mechanical axes or galvanometric scanners. On the other hand, high repetition rates require consideration of new physical effects, such as heat accumulation and shielding, that might reduce process efficiency. For ablation processes, process efficiency can be expressed by the specific removal rate, i.e., the ablated volume per unit time and average power. Analysis of the specific removal rate for different laser parameters (such as average power, repetition rate, or pulse duration) and process parameters (such as scanning speed or material) can be used to find the best operating point for microprocessing applications. Analytical models and molecular dynamics simulations based on the so-called two-temperature model reveal the causes of the limiting physical effects that appear, and the findings of these models and simulations can be used to optimize processing strategies.
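The idea of a best operating point can be made concrete with the widely used logarithmic ablation law for Gaussian beams, under which the energy-specific removal rate peaks at a peak fluence of e² times the threshold fluence. The threshold fluence and penetration depth below are assumed values for illustration only, not parameters from the paper.

```python
import numpy as np

# Logarithmic ablation law for a Gaussian beam: the energy-specific
# removal rate (ablated volume per delivered energy) is
#   dV/dE = (delta / (2 * phi0)) * ln(phi0 / phi_th) ** 2
# with phi0 the peak fluence, phi_th the threshold fluence, and delta the
# energy penetration depth. Both values below are illustrative assumptions.
phi_th = 0.1    # J/cm^2, assumed threshold fluence
delta = 10e-7   # cm, assumed energy penetration depth

phi0 = np.linspace(0.11, 5.0, 100_000)
specific_rate = delta / (2 * phi0) * np.log(phi0 / phi_th) ** 2

best = phi0[np.argmax(specific_rate)]
print(best / phi_th)   # the optimum sits near e**2 times the threshold
```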
Robust Characterization of Loss Rates
NASA Astrophysics Data System (ADS)
Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph
2015-08-01
Many physical implementations of qubits—including ion traps, optical lattices and linear optics—suffer from loss. A nonzero probability of irretrievably losing a qubit can be a substantial obstacle to fault-tolerant methods of processing quantum information, requiring new techniques to safeguard against loss that introduce an additional overhead that depends upon the loss rate. Here we present a scalable and platform-independent protocol for estimating the average loss rate (averaged over all input states) resulting from an arbitrary Markovian noise process, as well as an independent estimate of detector efficiency. Moreover, we show that our protocol gives an additional constraint on estimated parameters from randomized benchmarking that improves the reliability of the estimated error rate and provides a new indicator for non-Markovian signatures in the experimental data. We also derive a bound for the state-dependent loss rate in terms of the average loss rate.
NASA Astrophysics Data System (ADS)
Huang, D.; Liu, Y.
2014-12-01
The effects of subgrid cloud variability on grid-average microphysical rates and radiative fluxes are examined using long-term retrieval products at the Tropical West Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement (ARM) Program. Four commonly used distribution functions, the truncated Gaussian, Gamma, lognormal, and Weibull distributions, are constrained to have the same mean and standard deviation as the observed cloud liquid water content. The PDFs are then used to upscale relevant physical processes to obtain grid-average process rates. It is found that the truncated Gaussian representation results in up to 30% mean bias in autoconversion rate, whereas the mean bias for the lognormal representation is about 10%. The Gamma and Weibull distribution functions perform best for the grid-average autoconversion rate, with mean relative biases of less than 5%. For radiative fluxes, the lognormal and truncated Gaussian representations perform better than the Gamma and Weibull representations. The results show that the optimal choice of subgrid cloud distribution function depends on the nonlinearity of the process of interest, and thus no single distribution function works best for all parameterizations. Examination of the scale (window size) dependence of the mean bias indicates that the bias in grid-average process rates increases monotonically with window size, suggesting the increasing importance of subgrid variability with increasing grid size.
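The nonlinearity argument can be made concrete with a toy upscaling calculation. An autoconversion-like power law, rate ∝ q^2.47, and the numeric values below are assumed purely for illustration: for a convex rate function, the PDF-upscaled grid-average rate exceeds the rate evaluated at the grid-mean liquid water content, so ignoring subgrid variability biases the rate low.

```python
import numpy as np

rng = np.random.default_rng(1)
mean, std = 0.3, 0.15   # g/m^3, illustrative grid-mean and subgrid std dev

# Lognormal subgrid PDF matched to the observed mean and standard deviation.
sigma2 = np.log(1.0 + (std / mean) ** 2)
mu = np.log(mean) - sigma2 / 2
q = rng.lognormal(mu, np.sqrt(sigma2), 1_000_000)

rate = lambda q: q ** 2.47          # assumed autoconversion-like power law

grid_average = rate(q).mean()       # PDF-upscaled grid-average rate
naive = rate(mean)                  # rate evaluated at the grid mean

print(grid_average / naive)         # > 1 by Jensen's inequality
```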
NASA Astrophysics Data System (ADS)
Xiaoxu, SUN; Jin, XU; Xingyu, LI
2017-12-01
In this paper, dyeing wastewater was simulated with reactive brilliant blue XBR, and activated carbon adsorption, coagulation, and chemical oxidation processes were used to treat it. In the activated carbon adsorption and coagulation processes, the absorbance of the water was measured; in the Fenton chemical oxidation process, the CODcr value of the water was determined. The decolorization rate and COD removal rate were then calculated. The results showed that the optimum conditions for activated carbon adsorption were pH = 2, an activated carbon dosage of 1.2 g/L, and an adsorption time of 60 min, giving an average decolorization rate over three parallel experiments of 85.30%. The optimum conditions for the coagulation experiment were pH = 8~9, a PAC dosage of 70 mg/L, a stirring time of 20 min, and a standing time of 45 min, giving an average decolorization rate over three parallel experiments of 74.48%. The optimum conditions for Fenton oxidation were Fe2+ at 0.05 g/L, H2O2 (30%) at 14 mL/L, pH = 3, and a reaction time of 40 min; the average CODcr removal rate over three parallel experiments was 69.35%. Of the three methods, activated carbon adsorption performed best for treating the dyeing wastewater.
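Both figures of merit used here are simple relative reductions. A minimal sketch of how they are computed; the absorbance readings below are illustrative numbers, not the paper's measurements:

```python
def removal_rate(before, after):
    """Relative reduction in percent; used both for the decolorization rate
    (absorbance before/after) and for COD removal (CODcr before/after)."""
    return (before - after) / before * 100.0

# Illustrative absorbance readings from three hypothetical parallel runs.
runs = [(0.95, 0.14), (0.93, 0.13), (0.96, 0.15)]
rates = [removal_rate(a0, a) for a0, a in runs]
average = sum(rates) / len(rates)
print(round(average, 2))
```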
High-power picosecond laser with 400W average power for large scale applications
NASA Astrophysics Data System (ADS)
Du, Keming; Brüning, Stephan; Gillner, Arnold
2012-03-01
Laser processing is generally known for low thermal influence, precise energy processing and the possibility to ablate every type of material independent on hardness and vaporisation temperature. The use of ultra-short pulsed lasers offers new possibilities in the manufacturing of high end products with extra high processing qualities. For achieving a sufficient and economical processing speed, high average power is needed. To scale the power for industrial uses the picosecond laser system has been developed, which consists of a seeder, a preamplifier and an end amplifier. With the oscillator/amplifier system more than 400W average power and maximum pulse energy 1mJ was obtained. For study of high speed processing of large embossing metal roller two different ps laser systems have been integrated into a cylinder engraving machine. One of the ps lasers has an average power of 80W while the other has 300W. With this high power ps laser fluencies of up to 30 J/cm2 at pulse repetition rates in the multi MHz range have been achieved. Different materials (Cu, Ni, Al, steel) have been explored for parameters like ablation rate per pulse, ablation geometry, surface roughness, influence of pulse overlap and number of loops. An enhanced ablation quality and an effective ablation rate of 4mm3/min have been achieved by using different scanning systems and an optimized processing strategy. The max. achieved volume rate is 20mm3/min.
Time scale bias in erosion rates of glaciated landscapes
Ganti, Vamsi; von Hagke, Christoph; Scherler, Dirk; Lamb, Michael P.; Fischer, Woodward W.; Avouac, Jean-Philippe
2016-01-01
Deciphering erosion rates over geologic time is fundamental for understanding the interplay between climate, tectonic, and erosional processes. Existing techniques integrate erosion over different time scales, and direct comparison of such rates is routinely done in earth science. On the basis of a global compilation, we show that erosion rate estimates in glaciated landscapes may be affected by a systematic averaging bias that produces higher estimated erosion rates toward the present, which do not reflect straightforward changes in erosion rates through time. This trend can result from a heavy-tailed distribution of erosional hiatuses (that is, time periods where no or relatively slow erosion occurs). We argue that such a distribution can result from the intermittency of erosional processes in glaciated landscapes that are tightly coupled to climate variability from decadal to millennial time scales. In contrast, we find no evidence for a time scale bias in spatially averaged erosion rates of landscapes dominated by river incision. We discuss the implications of our findings in the context of the proposed coupling between climate and tectonics, and interpreting erosion rate estimates with different averaging time scales through geologic time. PMID:27713925
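The averaging bias can be reproduced with a toy simulation: if erosion occurs as unit events separated by heavy-tailed hiatuses, the apparent rate, events counted in a measurement window divided by the window length, systematically falls as the window grows. This is a sketch of the mechanism with an assumed Pareto hiatus distribution, not the paper's compilation.

```python
import numpy as np

rng = np.random.default_rng(2)

def apparent_rate(window, n_events=100_000):
    """Apparent erosion rate over one intermittent erosion history:
    unit-volume events separated by heavy-tailed hiatuses (Pareto with
    tail index 0.7, i.e. infinite mean waiting time)."""
    waits = rng.pareto(0.7, n_events) + 1.0   # hiatus lengths, years
    times = np.cumsum(waits)                  # event times
    return np.searchsorted(times, window) / window

# Average the apparent rate over many histories for two window lengths.
short = np.mean([apparent_rate(1e3) for _ in range(100)])
long_ = np.mean([apparent_rate(1e6) for _ in range(100)])
print(short, long_)   # the shorter window yields the higher apparent rate
```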
Heat input and accumulation for ultrashort pulse processing with high average power
NASA Astrophysics Data System (ADS)
Finger, Johannes; Bornschlegel, Benedikt; Reininghaus, Martin; Dohrn, Andreas; Nießen, Markus; Gillner, Arnold; Poprawe, Reinhart
2018-05-01
Materials processing using ultrashort pulsed laser radiation with pulse durations <10 ps is known to enable very precise processing with negligible thermal load. However, even with picosecond and femtosecond laser radiation, not all of the absorbed energy is converted into ablation products; a distinct fraction remains as residual heat in the processed workpiece. For low average power and power densities, this heat is usually not relevant for the processing result and dissipates into the workpiece. In contrast, when higher average powers and repetition rates are applied to increase throughput and upscale ultrashort pulse processing, this heat input becomes relevant and significantly affects the achieved processing results. In this paper, we outline the relevance of heat input for ultrashort pulse processing, starting with the heat input of a single ultrashort laser pulse. Heat accumulation during ultrashort pulse processing with high repetition rate is discussed, as well as heat accumulation for materials processing using pulse bursts. In addition, the relevance of heat accumulation with multiple scanning passes and processing with multiple laser spots is shown.
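A common way to reason about heat accumulation is to superpose the residual heat of successive pulses, each decaying by three-dimensional conduction from an instantaneous point source on the surface of a semi-infinite body. The material values, pulse energy, and residual-heat fraction below are assumptions for illustration; the sketch shows the pre-pulse temperature offset growing steeply (as f^1.5 in this model) with repetition rate.

```python
import numpy as np

# Instantaneous point source on an adiabatic semi-infinite body: the surface
# temperature offset a time t after a pulse leaving residual energy Q is
#   dT(t) = 2 * Q / (rho_c * (4 * pi * kappa * t) ** 1.5)
# All values below are assumptions (roughly iron-like material).
rho_c = 3.6e6     # J/(m^3 K), volumetric heat capacity (assumed)
kappa = 1.2e-5    # m^2/s, thermal diffusivity (assumed)
Q = 0.1 * 1e-6    # J, assumed residual fraction (10%) of a 1 uJ pulse

def accumulated_dT(rep_rate, n_pulses=5000):
    """Temperature offset just before the next pulse, summed over the
    residual heat of all earlier pulses."""
    t = np.arange(1, n_pulses + 1) / rep_rate   # ages of earlier pulses, s
    return np.sum(2 * Q / (rho_c * (4 * np.pi * kappa * t) ** 1.5))

for f in (100e3, 1e6, 10e6):   # 100 kHz .. 10 MHz
    print(f, accumulated_dT(f))
```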
Generation of µW level plateau harmonics at high repetition rate.
Hädrich, S; Krebs, M; Rothhardt, J; Carstens, H; Demmler, S; Limpert, J; Tünnermann, A
2011-09-26
The process of high harmonic generation allows for coherent transfer of infrared laser light to the extreme ultraviolet spectral range, opening up a variety of applications. The low conversion efficiency of this process calls for optimization and for intense ultrashort-pulse lasers with higher repetition rates. Here we present state-of-the-art fiber laser systems for the generation of high harmonics at up to 1 MHz repetition rate. We perform measurements of the average power with a calibrated spectrometer and achieve µW-level harmonics between 45 nm and 61 nm (H23-H17) at a repetition rate of 50 kHz. Additionally, we show the potential for few-cycle pulses at high average power and repetition rate that may enable water-window harmonics at unprecedented repetition rates. © 2011 Optical Society of America
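The harmonic orders map to wavelengths by simple division of the fundamental wavelength. A quick check, assuming a Yb-doped fiber fundamental near 1040 nm (not stated in the abstract), reproduces the quoted 45-61 nm range for H17-H23:

```python
fundamental_nm = 1040.0   # assumed Yb-doped fiber laser wavelength, nm

# Only odd harmonics are generated in gases; harmonic q has wavelength
# lambda_0 / q.
for q in (17, 19, 21, 23):
    print(q, round(fundamental_nm / q, 1))
```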
Laser processing of ceramics for microelectronics manufacturing
NASA Astrophysics Data System (ADS)
Sposili, Robert S.; Bovatsek, James; Patel, Rajesh
2017-03-01
Ceramic materials are used extensively in the microelectronics, semiconductor, and LED lighting industries because of their electrically insulating and thermally conductive properties, as well as for their high-temperature-service capabilities. However, their brittleness presents significant challenges for conventional machining processes. In this paper we report on a series of experiments that demonstrate and characterize the efficacy of pulsed nanosecond UV and green lasers in machining ceramics commonly used in microelectronics manufacturing, such as aluminum oxide (alumina) and aluminum nitride. With a series of laser pocket milling experiments, fundamental volume ablation rate and ablation efficiency data were generated. In addition, techniques for various industrial machining processes, such as shallow scribing and deep scribing, were developed and demonstrated. We demonstrate that lasers with higher average powers offer higher processing rates with the one exception of deep scribes in aluminum nitride, where a lower average power but higher pulse energy source outperformed a higher average power laser.
NASA Astrophysics Data System (ADS)
Pritychenko, B.; Mughabghab, S. F.
2012-12-01
We present calculations of neutron thermal cross sections, Westcott factors, resonance integrals, Maxwellian-averaged cross sections and astrophysical reaction rates for 843 ENDF materials using data from the major evaluated nuclear libraries and European activation file. Extensive analysis of newly-evaluated neutron reaction cross sections, neutron covariances, and improvements in data processing techniques motivated us to calculate nuclear industry and neutron physics quantities, produce s-process Maxwellian-averaged cross sections and astrophysical reaction rates, systematically calculate uncertainties, and provide additional insights on currently available neutron-induced reaction data. Nuclear reaction calculations are discussed and new results are presented. Due to space limitations, the present paper contains only calculated Maxwellian-averaged cross sections and their uncertainties. The complete data sets for all results are published in the Brookhaven National Laboratory report.
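The quantity being tabulated is the Maxwellian-averaged cross section, MACS(kT) = (2/√π) (kT)^(-2) ∫ σ(E) E e^(-E/kT) dE. A small numerical sketch with a made-up cross section illustrates the definition, including the standard sanity check that an energy-independent σ averages to (2/√π) σ:

```python
import numpy as np

def macs(sigma, kT, n=200_000, emax=40.0):
    """Maxwellian-averaged cross section,
    MACS = (2/sqrt(pi)) * kT**-2 * integral sigma(E) * E * exp(-E/kT) dE,
    by trapezoidal integration truncated at emax*kT."""
    E = np.linspace(1e-12, emax * kT, n)
    f = sigma(E) * E * np.exp(-E / kT)
    integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(E))
    return 2 / np.sqrt(np.pi) * integral / kT ** 2

kT = 0.030   # MeV, the classical s-process thermal energy of 30 keV

# Sanity check: a constant cross section sigma0 averages to
# (2/sqrt(pi)) * sigma0, independent of kT.
sigma0 = 0.5
flat = macs(lambda E: np.full_like(E, sigma0), kT)
print(flat, 2 / np.sqrt(np.pi) * sigma0)
```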
40 CFR 63.7522 - Can I use emissions averaging to comply with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... monitored for each averaging group; (iii) The specific control technology or pollution prevention measure to... section. You may not include new boilers or process heaters in an emissions average. (b) For a group of... heater in the averaging group, the emission rate achieved during the initial compliance test for the HAP...
On the theory relating changes in area-average and pan evaporation (Invited)
NASA Astrophysics Data System (ADS)
Shuttleworth, W.; Serrat-Capdevila, A.; Roderick, M. L.; Scott, R.
2009-12-01
Theory relating changes in area-average evaporation with changes in the evaporation from pans or open water is developed. Such changes can arise by Type (a) processes related to large-scale changes in atmospheric concentrations and circulation that modify surface evaporation rates in the same direction, and Type (b) processes related to coupling between the surface and atmospheric boundary layer (ABL) at the landscape scale that usually modify area-average evaporation and pan evaporation in different directions. The interrelationship between evaporation rates in response to Type (a) changes is derived. They have the same sign and broadly similar magnitude but the change in area-average evaporation is modified by surface resistance. As an alternative to assuming the complementary evaporation hypothesis, the results of previous modeling studies that investigated surface-atmosphere coupling are parameterized and used to develop a theoretical description of Type (b) coupling via vapor pressure deficit (VPD) in the ABL. The interrelationship between appropriately normalized pan and area-average evaporation rates is shown to vary with temperature and wind speed but, on average, the Type (b) changes are approximately equal and opposite. Long-term Australian pan evaporation data are analyzed to demonstrate the simultaneous presence of Type (a) and (b) processes, and observations from three field sites in southwestern USA show support for the theory describing Type (b) coupling via VPD. England's victory over Australia in 2009 Ashes cricket test match series will not be mentioned.
The fastest spreader in SIS epidemics on networks
NASA Astrophysics Data System (ADS)
He, Zhidong; Van Mieghem, Piet
2018-05-01
Identifying the fastest spreaders in epidemics on a network helps to ensure efficient spreading. By ranking the average spreading time for different spreaders, we show that the fastest spreader may change with the effective infection rate of an SIS epidemic process, which means that the time-dependent influence of a node is usually strongly coupled to the dynamic process and the underlying network. With increasing effective infection rate, we illustrate that the fastest spreader changes from the node with the largest degree to the node with the shortest flooding time. (The flooding time is the minimum time needed to reach all other nodes if the process is reduced to a flooding process.) Furthermore, by taking the local topology around the spreader and the average flooding time into account, we propose the spreading efficiency as a metric to quantify the efficiency of a spreader and identify the fastest spreader, which is adaptive to different infection rates in general networks.
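The flooding time used as a predictor can be computed directly: with unit traversal time per link, the flooding time of a node is its eccentricity, reachable by breadth-first search. The toy graph below (my construction, not from the paper) also shows that the largest-degree node need not have the shortest flooding time:

```python
from collections import deque

def flooding_time(adj, source):
    """Hops needed to reach every node from `source` by flooding, i.e. the
    eccentricity of `source`, assuming unit traversal time per link."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

# Hub node 0 (three leaves) with a path 0-1-2-3-4 attached: node 0 has the
# largest degree, but the mid-path node 2 floods the whole graph faster.
adj = {0: [1, 5, 6, 7], 1: [0, 2], 2: [1, 3], 3: [2, 4],
       4: [3], 5: [0], 6: [0], 7: [0]}
print({v: flooding_time(adj, v) for v in adj})
```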
Isothermal Bainite Processing of ASTM A723 Components
1993-09-01
Illustrations include a photograph of the typical cylinders used in the present study and a continuous cooling transformation (CCT) diagram obtained by thermal and magnetic analyses for the high-nickel (~3 percent) ASTM A723 steel, showing the bainite knee for the 2 percent nickel variant and the slope giving the average cooling rate in the critical region of the CCT diagram.
Use of Balanced Indicators as a Management Tool in Nursing.
Fugaça, Neidamar Pedrini Arias; Cubas, Marcia Regina; Carvalho, Deborah Ribeiro
2015-01-01
To develop a proposal for a nursing panel of indicators based on the guiding principles of the Balanced Scorecard. A single case study that examined 200 patient medical records, management reports, and protocols capable of generating indicators. We identified 163 variables that resulted in 72 indicators, 32 of them nursing-related: two financial indicators (patient's average revenue per day and patient's revenue per day by product used); two client indicators (overall patient satisfaction rate with nursing care and adherence rate to the patient satisfaction survey); 23 process indicators; and five learning and growth indicators (average total hours of training, total of approved nursing professionals in the internal selection process, absenteeism rate, turnover rate, and index of performance evaluation). Although there is a limit related to the amount of data generated, the Balanced Scorecard methodology proved flexible and adaptable enough to incorporate nursing services. It was possible to identify indicators with adherence to more than one area; internal processes was the area with the highest number of indicators.
Stochastic Averaging Principle for Spatial Birth-and-Death Evolutions in the Continuum
NASA Astrophysics Data System (ADS)
Friesen, Martin; Kondratiev, Yuri
2018-06-01
We study a spatial birth-and-death process on the phase space of locally finite configurations Γ^+ × Γ^- over R^d. The dynamics is described by a non-equilibrium evolution of states obtained from the Fokker-Planck equation and associated with the Markov operator L^+(γ^-) + (1/ɛ)L^-, ɛ > 0. Here L^- describes the environment process on Γ^- and L^+(γ^-) describes the system process on Γ^+, where γ^- indicates that the corresponding birth-and-death rates depend on another locally finite configuration γ^- ∈ Γ^-. We prove that, for a certain class of birth-and-death rates, the corresponding Fokker-Planck equation is well-posed, i.e. there exists a unique evolution of states μ_t^ɛ on Γ^+ × Γ^-. Moreover, we give a sufficient condition such that the environment is ergodic with exponential rate. Let μ_inv be the invariant measure for the environment process on Γ^-. In the main part of this work we establish the stochastic averaging principle, i.e. we prove that the marginal of μ_t^ɛ onto Γ^+ converges weakly to an evolution of states on Γ^+ associated with the averaged Markov birth-and-death operator L̄ = ∫_{Γ^-} L^+(γ^-) dμ_inv(γ^-).
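Although the paper's setting is infinite-dimensional, the averaging principle itself can be illustrated with a finite-dimensional toy system: a slow variable driven by a fast Ornstein-Uhlenbeck environment, compared with the equation in which the fast variable is replaced by its stationary average. All coefficients below are made up for the sketch.

```python
import numpy as np

# Toy two-scale system (made-up coefficients, finite-dimensional stand-in):
#   slow:  dX/dt = -X + Y**2
#   fast:  dY = -(Y/eps) dt + (1/sqrt(eps)) dW   (OU, stationary variance 1/2)
# The averaged equation replaces Y**2 by its stationary mean 1/2:
#   dXbar/dt = -Xbar + 1/2
rng = np.random.default_rng(3)
eps, dt, T = 1e-3, 1e-5, 3.0
n = int(T / dt)
dW = rng.normal(0.0, np.sqrt(dt), n)

X, Y, Xbar = 0.0, 0.0, 0.0
for k in range(n):                       # Euler-Maruyama integration
    X += (-X + Y * Y) * dt
    Y += -Y / eps * dt + dW[k] / np.sqrt(eps)
    Xbar += (-Xbar + 0.5) * dt

print(X, Xbar)   # the slow component stays close to the averaged solution
```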
NASA Astrophysics Data System (ADS)
Lu, Meng-Chang; Huang, Jr-Chuan; Chang, Chung-Te; Shih, Yu-Ting; Lin, Teng-Chiu
2016-04-01
Riverine DIN is a crucial indicator of eutrophication in river networks. Riverine DIN export in Taiwan is characterized by an extremely high yield, ~3800 kg-N km-2 yr-1, nearly 20-fold the global average, pointing to interesting yet rarely documented terrestrial N processes. In this study we collected DIN samples in rainwater, soil water, and stream water in a mountainous forest watershed, FuShan experimental forest watershed 1 (WS1), a natural broadleaf forest without human activities. Based on these intensive observations, we applied INCA-N to simulate the riverine DIN response and thus estimate the terrestrial N processes in a global synthesis. The results showed that both discharge and DIN yield were simulated well, with average Nash-Sutcliffe efficiency coefficients of 0.83 and 0.76, respectively. Among all N processes, N uptake, mineralization, nitrification, denitrification, and immobilization are significantly positively correlated with soil moisture (R2>0.99), which indicates that soil moisture greatly influences N cycle processes. The average rates of mineralization and nitrification in wet years are consistent with documented values, whereas the rates in dry years are lower than the observations. Despite the high nitrification rate, the secondary forest may take up abundant N, indicating that plant uptake, which is responsible for removing considerable nitrate, is a controlling factor in the forest ecosystem. Our simulated denitrification rate falls between the documented rates for temperate forests and agricultural areas, which may reflect the high N deposition in Taiwan. The simulated in-stream denitrification rate is less than 10% of the rate in soil and is slightly lower than that in temperate forests. This preliminary simulation provides an insightful guide for establishing monitoring programmes and improving the understanding of the N cycle in the subtropics.
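The goodness-of-fit measure quoted, the Nash-Sutcliffe efficiency, is one minus the ratio of residual variance to the variance of the observations. A minimal sketch with illustrative numbers, not the FuShan series:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

obs = [2.0, 3.5, 5.0, 4.0, 2.5]   # illustrative discharge series
sim = [2.2, 3.1, 4.8, 4.3, 2.4]
print(nash_sutcliffe(obs, sim))
```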
Pan, Ying; Zhang, Yunshu; Peng, Yan; Zhao, Qinghua; Sun, Shucun
2015-01-01
Aquatic microcosm studies often increase either chamber height or base diameter (to increase water volume) to test spatial ecology theories such as "scale" effects on ecological processes, but it is unclear whether the increase of chamber height or base diameter have the same effect on the processes, i.e., whether the effect of the shape of three-dimensional spaces is significant. We orthogonally manipulated chamber height and base diameter and determined swimming activity, average swimming velocity and grazing rates of the cladocerans Daphnia magna and Moina micrura (on two algae Scenedesmus quadricauda and Chlorella vulgaris; leading to four aquatic algae-cladoceran systems in total) under different microcosm conditions. Across all the four aquatic systems, increasing chamber height at a given base diameter significantly decreased the duration and velocity of horizontal swimming, and it tended to increase the duration but decrease the velocity of vertical swimming. These collectively led to decreases in both average swimming velocity and grazing rate of the cladocerans in the tall chambers (at a given base diameter), in accordance with the positive relationship between average swimming velocity and grazing rate. In contrast, an increase of base diameter at a given chamber height showed contrasting effects on the above parameters. Consistently, at a given chamber volume increasing ratio of chamber height to base diameter decreased the average swimming velocity and grazing rate across all the aquatic systems. In general, increasing chamber depth and base diameter may exert contrasting effects on zooplankton behavior and thus phytoplankton-zooplankton interactions. We suggest that spatial shape plays an important role in determining ecological process and thus should be considered in a theoretical framework of spatial ecology and also the physical setting of aquatic microcosm experiments.
Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi; Xiong, Zai-ping; Liu, Miao
2010-12-01
In this paper, three-dimensional building information was extracted from high-resolution satellite imagery using the Barista software. Combined with ArcGIS, the dynamic changes of the building landscape in Tiexi District of Shenyang City during the urban renewal process were analyzed in terms of conversion contribution rate, building density, average building height, and built-up area rate. It was found that during this urban renewal process, four dominant landscape types (vacant lot, residential building, industrial building, and road) made up the main part of the landscape changes. The areas of vacant lot, residential building, commercial building, and road increased, while that of industrial building decreased. The building density decreased, while the average building height increased. There was an obvious regional variation in the building landscape: the building density in the industrial district was higher than that in the residential district, while the average building height showed the opposite pattern. The further from the city center, the lower the building density and average building height.
NASA Astrophysics Data System (ADS)
Gao, Peng
2018-06-01
This work concerns the averaging principle for a higher-order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. The model can be recast as a multiscale system of stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. More precisely, under suitable conditions, we prove that there is a limit process in which the fast-varying process is averaged out; the limit process, which takes the form of the higher-order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast-varying process. Finally, using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher-order nonlinear Schrödinger equation with a modified coefficient.
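The slow-fast structure described in the abstract can be sketched in generic form (the operators and nonlinearities below are schematic placeholders, not the paper's exact system):

```latex
% Slow component: higher-order NLS driven by the fast process v^\epsilon
du^\epsilon = \big( i\,\mathcal{A}u^\epsilon + f(u^\epsilon, v^\epsilon) \big)\,dt,
% Fast component: stochastic reaction--diffusion on time scale 1/\epsilon
dv^\epsilon = \tfrac{1}{\epsilon}\big( \Delta v^\epsilon + g(v^\epsilon) \big)\,dt
            + \tfrac{1}{\sqrt{\epsilon}}\,dW_t,
% Averaged equation: f replaced by its average against the invariant measure \mu
d\bar{u} = \big( i\,\mathcal{A}\bar{u} + \bar{f}(\bar{u}) \big)\,dt,
\qquad \bar{f}(u) = \int f(u, v)\,\mu(dv).
```

The averaging principle asserts that, as the scale separation grows, the slow component converges to the solution of the averaged equation, with a strong convergence rate obtainable by the Khasminskii time-discretization technique.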
Road sign recognition with fuzzy adaptive pre-processing models.
Lin, Chien-Chuan; Wang, Ming-Shi
2012-01-01
A road sign recognition system based on adaptive image pre-processing models using two fuzzy inference schemes is proposed. The first fuzzy inference scheme checks changes in illumination and the richness of red color in a frame image within designated checking areas. The other checks the variance of the vehicle's speed and steering-wheel angle to select an adaptive size and position for the detection area. An Adaboost classifier was employed to detect road sign candidates in an image, and a support vector machine was employed to recognize the content of the candidates. Prohibitory and warning road traffic signs are the processing targets in this research. The detection rate in the detection phase is 97.42%. In the recognition phase, the recognition rate is 93.04%. The total accuracy rate of the system is 92.47%. For video sequences, the best accuracy rate is 90.54%, and the average accuracy rate is 80.17%. The average computing time is 51.86 milliseconds per frame. The proposed system not only copes with low illumination and rich red color around road signs but also offers high detection rates and high computing performance.
Hayes, Robert
2002-10-01
An approach is described for estimating future dose rates to Waste Isolation Pilot Plant workers processing remote-handled transuranic waste. The waste streams will come from the entire U.S. Department of Energy complex and can take on virtually any form arising from the processing sequences of defense-related production, radiochemistry, activation, and related work. For this reason, the average waste matrix from all generator sites is used to estimate the average radiation fields over the facility lifetime. Innovative new techniques were applied to estimate expected radiation fields: non-linear curve-fitting techniques were used to predict exposure-rate profiles from cylindrical sources using closed-form equations for lines and disks. This information becomes the basis for Safety Analysis Report dose-rate estimates and for present and future ALARA design reviews when attempts are made to reduce worker doses.
Light propagation in the averaged universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagheri, Samae; Schwarz, Dominik J., E-mail: s_bagheri@physik.uni-bielefeld.de, E-mail: dschwarz@physik.uni-bielefeld.de
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
[Drying characteristics and apparent change of sludge granules during drying].
Ma, Xue-Wen; Weng, Huan-Xin; Zhang, Jin-Jun
2011-08-01
Three different weight grades of sludge granules (2.5, 5, and 10 g) were dried at constant temperatures of 100, 200, 300, 400, and 500 degrees C, respectively. The weight-loss characteristics and changes in apparent form during sludge drying were then analyzed. Results showed that sludge drying at 100-200 degrees C proceeded in three stages: an acceleration phase, a constant-rate phase, and a falling-rate phase. At 300-500 degrees C there was no constant-rate phase, but because many cracks formed at the sludge surface, average drying rates were still high. There was a quadratic nonlinear relationship between average drying rate and drying temperature. At 100-200 degrees C, the drying processes of sludge granules of different weight grades were similar; at 300-500 degrees C, the drying processes of sludge granules of the same weight grade were similar. At 100-300 degrees C little organic matter decomposed before the sludge ignited, while at 400-500 degrees C some organic matter began to decompose at the start of drying.
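A quadratic relationship of the kind reported here can be fitted with ordinary least squares; a minimal numpy sketch (the data points below are hypothetical illustrations, not the study's measurements):

```python
import numpy as np

# Hypothetical data: average drying rate (g/min) at each drying temperature (deg C).
temps = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
rates = np.array([0.12, 0.35, 0.71, 1.18, 1.80])

# Fit the quadratic relationship rate = a*T**2 + b*T + c between average
# drying rate and temperature, as described in the abstract.
a, b, c = np.polyfit(temps, rates, deg=2)
predicted = np.polyval([a, b, c], temps)
```

For data with an accelerating trend like this, the quadratic term captures curvature that a straight-line fit would miss.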
Generation of plasma X-ray sources via high repetition rate femtosecond laser pulses
NASA Astrophysics Data System (ADS)
Baguckis, Artūras; Plukis, Artūras; Reklaitis, Jonas; Remeikis, Vidmantas; Giniūnas, Linas; Vengris, Mikas
2017-12-01
In this study, we present the development and characterization of a Cu plasma X-ray source driven by a 20 W average power, high repetition rate femtosecond laser in an ambient atmosphere. A peak Cu-Kα photon flux of 2.3 × 10⁹ photons/s into the full solid angle is demonstrated (with a process conversion efficiency of 10⁻⁷), using pulses with a peak intensity of 4.65 × 10¹⁴ W/cm². This Cu-Kα flux is significantly larger than those found in comparable experiments performed in air. The effects of the resonance plasma absorption process, when optimized, are shown to increase the measured flux by a factor of 2-3. The relationship between X-ray photon flux and plasma-driving pulse repetition rate is quasi-linear, suggesting that fluxes could be increased further to 10¹⁰ photons/s using even higher average powers of the driving radiation. These results suggest that to fully utilize the potential of high repetition rate laser sources, novel target material delivery systems (for example, jet-based ones) are required. On the other hand, this study demonstrates that the high-energy lasers currently used for plasma X-ray sources can be conveniently and efficiently replaced by high average power, high repetition rate laser radiation as a way to increase the brightness of the generated X-rays.
Use of Balanced Indicators as a Management Tool in Nursing
Fugaça, Neidamar Pedrini Arias; Cubas, Marcia Regina; Carvalho, Deborah Ribeiro
2015-01-01
Objective: to develop a proposal for a nursing panel of indicators based on the guiding principles of the Balanced Scorecard. Method: a single case study that ranked 200 patient medical records, management reports, and protocols capable of generating indicators. Results: we identified 163 variables that resulted in 72 indicators; of these, 32 were nursing-related: two financial indicators (patient's average revenue per day and patient's revenue per day by product used); two client indicators (overall satisfaction rate of patients with nursing care and adherence rate to the patient satisfaction survey); 23 process indicators; and five learning and growth indicators (average total hours of training, total of nursing professionals approved in the internal selection process, absenteeism rate, turnover rate, and index of performance evaluation). Conclusion: although there is a limit related to the amount of data generated, the Balanced Scorecard methodology proved flexible and adaptable to nursing services. It was possible to identify indicators with adherence to more than one area; internal processes was the area with the highest number of indicators. PMID:26625995
Sacha, Gregory A; Schmitt, William J; Nail, Steven L
2006-01-01
The critical processing parameters affecting average particle size, particle size distribution, yield, and level of residual carrier solvent using the supercritical anti-solvent method (SAS) were identified. Carbon dioxide was used as the supercritical fluid. Methylprednisolone acetate was used as the model solute in tetrahydrofuran. Parameters examined included pressure of the supercritical fluid, agitation rate, feed solution flow rate, impeller diameter, and nozzle design. Pressure was identified as the most important process parameter affecting average particle size, either through the effect of pressure on dispersion of the feed solution into the precipitation vessel or through the effect of pressure on solubility of drug in the CO2/organic solvent mixture. Agitation rate, impeller diameter, feed solution flow rate, and nozzle design had significant effects on particle size, which suggests that dispersion of the feed solution is important. Crimped HPLC tubing was the most effective method of introducing feed solution into the precipitation vessel, largely because it resulted in the least amount of clogging during the precipitation. Yields of 82% or greater were consistently produced and were not affected by the processing variables. Similarly, the level of residual solvent was independent of the processing variables and was present at 0.0002% wt/wt THF or less.
Numerical simulation and experiment on effect of ultrasonic in polymer extrusion processing
NASA Astrophysics Data System (ADS)
Wan, Yue; Fu, ZhiHong; Wei, LingJiao; Zang, Gongzheng; Zhang, Lei
2018-01-01
The influence of ultrasonic waves on the flow-field parameters and on the precision of extruded products is studied. First, the effects of vibration power on the average outlet velocity, the average viscosity at the die section, the average shear rate, and the inlet pressure of the die section were studied using the Polyflow software. Second, the effects of ultrasonic strength on the die temperature and the die pressure drop were studied experimentally at different head temperatures and screw speeds. Finally, the relationship between die pressure and extrusion flow rate under different ultrasonic powers was studied through experiments.
Stochastic demography and the neutral substitution rate in class-structured populations.
Lehmann, Laurent
2014-05-01
The neutral rate of allelic substitution is analyzed for a class-structured population subject to a stationary stochastic demographic process. The substitution rate is shown to be generally equal to the effective mutation rate, and under overlapping generations it can be expressed as the effective mutation rate in newborns when measured in units of average generation time. With uniform mutation rate across classes the substitution rate reduces to the mutation rate.
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Kantz, Holger
2016-04-01
As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDPs) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as auto-regressive (short-memory) and auto-regressive fractionally integrated moving average (long-memory) processes do not admit an analytical LDP. We study LDPs for these processes in order to see how correlation affects this probability in comparison to iid data. Although short-range correlations lead to a simple correction of the sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
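The effect of correlation on deviation probabilities of time averages can be illustrated by Monte Carlo; a minimal sketch (parameter values are illustrative, not the paper's), comparing iid noise with an AR(1) process built from the same innovations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, eps, phi = 100, 2000, 0.3, 0.9   # window length, runs, threshold, AR(1) coefficient

def deviation_prob(correlated: bool) -> float:
    """Estimate P(|time average over a window of length n| > eps)."""
    count = 0
    for _ in range(trials):
        noise = rng.standard_normal(n)
        if correlated:
            x = np.empty(n)
            x[0] = noise[0]
            for t in range(1, n):
                x[t] = phi * x[t - 1] + noise[t]   # AR(1): short-range memory
        else:
            x = noise                               # iid reference
        if abs(x.mean()) > eps:
            count += 1
    return count / trials

p_iid = deviation_prob(False)
p_ar = deviation_prob(True)
# Positive correlation inflates the variance of the time average, so large
# deviations of the AR(1) time average are far more probable than for iid data.
```

For short-range correlations this amounts to the "effective sample size" correction mentioned in the abstract; long-range (ARFIMA) correlations slow convergence even more drastically.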
NASA Technical Reports Server (NTRS)
Polites, M. E.
1991-01-01
This paper presents a new approach to processing noisy star tracker measurements in spacecraft attitude determination systems. It takes N measurements in each T-second interval and combines them to produce tracker outputs that are estimates of the star position at the end of each interval, when the tracker outputs become available. This is an improvement over the standard method, measurement averaging, which generates outputs that are estimates of the average position of the star over each interval. The new scheme is superior to measurement averaging when the spacecraft has some rotation rate, as in target tracking or Earth pointing. Moreover, it is not limited to star trackers, but has potential application wherever measurement averaging of sensor outputs is used.
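The geometric point can be sketched numerically: for a star drifting at a constant rate, plain averaging estimates the mid-interval position, while a least-squares line evaluated at the interval end estimates the position at output time. This is a simplified illustration of the idea, not the paper's exact scheme:

```python
import numpy as np

T, N = 1.0, 10                     # interval length (s), samples per interval
rate = 0.05                        # star drift rate (deg/s), e.g. a slow slew
t = np.linspace(T / N, T, N)       # sample times within the interval
pos = 2.0 + rate * t               # true positions (noise omitted for clarity)

avg_estimate = pos.mean()                     # standard measurement averaging
slope, intercept = np.polyfit(t, pos, deg=1)  # fit position vs. time
end_estimate = slope * T + intercept          # extrapolate to the interval end
true_end = 2.0 + rate * T
```

With a nonzero rotation rate, the averaged estimate lags the true end-of-interval position by roughly rate × T/2, while the end-point estimate does not; with noisy samples both estimates inherit the usual least-squares noise reduction.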
[Analysis of hydrodynamics parameters of runoff erosion and sediment-yielding on unpaved road].
Huang, Peng-Fei; Wang, Wen-Long; Luo, Ting; Wang, Zhen; Wang, Zheng-Li; Li, Ren
2013-02-01
Using field runoff washout experiments, a simulation study was conducted on the relationships between the soil detachment rate and hydrodynamic parameters on unpaved roads, and related quantitative formulas were established. Under different flow discharges and road gradients, the average soil detachment rate increased with increasing flow discharge and road gradient, and the relationships between them could be described by a power function. Compared with road gradient, flow discharge had a greater effect on the soil detachment rate. The soil detachment rate had a power relation with flow velocity and runoff kinetic energy, with runoff kinetic energy being the more important for the soil detachment rate. The soil detachment rate was linearly correlated with the unit runoff kinetic energy. The average soil erodibility was 0.120 g·m⁻¹·J⁻¹, and the average critical unit runoff kinetic energy was 2.875 g·m⁻¹·J⁻¹. Flow discharge, road gradient, and unit runoff kinetic energy can be used to accurately describe the soil erosion process and calculate the soil erosion rate on unpaved roads.
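A power-function relation of the kind described, D = a·q^b·s^c, is linear in log space and can be fitted by ordinary least squares; a minimal sketch on synthetic (not the study's) data:

```python
import numpy as np

# Synthetic detachment-rate data generated from a known power law
q = np.array([0.5, 0.5, 1.0, 1.0, 2.0, 2.0])        # flow discharge (illustrative units)
s = np.array([0.09, 0.18, 0.09, 0.18, 0.09, 0.18])  # road gradient
a_true, b_true, c_true = 0.8, 1.4, 0.6
D = a_true * q**b_true * s**c_true                   # noise-free detachment rates

# log D = log a + b log q + c log s  ->  ordinary least squares in log space
A = np.column_stack([np.ones_like(q), np.log(q), np.log(s)])
coef, *_ = np.linalg.lstsq(A, np.log(D), rcond=None)
a_fit, b_fit, c_fit = np.exp(coef[0]), coef[1], coef[2]
```

On noise-free data the exponents are recovered exactly; with experimental scatter, the same fit yields the exponents with standard errors.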
The analysis of critical cooling rate for high-rise building steel S460
NASA Astrophysics Data System (ADS)
Lu, Shiping; Chen, Xia; Li, Qun; Wang, Haibao; Gu, Linhao
2017-09-01
High-rise building steel S460 is an important structural steel produced by quenching and tempering. The critical cooling rate of the steel is very important in the heavy-plate quenching process and is also the basis of the cooling process [1]. The critical cooling rate of the HSLA steel S460 was obtained by a thermal simulation method, and the differences in microstructure and properties at different cooling rates were also analyzed. In this article, the grain boundary angles and the average grain size under different cooling rates are analyzed by EBSD, and the relationship of grain boundary angle and grain size with cooling rate is obtained. The experiments provide a basis for formulating the quenching process in industrial production.
The influence of averaging and noisy decision strategies on the recognition memory ROC.
Malmberg, Kenneth J; Xu, Jing
2006-02-01
Many single- and dual-process models of recognition memory predict that the ratings and remember-know receiver operating characteristics (ROCs) are the same, but Rotello, Macmillan, and Reeder (2004) reported that the slopes of the remember-know and ratings z-transformed ROCs (zROCs) are different. The authors show that averaging introduces nonlinearities into the form of the zROC and that ratings and remember-know zROCs are indistinguishable when constructed in a conventional manner. The authors show, further, that some nonoptimal decision strategies have a distinctive, nonlinear effect on the form of the single-process continuous-state zROC. The conclusion is that many factors having nothing to do with the nature of recognition memory can affect the shape of zROCs, and that, therefore, the shape of the zROC alone does not characterize different memory models.
The Effects of Forming Parameters on Conical Ring Rolling Process
Meng, Wen; Zhao, Guoqun; Guan, Yanjin
2014-01-01
The plastic penetration condition and biting-in condition of a radial conical ring rolling process with a closed die structure on the top and bottom of the driven roll (simplified as RCRRCDS) were established, and the reasonable value range of the mandrel feed rate in the rolling process was deduced. A coupled thermomechanical 3D FE model of the RCRRCDS process was established. The changes of the equivalent plastic strain (PEEQ) and temperature distributions with rolling time were investigated, and the effects of the ring's outer radius growth rate and the roll sizes on the uniformity of the PEEQ and temperature distributions, the average rolling force, and the average rolling moment were studied. The results indicate that the PEEQ at the inner and outer layers of the rolled ring is larger than that at the middle layer; the temperatures at the “obtuse angle zone” of the ring's cross-section are higher than those at the “acute angle zone”; and the temperature at the central part of the ring is higher than that at the middle part of the ring's outer surfaces. As the ring's outer radius growth rate increases within its reasonable value range, the uniformity of the PEEQ and temperature distributions increases. Finally, the optimal values of the ring's outer radius growth rate and roll sizes were obtained. PMID:25202716
Code of Federal Regulations, 2011 CFR
2011-07-01
... Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.117 Process vent provisions—reporting... incinerators, boilers or process heaters specified in table 3 of this subpart, and averaged over the same time... content determinations, flow rate measurements, and exit velocity determinations made during the...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.117 Process vent provisions—reporting... incinerators, boilers or process heaters specified in table 3 of this subpart, and averaged over the same time... content determinations, flow rate measurements, and exit velocity determinations made during the...
ERIC Educational Resources Information Center
Tucker, James F.
Foreign currency units, exchange rates, the international payments process, and spending money abroad are discussed briefly in this booklet to help teachers explain the international economy. Thirty-two countries are listed with their respective currency units, some of which are illustrated. A chart shows the average yearly exchange rates between…
Value of the future: Discounting in random environments
NASA Astrophysics Data System (ADS)
Farmer, J. Doyne; Geanakoplos, John; Masoliver, Jaume; Montero, Miquel; Perelló, Josep
2015-05-01
We analyze how to value future costs and benefits when they must be discounted relative to the present. We introduce the subject for the nonspecialist and take into account the randomness of the economic evolution by studying the discount function of three widely used processes for the dynamics of interest rates: Ornstein-Uhlenbeck, Feller, and log-normal. Besides obtaining exact expressions for the discount function and simple asymptotic approximations, we show that historical average interest rates overestimate long-run discount rates and that this effect can be large. In other words, long-run discount rates should be substantially less than the average rate observed in the past, otherwise any cost-benefit calculation would be biased in favor of the present and against interventions that may protect the future.
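The central point, that averaging over random rate paths yields a long-run discount rate below the average rate, follows from Jensen's inequality and can be checked by Monte Carlo for the Ornstein-Uhlenbeck case; a sketch with illustrative parameter values (not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
r_bar, theta, sigma = 0.04, 0.1, 0.02   # long-run mean, reversion speed, volatility
t, dt, paths = 50.0, 0.1, 4000
steps = int(t / dt)

# Euler simulation of Ornstein-Uhlenbeck rate paths dr = theta*(r_bar - r)dt + sigma dW
r = np.full(paths, r_bar)
integral = np.zeros(paths)              # accumulates int_0^t r(s) ds per path
for _ in range(steps):
    integral += r * dt
    r += theta * (r_bar - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)

discount = np.exp(-integral).mean()     # D(t) = E[exp(-int_0^t r ds)]
effective_rate = -np.log(discount) / t
# Jensen's inequality: E[exp(-X)] > exp(-E[X]), so the effective long-run
# discount rate falls below the average rate r_bar.
```

Because the discount function averages exp(-∫r ds) over paths, low-rate paths dominate the expectation, pulling the effective rate below the historical average rate.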
Surface Modification and Surface - Subsurface Exchange Processes on Europa
NASA Astrophysics Data System (ADS)
Phillips, Cynthia B.; Molaro, Jamie; Hand, Kevin P.
2017-10-01
The surface of Jupiter’s moon Europa is modified by exogenic processes such as sputtering, gardening, radiolysis, sulfur ion implantation, and thermal processing, as well as endogenic processes including tidal shaking, mass wasting, and the effects of subsurface tectonic and perhaps cryovolcanic activity. New materials are created or deposited on the surface (radiolysis, micrometeorite impacts, sulfur ion implantation, cryovolcanic plume deposits), modified in place (thermal segregation, sintering), transported either vertically or horizontally (sputtering, gardening, mass wasting, tectonic and cryovolcanic activity), or lost from Europa completely (sputtering, plumes, larger impacts). Some of these processes vary spatially, as visible in Europa’s leading-trailing hemisphere brightness asymmetry.Endogenic geologic processes also vary spatially, depending on terrain type. The surface can be classified into general landform categories that include tectonic features (ridges, bands, cracks); disrupted “chaos-type” terrain (chaos blocks, matrix, domes, pits, spots); and impact craters (simple, complex, multi-ring). The spatial distribution of these terrain types is relatively random, with some differences in apex-antiapex cratering rates and latitudinal variation in chaos vs. tectonic features.In this work, we extrapolate surface processes and rates from the top meter of the surface in conjunction with global estimates of transport and resurfacing rates. We combine near-surface modification with an estimate of surface-subsurface (and vice versa) transport rates for various geologic terrains based on an average of proposed formation mechanisms, and a spatial distribution of each landform type over Europa’s surface area.Understanding the rates and mass balance for each of these processes, as well as their spatial and temporal variability, allows us to estimate surface - subsurface exchange rates over the average surface age (~50myr) of Europa. 
Quantifying the timescale and volume of transported material will yield insight on whether such a process may provide fuel to sustain a biosphere in Europa’s subsurface ocean, which is relevant to searches for life by a future mission such as a potential Europa Lander.
Study of Natural Fiber Breakage during Composite Processing
NASA Astrophysics Data System (ADS)
Quijano-Solis, Carlos Jafet
Biofiber-thermoplastic composites have gained considerable importance in the last century. To provide mechanical reinforcement to the polymer, fibers must be larger than a critical aspect ratio (length-to-width ratio). However, biofibers undergo breakage in length or width during processing, affecting their final aspect ratio in the composites. In this study, the influence of factors related to processing conditions, fiber morphology, and flow type on biofiber breakage was investigated through (a) experiments using an internal mixer, a twin-screw extruder (TSE), or a capillary rheometer, and (b) a Monte Carlo computer simulation. Composites of thermomechanical fibers of aspen or wheat straw mixed with polypropylene were studied. The internal mixer experiments analyzed wheat straw and two batches of aspen fibers, named AL and AS; AL fibers had a longer average length. The processing variables included temperature, rotor speed, and fiber concentration. The TSE experiments studied AL and AS fiber composites under various screw speeds, temperatures, and feeding rates of the polymer and fibers. The capillary rheometer experiments determined AL fiber breakage in shear and elongational flows for composites processed at different concentrations, temperatures, and strain rates. Finally, the internal mixer experimental results were compared with Monte Carlo simulation predictions; the simulation focused on fiber length breakage due to fiber-polymer interactions. The internal mixer results showed that the final average fiber length depended almost solely on processing conditions, while the final average fiber width depended on both processing conditions and initial fiber morphology. In the TSE, processing conditions as well as initial fiber length influenced the final average length, and the fiber concentration regime appeared to influence the effect of processing variables on fiber breakage.
The capillary rheometer experiments demonstrated that biofiber breakage happens in both elongational and shear flows; in some cases, the percentage of biofiber breakage in elongational flow is higher. In general, the simulation predictions of final average lengths were in good agreement with the experiments, indicating the importance of fiber-polymer interactions in fiber breakage. The largest discrepancies were obtained for composites with higher fiber concentrations; these differences might be resolved, in future simulations, by including the effect of fiber-fiber interactions.
Frame sequences analysis technique of linear objects movement
NASA Astrophysics Data System (ADS)
Oshchepkova, V. Y.; Berg, I. A.; Shchepkin, D. V.; Kopylova, G. V.
2017-12-01
Obtaining data by noninvasive methods is often needed in many fields of science and engineering. This is achieved through video recording at various frame rates and in various light spectra, and quantitative analysis of the motion of the objects under study becomes an important component of the research. This work discusses the analysis of the motion of linear objects in the two-dimensional plane. The complexity of this problem increases when a frame contains numerous objects whose images may overlap. This study uses a sequence of 30 frames at a resolution of 62 × 62 pixels and a frame rate of 2 Hz, from which the average velocity of object motion had to be determined. This velocity was found as an average over 8-12 objects with an error of 15%. After processing, dependencies of the average velocity on the control parameters were found. The processing was performed in the software environment GMimPro, with subsequent approximation of the data obtained using the Hill equation.
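A Hill-equation approximation of the kind mentioned, v = Vmax·c^n/(K^n + c^n), can be fitted with a simple grid search plus a closed-form least-squares step for Vmax; a minimal numpy sketch on synthetic (hypothetical, not the study's) data:

```python
import numpy as np

c = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])       # control parameter values
v_max_true, K_true, n_true = 3.0, 1.0, 2.0
v = v_max_true * c**n_true / (K_true**n_true + c**n_true)  # noise-free velocities

best = None
for K in np.linspace(0.2, 2.0, 10):        # coarse grid over the half-saturation K
    for n in np.linspace(0.5, 4.0, 8):     # and over the Hill coefficient n
        f = c**n / (K**n + c**n)
        v_max = (v * f).sum() / (f * f).sum()   # least-squares Vmax given K and n
        err = ((v - v_max * f) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, v_max, K, n)
err, v_max_fit, K_fit, n_fit = best
```

In practice one would refine the grid (or hand the grid optimum to a nonlinear least-squares routine) and work with noisy measurements; the sketch only shows the structure of the fit.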
40 CFR 57.203 - Contents of the application.
Code of Federal Regulations, 2010 CFR
2010-07-01
... emission of sulfur dioxide; the characteristics of all gas streams emitted from the smelter's process...'s maximum daily production capacity (as defined in § 57.103(r)), the operational rate (in pounds of... smelter is operating at that capacity; and the smelter's average and maximum daily production rate for...
Fairness emergence from zero-intelligence agents
NASA Astrophysics Data System (ADS)
Duan, Wen-Qi; Stanley, H. Eugene
2010-02-01
Fairness plays a key role in explaining the emergence and maintenance of cooperation. Opponent-oriented social utility models are often proposed to explain the origins of fairness preferences, in which agents take into account not only their own outcomes but also the outcomes of their opponents. Here, we propose a payoff-oriented mechanism in which agents update their beliefs based only on the payoff signals of the previous ultimatum game, regardless of the behaviors and outcomes of the opponents themselves. Employing an adaptive ultimatum game, we show that (1) fairness behaviors can emerge even under such minimalist assumptions, provided that agents are capable of responding to their payoff signals, (2) the average game payoff per agent per round decreases as the discrepancy between the average giving rate and the average asking rate increases, and (3) the belief update process leads to a 50%-50% fair split provided there is no mutation in the evolutionary dynamics.
NASA Astrophysics Data System (ADS)
Pritychenko, B.; Mughaghab, S. F.; Sonzogni, A. A.
2010-11-01
We have calculated the Maxwellian-averaged cross sections and astrophysical reaction rates of the stellar nucleosynthesis reactions (n, γ), (n, fission), (n, p), (n, α), and (n, 2n) using the ENDF/B-VII.0, JEFF-3.1, JENDL-3.3, and ENDF/B-VI.8 evaluated nuclear reaction data libraries. These four major nuclear reaction libraries were processed under the same conditions for Maxwellian temperatures (kT) ranging from 1 keV to 1 MeV. We compare our current calculations of the s-process nucleosynthesis nuclei with previous data sets and discuss the differences between them and the implications for nuclear astrophysics.
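As a hedged illustration of the averaging involved (not the library-processing pipeline used in the paper), the Maxwellian-averaged cross section ⟨σ⟩ = (2/√π) (kT)^-2 ∫ σ(E) E e^(-E/kT) dE can be evaluated numerically. For a constant cross section the integral has the analytic value (2/√π) σ0, which makes a convenient sanity check; the function name and truncation cutoff are my own choices.

```python
import numpy as np

def macs(sigma, kT, e_max_factor=40.0, n=200_000):
    """Maxwellian-averaged cross section:
    <sigma> = (2/sqrt(pi)) * (kT)**-2 * integral( sigma(E) * E * exp(-E/kT) dE ).

    sigma: callable giving the cross section at energy E (E in the units of kT).
    The integral is truncated at e_max_factor * kT, where exp(-E/kT) is negligible.
    """
    E = np.linspace(1e-9, e_max_factor * kT, n)
    integrand = sigma(E) * E * np.exp(-E / kT)
    # trapezoidal rule on the uniform grid
    dE = E[1] - E[0]
    integral = np.sum((integrand[:-1] + integrand[1:]) / 2.0) * dE
    return (2.0 / np.sqrt(np.pi)) * integral / kT**2

# Constant sigma = 3: the analytic MACS is (2/sqrt(pi)) * 3 ≈ 3.385 at any kT.
print(macs(lambda E: np.full_like(E, 3.0), kT=0.030))
```

For a real 1/v-shaped cross section the same routine applies unchanged; only the `sigma` callable differs.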
Studies of the Ocean Surface and the Coupling between the Sea and the Atmosphere.
1984-10-01
design was developed and built for use in this work. A number of new ideas were involved and five patents were assigned to the Navy in the following... close enough that the mounting surface was not involved in the drag. A physical description of the process using entraining slabs of turbulent air allowed... each slab the average turbulence, average horizontal velocity and average dissipation rate are still used. The velocity at any given level matches
Yue, Xiu; Yu, Guangping; Liu, Zhuhan; Tang, Jiali; Liu, Jian
2018-04-01
The long start-up time of the completely autotrophic nitrogen removal over nitrite (CANON) process is one of the main disadvantages of this system. In this paper, the CANON process with a submerged aerated biological filter (SABF) was rapidly started up within 26 days, giving an average ammonium nitrogen removal rate (ANR) of 94.2% and a total nitrogen removal rate (TNR) of 81.3%. The phyla Proteobacteria and Planctomycetes were confirmed as the ammonia oxidizing bacteria (AOB) and anaerobic ammonium oxidation bacteria (AnAOB), and the genus Candidatus Brocadia was the major contributor to nitrogen removal. pH and temperature affect the performance of the CANON process: the experimental results showed that the optimum pH and temperature were 8.0 and 30 °C, respectively, which gave the highest average ANR and TNR values of 94.6% and 85.1%. This research could promote the nitrogen removal ability of the CANON process in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zha, B. L.; Shi, Y. A.; Wang, J. J.; Su, Q. D.
2018-01-01
A self-designed oxygen-kerosene ablation system was employed to study the ablation characteristics of silicone rubber based thermal insulation materials under erosion by boron oxide particles. Ablation tests were run with a boron oxide particle mass fraction of 1.69% and particle-free, and the microstructure and elemental composition of the specimens before and after ablation were characterized by Scanning Electron Microscopy (SEM) and Energy Dispersion Spectrum (EDS). The results show that under pure gas-phase ablation the average mass ablation rate of the material was 0.0099 g·s-1 and the average linear ablation rate was -0.025 mm·s-1, whereas for the multiphase ablation test group the average mass ablation rate was 0.1775 g·s-1 and the average linear ablation rate was 0.437 mm·s-1. During the ablation process, the boron oxide particles adhered as a molten layer on the flame contact surface of the specimen, covering the pores on the material surface, blocking the infiltration channel for the oxidizing components and slowing the oxidation loss of the material below the surface. However, because particle erosion was the main cause of material depletion, the combined effect of the two led to the increased ablation rates of the silicone rubber.
40 CFR 98.295 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... value shall be the best available estimate(s) of the parameter(s), based on all available process data or data used for accounting purposes. (c) For each missing value collected during the performance test (hourly CO2 concentration, stack gas volumetric flow rate, or average process vent flow from mine...
Non-contact Real-time heart rate measurements based on high speed circuit technology research
NASA Astrophysics Data System (ADS)
Wu, Jizhe; Liu, Xiaohua; Kong, Lingqin; Shi, Cong; Liu, Ming; Hui, Mei; Dong, Liquan; Zhao, Yuejin
2015-08-01
In recent years, the morbidity and mortality of cardiovascular and cerebrovascular diseases, which greatly threaten human health, have increased year by year. Heart rate is an important index of these diseases. To address this, the paper puts forward a non-contact heart rate measurement method with a simple structure and easy operation, suitable for daily monitoring of large populations. In this method, imaging equipment records video of sensitive regions; changes in light intensity, caused by changes in blood volume, are reflected in the average image grayscale. We record video of the face, which includes the sensitive region of interest (ROI), and use a high-speed processing circuit to save the video in AVI format to memory. After processing the whole video over a period of time, we plot the curve of each color channel against frame number and obtain the heart rate from the curve. We use independent component analysis (ICA) to suppress noise from motion interference, enabling accurate extraction of the heart rate signal while the subject is moving. We designed an algorithm, based on the high-speed processing circuit, for face recognition and tracking to automatically obtain the face region. We apply grayscale averaging to the recognized image to obtain the three RGB grayscale curves, extract a clearer pulse wave curve through independent component analysis, and then obtain the heart rate under motion. Finally, comparing our system with a fingertip pulse oximeter shows that the system achieves accurate measurement, with an error of less than 3 beats per minute.
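The final step, reading a heart rate off a per-frame channel-mean curve, can be sketched as a spectral peak-pick. This is a minimal sketch under my own assumptions: the ICA denoising stage described above is omitted, and the function name and band limits are illustrative, not the paper's implementation.

```python
import numpy as np

def heart_rate_bpm(channel_means, fps, lo_hz=0.75, hi_hz=4.0):
    """Estimate heart rate from a per-frame color-channel mean signal.

    Picks the dominant spectral peak inside a plausible heart-rate band
    (0.75-4 Hz, i.e. 45-240 bpm).
    """
    x = np.asarray(channel_means, dtype=float)
    x = x - x.mean()                               # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    amp = np.abs(np.fft.rfft(x))
    band = (freqs >= lo_hz) & (freqs <= hi_hz)     # restrict to the HR band
    return 60.0 * freqs[band][np.argmax(amp[band])]

# A clean 1.2 Hz oscillation sampled at 30 fps should read as about 72 bpm.
t = np.arange(300) / 30.0
print(heart_rate_bpm(np.sin(2 * np.pi * 1.2 * t), fps=30.0))  # ≈ 72 bpm
```

In the system described above, the input would be the ICA-cleaned component rather than a raw channel mean, which is what makes the estimate robust to motion.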
NASA Astrophysics Data System (ADS)
Khazaeli, Ali; Vatani, Ali; Tahouni, Nassim; Panjeshahi, Mohammad Hassan
2015-10-01
In flow batteries, the electrolyte flow rate plays a crucial role in minimizing mass transfer polarization, at the cost of a higher pressure drop. In this work, a two-dimensional numerical method is applied to investigate the effect of electrolyte flow rate on cell voltage, maximum depth of discharge and pressure drop in a six-cell stack of a vanadium redox flow battery (VRFB). The results show that during the discharge process, increasing the electrolyte flow rate can raise the voltage of each cell by up to 50 mV on average. Moreover, the maximum depth of discharge increases dramatically with electrolyte flow rate; on the other hand, the pressure drop also rises with electrolyte flow rate. In order to investigate all these effects simultaneously, average energy and exergy efficiencies are introduced in this study for the transient process of the VRFB. These efficiencies give insight into choosing an appropriate strategy for the electrolyte flow rate. Finally, the energy efficiency of electricity storage using a VRFB is investigated and compared with other energy storage systems. The results illustrate that this kind of battery has at least 61% storage efficiency based on the second law of thermodynamics, which is considerably higher than that of its counterparts.
NASA Astrophysics Data System (ADS)
Baird, M. E.; Walker, S. J.; Wallace, B. B.; Webster, I. T.; Parslow, J. S.
2003-03-01
A simple model of estuarine eutrophication is built on biomechanical (or mechanistic) descriptions of a number of the key ecological processes in estuaries. Mechanistically described processes include the nutrient uptake and light capture of planktonic and benthic autotrophs, and the encounter rates of planktonic predators and prey. Other more complex processes, such as sediment biogeochemistry, detrital processes and phosphate dynamics, are modelled using empirical descriptions from the Port Phillip Bay Environmental Study (PPBES) ecological model. A comparison is made between the mechanistically determined rates of ecological processes and the analogous empirically determined rates in the PPBES ecological model. The rates generally agree, with a few significant exceptions. Model simulations were run at a range of estuarine depths and nutrient loads, with outputs presented as the annually averaged biomass of autotrophs. The simulations followed a simple conceptual model of eutrophication, suggesting a simple biomechanical understanding of estuarine processes can provide a predictive tool for ecological processes in a wide range of estuarine ecosystems.
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
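The power-law dependence of σ(R) on average size is typically estimated by least squares in log-log space. A minimal sketch on synthetic data (the data and function name are illustrative, not the paper's datasets):

```python
import numpy as np

def scaling_exponent(sizes, stds):
    """Fit std(R) ~ size**slope by least squares on log-transformed data.

    A negative slope corresponds to the decaying sigma(R) ~ size**(-beta)
    relation discussed above, with beta = -slope.
    """
    slope, _intercept = np.polyfit(np.log(sizes), np.log(stds), 1)
    return slope

# Synthetic data obeying sigma = 2 * size**(-0.14) recovers the exponent.
sizes = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
stds = 2.0 * sizes ** (-0.14)
print(scaling_exponent(sizes, stds))  # ≈ -0.14
```

With real growth-rate data the fitted slope would carry sampling noise, so it is usually reported with a confidence interval rather than as an exact value.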
Buschiazzo, Emmanuel; Ritland, Carol; Bohlmann, Jörg; Ritland, Kermit
2012-01-20
Comparative genomics can inform us about the processes of mutation and selection across diverse taxa. Among seed plants, gymnosperms have been lacking in genomic comparisons. Recent EST and full-length cDNA collections for two conifers, Sitka spruce (Picea sitchensis) and loblolly pine (Pinus taeda), together with full genome sequences for two angiosperms, Arabidopsis thaliana and poplar (Populus trichocarpa), offer an opportunity to infer the evolutionary processes underlying thousands of orthologous protein-coding genes in gymnosperms compared with an angiosperm orthologue set. Based upon pairwise comparisons of 3,723 spruce and pine orthologues, we found an average synonymous genetic distance (dS) of 0.191, and an average dN/dS ratio of 0.314. Using a fossil-established divergence time of 140 million years between spruce and pine, we extrapolated a nucleotide substitution rate of 0.68 × 10(-9) synonymous substitutions per site per year. When compared to angiosperms, this indicates a dramatically slower rate of nucleotide substitution rates in conifers: on average 15-fold. Coincidentally, we found a three-fold higher dN/dS for the spruce-pine lineage compared to the poplar-Arabidopsis lineage. This joint occurrence of a slower evolutionary rate in conifers with higher dN/dS, and possibly positive selection, showcases the uniqueness of conifer genome evolution. Our results are in line with documented reduced nucleotide diversity, conservative genome evolution and low rates of diversification in conifers on the one hand and numerous examples of local adaptation in conifers on the other hand. We propose that reduced levels of nucleotide mutation in large and long-lived conifer trees, coupled with large effective population size, were the main factors leading to slow substitution rates but retention of beneficial mutations.
NUCLEOSYNTHESIS CONSTRAINTS ON THE NEUTRON STAR-BLACK HOLE MERGER RATE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauswein, A.; Ardevol Pulpillo, R.; Janka, H.-T.
2014-11-01
We derive constraints on the time-averaged event rate of neutron star-black hole (NS-BH) mergers by using estimates of the population-integrated production of heavy rapid neutron-capture (r-process) elements with nuclear mass numbers A > 140 by such events in comparison to the Galactic repository of these chemical species. Our estimates are based on relativistic hydrodynamical simulations convolved with theoretical predictions of the binary population. This allows us to determine a strict upper limit of the average NS-BH merger rate of ∼6 × 10^-5 per year. We quantify the uncertainties of this estimate to be within factors of a few, mostly because of the unknown BH spin distribution of such systems, the uncertain equation of state of NS matter, and possible errors in the Galactic content of r-process material. Our approach implies a correlation between the merger rates of NS-BH binaries and of double NS systems. Predictions of the detection rate of gravitational-wave signals from such compact object binaries by Advanced LIGO and Advanced Virgo on the optimistic side are incompatible with the constraints set by our analysis.
Knowledge diffusion in complex networks by considering time-varying information channels
NASA Astrophysics Data System (ADS)
Zhu, He; Ma, Jing
2018-03-01
In this article, based on a model of epidemic spreading, we explore the knowledge diffusion process with an innovative mechanism for complex networks by considering time-varying information channels. To cover the knowledge diffusion process in homogeneous and heterogeneous networks, two types of networks (the BA network and the ER network) are investigated. The mean-field theory is used to theoretically draw the knowledge diffusion threshold. Numerical simulation demonstrates that the knowledge diffusion threshold is almost linearly correlated with the mean of the activity rate. In addition, under the influence of the activity rate and distinct from the classic Susceptible-Infected-Susceptible (SIS) model, the density of knowers almost linearly grows with the spreading rate. Finally, in consideration of the ubiquitous mechanism of innovation, we further study the evolution of knowledge in our proposed model. The results suggest that compared with the effect of the spreading rate, the average knowledge version of the population is affected more by the innovation parameter and the mean of the activity rate. Furthermore, in the BA network, the average knowledge version of individuals with higher degree is always newer than those with lower degree.
Guo, Ming-ming; Wang, Wen-long; Li, Jian-ming; Huang, Peng-fei; Zhu, Bao-cai; Wang, Zhen; Luo, Ting
2015-02-01
Non-hardened roads formed during production in the Shenfu Coalfield have a unique underlying surface condition. The road surface is composed of a regolith layer of a certain thickness resulting from long-term rolling, and is thus characterized by weakened anti-scourability and anti-erodibility; in contrast, the soil layer below the regolith has a higher bulk density and anti-erodibility. Soil erosion processes on non-hardened roads exhibit some differences under rainfall. The process of sediment transport and the relationship between sediment transport rate and erosion factors at different erosion stages were studied on non-hardened roads with slope degrees ranging from 3° to 12° (3°, 6°, 9°, 12°) in a field experiment under artificial rainfall. Results showed that the first peak of sediment transport on the regolith surface was observed at the sheet erosion stage. Sheet erosion alone occurred only at the 3° slope, with an average variation coefficient of 0.07 for the sediment transport rate. At slopes of 6° to 12°, rills began to develop in every test about 15 min after runoff initiation. At the sheet erosion stage, the process of sediment transport fluctuated considerably at rainfall intensities > 1.5 mm·min(-1), but the differences in its variation among the three slope degrees were small, with average variation coefficients of 0.20, 0.19 and 0.16, respectively. Rainfall intensity had a more significant impact on sediment transport rate than slope degree. The process of sediment transport at the rill erosion stage also fluctuated, but markedly less than at the sheet erosion stage, with average variation coefficients of 0.05, 0.09 and 0.10 at the three slope degrees. Many wide and shallow rills evolved at the rill erosion stage. The sediment transport rate at the sheet and rill erosion stages could be well predicted by a power function of rainfall intensity and slope degree.
The stable sediment transport rate for all the tests was linearly related to runoff rate and sediment concentration.
Pressure-strain-rate events in homogeneous turbulent shear flow
NASA Technical Reports Server (NTRS)
Brasseur, James G.; Lee, Moon J.
1988-01-01
A detailed study of the intercomponent energy transfer processes by the pressure-strain-rate in homogeneous turbulent shear flow is presented. Probability density functions (pdf's) and contour plots of the rapid and slow pressure-strain-rate show that the energy transfer processes are extremely peaky, with high-magnitude events dominating low-magnitude fluctuations, as reflected by very high flatness factors of the pressure-strain-rate. A concept of the energy transfer class was applied to investigate the direction as well as the magnitude of the energy transfer processes. In incompressible flow, six disjoint energy transfer classes exist. Examination of contours in instantaneous fields, pdf's and weighted pdf's of the pressure-strain-rate indicates that in the low-magnitude regions all six classes play an important role, but in the high-magnitude regions four classes of transfer processes dominate. The contribution to the average slow pressure-strain-rate from the high-magnitude fluctuations is only 50 percent or less. The relative significance of high- and low-magnitude transfer events is discussed.
Does implied volatility of currency futures option imply volatility of exchange rates?
NASA Astrophysics Data System (ADS)
Wang, Alan T.
2007-02-01
By investigating currency futures options, this paper provides an alternative economic implication for the result reported by Stein [Overreactions in the options market, Journal of Finance 44 (1989) 1011-1023] that long-maturity options tend to overreact to changes in the implied volatility of short-maturity options. When a GARCH process is assumed for exchange rates, a continuous-time relationship is developed. We provide evidence that implied volatilities may not be the simple average of future expected volatilities. By comparing the term-structure relationship of implied volatilities with the process of the underlying exchange rates, we find that long-maturity options are more consistent with the exchange rates process. In sum, short-maturity options overreact to the dynamics of underlying assets rather than long-maturity options overreacting to short-maturity options.
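Under a GARCH(1,1) process, used here as a hedged illustration of why implied variance need not be a simple average of expected future variances, the k-step-ahead expected variance mean-reverts geometrically toward its long-run level. This is a textbook GARCH property, not the paper's specific continuous-time relationship; the function name and parameter values are illustrative.

```python
def expected_avg_variance(omega, alpha, beta, var_now, horizon):
    """Average of E[var_{t+k}] for k = 1..horizon under GARCH(1,1):
    E[var_{t+k}] = vbar + (alpha + beta)**k * (var_now - vbar),
    where vbar = omega / (1 - alpha - beta) is the long-run variance."""
    vbar = omega / (1.0 - alpha - beta)
    phi = alpha + beta                      # persistence of variance shocks
    total = sum(vbar + phi ** k * (var_now - vbar)
                for k in range(1, horizon + 1))
    return total / horizon

# Starting above the long-run level (vbar = 0.2), the term-average decays
# toward vbar as maturity grows, so long- and short-horizon implied
# variances respond differently to the same volatility shock.
print(expected_avg_variance(0.02, 0.1, 0.8, 0.4, 5))    # near-term average
print(expected_avg_variance(0.02, 0.1, 0.8, 0.4, 250))  # ≈ long-run 0.2
```

The differing decay across horizons is one mechanism consistent with long-maturity implied volatilities tracking the exchange-rate process more closely than short-maturity ones.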
NASA Astrophysics Data System (ADS)
Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.
2017-12-01
Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R), the flux of plant-respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature, as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model in which the exponent of the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What principles underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. (2016) using MMRT to show this equivalence and thus provide a thermodynamic explanation for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT compared to the exponential/polynomial model, and the superiority of both models over the Arrhenius function. Three meaningful parameters emerge from MMRT analysis: the temperature at which the rate of respiration is maximum (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf) and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition-state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0±1.2 °C and 41.4±0.7 °C across global sites.
The average curvature is negative, -1.2±0.1 kJ·mol-1·K-1. MMRT extends classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes, including micro-organism growth rates and ecosystem processes.
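The equivalence described above rests on writing ln(rate) as a quadratic in temperature; the optimum temperature then follows by setting the derivative to zero. A minimal sketch, with illustrative coefficients chosen only to reproduce a Topt near the reported 67 °C (they are not fitted values from the study):

```python
def optimum_temperature(b, c):
    """For ln(rate) = a + b*T + c*T**2 with c < 0 (the exponential/polynomial
    model), d ln(rate)/dT = b + 2*c*T vanishes at T_opt = -b / (2*c).
    The intercept a shifts the curve but not the optimum."""
    return -b / (2.0 * c)

# Illustrative coefficients giving an optimum near 67 degrees C:
print(optimum_temperature(b=0.134, c=-0.001))  # ≈ 67
```

In MMRT language, the quadratic coefficient c maps onto the (negative) change in heat capacity, which is why the curvature of the log(rate) plot carries thermodynamic meaning.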
Stability of the Martian atmosphere: Is heterogeneous catalysis essential?
NASA Technical Reports Server (NTRS)
Atreya, Sushil K.; Gu, Zhen Gang
1994-01-01
A comprehensive homogeneous gas-phase photochemical model is developed to study the problem of the stability of the Martian atmosphere. The one-dimensional model extends from the ground up to 220 km, passing through the homopause at 125 km; it thus couples the lower (neutral) atmosphere to the ionosphere above, which provides a significant downward flux of carbon monoxide and oxygen atoms. It is concluded, on the basis of currently accepted values for the globally and seasonally averaged water vapor abundance, dust opacity and middle-atmospheric eddy mixing coefficient, as well as the relevant laboratory data (particularly the temperature dependence of the CO2 absorption cross section and the rate constant for the CO+OH reaction), that the rate of re-formation of carbon dioxide exceeds its photolytic destruction rate by about 40%. Furthermore, this result is virtually independent of the choice of eddy mixing coefficient (unless its value in the middle atmosphere exceeds 10^8 cm^2/s or is far smaller than 10^5 cm^2/s), of the dust opacity (unless it exceeds unity), and of the water vapor mixing ratio at the surface (unless it is far smaller, <= 1 ppm, or far greater, >= 500 ppm, than the average value of approximately 150 ppm). Since none of these extremes represents globally and seasonally averaged conditions on Mars, we propose that the present model requires a mechanism to throttle down the recycling rate of carbon dioxide. It is therefore suggested that a heterogeneous process providing a sink for the species that participate in the recycling of CO2 (H2O, H2O2, OH, CO or O, in particular) may be necessary to bring the CO2 recycling rate into balance with its photolytic destruction rate. Aerosols of dust or ice (pure or doped water or carbon dioxide ice present in the atmosphere of Mars) can provide the appropriate adsorption sites for such a heterogeneous process.
Despite our conclusion that some heterogeneous process may be needed, it is important to recognize that one-dimensional models can only provide first-order results which, most likely, represent globally and seasonally averaged conditions. However, only after the actual temporal, latitudinal and longitudinal variations of the relevant atmospheric parameters are included in the model can one determine fully whether the problem of atmospheric stability still persists and whether some heterogeneous process is required to correct it.
NASA Astrophysics Data System (ADS)
Sun, M.; Yu, P. F.; Fu, J. X.; Ji, X. Q.; Jiang, T.
2017-08-01
The optimal process parameters and conditions for treating slaughterhouse wastewater by a coagulation sedimentation-AF-biological contact oxidation process were studied to address the problem of treating high-concentration organic wastewater from small and medium sized slaughter plants. Suitable water temperature and optimum reaction time were determined by precipitation experiments; the effects of filtration rate and reflux ratio on COD and SS were studied in the anaerobic biological filter, and the effects of biofilm thickness and gas-water ratio on NH3-N and COD in the biological contact oxidation tank. The results show that for coagulating sedimentation the optimum temperature is 16-24°C and the reaction time 20 min, while in the anaerobic biological filter reactor the optimum filtration rate is 0.6 m/h and the optimum reflux ratio 300%. The most suitable biofilm thickness range is 1.8-2.2 mm and the most suitable gas-water ratio 12:1-14:1 in the biological contact oxidation tank. Over 80 days of continuous operation of the coupled process, the average effluent mass concentrations of COD, TP and TN were 15.57 mg/L, 40 mg/L and 0.63 mg/L, and the average removal rates were 98.93%, 86.10% and 88.95%, respectively. The coupled process operates stably with good effluent quality and is suitable for industrial application.
NASA Astrophysics Data System (ADS)
Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo
2016-09-01
High-throughput ultrashort pulse laser machining is investigated on various industrial grade metals (aluminum, copper, and stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high average power picosecond laser in conjunction with a unique, in-house developed polygon mirror-based biaxial scanning system. Different concepts of polygon scanners were engineered and tested to find the best architecture for high-speed and high-precision laser beam scanning. In order to identify the optimum conditions for efficient processing at high average laser powers, the depths of cavities machined in the samples under varying processing parameter settings are analyzed and, from the results, the characteristic removal values are specified. For overlapping pulses of optimum fluence, the removal rate is as high as 27.8 mm3/min for aluminum, 21.4 mm3/min for copper, 15.3 mm3/min for stainless steel, and 129.1 mm3/min for Al2O3 when a laser beam of 187 W average power irradiates the sample. On stainless steel, it is demonstrated that the removal rate increases to 23.3 mm3/min when the laser beam is moved very fast, thanks to the low pulse overlap achieved at a beam deflection speed of 800 m/s; laser beam shielding can thus be avoided even when irradiating high-repetition-rate 20-MHz pulses.
Cost minimization analysis of a store-and-forward teledermatology consult system.
Pak, Hon S; Datta, Santanu K; Triplett, Crystal A; Lindquist, Jennifer H; Grambow, Steven C; Whited, John D
2009-03-01
The aim of this study was to perform a cost minimization analysis of store-and-forward teledermatology compared to a conventional dermatology referral process (usual care). In a Department of Defense (DoD) setting, subjects were randomized to either a teledermatology consult or usual care. Accrued healthcare utilization recorded over a 4-month period included clinic visits, teledermatology visits, laboratories, preparations, procedures, radiological tests, and medications. Direct medical care costs were estimated by combining utilization data with Medicare reimbursement rates and wholesale drug prices. The indirect cost of productivity loss for seeking treatment was also included in the analysis using an average labor rate. Total and average costs were compared between groups. Teledermatology patients incurred $103,043 in total direct costs ($294 average), while usual-care patients incurred $98,365 ($283 average). However, teledermatology patients only incurred $16,359 ($47 average) in lost productivity cost while usual-care patients incurred $30,768 ($89 average). In total, teledermatology patients incurred $119,402 ($340 average) and usual-care patients incurred $129,133 ($372 average) in costs. From the economic perspective of the DoD, store-and-forward teledermatology was a cost-saving strategy for delivering dermatology care compared to conventional consultation methods when productivity loss cost is taken into consideration.
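The reported totals are simply direct costs plus productivity-loss costs; recomputing them from the figures in the abstract confirms the arithmetic and the net saving:

```python
# Figures taken directly from the abstract (US dollars).
tele_direct, tele_lost = 103_043, 16_359
usual_direct, usual_lost = 98_365, 30_768

tele_total = tele_direct + tele_lost      # 119,402 as reported
usual_total = usual_direct + usual_lost   # 129,133 as reported
savings = usual_total - tele_total        # teledermatology saves 9,731 overall

print(tele_total, usual_total, savings)   # → 119402 129133 9731
```

Note that teledermatology is cost-saving only once productivity loss is counted: on direct costs alone it is slightly more expensive ($103,043 vs. $98,365).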
Umut, İlhan; Çentik, Güven
2016-01-01
The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. In this study, we aim to detect periodic leg movement (PLM) in sleep using channels other than leg electromyography (EMG), by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software was developed for the analysis of the PSG records, utilizing machine learning algorithms, statistical methods, and DSP methods. To classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour algorithm had the highest average classification rate (91.87%) and the lowest average classification error (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error (RMSE = 0.3705). The results show that PLM can be classified with high accuracy (91.87%) without a leg EMG record being present. PMID:27213008
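The best-performing classifier above is K-nearest neighbour. A minimal pure-Python sketch of the algorithm follows; the two-feature toy data stand in for signal features extracted from non-EMG PSG channels and are not the study's actual feature set or software.

```python
# Minimal K-nearest-neighbour classifier: label a query point by majority vote
# among its k closest training points. Toy data: class 1 clusters near (1, 1),
# class 0 near (-1, -1); these are placeholders, not PSG features.
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

train = [((1.0, 1.2), 1), ((0.9, 0.8), 1), ((1.1, 1.0), 1),
         ((-1.0, -0.9), 0), ((-1.2, -1.1), 0), ((-0.8, -1.0), 0)]
print(knn_predict(train, (0.95, 1.05)))   # 1
print(knn_predict(train, (-1.05, -0.95))) # 0
```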
Zhang, Yan; Wang, Yong-sheng; Bai, Yu-hua; Chen, Chen; Lü, Jian; Zhang, Jie
2007-10-01
A swimming bed combined with aerobic granular sludge was developed as a novel technology for wastewater treatment, built on the biofilm process and the activated sludge process; the results demonstrated notable high-efficiency treatment capability and sludge reduction. Even when the hydraulic retention time (HRT) was only 3.2 h, with an average COD volumetric loading of 2.03 kg/(m3 x d) and NH4(+)-N loading of 0.52 kg/(m3 x d), an average COD removal rate of 90.9% and an NH4(+)-N removal rate of 98.3% were achieved. Aerobic granular sludge with a spherical or rod shape appeared after 16 days of operation. The mixed liquor suspended solid (MLSS) concentration in the reactor reached 5,640 mg/L at its highest during the operation period, and the average ratio of mixed liquor volatile suspended solid (MLVSS) to MLSS reached 0.87. Furthermore, microscopic observation of the biofilm and aerobic granules revealed abundant protozoa and metazoa on the biofilm and suspended sludge; this long food chain can contribute to the sludge reduction. A sludge yield (MLSS/COD removed) of only 0.1755 was obtained in the experiment, about 50% of that of conventional aerobic processes.
Cema, G; Płaza, E; Trela, J; Surmacz-Górska, J
2011-01-01
A biofilm system with Kaldnes biofilm carriers was used in these studies to cultivate bacteria responsible for both the partial nitritation and Anammox processes. Due to the co-existence of oxygen and oxygen-free zones within the biofilm depth, both processes can occur in a single reactor. Oxygen, which inhibits the Anammox process, is consumed in the outer layer of the biofilm, and in this way the Anammox bacteria are protected from oxygen. The impact of oxygen concentration on nitrogen removal rates was investigated in the pilot plant (2.1 m3), supplied with reject water from the Himmerfjärden Waste Water Treatment Plant. The results of batch tests showed that the highest nitrogen removal rates were obtained at a dissolved oxygen (DO) concentration around 3 g O2 m(-3). At a DO concentration of 4 g O2 m(-3), an increase of nitrite and nitrate nitrogen concentrations in the batch reactor was observed. The average nitrogen removal rate in the pilot plant during the whole operating period oscillated around 1.3 g N m(-2)d(-1) (0.3 +/- 0.1 kg N m(-3)d(-1)) at an average dissolved oxygen concentration of 2.3 g O2 m(-3). The maximum nitrogen removal rate amounted to 1.9 g N m(-2)d(-1) (0.47 kg N m(-3)d(-1)) and was observed at a DO concentration of 2.5 g O2 m(-3). The increase of biofilm thickness during the operational period had no influence on nitrogen removal rates in the pilot plant.
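The abstract quotes removal rates both per unit biofilm area and per unit reactor volume; their ratio implies the carrier's effective specific surface area. The figure below is back-calculated for illustration, not a value stated in the abstract.

```python
# Ratio of volumetric to areal nitrogen removal rate implies the biofilm
# carrier's specific surface area (m^2 of biofilm per m^3 of reactor).
areal_rate = 1.3       # g N m^-2 d^-1 (average, from the abstract)
volumetric_rate = 0.3  # kg N m^-3 d^-1 = 300 g N m^-3 d^-1

specific_area = volumetric_rate * 1000 / areal_rate
print(round(specific_area))  # roughly 231 m^2/m^3
```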
Code of Federal Regulations, 2012 CFR
2012-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2013 CFR
2013-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2014 CFR
2014-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2011 CFR
2011-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2010 CFR
2010-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
NASA Astrophysics Data System (ADS)
Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing
2017-09-01
The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which a fuzzy logic rule is used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show that, first, the fuzzy moving average strategy obtains a more stable rate of return than the plain moving average strategies. Second, the holding-amount series is highly sensitive to the price series. Third, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are the most frequently selected. These results are helpful in investment decisions.
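The buy/sell timing signal that the fuzzy layer modulates is a standard moving-average crossover. The sketch below shows only that underlying crossover signal; the paper's fuzzy-logic volume sizing and genetic-algorithm rule search are not reproduced, and the price series is a toy example.

```python
# Simple moving-average (SMA) crossover: a buy signal fires when the fast SMA
# crosses above the slow SMA, a sell signal when it crosses below.
def sma(prices, n):
    """Trailing n-point simple moving average (None until warmed up)."""
    return [None if i + 1 < n else sum(prices[i + 1 - n:i + 1]) / n
            for i in range(len(prices))]

def crossover_signals(prices, short=3, long=5):
    fast, slow = sma(prices, short), sma(prices, long)
    signals = []
    for i in range(1, len(prices)):
        if None in (fast[i], slow[i], fast[i - 1], slow[i - 1]):
            continue
        if fast[i - 1] <= slow[i - 1] and fast[i] > slow[i]:
            signals.append((i, "buy"))   # fast SMA crosses above slow SMA
        elif fast[i - 1] >= slow[i - 1] and fast[i] < slow[i]:
            signals.append((i, "sell"))  # fast SMA crosses below slow SMA
    return signals

prices = [10, 9, 8, 8, 9, 11, 13, 14, 13, 11, 9, 8]
print(crossover_signals(prices))  # [(5, 'buy'), (10, 'sell')]
```

In the paper's strategy, each such signal would additionally be assigned a trading volume by the rating level of the fuzzy rule set.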
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poirier, M.; Gaufridy de Dortan, F. de
A collisional-radiative model describing nonlocal-thermodynamic-equilibrium plasmas is developed. It is based on the HULLAC (Hebrew University Lawrence Livermore Atomic Code) suite for the transition rates, under the zero-temperature radiation field hypothesis. Two variants of the model are presented: the first is configuration averaged, while the second is a detailed-level version. Comparisons between them in the case of a carbon plasma show that the configuration-averaged code gives correct results for an electronic temperature T{sub e}=10 eV (or higher) but fails at lower temperatures such as T{sub e}=1 eV. The validity of the configuration-averaged approximation is discussed: the intuitive criterion requiring that the average configuration-energy dispersion be less than the electron thermal energy turns out to be a necessary but far from sufficient condition. Another condition, based on the resolution of a modified rate-equation system, is proposed. Its efficiency is emphasized in the case of low-temperature plasmas. Finally, it is shown that near-threshold autoionization cascade processes may induce a severe failure of the configuration-average formalism.
Safety leadership: application in construction site.
Cooper, Dominic
2010-01-01
The extant safety literature suggests that managerial Safety Leadership is vital to the success and maintenance of a behavioral safety process. The current paper explores the role of managerial Safety Leadership behaviors in the success of a behavioral safety intervention in the Middle East with 47,000 workers of multiple nationalities employed by fourteen sub-contractors and one main contractor. A quasi-experimental, repeating ABABAB within-groups design was used. Measurement focused on managerial Safety Leadership and employee safety behaviors as well as Corrective Actions. Data were collected over 104 weeks. Over this period, safety behavior improved by 30 percentage points, from an average of 65% during baseline to an average of 95%. The site achieved 121 million man-hours free of lost-time injuries on its longest run. Stepwise multiple regression analyses indicated that 86% of the variation in employee safety behavior was associated with senior, middle, and front-line managers' Safety Leadership behaviors and the Corrective Action Rate. Approximately 38% of the variation in the Total Recordable Incident Rate (TRIR) was associated with the Observation Rate, Corrective Action Rate, and Observers' Records of managerial safety leaders (Visible Ongoing Support). The results strongly suggest that managers' Safety Leadership influences the success of behavioral safety processes.
[Profitability of a day hospital: analysis of activity, cost and effectiveness].
Hernando Ortiz, Lili; Hinojosa Mena-Bernal, Carmen; González Sarmiento, Enrique; González Guilabert, Isabel; Arana Ruiz, Jorge; Muñoz Moreno, M Fe
2012-01-01
Day hospitals are an alternative to conventional hospital care. We analyzed the functioning and profitability of the day hospital of Hospital Clínico de Valladolid, Spain, in 2009. Profitability is expressed as the provision of identical health coverage at a lower cost than that generated by conventional hospital care and with adequate quality indicators. We performed a retrospective, observational and descriptive study of the information obtained on each patient attended in the day hospital from January 1 to December 31, 2009. We studied four quality indicators: meeting cancellations, the rate of transfusion reactions, the out-patient rate and the satisfaction index. The estimated savings for each process was calculated as the average cost of conventional hospitalization minus the average cost of the process in the day hospital. The most frequent diseases were systemic and connective tissue diseases, accounting for 25.4% of the processes treated; of these, 17.1% corresponded to rheumatoid arthritis. Patient satisfaction was 93%. Meeting cancellations and the rate of transfusion reactions were 0%. The out-patient rate was 26%. Day hospital costs were 8.6% of conventional hospital costs, with savings of 78,390.69 euros. The day hospital is cost effective due to savings compared with conventional hospitalization and has a satisfactory quality index. Copyright © 2011 SESPAS. Published by Elsevier España, S.L. All rights reserved.
Rethinking the connection between working memory and language impairment.
Archibald, Lisa M D; Harder Griebeling, Katherine
2016-05-01
Working memory deficits have been found for children with specific language impairment (SLI) on tasks imposing increasing short-term memory load, with or without an additional, consistent (and simple) processing load. The aim was to examine the processing function of working memory in children with low language (LL) by employing tasks imposing increasing processing loads with constant storage demands, individually adjusted based on each participant's short-term memory capacity. School-age groups with LL (n = 17) and typical language with either average (n = 28) or above-average nonverbal intelligence (n = 15) completed complex working memory span tasks varying processing load while keeping storage demands constant, tasks varying storage demands while keeping processing load constant, simple storage span tasks, and measures of language and nonverbal intelligence. Teachers completed questionnaires about cognition and learning. Significantly lower scores were found for the LL group than for either matched group on the storage-based tasks, but no group differences were found on the tasks varying processing load. Teachers' ratings of oral expression and mathematics abilities discriminated those who did or did not complete the most challenging cognitive tasks. The results implicate a deficit in the phonological storage component, but not the central executive component, of working memory for children with LL. Teacher ratings may reveal personality traits related to perseverance of effort in cognitive research. © 2015 Royal College of Speech and Language Therapists.
Appiani, Elena; Page, Sarah E; McNeill, Kristopher
2014-10-21
Dissolved organic matter (DOM) is involved in numerous environmental processes, and its molecular size is important in many of them, such as DOM bioavailability, DOM sorptive capacity, and the formation of disinfection byproducts during water treatment. The size and size distribution of the molecules composing DOM remain an open question. In this contribution, an indirect method to assess the average size of DOM is described, based on the quenching of hydroxyl radical (HO(•)) by DOM. HO(•) is often assumed to be relatively unselective, reacting with nearly all organic molecules with similar rate constants. Literature values for HO(•) reaction with organic molecules were surveyed to assess this unselectivity and to determine a representative quenching rate constant (k(rep) = 5.6 × 10(9) M(-1) s(-1)). This value was used to assess the average molecular weight of various humic and fulvic acid isolates as model DOM, using literature HO(•) quenching constants, kC,DOM. The results obtained by this method were compared with previous estimates of average molecular weight. The number-average molecular weight (Mn) values obtained with this approach are lower than the Mn measured by other techniques such as size exclusion chromatography (SEC), vapor pressure osmometry (VPO), and field-flow fractionation (FFF). This suggests that DOM is an especially good quencher of HO(•), reacting at rates close to the diffusion-controlled limit. It was further observed that humic acids generally react faster than fulvic acids. The high reactivity of humic acids toward HO(•) is in line with the antioxidant properties of DOM. The benefit of this method is that it provides a firm upper bound on the average molecular weight of DOM, based on the kinetic limits of the HO(•) reaction. The results indicate low average molecular weight values, which is most consistent with the recent understanding of DOM. A possible DOM size distribution is discussed to reconcile the small size of DOM molecules with the large-molecule behavior observed in other studies.
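The kinetic estimate rests on a simple proportionality: if every DOM molecule quenches HO(•) at roughly k(rep) per mole of molecules, then a mass-normalized quenching constant implies a number-average molecular weight Mn ≈ k(rep) / k(mass). In the sketch below, k(rep) is the value from the abstract, while the mass-normalized quenching constant is an illustrative placeholder, not a measured value from the paper.

```python
# Back-of-envelope molecular-weight estimate: Mn ~ k_rep / k_mass, where
# k_rep is per mole of DOM molecules and k_mass is per gram of DOM.
K_REP = 5.6e9   # L mol^-1 s^-1, representative HO* rate constant (from the text)
k_mass = 2.5e7  # L g^-1 s^-1, HYPOTHETICAL mass-normalized quenching constant

Mn = K_REP / k_mass  # g mol^-1 (number-average molecular weight)
print(round(Mn))     # 224
```

With these illustrative numbers the estimate lands in the low hundreds of g/mol, consistent with the "low average molecular weight" conclusion above.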
Development trends for new cancer therapeutics and vaccines.
Reichert, Janice M; Wenger, Julia B
2008-01-01
Global commercial development of cancer treatments has dramatically increased over the past 15 years. To assess trends in the process, we analyzed data for 1111 candidates that entered clinical study during 1990-2006. Our results show that although the average number of therapeutic candidates entering clinical study per year more than doubled, the US approval success rate was low (8%) during the period. The therapeutics took seven years on average to go through the clinical and US approval phases, but cancer vaccines have yet to gain any US approvals. These results indicate that improvement in the efficiency of the development process for innovative cancer treatments is needed.
Economy-wide material input/output and dematerialization analysis of Jilin Province (China).
Li, MingSheng; Zhang, HuiMin; Li, Zhi; Tong, LianJun
2010-06-01
In this paper, both the direct material input (DMI) and domestic processed output (DPO) of Jilin Province in 1990-2006 were calculated, and a dematerialization model was then established based on these two indexes. The main results are summarized as follows: (1) both direct material input and domestic processed output increased at a steady rate during 1990-2006, with average annual growth rates of 4.19% and 2.77%, respectively. (2) The average contribution rate of material input to economic growth was 44%, indicating that the economic growth was visibly extensive. (3) During the studied period, the accumulative quantity of material input dematerialization was 11,543 x 10(4) t and the quantity of waste dematerialization was 5,987 x 10(4) t. Moreover, the dematerialization gaps were positive, suggesting that the potential for dematerialization has been well fulfilled. (4) In most years of the analyzed period, especially 2003-2006, the economic system of Jilin Province was in an unsustainable state. The accelerated economic growth has relied mostly on excessive resource consumption since the Revitalization Strategy of Northeast China was launched.
NASA Technical Reports Server (NTRS)
Pongratz, M.
1972-01-01
Results from a Nike-Tomahawk sounding rocket flight launched from Fort Churchill are presented. The rocket was launched into a breakup aurora at magnetic local midnight on 21 March 1968. The rocket was instrumented to measure electrons with an electrostatic-analyzer electron spectrometer, which made 29 measurements across the energy interval 0.5 keV to 30 keV. Complete energy spectra were obtained at a rate of 10/sec. Pitch angle information is presented via three computed averages per rocket spin. The dumped electron average corresponds to averages over electrons moving nearly parallel to the B vector. The mirroring electron average corresponds to averages over electrons moving nearly perpendicular to the B vector. An average was also computed over the entire downward hemisphere (the precipitated electron average). The observations were obtained within a 10-km altitude range around 230 km altitude.
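The three per-spin averages described above amount to averaging fluxes over different pitch-angle ranges relative to B. The sketch below shows that bookkeeping; the bin edges (30˚ for "nearly parallel", 60-90˚ for "nearly perpendicular") and the sample values are illustrative choices, not the flight's actual definitions or data.

```python
# Three pitch-angle averages per rocket spin: "dumped" (nearly parallel to B),
# "mirroring" (nearly perpendicular to B), and "precipitated" (whole downward
# hemisphere, pitch angle <= 90 deg). Bin edges are illustrative assumptions.
def pitch_angle_averages(samples):
    """samples: list of (pitch_angle_deg, flux) pairs from one spin."""
    def avg(sel):
        vals = [f for a, f in samples if sel(a)]
        return sum(vals) / len(vals) if vals else 0.0
    return {
        "dumped": avg(lambda a: a < 30),            # ~parallel to B
        "mirroring": avg(lambda a: 60 <= a <= 90),  # ~perpendicular to B
        "precipitated": avg(lambda a: a <= 90),     # downward hemisphere
    }

samples = [(10, 5.0), (20, 7.0), (70, 2.0), (85, 4.0), (45, 3.0)]
print(pitch_angle_averages(samples))
# {'dumped': 6.0, 'mirroring': 3.0, 'precipitated': 4.2}
```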
NASA Astrophysics Data System (ADS)
Tomenchok, K.; Hill, M.; Jimerson, C.; Talbot-Wendlandt, H.; Schmidt, A.; Frey, H. M.
2017-12-01
With 9 active volcanic centers, frequent tropical storms, and widespread landslides, the topography of Dominica is rugged and dynamic. This study aims to fingerprint sediment source dynamics in this relatively unstudied region using fallout radionuclides, clay mineralogy, and acid-extractable grain coating concentrations measured in detrital sediments. We also aim to measure basin-average erosion rates and determine river incision rates into the underlying ignimbrites. Baseline data on the effects of volcanoes, landslides, land use, and topography in setting erosional dynamics will be established. We sampled the outlets of 20 large (>10 km2) rivers as well as 11 points in the Roseau River watershed for a network analysis. Block-and-ash flows and ignimbrites underlie 89% of the study area. Steep topography (mean slope = 19.6˚) and high levels of rainfall (mean annual rainfall = 1981.41 mm) are consistent throughout the 89% forested island. A total of 934 landslides affect 13% of the study area. We hypothesize that basin-average parameters and landslide frequency will correlate with erosion rates and fallout radionuclide activities. In addition, we used topographic data and published ignimbrite ages to calculate river incision rates, which ranged from 0.448 to 113.9 mm/yr in the north and from 0.86 to 44 mm/yr in the south. Basin-average erosion rates will be compared to incision rates to quantify differences between basin-wide erosional and river incision processes. We will fingerprint sediment sources with 7Be, 210Pbex, and 137Cs, the concentration of grain coatings, and clay mineralogy. We hypothesize that watersheds with erosion from stabilizing landslide scars will have high 7Be, low 210Pbex and 137Cs, low concentrations of grain coatings, and less weathered clays. Watersheds with river bank/scarp erosion or active landslides will have little 7Be, 210Pbex, and 137Cs, less weathered clays, and low concentrations of grain coatings.
Watersheds with widespread surface erosion will have high activities, weathered clays, and high concentrations of grain coatings. We will correlate basin average statistics with measured fingerprints to provide a better understanding of sediment source dynamics in an understudied region of the world. With the potential for future landslides, further information will advance hazard mitigation in Dominica.
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
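The category-based importance sampling described above can be sketched in a few lines: split the domain into categories, let the modeler choose how many sample points each category receives, and weight each category's sample mean by the category's area fraction. The two-category setup and the toy rate function below are illustrative only, not SILHS itself.

```python
# Category-based importance sampling: the grid box splits into category A
# ("cloudy", 20% of the box, where the process is active) and category B
# ("clear", 80%, where the rate is zero). The modeler prescribes the number
# of sample points per category; each category's sample mean is weighted by
# its area fraction.
import random

random.seed(1)
P_A, P_B = 0.2, 0.8  # area fractions of the two categories

def rate(x, category):
    return 10.0 * x if category == "A" else 0.0  # toy process rate

def importance_estimate(n_a, n_b):
    """Grid-box average rate, drawing n_a samples from A and n_b from B."""
    total = 0.0
    for _ in range(n_a):  # x uniform within category A
        total += P_A * rate(random.random(), "A") / n_a
    for _ in range(n_b):
        total += P_B * rate(random.random(), "B") / n_b
    return total

# True grid-box average: P_A * E[10x] = 0.2 * 5 = 1.0. Concentrating points
# in the active category gives a tight estimate with few total samples.
est = importance_estimate(n_a=90, n_b=10)
print(round(est, 2))
```

Putting 90 of 100 points in the active category mimics the paper's point that drawing more samples from the rain-evaporation region reduces sampling error.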
High Grazing Angle and High Resolution Sea Clutter: Correlation and Polarisation Analyses
2007-03-01
the azimuthal correlation. The correlation between the HH and VV sea clutter data is low. A CA-CFAR (cell-average constant false-alarm rate)...to calculate the power spectra of correlation profiles. The frequency interval of the traditional Discrete Fourier Transform is 1/(NT) Hz, where N and...sea spikes, the Entropy-Alpha decomposition of sea spikes is shown in Figure 30. The process first locates spikes using a cell-average constant false
Pan, Huapu; Assefa, Solomon; Green, William M J; Kuchta, Daniel M; Schow, Clint L; Rylyakov, Alexander V; Lee, Benjamin G; Baks, Christian W; Shank, Steven M; Vlasov, Yurii A
2012-07-30
The performance of a receiver based on a CMOS amplifier circuit designed with 90nm ground rules wire-bonded to a waveguide germanium photodetector is characterized at data rates up to 40Gbps. Both chips were fabricated through the IBM Silicon CMOS Integrated Nanophotonics process on specialty photonics-enabled SOI wafers. At the data rate of 28Gbps which is relevant to the new generation of optical interconnects, a sensitivity of -7.3dBm average optical power is demonstrated with 3.4pJ/bit power-efficiency and 0.6UI horizontal eye opening at a bit-error-rate of 10(-12). The receiver operates error-free (bit-error-rate < 10(-12)) up to 40Gbps with optimized power supply settings demonstrating an energy efficiency of 1.4pJ/bit and 4pJ/bit at data rates of 32Gbps and 40Gbps, respectively, with an average optical power of -0.8dBm.
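The pJ/bit figures above follow the usual convention of circuit power divided by data rate, so the implied receiver power can be back-calculated. These powers are derived for illustration, not reported directly in the abstract.

```python
# Energy per bit [pJ/bit] * data rate [Gbps] gives circuit power:
# (pJ = 1e-12 J, Gbps = 1e9 bit/s), reported here in milliwatts.
def power_mw(pj_per_bit, gbps):
    return pj_per_bit * 1e-12 * gbps * 1e9 * 1e3

print(round(power_mw(3.4, 28), 1))  # ~95.2 mW at 28 Gbps, 3.4 pJ/bit
print(round(power_mw(1.4, 32), 1))  # ~44.8 mW at 32 Gbps, 1.4 pJ/bit
print(round(power_mw(4.0, 40), 1))  # ~160.0 mW at 40 Gbps, 4 pJ/bit
```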
The Influence of Reading on Vocabulary Growth: A Case for a Matthew Effect.
Duff, Dawna; Tomblin, J Bruce; Catts, Hugh
2015-06-01
Individual differences in vocabulary development may affect academic or social opportunities. It has been proposed that individual differences in word reading could affect the rate of vocabulary growth, mediated by the amount of reading experience, a process referred to as a Matthew effect (Stanovich, 1986). In the current study, assessments of written word-reading skills in the 4th grade and oral vocabulary knowledge collected in kindergarten and in the 4th, 8th, and 10th grades from a large epidemiologically based sample (n = 485) allowed a test of the relationship between early word-reading skills and the subsequent rate of vocabulary growth. Consistent with the hypothesis, multilevel modeling revealed the rate of vocabulary growth after the 4th grade to be significantly related to 4th-grade word reading after controlling for kindergarten vocabulary level; that is, above-average readers experienced a higher rate of vocabulary growth than did average readers. Vocabulary growth rate differences accumulated over time such that the effect on vocabulary size was large.
Qiao, Sen; Kawakubo, Yuki; Koyama, Toichiro; Furukawa, Kenji
2008-11-01
This study evaluated the performance of a swim-bed (SB) reactor packed with a novel acrylic fiber carrier (BF) and a swim-bed activated sludge (SBAS) reactor for partial nitritation of anaerobic sludge digester liquor from a municipal wastewater treatment plant. The characteristics of the sludge obtained from the two reactors were also compared. The average conversion rates of ammonium to nitrite were 52.3% and 40.0%, respectively, in the two reactors under relatively high nitrogen loading rates above 3.0 kg-N/m(3)/d. The average BOD(5) removal efficiencies were 74.3% and 64.4%, respectively. The sludge pellets taken from the SB and SBAS reactors were approximately three times the size of the seed sludge (229 µm versus 88 µm). This sludge also had relatively high extracellular protein levels, indicating better settling capability compared to the sludge taken from the SBAS reactor. Although the effluent nitrite/ammonium ratios fluctuated in both reactors to some extent, the low dissolved oxygen concentrations (average of 2.5 versus 0.35 mg/l), low suspended solids (average of 33.3 versus 33.5 mg/l), and roughly 50% ammonium conversion to nitrite demonstrated the application potential of the anammox process for nitrogen removal.
Radiation beam calorimetric power measurement system
Baker, John; Collins, Leland F.; Kuklo, Thomas C.; Micali, James V.
1992-01-01
A radiation beam calorimetric power measurement system for measuring the average power of a beam such as a laser beam, including a calorimeter configured to operate over a wide range of coolant flow rates and being cooled by continuously flowing coolant for absorbing light from a laser beam to convert the laser beam energy into heat. The system further includes a flow meter for measuring the coolant flow in the calorimeter and a pair of thermistors for measuring the temperature difference between the coolant inputs and outputs to the calorimeter. The system also includes a microprocessor for processing the measured coolant flow rate and the measured temperature difference to determine the average power of the laser beam.
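The calorimetric principle behind the system is the standard heat balance: absorbed beam power equals the coolant's heat uptake, P = (mass flow) × c_p × ΔT, with the mass flow derived from the measured coolant flow rate and the ΔT from the thermistor pair. The flow rate and temperature rise below are illustrative values, not figures from the patent.

```python
# Absorbed beam power from coolant flow and temperature rise (water coolant):
# P [W] = mass flow [kg/s] * c_p [J/(kg K)] * deltaT [K].
def beam_power_watts(flow_l_per_min, delta_t_kelvin,
                     density=1000.0, c_p=4186.0):
    mass_flow = flow_l_per_min / 60.0 / 1000.0 * density  # L/min -> kg/s
    return mass_flow * c_p * delta_t_kelvin

# 2 L/min of water warming by 0.5 K carries away roughly 70 W.
print(round(beam_power_watts(2.0, 0.5), 1))  # 69.8
```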
Ophir, Ella; Bornstein, Jacob; Odeh, Marwan; Kaminsky, Svetlana; Shnaider, Oleg; Megel, Yuri; Barnea, Ofer
2014-03-01
To obtain and study new data on the dynamics of the labor process and to develop a contraction-based index of labor progress. This study was carried out in the Delivery Room, Department of Obstetrics and Gynecology, Western Galilee Hospital, Nahariya, Israel, using a new device (Birth Track). We continuously monitored cervical dilatation (CD) and head descent (HD) in 30 nulliparous women during active labor with (augmented group) and without (study group) oxytocin augmentation. This led to the development and validation of progress indices based on features extracted from continuous monitoring. There were no significant differences between the averages of each parameter in the study and augmented groups, except for HD velocity: average HD velocity was faster in the study group. Linear regression analyses demonstrated that head station (HS) amplitude and Toco amplitude were the best parameters for predicting HD velocity in both groups. In the study group, average HD velocity was also significantly related to Toco rate and contraction efficiency. In the augmented group, only a weak correlation with Toco rate was seen, and no correlation with contraction efficiency. With the assistance of the Birth Track device, we can obtain continuous data on the labor process and indices to estimate labor progress without the use of vaginal (manual) examination. © 2013 The Authors. Journal of Obstetrics and Gynaecology Research © 2013 Japan Society of Obstetrics and Gynecology.
2012-01-01
Background Comparative genomics can inform us about the processes of mutation and selection across diverse taxa. Among seed plants, gymnosperms have been lacking in genomic comparisons. Recent EST and full-length cDNA collections for two conifers, Sitka spruce (Picea sitchensis) and loblolly pine (Pinus taeda), together with full genome sequences for two angiosperms, Arabidopsis thaliana and poplar (Populus trichocarpa), offer an opportunity to infer the evolutionary processes underlying thousands of orthologous protein-coding genes in gymnosperms compared with an angiosperm orthologue set. Results Based upon pairwise comparisons of 3,723 spruce and pine orthologues, we found an average synonymous genetic distance (dS) of 0.191, and an average dN/dS ratio of 0.314. Using a fossil-established divergence time of 140 million years between spruce and pine, we extrapolated a nucleotide substitution rate of 0.68 × 10-9 synonymous substitutions per site per year. When compared to angiosperms, this indicates a dramatically slower rate of nucleotide substitution rates in conifers: on average 15-fold. Coincidentally, we found a three-fold higher dN/dS for the spruce-pine lineage compared to the poplar-Arabidopsis lineage. This joint occurrence of a slower evolutionary rate in conifers with higher dN/dS, and possibly positive selection, showcases the uniqueness of conifer genome evolution. Conclusions Our results are in line with documented reduced nucleotide diversity, conservative genome evolution and low rates of diversification in conifers on the one hand and numerous examples of local adaptation in conifers on the other hand. We propose that reduced levels of nucleotide mutation in large and long-lived conifer trees, coupled with large effective population size, were the main factors leading to slow substitution rates but retention of beneficial mutations. PMID:22264329
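The substitution-rate extrapolation above follows from pairwise distances accumulating along both lineages since divergence, so the per-lineage rate is dS divided by twice the divergence time. The arithmetic reproduces the reported value:

```python
# Per-lineage synonymous substitution rate from a pairwise distance:
# rate = dS / (2 * divergence time), since dS accumulates on both branches.
d_s = 0.191           # average synonymous distance, spruce vs pine (from text)
t_divergence = 140e6  # years since the spruce-pine split (fossil-calibrated)

rate = d_s / (2 * t_divergence)  # synonymous substitutions per site per year
print(f"{rate:.2e}")  # 6.82e-10, i.e. ~0.68e-9, matching the reported value
```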
Relationship Among Signal Fidelity, Hearing Loss, and Working Memory for Digital Noise Suppression.
Arehart, Kathryn; Souza, Pamela; Kates, James; Lunner, Thomas; Pedersen, Michael Syskind
2015-01-01
This study considered speech modified by additive babble combined with noise-suppression processing. The purpose was to determine the relative importance of the signal modifications, individual peripheral hearing loss, and individual cognitive capacity on speech intelligibility and speech quality. The participant group consisted of 31 individuals with moderate high-frequency hearing loss ranging in age from 51 to 89 years (mean = 69.6 years). Speech intelligibility and speech quality were measured using low-context sentences presented in babble at several signal-to-noise ratios. Speech stimuli were processed with a binary mask noise-suppression strategy with systematic manipulations of two parameters (error rate and attenuation values). The cumulative effects of signal modification produced by babble and signal processing were quantified using an envelope-distortion metric. Working memory capacity was assessed with a reading span test. Analysis of variance was used to determine the effects of signal processing parameters on perceptual scores. Hierarchical linear modeling was used to determine the role of degree of hearing loss and working memory capacity in individual listener response to the processed noisy speech. The model also considered improvements in envelope fidelity caused by the binary mask and the degradations to envelope caused by error and noise. The participants showed significant benefits in terms of intelligibility scores and quality ratings for noisy speech processed by the ideal binary mask noise-suppression strategy. This benefit was observed across a range of signal-to-noise ratios and persisted when up to a 30% error rate was introduced into the processing. Average intelligibility scores and average quality ratings were well predicted by an objective metric of envelope fidelity. 
Degree of hearing loss and working memory capacity were significant factors in explaining individual listener's intelligibility scores for binary mask processing applied to speech in babble. Degree of hearing loss and working memory capacity did not predict listeners' quality ratings. The results indicate that envelope fidelity is a primary factor in determining the combined effects of noise and binary mask processing for intelligibility and quality of speech presented in babble noise. Degree of hearing loss and working memory capacity are significant factors in explaining variability in listeners' speech intelligibility scores but not in quality ratings.
Palaeoflood records of the last three centuries from the Pyeongchang and Dong rivers, South Korea
NASA Astrophysics Data System (ADS)
Kim, Song-Hyun; Tanaka, Yukiya
2017-08-01
Slackwater deposits are paleostage indicators in paleoflood hydrology that have commonly been used in numerous studies to estimate the magnitude and frequency of flood events and to reconstruct paleoenvironments and paleohydrology. In this study, individual flood events along the Pyeongchang (PC) and Dong (D) rivers of South Korea were differentiated on the basis of changes in deposit color, grain size, organic content, and the existence of laminations. Based on 137Cs and 14C chronological data, 19 flood events have occurred at site PC since 1720, while 17 flood events have occurred at site D since 1815. At the PC study site, the average time interval between the flood events is 15.4 years and the average sediment accumulation rate is 9.7 mm/y. At study site D, the average time interval is 15 years and the average sediment accumulation rate is 11.6 mm/y. These high sediment accumulation rates are consistent with those in humid areas (e.g., Japan) and explain how slackwater deposits can be preserved despite erosion, bioturbation, and pedogenic processes. Based on the results, the study area was divided into three periods: (1) a relatively wet period (1720-1810 CE), (2) a dry period (1810-1960 CE), and (3) a wet period (1960 CE-present). The flood time intervals and average sediment accumulation rates of the eighteenth century were shorter and higher than those of the nineteenth century. This suggests that on the Korean Peninsula, the paleoclimate of the Little Ice Age (LIA) was wetter than that from the nineteenth century to the early twentieth century.
Common scaling behavior in finance and macroeconomics
NASA Astrophysics Data System (ADS)
Podobnik, B.; Horvatic, D.; Petersen, A. M.; Njavro, M.; Stanley, H. E.
2010-08-01
In order to test whether scaling exists in finance at the world level, we test whether the average growth rates and volatility of market capitalization (MC) depend on the level of MC. We analyze the MC for 54 worldwide stock indices and 48 worldwide bond indices. We find that (i) the average growth rate
Universal approach to analysis of cavitation and liquid-impingement erosion data
NASA Technical Reports Server (NTRS)
Rao, P. V.; Young, S. G.
1982-01-01
Cavitation erosion experimental data were analyzed by using normalization and curve-fitting techniques. Data were taken from experiments on several materials tested in both a rotating disk device and a magnetostriction apparatus. Cumulative average volume loss rate and time data were normalized relative to the peak erosion rate and the time to peak erosion rate, respectively. From this process a universal approach was derived that can include data on specific materials from different test devices for liquid impingement and cavitation erosion studies.
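The normalization described can be sketched as follows; the data values are illustrative placeholders, not measurements from the report:

```python
def normalize_erosion(times, rates):
    """Scale rates by the peak rate and times by the time at which that peak occurs."""
    peak = max(rates)
    t_peak = times[rates.index(peak)]
    return [t / t_peak for t in times], [r / peak for r in rates]

# Illustrative cumulative-average volume loss rate curve (placeholder values).
times = [1, 2, 4, 8, 16]            # exposure time, arbitrary units
rates = [0.2, 0.9, 1.5, 1.1, 0.6]   # volume loss rate, arbitrary units
t_norm, r_norm = normalize_erosion(times, rates)
# After normalization the peak sits at (1.0, 1.0) for every data set, so curves
# from different devices and materials can be compared on common axes.
```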
Unemployment rate as predictor of rate of psychiatric admission.
Kammerling, R M; O'Connor, S
1993-12-11
To examine the relation between rates of psychiatric admissions and both the rate of unemployment and the underprivileged area score within small areas. Calculation of correlation coefficients and explanatory power by using data on psychiatric admissions from April 1990 to March 1992. Crude and age standardised rates were used based on all admissions and also on the number of people admitted regardless of the number of times each person was admitted. Sectors with an average population of 45,000 consisting of aggregations of neighbouring wards in Bristol and District Health Authority and electoral wards with an average population of 9400 in the city of Bristol. Unemployment rates explained 93% of the variation in the crude person based admission rates standardised for age for those aged under 65 in the sectors. Person based rates correlated more strongly with unemployment than did rates based on all separate admissions. Inclusion of people aged 65 and over weakened the relation. Within electoral wards unemployment rates explained only about 50-60% of the variation but were still more powerful than the underprivileged area score. There was a significant negative correlation between average length of stay and readmission rates--that is, sectors with short lengths of stay were more likely to have patients readmitted (r = -0.64, 95% confidence interval -0.25 to -0.85). Unemployment rates are an extremely powerful indicator of the rates of serious mental illness that will need treatment in hospital in those aged under 65. This should be considered in the process of resource allocation, particularly to fundholders in general practice, or people with serious mental illness living in areas of high unemployment could be considerably disadvantaged.
Thakur, Ranjit; Gupta, Ram B
2006-02-03
Nanoparticles are of significant importance in drug delivery. The rapid expansion of supercritical solution (RESS) process can produce pure and high-quality drug particles. However, due to the extremely low solubility of polar drugs in supercritical CO2 (scCO2), RESS has limited commercial applicability. To overcome this major limitation, a modified process, rapid expansion of supercritical solution with solid cosolvent (RESS-SC), is proposed. Here, the new process is tested for the drug phenytoin using menthol as the solid cosolvent. Phenytoin solubility in pure scCO2 is only 3 micromol/mol, but when the menthol solid cosolvent is used the solubility is enhanced to 1,302 micromol/mol at 196 bar and 45 degrees C. This roughly 400-fold increase in solubility can be attributed to the interaction between phenytoin and menthol. Particle agglomeration in the expansion zone is another major issue with the conventional RESS process. In the proposed RESS-SC process, the solid cosolvent hinders particle growth, resulting in the formation of small nanoparticles. For example, the average particle size of phenytoin in the conventional RESS process is 200 nm, whereas with the RESS-SC process the average particle size is 120 nm, at 96 bar and 45 degrees C. Similarly, at 196 bar and 45 degrees C, 105 nm average particles were obtained by RESS and 75 nm average particles were obtained by RESS-SC. The particles obtained were characterized by Fourier-transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), dynamic light scattering (DLS) and differential scanning calorimetry (DSC) analyses. The phenytoin nanoparticle production rate in RESS-SC is about 400-fold higher than in the RESS process.
Wang, Peng; Hua, Zulin; Cai, Yunjie; Shen, Xia; Li, Qiongqiong; Liu, Xiaoyuan
2015-08-01
The sorption behaviors of pollutants in natural water environments are known to be affected by hydrodynamic conditions. Here, the effects of hydrodynamic conditions on the sorption behavior of aniline on sediment in the presence of nitrobenzene were investigated. A particle entrainment simulator (PES) was used to simulate varied bottom shear stresses. The batch equilibrium method was applied, with the stress levels and the action time controlled at 0.2-0.5 N/m² and 24 h, respectively. The findings indicated that the apparent partition coefficient of aniline on sediment increased significantly with shear stress, but decreased with nitrobenzene concentration. In contrast, both the sorption amount of aniline on suspended particulate matter (Qs) and the effect of nitrobenzene concentration on Qs declined as the shear stress increased. The sorption kinetic results showed that the sorption process followed the pseudo-second-order kinetics equation and comprised two stages, a fast sorption stage and a slow sorption stage, with the average sorption rate of the fast stage 7.5-9.5 times that of the slow one. The effect of shear stress on the average sorption rate of aniline was enhanced with increasing nitrobenzene concentration, and shear stress weakened the disturbance of the cosolute on the main-solute sorption process. In addition, the sorption kinetic results showed that only the initial sorption rate was affected by shear stress and cosolute concentration: in the first 5 min, shear stress had a positive effect on the sorption rate; after that, the sorption rate barely changed with shear stress or cosolute concentration.
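For reference, a sketch of the integrated pseudo-second-order model the kinetics follow; the parameter values below are assumptions for illustration, not the study's fitted constants:

```python
def pseudo_second_order(t, qe, k):
    """Integrated pseudo-second-order model: q(t) = k*qe^2*t / (1 + k*qe*t)."""
    return (k * qe ** 2 * t) / (1.0 + k * qe * t)

qe, k = 10.0, 0.05          # equilibrium amount (mg/g) and rate constant, assumed
q_5min = pseudo_second_order(5, qe, k)        # fast stage: most uptake is early
q_24h = pseudo_second_order(24 * 60, qe, k)   # close to qe at the 24 h action time
```

The hyperbolic form reproduces the two-stage behavior described above: uptake is rapid at first and then slows as q(t) approaches the equilibrium amount qe.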
Genuine non-self-averaging and ultraslow convergence in gelation.
Cho, Y S; Mazza, M G; Kahng, B; Nagler, J
2016-08-01
In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
Behind the Match Process: Is There Any Financial Difference Lurking Below the Specialty of Choice?
Oladeji, Lasun O; Raley, James A; Smith, Stephen; Perez, Jorge L; McGwin, Gerald; Ponce, Brent A
2016-12-01
The Match was developed in response to a chaotic residency selection process. While the Match has remained relatively unchanged since it was introduced, the number of medical school graduates has increased at a rate outpacing the number of residency positions, leading to a more competitive process for applicants. In May 2014, an 18-question mixed-response questionnaire was distributed to fourth-year allopathic medical students via an e-mail distribution list for student affairs representatives. The individual surveys were accessible via SurveyMonkey and available for completion over the course of a 4-week period. Approximately 65.1 per cent of students performed at least one audition rotation and documented average expenditures of $2494 on housing, food, and transportation. The average applicant applied to 32 programs and attended 12 interviews while spending $4420 on the interview trail. Applicants for surgical programs applied to approximately 42 programs and attended 13 interviews, compared with primary care applicants, who averaged 23 programs (P < 0.001) and attended 12 interviews (P = 0.002). Surgical applicants averaged 20 days on the interview trail while spending $5500 ($423/interview) on housing, food, and transportation, compared with primary care applicants, who averaged 19 days away from home (P < 0.05) and spent $3400 ($283/interview) on these same items (P < 0.001). The findings in our study indicate that the "Match process" contributes to the financial burden of graduating medical students and is more expensive and time consuming for candidates interested in surgical specialties.
Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H
2015-01-01
Time to stabilization (TTS) is the time it takes for an individual to return to a baseline or stable state following a jump or hop landing. A large variety exists in methods to calculate the TTS. These methods can be described based on four aspects: (1) the input signal used (vertical, anteroposterior, or mediolateral ground reaction force), (2) the signal processing (smoothing by sequential averaging, a moving root-mean-square window, or fitting an unbounded third-order polynomial), (3) the stable state (threshold), and (4) the definition of when the (processed) signal is considered stable. Furthermore, differences exist with regard to the sample rate, filter settings and trial length. Twenty-five healthy volunteers performed ten 'single leg drop jump landing' trials. For each trial, TTS was calculated according to 18 previously reported methods. Additionally, the effects of sample rate (1000, 500, 200 and 100 samples/s), filter settings (no filter, 40, 15 and 10 Hz), and trial length (20, 14, 10, 7, 5 and 3 s) were assessed. The TTS values varied considerably across the calculation methods. The maximum effect of alterations in the processing settings, averaged over calculation methods, was 2.8% (SD 3.3%) for sample rate, 8.8% (SD 7.7%) for filter settings, and 100.5% (SD 100.9%) for trial length. Differences in TTS calculation methods are affected differently by sample rate, filter settings and trial length. The effects of differences in sample rate and filter settings are generally small, while trial length has a large effect on TTS values. Copyright © 2014 Elsevier B.V. All rights reserved.
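One of the calculation variants mentioned above (moving root-mean-square smoothing of the vertical ground reaction force with a threshold band) might be sketched as follows; the window length, threshold, and demo signal are assumptions for illustration, not the paper's settings:

```python
import math

def tts_moving_rms(force, fs, baseline, threshold, window_s=0.25):
    """Time (s) after which the moving-RMS force stays within baseline +/- threshold."""
    w = max(1, int(window_s * fs))
    rms = [math.sqrt(sum(x * x for x in force[i:i + w]) / w)
           for i in range(len(force) - w + 1)]
    # Earliest window index from which every later RMS value stays in the band.
    for i in range(len(rms)):
        if all(abs(r - baseline) <= threshold for r in rms[i:]):
            return i / fs
    return None

# Toy landing signal: an initial overshoot settling onto body weight (700 N).
fs = 100
force = [800.0] * 50 + [700.0] * 200
tts = tts_moving_rms(force, fs, baseline=700.0, threshold=20.0)
```

Because TTS depends on everything after the candidate point, this formulation makes explicit why trial length has such a large effect on the computed value.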
NASA Technical Reports Server (NTRS)
Nisenson, P.; Papaliolios, C.
1983-01-01
An analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented. It is shown that the quantities resulting from the speckle average are biased, but that the biases are easily estimated and compensated. Calculations are also made of the convergence rate for the speckle average as a function of the source brightness. An illustration of the effects of photon noise on the image recovery process is included.
On Adaptive Cell-Averaging CFAR (Constant False-Alarm Rate) Radar Signal Detection
1987-10-01
Final technical report, October 1987, Syracuse University. One approach to adaptive detection in a nonstationary noise and clutter background is to compare the processed target signal to an adaptive threshold (constant false-alarm rate, CFAR).
Real time display Fourier-domain OCT using multi-thread parallel computing with data vectorization
NASA Astrophysics Data System (ADS)
Eom, Tae Joong; Kim, Hoon Seop; Kim, Chul Min; Lee, Yeung Lak; Choi, Eun-Seo
2011-03-01
We demonstrate a real-time display of processed OCT images using multi-thread parallel computing on the quad-core CPU of a personal computer. The data of each A-line are treated as one vector to maximize the data transfer rate between the CPU cores and the image data stored in RAM. A display rate of 29.9 frames/s for processed OCT data (4096 FFT size x 500 A-scans) is achieved in our system using a wavelength-swept source with a 52-kHz sweep frequency. The data processing times for the OCT image and for a Doppler OCT image with four-times averaging are 23.8 ms and 91.4 ms, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiscus, D.E.; Gorman, P.G.; Schrag, M.P.
1977-09-01
The results are presented of processing plant evaluations of the St. Louis-Union Electric Refuse Fuel Project, including equipment and facilities as well as assessment of environmental emissions at both the processing and the power plants. Data on plant material flows and operating parameters, plant operating costs, characteristics of plant material flows, and emissions from various processing operations were obtained during a testing program encompassing 53 calendar weeks. Refuse derived fuel (RDF) is the major product (80.6% by weight) of the refuse processing plant, the other being ferrous metal scrap, a marketable by-product. Average operating costs for the entire evaluation period were $8.26/Mg ($7.49/ton). The average overall processing rate for the period was 168 Mg/8-h day (185.5 tons/8-h day) at 31.0 Mg/h (34.2 tons/h). Future plants using an air classification system of the type used at the St. Louis demonstration plant will need an emissions control device for particulates from the large de-entrainment cyclone. Also in the air exhaust from the cyclone were total counts of bacteria and viruses several times higher than those of suburban ambient air. No water effluent or noise exposure problems were encountered, although landfill leachate mixed with ground water could result in contamination, given low dilution rates.
A novel cleaner production process of citric acid by recycling its treated wastewater.
Xu, Jian; Su, Xian-Feng; Bao, Jia-Wei; Zhang, Hong-Jian; Zeng, Xin; Tang, Lei; Wang, Ke; Zhang, Jian-Hua; Chen, Xu-Sheng; Mao, Zhong-Gui
2016-07-01
In this study, a novel cleaner production process for citric acid was proposed to completely solve the problem of wastewater management in the citric acid industry. In the process, wastewater from citric acid fermentation was used to produce methane through anaerobic digestion, and the anaerobic digestion effluent was then further treated with air stripping and electrodialysis before being recycled as process water for later citric acid fermentation. The proposed process was performed for 10 batches, and the average citric acid production in recycling batches was 142.4±2.1 g/L, comparable to that with tap water (141.6 g/L). Anaerobic digestion was also efficient and stable in operation: the average chemical oxygen demand (COD) removal rate was 95.1±1.2% and the methane yield approached 297.7±19.8 mL/g TCOD removed. In conclusion, this novel process minimized wastewater discharge and achieved cleaner production in the citric acid industry. Copyright © 2016 Elsevier Ltd. All rights reserved.
47 CFR 64.1801 - Geographic rate averaging and rate integration.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Geographic rate averaging and rate integration. 64.1801 Section 64.1801 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON... Rate Integration § 64.1801 Geographic rate averaging and rate integration. (a) The rates charged by...
Evaluation of a multifiltration water reclamation subsystem to reclaim domestic clothes wash water
NASA Technical Reports Server (NTRS)
Hall, J. B., Jr.
1973-01-01
An evaluation has been performed of a multifiltration water reclamation subsystem to determine its capability to recover water from domestic clothes wash water. A total of 32.89 kg (72.5 lb) of clothes were washed during eight wash cycles which used 1.4 lb of detergent, 145 gallons of hot water and 133.9 gallons of cold water. Water recovered at a weighted average process rate of 3.81 gallons per hour met the majority of the 23 requirements established for potable water by the U.S. Public Health Service. Average power consumed during this evaluation was approximately 71 watt-hours per gallon of water recovered. Filter replacement, which was required primarily for the control of micro-organisms in the recovered water averaged 4.86 filters per 100 gallons of wash water processed. The subsystem removed approximately 98 percent and virtually 100 percent of the phosphates and surfactants, respectively, from the wash water.
Demographics of an ornate box turtle population experiencing minimal human-induced disturbances
Converse, S.J.; Iverson, J.B.; Savidge, J.A.
2005-01-01
Human-induced disturbances may threaten the viability of many turtle populations, including populations of North American box turtles. Evaluation of the potential impacts of these disturbances can be aided by long-term studies of populations subject to minimal human activity. In such a population of ornate box turtles (Terrapene ornata ornata) in western Nebraska, we examined survival rates and population growth rates from 1981-2000 based on mark-recapture data. The average annual apparent survival rate of adult males was 0.883 (SE = 0.021) and of adult females was 0.932 (SE = 0.014). Minimum winter temperature was the best of five climate variables as a predictor of adult survival. Survival rates were highest in years with low minimum winter temperatures, suggesting that global warming may result in declining survival. We estimated an average adult population growth rate (????) of 1.006 (SE = 0.065), with an estimated temporal process variance (????2) of 0.029 (95% CI = 0.005-0.176). Stochastic simulations suggest that this mean and temporal process variance would result in a 58% probability of a population decrease over a 20-year period. This research provides evidence that, unless unknown density-dependent mechanisms are operating in the adult age class, significant human disturbances, such as commercial harvest or turtle mortality on roads, represent a potential risk to box turtle populations. ?? 2005 by the Ecological Society of America.
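The stochastic projection described above can be reproduced approximately with a Monte Carlo sketch. Drawing annual growth from a lognormal distribution moment-matched to the reported mean (λ = 1.006) and temporal process variance (0.029) is an assumption for illustration, since the abstract does not state the simulation details:

```python
import math
import random

def prob_decline(mean_lam=1.006, var_lam=0.029, years=20, n_sims=100_000, seed=1):
    """Probability the population ends below its starting size after `years`."""
    # Moment-match a lognormal to the arithmetic mean and variance of lambda.
    sigma2 = math.log(1.0 + var_lam / mean_lam ** 2)
    mu = math.log(mean_lam) - sigma2 / 2.0
    sigma = math.sqrt(sigma2)
    rng = random.Random(seed)
    declines = 0
    for _ in range(n_sims):
        log_size = sum(rng.gauss(mu, sigma) for _ in range(years))
        if log_size < 0.0:      # ended below starting size
            declines += 1
    return declines / n_sims

p = prob_decline()  # under these assumptions, close to the reported ~58%
```

The intuition is that even with a mean growth rate slightly above 1, the variance penalty on lognormal growth makes a multi-year decline more likely than not.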
Halberstadt, Jamin; Rhodes, Gillian
2003-03-01
Average faces are attractive. We sought to distinguish whether this preference is an adaptation for finding high-quality mates (the direct selection account) or whether it reflects more general information-processing mechanisms. In three experiments, we examined the attractiveness of birds, fish, and automobiles whose averageness had been manipulated using digital image manipulation techniques common in research on facial attractiveness. Both manipulated averageness and rated averageness were strongly associated with attractiveness in all three stimulus categories. In addition, for birds and fish, but not for automobiles, the correlation between subjective averageness and attractiveness remained significant when the effect of subjective familiarity was partialled out. The results suggest that at least two mechanisms contribute to the attractiveness of average exemplars. One is a general preference for familiar stimuli, which contributes to the appeal of averageness in all three categories. The other is a preference for averageness per se, which was found for birds and fish, but not for automobiles, and may reflect a preference for features signaling genetic quality in living organisms, including conspecifics.
NASA Astrophysics Data System (ADS)
Chen, Ningsheng; Li, Jun; Liu, Lihong; Yang, Chenglin; Liu, Mei
2018-05-01
This study characterizes significant changes in denudation and disasters in mountainous areas induced in the humid Chengdu Longmenshan region by the Wenchuan Earthquake in 2008. A study focusing on the Longxi-Baisha River Basin was conducted to investigate the amount of denudation triggered by specific flash flood and debris flow events in 2009-2014. The following results were obtained through a comparison of pre-seismic regional denudation rates and denudation characteristics of other seismically active mountain regions. (1) Regional denudation followed a wave-like pattern of initial increase then decline, with a peak exhibiting a hyperbolic attenuation trend. This trend indicates that the denudation rate in the Chengdu Longmenshan region is expected to return to the pre-seismic rate of 0.3 mm/a after 81 years. In the 22nd year after the earthquake (2030), debris flow disasters are expected to be rare. (2) Disasters increased significantly in the Chengdu Longmenshan region after the Wenchuan earthquake, with an average of 29.5 people missing or dead per year (22 times greater than the pre-earthquake rate) and average economic losses of 192 million Yuan per year (1.6 times greater than the pre-earthquake rate). (3) The denudation process was jointly controlled by the quantities of loose solid material and precipitation after the Wenchuan earthquake. The amount of loose solid material influenced the extent of denudation, while vegetation coverage rates and soil consolidation determined the overall denudation trend in the region, and changes in precipitation led to denudation fluctuations. (4) The results can be used to analyze the relationship between potential flash flood-debris flow disasters after earthquakes in the ancient Shu kingdom and changes in historical social settlements.
The results can also be used to predict denudation processes and disaster risks from earthquakes in humid mountainous regions around the world, such as the southern slope of the Himalayas, Japan and the Taiwan mountains.
Effect of Climatic Factors on Hand, Foot, and Mouth Disease in South Korea, 2010-2013.
Kim, Bryan Inho; Ki, Hyunok; Park, Sunhee; Cho, Eunhi; Chun, Byung Chul
2016-01-01
Hand, foot, and mouth disease (HFMD) causes characteristic blisters and sores mainly in infants and children, and has been monitored in South Korea through sentinel surveillance since 2009. We described the patterns of HFMD occurrence and analyzed the effect of climatic factors on national HFMD incidence. Weekly clinically diagnosed HFMD case rates (per 1,000 outpatients) in sentinel sites and weekly climatic factors, such as average temperature, relative humidity, duration of sunshine, precipitation, and wind speed from 2010 to 2013, were used in this study. A generalized additive model with smoothing splines and climatic variables with time lags of up to 2 weeks were considered in the modeling process. To account for long-term trends and seasonality, we controlled for each year and their corresponding weeks. The autocorrelation issue was also adjusted by using autocorrelation variables. At an average temperature below 18°C, the HFMD rate increased by 10.3% for every 1°C rise in average temperature (95% confidence interval (CI): 8.4, 12.3%). We also saw a 6.6% increase in HFMD rate (95% CI: 3.6, 9.7%) with every 1% increase in relative humidity under 65%, with a 1.5% decrease in HFMD rate observed (95% CI: 0.4, 2.7%) with each 1% humidity increase above 65%. Modeling results have shown that average temperature and relative humidity are related to HFMD rate. Additional research on the environmental risk factors of HFMD transmission is required to understand the underlying mechanism between climatic factors and HFMD incidence.
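A simplified reading of the reported piecewise effects as relative-rate multipliers (an illustration only; the original analysis is a generalized additive model with smoothing splines and lagged climate variables):

```python
def temperature_multiplier(delta_deg_c):
    """Relative HFMD-rate change for a temperature rise below 18 C (+10.3% per C)."""
    return 1.103 ** delta_deg_c

def humidity_multiplier(delta_pct, above_65=False):
    """+6.6% per 1% relative humidity below 65%; -1.5% per 1% above 65%."""
    factor = 0.985 if above_65 else 1.066
    return factor ** delta_pct

# A 2 C rise while still below 18 C implies roughly a 22% higher rate:
increase = temperature_multiplier(2) - 1.0
```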
Elhanafi, Sherif; Ortiz, Arleen M; Yarlagadda, Anita; Tsai, Cindy; Eloliby, Mohamed; Mallawaarachchi, Indika; Dwivedi, Alok; Zuckerman, Marc J; Othman, Mohamed O
2015-08-01
Calculating the adenoma detection rate (ADR) is a complex process in contrast to the polyp detection rate (PDR) that can be easily calculated. The average adenoma to polyp detection rate quotient (APDRQ) was proposed as a conversion factor to estimate the ADR for individual endoscopists from the endoscopist's PDR. However, this conversion factor was not validated in different practice settings. To validate the use of the proposed conversion factor in a practice setting with a predominantly Hispanic population. We conducted a retrospective, cross-sectional study (December 2007 to November 2012) of screening colonoscopies at a university practice setting with an 86.9% Hispanic population. The actual ADR and PDR were calculated for all endoscopists. The weighted average of ADR to PDR ratio for each endoscopist was used to obtain APDRQ. The APDRQ was used as a conversion multiplier to estimate each endoscopist's ADR using the single endoscopist's PDR. A total of 2148 screening colonoscopies were included. The average PDR for the whole group was 36.9% (range, 11% to 49%). The actual ADR was estimated as 25.5% (range, 11% to 37%). The average APDRQ for our group was 0.68. The estimated ADR was 25.48% (range, 8% to 33%). There was a high correlation between actual ADR and the estimated ADR (Pearson correlation=0.92). In a practice setting with a predominantly Hispanic population, a conversion factor can be used to estimate ADR from PDR providing a high degree of correlation with the actual ADR.
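The conversion itself is a single multiplication; a minimal sketch using the group-average figures reported above:

```python
def estimate_adr(pdr, apdrq=0.68):
    """Estimate adenoma detection rate (ADR) from polyp detection rate via APDRQ."""
    return pdr * apdrq

# Group-average figures from the study: PDR 36.9%, APDRQ 0.68.
est = estimate_adr(0.369)  # ~0.251, close to the actual group ADR of 0.255
```

The default quotient of 0.68 is the value reported for this practice setting; other settings would need their own weighted APDRQ.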
Effect of Voltage and Flow Rate Electrospinning Parameters on Polyacrylonitrile Electrospun Fibers
NASA Astrophysics Data System (ADS)
Bakar, S. S. S.; Fong, K. C.; Eleyas, A.; Nazeri, M. F. M.
2018-03-01
Electrospinning is currently a popular and widely used technique for forming polymer nanofibers. In this paper, polyacrylonitrile (PAN) nanofibers were prepared at a concentration of 10 wt% with varied processing parameters, and the effect of these parameters on the properties of the PAN fibers in terms of fiber diameter and electrical conductivity is presented. Voltages of 10, 15 and 20 kV were applied at varied PAN flow rates, and the electrospun PAN fibers then underwent pyrolysis at 800°C for 30 minutes. The resultant PAN nanofibers were analysed by SEM, XRD and a four-point probe test after the pyrolysis process. SEM images show a continuous, uniform and smooth-surfaced fibrous structure of the electrospun PAN fibers with an average diameter of 1.81 μm. The fiber morphology is controlled by manipulating the processing parameters of the electrospinning process. The results showed that the resistance of the electrospun PAN fibers decreases with increasing applied voltage and flow rate.
NASA Astrophysics Data System (ADS)
Rahman, Mohamed Abd; Yeakub Ali, Mohammad; Saddam Khairuddin, Amir
2017-03-01
This paper presents a study on the vibration and surface roughness of an Inconel 718 workpiece produced by micro end-milling using a Mikrotools Integrated Multi-Process machine tool DT-110, with control parameters of spindle speed (15000 rpm and 30000 rpm), feed rate (2 mm/min and 4 mm/min) and depth of cut (0.10 mm and 0.15 mm). The vibration was measured using a DYTRAN accelerometer and the average surface roughness Ra was measured using a Wyko NT1100. Analysis of variance (ANOVA) using Design Expert software revealed that feed rate and depth of cut are the most significant factors for vibration, while for average surface roughness Ra, spindle speed is the most significant factor.
Temperature-dependence of biomass accumulation rates during secondary succession.
Anderson, Kristina J; Allen, Andrew P; Gillooly, James F; Brown, James H
2006-06-01
Rates of ecosystem recovery following disturbance affect many ecological processes, including carbon cycling in the biosphere. Here, we present a model that predicts the temperature dependence of the biomass accumulation rate following disturbances in forests. Model predictions are derived based on allometric and biochemical principles that govern plant energetics and are tested using a global database of 91 studies of secondary succession compiled from the literature. The rate of biomass accumulation during secondary succession increases with average growing season temperature as predicted based on the biochemical kinetics of photosynthesis in chloroplasts. In addition, the rate of biomass accumulation is greater in angiosperm-dominated communities than in gymnosperm-dominated ones and greater in plantations than in naturally regenerating stands. By linking the temperature-dependence of photosynthesis to the rate of whole-ecosystem biomass accumulation during secondary succession, our model and results provide one example of how emergent, ecosystem-level rate processes can be predicted based on the kinetics of individual metabolic rate.
Sequenced subjective accents for brain-computer interfaces
NASA Astrophysics Data System (ADS)
Vlek, R. J.; Schaefer, R. S.; Gielen, C. C. A. M.; Farquhar, J. D. R.; Desain, P.
2011-06-01
Subjective accenting is a cognitive process in which identical auditory pulses at an isochronous rate turn into the percept of an accenting pattern. This process can be voluntarily controlled, making it a candidate for communication from human user to machine in a brain-computer interface (BCI) system. In this study we investigated whether subjective accenting is a feasible paradigm for BCI and how its time-structured nature can be exploited for optimal decoding from non-invasive EEG data. Ten subjects perceived and imagined different metric patterns (two-, three- and four-beat) superimposed on a steady metronome. With an offline classification paradigm, we classified imagined accented from non-accented beats on a single trial (0.5 s) level with an average accuracy of 60.4% over all subjects. We show that decoding of imagined accents is also possible with a classifier trained on perception data. Cyclic patterns of accents and non-accents were successfully decoded with a sequence classification algorithm. Classification performances were compared by means of bit rate. Performance in the best scenario translates into an average bit rate of 4.4 bits min-1 over subjects, which makes subjective accenting a promising paradigm for an online auditory BCI.
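The bit-rate comparison mentioned above is commonly computed in BCI work with the Wolpaw information-transfer-rate formula. The abstract does not state that this exact formula was used, so the sketch below is an illustrative assumption; the function name and the trial rate of 120 trials/min (from the 0.5 s trial length) are hypothetical:

```python
import math

def wolpaw_bitrate(accuracy, n_classes, trials_per_min):
    """Information transfer rate (bits/min) via the Wolpaw formula,
    a standard way to express BCI classification performance."""
    p, n = accuracy, n_classes
    bits_per_trial = math.log2(n)
    if 0 < p < 1:
        bits_per_trial += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    elif p == 0:
        bits_per_trial = 0.0  # conservative floor for worse-than-chance input
    return bits_per_trial * trials_per_min

# 0.5 s trials allow up to 120 binary decisions per minute
rate = wolpaw_bitrate(accuracy=0.604, n_classes=2, trials_per_min=120)
```

At the reported 60.4% binary accuracy this yields a few bits per minute, the same order as the 4.4 bits min-1 quoted for the best scenario.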
Convective rain rates and their evolution during storms in a semiarid climate
NASA Technical Reports Server (NTRS)
Doneaud, A. A.; Miller, J. R., Jr.; Ionescu-Niscov, S.
1984-01-01
The semiarid climate of the U.S. northern High Plains region has been studied with respect to rain rates and their evolution during summertime convective storms, using radar data from a total of 750 radar echo clusters. Analysis of these data suggests that the average rain rate R among storms is, to a first approximation, independent of the total rain volume if the entire storm duration is considered in the averaging process. R depends primarily on the reflectivity threshold used in calculating the area coverage integrated over the lifetime of the storm. The evolution of R during storms is analyzed by dividing each storm lifetime into 10-min, 1-, 2-, and 4-hour intervals, as well as growing and decaying periods. The value of R remained independent of the total rain volume when the growing or decaying periods of storms were considered separately.
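The storm-averaged rain rate described above amounts to the total rain volume divided by the area-time integral (ATI) of echo coverage above the chosen reflectivity threshold. A minimal sketch of that ratio; the function name, numbers, and sampling interval are hypothetical illustrations, not values from the study:

```python
def storm_average_rain_rate(rain_volume_km3, echo_areas_km2, dt_hours):
    """Average rain rate R (mm/h) = total rain volume / area-time integral.
    Echo areas (km^2) are sampled every dt_hours over the storm lifetime."""
    ati_km2_h = sum(echo_areas_km2) * dt_hours      # area-time integral, km^2 * h
    depth_km_per_h = rain_volume_km3 / ati_km2_h    # mean depth rate over echo area
    return depth_km_per_h * 1e6                     # km -> mm

# hypothetical storm: 0.012 km^3 of rain over four half-hourly area samples
R = storm_average_rain_rate(0.012, [150.0, 300.0, 250.0, 100.0], dt_hours=0.5)
```

Because R is a ratio, scaling the rain volume and the ATI together leaves R unchanged, which is consistent with the reported near-independence of R from total rain volume.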
Simulation of 90° ply fatigue crack growth along the width of cross-ply carbon-epoxy coupons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henaff-Gardin, C.; Urwald, E.; Lafarie-Frenot, M.C.
1994-07-01
We study the mechanism of fatigue cracking of the matrix of cross-ply carbon-epoxy laminates. Primary attention is given to the study of the influence of the specimen width on the evolution of damage. On the basis of shear lag analysis, we determine the strain energy release rate in the processes of initiation and growth of transverse fatigue cracks. We also present results of experimental research on the evolution of the edge crack density per ply, the average length of the cracks, and the crack propagation rate under transverse fatigue cracking. It is shown that these characteristics are independent of the specimen width. At the same time, as soon as the edge crack density reaches its saturation value, the average crack growth rate becomes constant. All the experimental results are in good agreement with results obtained by using the theoretical model.
Therapeutic plasma exchange: a paired comparison of Fresenius AS104 vs. COBE Spectra.
Burgstaler, E A; Pineda, A A
2001-01-01
For therapeutic plasma exchange (TPE), continuous flow separators are known to be efficient, as exemplified by the Fresenius AS104 and COBE Spectra. The AS104 uses an interface monitoring system in the centrifuge during TPE, whereas the Spectra uses computer algorithms to establish the plasma-cell interface. To determine the plasma collection efficiency (PLCE), anticoagulant (AC) volumes used, and platelet (PLT) loss for the AS104 and the Spectra, we performed a prospective paired comparison of 20 TPE procedures on each machine. The study included 17 patients, 1.3 plasma volume exchanges (without AC), equal inlet rates, and an AC ratio of 13:1. Processing times did not include the reinfuse mode. Platelet loss was determined by sampling the collection bags. Inlet rates were between 60 and 110 ml/min. Diagnoses included peripheral neuropathies, TTP and cryoglobulinemia. The AS104 had significantly (P<0.0001) lower average whole blood processed (F:6,601 vs. S:8,584 ml), AC volume (F:532 vs. S:719 ml), and processing time (F:80 vs. S:102 minutes) than the Spectra. The AS104 had significantly (P<0.0001) higher average plasma flow rates (F:53 vs. S:44 ml/minute), plasma collection efficiency (F:90 vs. S:69%), and platelet loss (F:2.0 vs. S:0.14 x 10(11) plt) than the Spectra. Platelet loss correlated with inlet flow rate on the AS104 but not on the Spectra. The AS104 has a significantly higher collection efficiency than the Spectra, allowing it to remove the same amount of plasma in significantly less time, by processing significantly less blood and using significantly less AC, but removing significantly more platelets than the Spectra. Copyright 2001 Wiley-Liss, Inc.
ERIC Educational Resources Information Center
Hanford, Terry; White, Kathleen
1991-01-01
Although numbers such as average test scores or dropout rates can capture part of a school system's success or failure, school statistics seldom tell the whole story. School board members should realize that numbers might measure compliance or process, rather than improvement. Also, improvements in numbers might reflect changes in assessment…
Emotional processing during psychotherapy among women newly diagnosed with a gynecological cancer.
Manne, Sharon L; Myers-Virtue, Shannon; Darabos, Katie; Ozga, Melissa; Heckman, Carolyn; Kissane, David; Rotter, David
2017-08-01
Our aim was to compare changes in emotional processing by women newly diagnosed with gynecological cancer enrolled in either a coping and communication skills intervention (CCI) or a supportive counseling (SC) intervention. We examined the association between in-session emotional processing and patient-rated therapeutic progress. Three therapy sessions with 201 patients were rated for the depth of emotional processing (peak and mode) during emotion episodes (EEs) using the Experiencing Rating Scale (EXP). Participants completed measures of dispositional emotional expressivity, depressive symptoms, and cancer-related distress before treatment began, as well as ratings of perceived progress in therapy after each session. Peak EXP ratings averaged between 2.7 and 3.1, indicating that women discussed events, their emotional reactions, and their private experiences in sessions. A small proportion of patients had high levels of processing, indicating deeper exploration of the meaning of their feelings and experiences. Women in SC were able to achieve a higher level of emotional processing during the middle and later sessions, and during cancer-related EEs in the later session. However, emotional processing was not significantly associated with a patient's perceived therapeutic progress with SC. In the CCI group, higher levels of emotional processing were associated with greater session progress, suggesting that it may play an important role in patient-rated treatment outcomes. Newly diagnosed gynecological cancer patients are able to attend to their emotions and personal experiences, particularly when discussing cancer-related issues during both short-term SC and prescriptive coping skills interventions.
Gardner, Aimee K; D'Onofrio, Brenna C; Dunkin, Brian J
Guidance on how to train faculty to conduct structured interviews and implement them into current screening processes is lacking. The goal of this study is to describe a structured interview training program designed specifically for surgeons and examine its effectiveness. Faculty involved in advanced surgical fellowship interviews completed a 20-item knowledge assessment and video-based applicant interview ratings before taking a half-day course on conducting structured interviews. The course consisted of evidence-based strategies and methods for conducting structured interviews, asking questions, and rating applicants in a highly interactive format. After the course, faculty again completed the knowledge assessment and provided ratings for 3 video-based applicant interviews. All faculty members (N = 5) responsible for selecting fellows in minimally invasive and bariatric surgery completed the training. Faculty had an average of 15.8 ± 9.12 years in practice. Average performance on the precourse knowledge assessment was 35% ± 6.12% and the group was unable to achieve acceptable agreement for applicant interview scores for any of the competencies assessed. After the course, faculty demonstrated significant improvements (p < 0.01) on the knowledge assessment, more than doubling their scores on the pretest with average scores of 80% ± 9.35%. Faculty also improved their interrater agreement of applicant competency, with 80% of the applicant interview ratings within 2 points of each other. Implementation of a half-day course designed to teach principles and skills around structured interviewing and assessment demonstrated significant improvements in both interviewing knowledge and interrater agreement. These findings support the time and resources required to develop and implement a structured interview training program for surgeons for the postgraduate admissions process. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Fitness cost of reassortment in human influenza.
Villa, Mara; Lässig, Michael
2017-11-01
Reassortment, the exchange of genome sequence between viruses co-infecting a host cell, plays an important role in the evolution of segmented viruses. In the human influenza virus, reassortment happens most frequently between co-existing variants within the same lineage. This process breaks genetic linkage and fitness correlations between viral genome segments, but the resulting net effect on viral fitness has remained unclear. In this paper, we determine the rate and average selective effect of reassortment processes in the human influenza lineage A/H3N2. For the surface proteins hemagglutinin and neuraminidase, reassortant variants with a mean distance of at least 3 nucleotides to their parent strains get established at a rate of about 10(-2) in units of the neutral point mutation rate. Our inference is based on a new method to map reassortment events from joint genealogies of multiple genome segments, which is tested by extensive simulations. We show that intra-lineage reassortment processes are, on average, under substantial negative selection that increases in strength with increasing sequence distance between the parent strains. The deleterious effects of reassortment manifest themselves in two ways: there are fewer reassortment events than expected from a null model of neutral reassortment, and reassortant strains have fewer descendants than their non-reassortant counterparts. Our results suggest that influenza evolves under ubiquitous epistasis across proteins, which produces fitness barriers against reassortment even between co-circulating strains within one lineage.
Ferguson, David J.; Gonnermann, Helge M.; Ruprecht, Philipp; Plank, Terry; Hauri, Erik H.; Houghton, Bruce F.; Swanson, Donald A.
2016-01-01
The decompression rate of magma as it ascends during volcanic eruptions is an important but poorly constrained parameter that controls many of the processes that influence eruptive behavior. In this study, we quantify decompression rates for basaltic magmas using volatile diffusion in olivine-hosted melt tubes (embayments) for three contrasting eruptions of Kīlauea volcano, Hawaii. Incomplete exsolution of H2O, CO2, and S from the embayment melts during eruptive ascent creates diffusion profiles that can be measured using microanalytical techniques, and then modeled to infer the average decompression rate. We obtain average rates of ~0.05–0.45 MPa s−1 for eruptions ranging from Hawaiian style fountains to basaltic subplinian, with the more intense eruptions having higher rates. The ascent timescales for these magmas vary from around ~5 to ~36 min from depths of ~2 to ~4 km, respectively. Decompression-exsolution models based on the embayment data also allow for an estimate of the mass fraction of pre-existing exsolved volatiles within the magma body. In the eruptions studied, this varies from 0.1 to 3.2 wt% but does not appear to be the key control on eruptive intensity. Our results do not support a direct link between the concentration of pre-eruptive volatiles and eruptive intensity; rather, they suggest that for these eruptions, decompression rates are proportional to independent estimates of mass discharge rate. Although the intensity of eruptions is defined by the discharge rate, based on the currently available dataset of embayment analyses, it does not appear to scale linearly with average decompression rate. This study demonstrates the utility of the embayment method for providing quantitative constraints on magma ascent during explosive basaltic eruptions.
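As a rough cross-check of the quoted range (~0.05–0.45 MPa s−1 over ~5–36 min from ~2–4 km), an average decompression rate can be approximated as the lithostatic starting pressure divided by the ascent time. This back-of-envelope sketch assumes a crustal density of 2700 kg m−3 and full decompression to the surface; the paper's rates come from modeling volatile diffusion profiles, not from this simple ratio:

```python
def mean_decompression_rate(depth_m, ascent_time_s, rho=2700.0, g=9.81):
    """Order-of-magnitude average decompression rate (MPa/s), assuming a
    lithostatic gradient: starting pressure rho*g*depth relieved to ~0 MPa
    over the ascent time."""
    p_start_mpa = rho * g * depth_m / 1e6
    return p_start_mpa / ascent_time_s

# e.g. ascent from ~4 km depth in ~5 minutes (intense end of the range)
rate = mean_decompression_rate(depth_m=4000, ascent_time_s=5 * 60)
```

With these assumptions the fast case gives ~0.35 MPa/s and the slow case (~2 km in ~36 min) gives ~0.02 MPa/s, bracketing the same order of magnitude as the embayment-derived rates.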
Microbial hotspots and hot moments in soil
NASA Astrophysics Data System (ADS)
Kuzyakov, Yakov; Blagodatskaya, Evgenia
2015-04-01
Soils are the most heterogeneous parts of the biosphere, with an extremely high differentiation of properties and processes within nano- to macroscales. The spatial and temporal heterogeneity of input of labile organics by plants creates microbial hotspots over short periods of time - the hot moments. We define microbial hotspots as small soil volumes with much faster process rates and much more intensive interactions compared to the average soil conditions. Such hotspots are found in the rhizosphere, detritusphere, biopores (including drilosphere) and on aggregate surfaces, but hotspots are frequently of mixed origin. Hot moments are short-term events or sequences of events inducing accelerated process rates as compared to the averaged rates. Thus, hotspots and hot moments are defined by dynamic characteristics, i.e. by process rates. For this hotspot concept we extensively reviewed and examined the localization and size of hotspots, spatial distribution and visualization approaches, transport of labile C to and from hotspots, lifetime and process intensities, with a special focus on process rates and microbial activities. The fraction of active microorganisms in hotspots is 2-20 times higher than in the bulk soil, and their specific activities (i.e. respiration, microbial growth, mineralization potential, enzyme activities, RNA/DNA ratio) may also be much higher. The duration of hot moments in the rhizosphere is limited and is controlled by the length of the input of labile organics. It can last a few hours up to a few days. In the detritusphere, however, the duration of hot moments is regulated by the output - by decomposition rates of litter - and lasts for weeks and months. Hot moments induce succession in microbial communities and intense intra- and interspecific competition affecting C use efficiency, microbial growth and turnover. 
The faster turnover and lower C use efficiency in hotspots counterbalances the high C inputs, leading to the absence of strong increases in C stocks. Consequently, the intensification of fluxes is much stronger than the increase of pools. Maintenance of stoichiometric ratios by accelerated microbial growth in hotspots requires additional nutrients (e.g. N and P), causing their microbial mining from soil organic matter, i.e. priming effects. Consequently, priming effects are localized in microbial hotspots and are consequences of hot moments. Finally, we estimated the contribution of the hotspots to the whole soil profile and suggested that, irrespective of their volume, the hotspots are mainly responsible for the ecologically relevant processes in soil.
Assessing Chemical Retention Process Controls in Ponds
NASA Astrophysics Data System (ADS)
Torgersen, T.; Branco, B.; John, B.
2002-05-01
Small ponds are a ubiquitous component of the landscape and have earned a reputation as effective chemical retention devices. The most common characterization of pond chemical retention is the retention coefficient, Ri = ([Ci]inflow - [Ci]outflow)/[Ci]inflow. However, this parameter varies widely in one pond with time and among ponds. We have re-evaluated literature-reported (Borden et al., 1998) monthly average retention coefficients for two ponds in North Carolina. Employing a simple first-order model that includes water residence time, the first-order process responsible for species removal has been separated from the water residence time over which it acts. Assuming the rate constant for species removal is constant within the pond (at least arguable), the annual average rate constant for species removal is generated. Using the annual mean rate constant for species removal and monthly water residence times results in a significantly enhanced predictive capability for Davis Pond during most months of the year. Predictive ability remains poor in Davis Pond during winter/unstratified periods when internal loading of P and N results in low to negative chemical retention. Predictive ability for Piedmont Pond (which has numerous negative chemical retention periods) is improved, but not to the same extent as Davis Pond. In Davis Pond, the rate constant for sediment removal (each month) is faster than the rate constant for water and explains the good predictability for sediment retention. However, the removal rate constant for P and N is slower than the removal rate constant for sediment (longer water column residence time for P and N than for sediment). Thus sedimentation is not an overall control on nutrient retention. Additionally, the removal rate constant for P is slower than for TOC (TOC is not the dominant removal process for P), and N is removed more slowly than P (different in-pond controls).
For Piedmont Pond, sediment removal rate constants are slower than the removal rate constant for water, indicating significant sediment resuspension episodes. It appears that these sediment resuspension events are aperiodic and control the loading and the chemical retention capability of Piedmont Pond for N, P and TOC. These calculated rate constants reflect the differing internal loading processes for each component and suggest means and mechanisms for the use of ponds in water quality management.
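The first-order treatment described above separates a removal rate constant k from the water residence time τ over which it acts, so that predicted retention follows Ri = 1 - Cout/Cin. The abstract does not give the exact functional form used, so this sketch shows the two most common assumptions (plug flow and well-mixed); names and numbers are illustrative:

```python
import math

def retention_coefficient(k_per_day, residence_time_days, model="plug"):
    """Predicted retention Ri = (Cin - Cout)/Cin for a first-order removal
    rate constant k acting over the water residence time tau.
    'plug' assumes plug flow:    Cout/Cin = exp(-k*tau)
    'cstr' assumes a mixed pond: Cout/Cin = 1/(1 + k*tau)"""
    tau = residence_time_days
    if model == "plug":
        return 1.0 - math.exp(-k_per_day * tau)
    return 1.0 - 1.0 / (1.0 + k_per_day * tau)

# one annual-mean k; month-to-month variation driven only by residence time
monthly = [retention_coefficient(0.1, tau) for tau in (2.0, 10.0, 30.0)]
```

This reproduces the key behavior exploited above: with a single annual-mean k, retention still varies strongly month to month simply because residence time does.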
Spatial patterns of erosion in a bedrock gorge
NASA Astrophysics Data System (ADS)
Beer, Alexander R.; Turowski, Jens M.; Kirchner, James W.
2017-01-01
Understanding the physical processes driving bedrock channel formation is essential for interpreting and predicting the evolution of mountain landscapes. Here we analyze bedrock erosion patterns measured at unprecedented spatial resolution (mm) over 2 years in a natural bedrock gorge. These spatial patterns show that local bedrock erosion rates depend on position in the channel cross section, height above the streambed, and orientation relative to the main streamflow and sediment path. These observations are consistent with the expected spatial distribution of impacting particles (the tools effect) and shielding by sediment on the bed (the cover effect). Vertical incision by bedrock abrasion averaged 1.5 mm/a, lateral abrasion averaged 0.4 mm/a, and downstream directed abrasion of flow obstacles averaged 2.6 mm/a. However, a single plucking event locally exceeded these rates by orders of magnitude (~100 mm/a), and accounted for one third of the eroded volume in the studied gorge section over the 2 year study period. Hence, if plucking is spatially more frequent than we observed in this study period, it may contribute substantially to long-term erosion rates, even in the relatively massive bedrock at our study site. Our observations demonstrate the importance of bedrock channel morphology and the spatial distribution of moving and static sediment in determining local erosion rates.
1990-12-01
data rate to the electronics would be much lower on the average and the data much "richer" in information. Intelligent use of...system bottleneck, a high data rate should be provided by I/O systems. 2. machines with intelligent storage management specially designed for logic...management information processing, surveillance sensors, intelligence data collection and handling, solid state sciences, electromagnetics, and propagation, and electronic reliability/maintainability and compatibility.
NASA Astrophysics Data System (ADS)
Dolan, K. A.
2015-12-01
Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to: carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. In addition, recent studies suggest that disturbance rates may increase in the future under altered climate and land use scenarios. Thus, understanding how vulnerable forested ecosystems are to potential changes in disturbance rates is of high importance. This study calculated the theoretical threshold rate of disturbance at which forest ecosystems could no longer be sustained (λ*) across the conterminous U.S. using an advanced process-based ecosystem model (ED). Published rates of disturbance (λ) at 50 study sites were obtained from the North American Forest Disturbance (NAFD) program. Disturbance distance (λ* - λ) was calculated for each site by differencing the model-based threshold under current climate conditions and average observed rates of disturbance over the last quarter century. Preliminary results confirm that all sampled forest sites have current average rates of disturbance below λ*, but there were interesting patterns in the recorded disturbance distances. In general, western sites had much smaller disturbance distances, suggesting higher vulnerability to change, while eastern sites showed larger buffers. Ongoing work is being conducted to assess the vulnerability of these sites in the context of potential future changes by propagating scenarios of future climate and land-use change through the analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman, D.
The Savannah River Site (SRS) Actinide Removal Process has been processing salt waste since 2008. This process includes a filtration step in the 512-S facility. Initial operations included the addition, or strike, of monosodium titanate (MST) to remove soluble actinides and strontium. The added MST and any entrained sludge solids were then separated from the supernate by cross flow filtration. During this time, the filter operations have, on many occasions, been the bottleneck process limiting the rate of salt processing. Recently, 512-S has started "No-MST" operations, in which the MST actinide removal strike is not performed and the supernate is simply pre-filtered prior to Cs removal processing. Direct filtration of decanted tank supernate, as demonstrated in 512-S, is the proposed method of operation for the Hanford Low Activity Waste Pretreatment System (LAWPS) facility. Processing decanted supernate without MST solids has been demonstrated for cross flow filtration to provide a significant improvement in production with the SRS Salt Batches 8 and 9 feed chemistries. The average filtration rate for the first 512-S batch processing cycle using No-MST has increased filtrate production by over 35% relative to the historical average. The increase was sustained for more than double the number of filtrate batches processed before cleaning of the filter was necessary. While there are differences in the design of the 512-S and Hanford filter systems, the 512-S system should provide a reasonable indication of LAWPS filter performance with similar feed properties. Based on the data from the 512-S facility and with favorable feed properties, the LAWPS filter, as currently sized at over twice the size of the 512-S filter (532 square feet filtration area versus 235 square feet), has the potential to provide sustained filtrate production at the upper range of the planned LAWPS production rate of 17 gpm.
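The closing comparison scales throughput with filter area (532 ft² vs. 235 ft²), implicitly assuming equal filtrate flux per unit area across the two systems. A trivial sketch of that scaling; the 7.5 gpm reference rate is a hypothetical illustration, not a value from the report:

```python
def scaled_filtrate_rate(ref_rate_gpm, ref_area_ft2, new_area_ft2):
    """Scale filtrate production by filter area, assuming the flux
    (gpm per square foot of filtration area) stays the same."""
    flux = ref_rate_gpm / ref_area_ft2
    return flux * new_area_ft2

# hypothetical 512-S rate scaled up to the larger LAWPS filter area
lawps_rate = scaled_filtrate_rate(7.5, ref_area_ft2=235, new_area_ft2=532)
```

Because 532/235 is roughly 2.26, a 512-S rate in the mid-single-digit gpm range would scale to the vicinity of the 17 gpm planned LAWPS rate under this equal-flux assumption.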
NASA Astrophysics Data System (ADS)
Hildebrandt Ruiz, L.; Bean, J. K.; Bilotto, A.
2017-12-01
The use of hydraulic fracturing for production of petroleum and natural gas has increased dramatically in the last decade, but the environmental impacts of this technology remain unclear. Experiments were conducted to quantify airborne emissions from twelve samples of hydraulic fracturing flowback wastewater collected in the Permian Basin, as well as the photochemical processing of these emissions leading to the formation of particulate matter. The concentration of total volatile carbon (TVC, hydrocarbons evaporating at room temperature) averaged 29 milligrams of carbon per liter (mgC/L) and the TVC evaporation rate averaged 1357 mgC/L-m2-min. After photochemical oxidation under high NOx conditions the amount of organic particulate matter formed per milliliter of wastewater evaporated averaged 24 micrograms (µg); the amount of ammonium nitrate formed averaged 262 µg. In the state of Texas, the potential formation of PM from evaporated flowback wastewater is similar to the estimated PM emissions from diesel engines used in oil rigs, emphasizing the need to quantify wastewater evaporation and atmospheric processing of these emissions.
How we remember the emotional intensity of past musical experiences
Schäfer, Thomas; Zimmermann, Doreen; Sedlmeier, Peter
2014-01-01
Listening to music usually elicits emotions that can vary considerably in their intensity over the course of listening. Yet, after listening to a piece of music, people are easily able to evaluate the music's overall emotional intensity. There are two different hypotheses about how affective experiences are temporally processed and integrated: (1) all moments' intensities are integrated, resulting in an averaged value; (2) the overall evaluation is built from specific single moments, such as the moments of highest emotional intensity (peaks), the end, or a combination of these. Here we investigated what listeners do when building an overall evaluation of a musical experience. Participants listened to unknown songs and provided moment-to-moment ratings of experienced intensity of emotions. Subsequently, they evaluated the overall emotional intensity of each song. Results indicate that participants' evaluations were predominantly influenced by their average impression but that, in addition, the peaks and end emotional intensities contributed substantially. These results indicate that both types of processes play a role: All moments are integrated into an averaged value but single moments might be assigned a higher value in the calculation of this average. PMID:25177311
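The two integration hypotheses above can be combined as a weighted blend of the running average with the peak and end moments, which is what the reported results suggest listeners do. The weights in this sketch are illustrative assumptions, not the study's fitted values:

```python
def overall_intensity(moment_ratings, w_avg=0.6, w_peak=0.25, w_end=0.15):
    """Retrospective overall intensity as a weighted blend of the average
    impression with the peak and final moments (illustrative weights)."""
    avg = sum(moment_ratings) / len(moment_ratings)
    peak = max(moment_ratings)
    end = moment_ratings[-1]
    return w_avg * avg + w_peak * peak + w_end * end

# moment-to-moment intensity ratings for one hypothetical song
score = overall_intensity([2, 3, 5, 9, 4, 3])
```

With a flat intensity profile the blend reduces to the plain average, while a pronounced peak or a strong ending pulls the overall evaluation above it, matching the pattern reported above.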
The Influence of Reading on Vocabulary Growth: A Case for a Matthew Effect
Tomblin, J. Bruce; Catts, Hugh
2015-01-01
Purpose: Individual differences in vocabulary development may affect academic or social opportunities. It has been proposed that individual differences in word reading could affect the rate of vocabulary growth, mediated by the amount of reading experience, a process referred to as a Matthew effect (Stanovich, 1986). Method: In the current study, assessments of written word-reading skills in the 4th grade and oral vocabulary knowledge collected in kindergarten and in the 4th, 8th, and 10th grades from a large epidemiologically based sample (n = 485) allowed a test of the relationship of early word-reading skills and the subsequent rate of vocabulary growth. Results: Consistent with the hypothesis, multilevel modeling revealed the rate of vocabulary growth after the 4th grade to be significantly related to 4th-grade word reading after controlling for kindergarten vocabulary level; that is, above-average readers experienced a higher rate of vocabulary growth than did average readers. Conclusions: Vocabulary growth rate differences accumulated over time such that the effect on vocabulary size was large. PMID:25812175
Cortical processing of dynamic sound envelope transitions.
Zhou, Yi; Wang, Xiaoqin
2010-12-08
Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
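The feature characterization described above (the local average and local rate of change of sound level in the envelope waveform) can be sketched with a short sliding window. The window length and the finite-difference rate estimate here are simplifying assumptions for illustration, not the study's actual analysis:

```python
def envelope_features(levels_db, fs_hz, win_s=0.05):
    """Per-sample envelope features: local average level (dB) and local
    rate of change (dB/s) over a centered sliding window."""
    n = max(1, int(win_s * fs_hz))
    feats = []
    for i in range(len(levels_db)):
        lo, hi = max(0, i - n), min(len(levels_db), i + n + 1)
        window = levels_db[lo:hi]
        local_avg = sum(window) / len(window)
        if hi - lo > 1:
            # finite-difference slope across the window, in dB per second
            local_rate = (window[-1] - window[0]) / ((hi - lo - 1) / fs_hz)
        else:
            local_rate = 0.0
        feats.append((local_avg, local_rate))
    return feats

# a rising envelope sampled at 100 Hz with a 20 ms half-window
feats = envelope_features([0.0, 2.0, 4.0, 6.0, 8.0], fs_hz=100, win_s=0.02)
```

Pairing each sample with a (level, slope) feature like this is one simple way to distinguish the rise and fall of signal energy that the neural selectivity analysis above is concerned with.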
Forde, C G; Leong, C; Chia-Ming, E; McCrickerd, K
2017-02-22
The structural properties of foods have a functional role to play in oral processing behaviours and sensory perception, and also impact on meal size and the experience of fullness. This study adopted a new approach by using behavioural coding analysis of eating behaviours to explore how a range of food textures manifests in the microstructural properties of eating and in expectations of fullness. A selection of 47 Asian foods was served in fixed quantities to a panel of participants (N = 12) and their eating behaviours were captured via web-camera recordings. Behavioural coding analysis was completed on the recordings to extract total bites, chews and swallows and the cumulative time the food spent in the mouth. From these measurements a series of microstructural properties, including average bite size (g), chews per bite, oro-sensory exposure time (seconds) and average eating rate (g min⁻¹), were derived per food. The sensory and macronutrient properties of each food were correlated with the microstructure of eating to compare differences in eating behaviour on a gram-for-gram basis. There were strong relationships between a food's perceived textural properties and its eating behaviours, and a food's total water content was the best predictor of its eating rate. Foods that were eaten at a slower rate, with smaller bites and more chews per bite, were rated higher in expected fullness. These relationships are important because oral processing behaviours and beliefs about the potential satiating value of a food influence portion decisions and moderate meal size. These data support the idea that naturally occurring differences in food structure and texture could be used to design meals that slow the rate of eating and maximise fullness.
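The derived microstructural measures described in this abstract are simple ratios of the coded counts. A minimal sketch, with hypothetical input values rather than data from the study:

```python
# Sketch of the derived microstructural eating measures. The function and
# the example values are illustrative assumptions, not study data.

def microstructure(grams_served, total_bites, total_chews, oral_time_s):
    """Derive per-food eating microstructure from behavioural coding counts."""
    avg_bite_size = grams_served / total_bites          # g per bite
    chews_per_bite = total_chews / total_bites
    eating_rate = grams_served / (oral_time_s / 60.0)   # g min^-1
    return avg_bite_size, chews_per_bite, eating_rate

bite, cpb, rate = microstructure(grams_served=100.0, total_bites=10,
                                 total_chews=150, oral_time_s=300.0)
# 100 g in 10 bites over 5 min -> 10 g/bite, 15 chews/bite, 20 g/min
```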
Multitemporal ALSM change detection, sediment delivery, and process mapping at an active earthflow
DeLong, Stephen B.; Prentice, Carol S.; Hilley, George E.; Ebert, Yael
2012-01-01
Remote mapping and measurement of surface processes at high spatial resolution is among the frontiers in Earth surface process research. Remote measurements that allow meter-scale mapping of landforms and quantification of landscape change can revolutionize the study of landscape evolution on human timescales. At Mill Gulch in northern California, USA, an active earthflow was surveyed in 2003 and 2007 by airborne laser swath mapping (ALSM), enabling meter-scale quantification of landscape change. We calculate four-year volumetric flux from the earthflow and compare it to long-term catchment-averaged erosion rates from cosmogenic radionuclide inventories from adjacent watersheds. We also present detailed maps of changing features on the earthflow, from which we can derive velocity estimates and infer the dominant process. These measurements rely on proper digital elevation model (DEM) generation and a simple surface-matching technique to align the multitemporal data in a manner that eliminates systematic error in either dataset. The mean surface elevation of the earthflow and an opposite slope that was directly influenced by the earthflow decreased 14 ± 1 mm/yr from 2003 to 2007. By making the conservative assumption that these features were the dominant contributor of sediment flux from the entire Mill Gulch drainage basin during this time interval, we calculate a minimum catchment-averaged erosion rate of 0.30 ± 0.02 mm/yr. Analysis of beryllium-10 (10Be) concentrations in fluvial sand from nearby Russian Gulch and the South Fork Gualala River provides catchment-averaged erosion rates of 0.21 ± 0.04 and 0.23 ± 0.03 mm/yr, respectively. From translated landscape features, we can infer surface velocities ranging from 0.5 m/yr in the wide upper 'source' portion of the flow to 5 m/yr in the narrow middle 'transport' portion of the flow. This study re-affirms the importance of mass wasting processes in the sediment budgets of uplifting weak lithologies.
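The catchment-averaging step in this abstract is an area weighting: a local surface-lowering rate is spread over the whole basin. A sketch, with hypothetical areas (the abstract does not state the Mill Gulch areas):

```python
# Hypothetical sketch of converting a local lowering rate to a
# catchment-averaged erosion rate by area weighting. The areas below are
# assumptions chosen so the numbers echo the abstract (14 mm/yr local,
# ~0.30 mm/yr basin-wide), not measured values.

def catchment_averaged_rate(local_rate_mm_yr, local_area_m2, basin_area_m2):
    """Spread sediment produced over local_area across the whole basin."""
    return local_rate_mm_yr * local_area_m2 / basin_area_m2

rate = catchment_averaged_rate(14.0, local_area_m2=2.14e4, basin_area_m2=1.0e6)
# ~0.30 mm/yr when the lowering area is ~2% of the basin
```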
NASA Astrophysics Data System (ADS)
Al-Jader, M. A.; Cullen, J. D.; Shaw, Andy; Al-Shamma'a, A. I.
2011-08-01
Currently there are about 4300 weld points on the average steel vehicle. Errors and problems due to tip damage and wear can cause great losses through production line downtime. Current industrial monitoring systems check nugget quality on an average of 15 cars once every two weeks. The nuggets are examined off line using a destructive process that takes approximately 10 days to complete, causing a long delay in the production process. In this paper, simulation results obtained using the software package SORPAS are presented to determine the sustainability factors in the spot welding process, including voltage, current, force, water cooling rate, material thickness and usage. Experimental results from various spot welding processes are investigated and reported. The correlation of experimental results shows that SORPAS simulations can be used as an off-line measurement to reduce factory energy usage. This paper also provides an overview of electrode current selection and its variance over the lifetime of the electrode tip, and describes the proposed analysis system for selecting welding parameters for the spot welding process as the electrode tip wears.
12 CFR 1070.22 - Fees for processing requests for CFPB records.
Code of Federal Regulations, 2012 CFR
2012-01-01
... of grades typically involved may be established. This charge shall include transportation of...), an average rate for the range of grades typically involved may be established. Fees shall be charged... research. (iii) Non-commercial scientific institution refers to an institution that is not operated on a...
Use of Anthropogenic Radioisotopes to Estimate Rates of Soil Redistribution by Wind
USDA-ARS?s Scientific Manuscript database
Wind erosion results in soil degradation and fugitive dust emissions. The temporal and spatial variability of aeolian processes makes local estimates of long-term average erosion costly and time consuming. Atmospheric testing of nuclear weapons during the 1950s and 1960s resulted in previously non...
Gould, J L
1980-02-01
In both their navigation and dance communication, bees are able to compensate for the sun's movement. When foragers are prevented from seeing the sun for 2 hours, they compensate by extrapolation, using the sun's rate of movement when last observed. These and other data suggest a time-averaging processing strategy in honey bee orientation.
Longitudinal Rater Modeling with Splines
ERIC Educational Resources Information Center
Dobria, Lidia
2011-01-01
Performance assessments rely on the expert judgment of raters for the measurement of the quality of responses, and raters unavoidably introduce error in the scoring process. Defined as the tendency of a rater to assign higher or lower ratings, on average, than those assigned by other raters, even after accounting for differences in examinee…
Flame-spreading phenomena in the fin-slot region of a solid rocket motor
NASA Astrophysics Data System (ADS)
Kuo, K. K.; Kokal, R. A.; Paulauskas, M.; Alaksin, P.; Lee, L. S.
1993-06-01
Flame-spreading processes in the fin-slot regions of solid-propellant motor grains have the potential to influence the behavior of the overall ignition transient. The work being done on this project is aimed at obtaining a better understanding of the flame-spreading processes in rocket motors with aft-end fin slots. Non-intrusive optical diagnostic methods were employed to acquire flame-spreading measurements in the fin-slot region of a subscale rocket motor. Highly non-uniform flame-spreading processes were observed in both the deep and shallow fin regions of the test rig. The average flame-spreading rates in the fin-slot region were found to be two orders of magnitude less than those in the circular port region of a typical rocket motor. The flame-spreading interval was found to correlate well with the local pressurization rates. A higher pressurization rate produces a shorter flame-spreading time interval.
Binary collision rates of relativistic thermal plasmas. I Theoretical framework
NASA Technical Reports Server (NTRS)
Dermer, C. D.
1985-01-01
Binary collision rates for arbitrary scattering cross sections are derived in the case of a beam of particles interacting with a Maxwell-Boltzmann (MB) plasma, or in the case of two MB plasmas interacting at generally different temperatures. The expressions are valid for all beam energies and plasma temperatures, from the nonrelativistic to the extreme relativistic limits. The calculated quantities include the reaction rate, the energy exchange rate, and the average rate of change of the squared transverse momentum component of a monoenergetic particle beam as a result of scatterings with particles of a MB plasma. Results are specialized to elastic scattering processes, two-temperature reaction rates, or the cold plasma limit, reproducing previous work.
Wang, Zhi; Liang, Jiabin; Rong, Xing; Zhou, Hao; Duan, Chuanwei; Du, Weijia; Liu, Yimin
2015-12-01
To investigate noise hazard and its influence on hearing loss among workers in the automotive component manufacturing industry. Noise levels in the workplaces of automotive component manufacturing enterprises were measured and hearing examinations were performed for workers to analyze the features and exposure levels of noise in each process, as well as the influence on workers' hearing loss. Among the manufacturing processes for different products in this industry, the manufacture of automobile hubs and of suspension and steering systems showed the highest degrees of noise hazard, with 79.8% and 57.1% of measurements exceeding the standard, respectively. Among the different technical processes for automotive component manufacturing, punching and casting showed the highest degrees of noise hazard, with 65.0% and 50% of measurements exceeding the standard, respectively. Workers engaged in automotive air conditioning system manufacture had the highest rate of abnormal hearing (up to 3.1%). In the automotive component manufacturing industry, noise hazard seriously exceeds the standard. Although the rate of abnormal hearing is lower than the average for the automobile manufacturing industry in China, this rate tends to increase gradually. Enough emphasis should be placed on the noise hazard in this industry.
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Definition of weighted average exchange rate. 1... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Export Trade Corporations § 1.989(b)-1 Definition of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate...
Guest, J F; Vowden, K; Vowden, P
2017-06-02
To estimate the patterns of care and related resource use attributable to managing acute and chronic wounds among a catchment population of a typical clinical commissioning group (CCG)/health board and corresponding National Health Service (NHS) costs in the UK. This was a sub-analysis of a retrospective cohort analysis of the records of 2000 patients in The Health Improvement Network (THIN) database. Patients' characteristics, wound-related health outcomes and health-care resource use were quantified for an average CCG/health board with a catchment population of 250,000 adults ≥18 years of age, and the corresponding NHS cost of patient management was estimated at 2013/2014 prices. An average CCG/health board was estimated to be managing 11,200 wounds in 2012/2013. Of these, 40% were considered to be acute wounds, 48% chronic and 12% lacking any specific diagnosis. The prevalence of acute, chronic and unspecified wounds was estimated to be growing at the rate of 9%, 12% and 13% per annum, respectively. Our analysis indicated that the current rate of wound healing must increase by an average of at least 1% per annum across all wound types in order to slow down the increasing prevalence. Otherwise, an average CCG/health board is predicted to manage ~23,200 wounds per annum by 2019/2020 and is predicted to spend a discounted £50 million (discounting being the process of determining the present value of payments to be received in the future) on managing these wounds and associated comorbidities. Real-world evidence highlights the substantial burden that acute and chronic wounds impose on an average CCG/health board. Strategies are required to improve the accuracy of diagnosis and healing rates.
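The ~23,200 projection quoted in this abstract is consistent with simple compound growth of the three wound fractions over the seven years from 2012/13 to 2019/20. A back-of-envelope check (assuming plain compound growth, which the abstract does not state explicitly):

```python
# Compound-growth check of the wound prevalence projection. The growth model
# is an assumption; the shares and rates are taken from the abstract.

def project_wounds(total, shares_and_growth, years):
    """Project total wounds, growing each fraction at its own annual rate."""
    return sum(total * share * (1.0 + g) ** years
               for share, g in shares_and_growth)

projected = project_wounds(
    11200,
    [(0.40, 0.09),   # acute wounds, 9% per annum
     (0.48, 0.12),   # chronic wounds, 12% per annum
     (0.12, 0.13)],  # unspecified wounds, 13% per annum
    years=7)
# ~23,200, matching the figure quoted in the abstract
```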
Fast EEG spike detection via eigenvalue analysis and clustering of spatial amplitude distribution
NASA Astrophysics Data System (ADS)
Fukami, Tadanori; Shimada, Takamasa; Ishikawa, Bunnoshin
2018-06-01
Objective. In the current study, we tested a proposed method for fast spike detection in electroencephalography (EEG). Approach. We performed eigenvalue analysis in two-dimensional space spanned by gradients calculated from two neighboring samples to detect high-amplitude negative peaks. We extracted the spike candidates by imposing restrictions on parameters regarding spike shape and eigenvalues reflecting detection characteristics of individual medical doctors. We subsequently performed clustering, classifying detected peaks by considering the amplitude distribution at 19 scalp electrodes. Clusters with a small number of candidates were excluded. We then defined a score for eliminating spike candidates for which the pattern of detected electrodes differed from the overall pattern in a cluster. Spikes were detected by setting the score threshold. Main results. Based on visual inspection by a psychiatrist experienced in EEG, we evaluated the proposed method using two statistical measures of precision and recall with respect to detection performance. We found that precision and recall exhibited a trade-off relationship. The average recall value was 0.708 in eight subjects with the score threshold that maximized the F-measure, with 58.6 ± 36.2 spikes per subject. Under this condition, the average precision was 0.390, corresponding to a false positive rate 2.09 times higher than the true positive rate. Analysis of the required processing time revealed that, using a general-purpose computer, our method could be used to perform spike detection in 12.1% of the recording time. The process of narrowing down spike candidates based on shape occupied most of the processing time. Significance. Although the average recall value was comparable with that of other studies, the proposed method significantly shortened the processing time.
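The precision, recall and F-measure used to evaluate the spike detector above are standard quantities; a minimal sketch with hypothetical detection counts (not the study's confusion matrix):

```python
# Standard detection metrics used in the evaluation above. The counts are
# hypothetical, chosen only to illustrate the precision/recall trade-off.

def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F-measure from detection counts."""
    precision = tp / (tp + fp)   # fraction of detections that are true spikes
    recall = tp / (tp + fn)      # fraction of true spikes that are detected
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=39, fp=61, fn=16)
# low precision with many false positives, but fairly high recall
```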
Measurement of the Electron Density and the Attachment Rate Coefficient in Silane/Helium Discharges.
1986-09-01
materials -- in this case hydrogenated amorphous silicon. One of the biggest problems in such a task is the fact that the discharge creates complex radicals...electron density is enhanced -- even on a time-averaged basis, and the silicon deposition rate is also increased. The physical process for the density...etching and deposition of semiconductor materials. Plasma etching (also known as dry etching) of silicon using fluorine-bearing gases has made it possible
Global lake response to the recent warming hiatus
NASA Astrophysics Data System (ADS)
Winslow, Luke A.; Leach, Taylor H.; Rose, Kevin C.
2018-05-01
Understanding temporal variability in lake warming rates over decadal scales is important for understanding observed change in aquatic systems. We analyzed a global dataset of lake surface water temperature observations (1985-2009) to examine how lake temperatures responded to a recent global air temperature warming hiatus (1998-2012). Prior to the hiatus (1985-1998), surface water temperatures significantly increased at an average rate of 0.532 °C decade⁻¹ (±0.214). In contrast, water temperatures did not change significantly during the hiatus (average rate −0.087 °C decade⁻¹ ± 0.223). Overall, 83% of lakes in our dataset (129 of 155) had faster warming rates during the pre-hiatus period than during the hiatus period. These results demonstrate that lakes have exhibited decadal-scale variability in warming rates coherent with global air temperatures and represent an independent line of evidence for the recent warming hiatus. Our analyses provide evidence that lakes are sentinels of broader climatological processes and indicate that warming rates based on datasets where a large proportion of observations were collected during the hiatus period may underestimate longer-term trends.
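A per-decade warming rate like the 0.532 °C decade⁻¹ above is just a least-squares slope of temperature against year, scaled by ten. A minimal sketch on synthetic data:

```python
# Sketch of a per-decade trend estimate: OLS slope of annual surface
# temperatures against year, scaled to degrees C per decade. The data below
# are synthetic, constructed to match the pre-hiatus rate in the abstract.

import numpy as np

def warming_rate_per_decade(years, temps):
    slope_per_year = np.polyfit(years, temps, 1)[0]
    return slope_per_year * 10.0

years = np.arange(1985, 1999)
temps = 10.0 + 0.0532 * (years - 1985)   # synthetic linear warming
rate = warming_rate_per_decade(years, temps)  # ~0.532 degrees C per decade
```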
Data systems trade studies for a next generation sensor
NASA Astrophysics Data System (ADS)
Masuoka, Edward J.; Fleig, Albert J.
1997-01-01
Processing system designers must make substantial changes to accommodate current and anticipated improvements in remote sensing instruments. Increases in spectral, radiometric and geometric resolution lead to data rates, processing loads and storage volumes that far exceed the ability of most current computer systems. To accommodate user expectations, the data must be processed and made available quickly in a convenient and easy-to-use form. This paper describes design trade-offs made in developing the processing system for the moderate resolution imaging spectroradiometer (MODIS), which will fly on the Earth Observing System's AM-1 spacecraft to be launched in 1998. MODIS will have an average continuous data rate of 6.2 Mbps and require processing at 6.5 GFLOPS to produce 600 GB of output products per day. Specific trade-offs occur in the areas of science software portability and usability of science products versus overall system performance and throughput.
Colometer: a real-time quality feedback system for screening colonoscopy.
Filip, Dobromir; Gao, Xuexin; Angulo-Rodríguez, Leticia; Mintchev, Martin P; Devlin, Shane M; Rostom, Alaa; Rosen, Wayne; Andrews, Christopher N
2012-08-28
To investigate the performance of a new software-based colonoscopy quality assessment system. The software-based system employs a novel image processing algorithm which detects the levels of image clarity, withdrawal velocity, and level of the bowel preparation in a real-time fashion from live video signal. Threshold levels of image blurriness and the withdrawal velocity below which the visualization could be considered adequate have initially been determined arbitrarily by review of sample colonoscopy videos by two experienced endoscopists. Subsequently, an overall colonoscopy quality rating was computed based on the percentage of the withdrawal time with adequate visualization (scored 1-5; 1, when the percentage was 1%-20%; 2, when the percentage was 21%-40%, etc.). In order to test the proposed velocity and blurriness thresholds, screening colonoscopy withdrawal videos from a specialized ambulatory colon cancer screening center were collected, automatically processed and rated. Quality ratings on the withdrawal were compared to the insertion in the same patients. Then, 3 experienced endoscopists reviewed the collected videos in a blinded fashion and rated the overall quality of each withdrawal (scored 1-5; 1, poor; 3, average; 5, excellent) based on 3 major aspects: image quality, colon preparation, and withdrawal velocity. The automated quality ratings were compared to the averaged endoscopist quality ratings using Spearman correlation coefficient. Fourteen screening colonoscopies were assessed. Adenomatous polyps were detected in 4/14 (29%) of the collected colonoscopy video samples. As a proof of concept, the Colometer software rated colonoscope withdrawal as having better visualization than the insertion in the 10 videos which did not have any polyps (average percent time with adequate visualization: 79% ± 5% for withdrawal and 50% ± 14% for insertion, P < 0.01). Withdrawal times during which no polyps were removed ranged from 4-12 min. 
The median quality rating from the automated system and the reviewers was 3.45 [interquartile range (IQR), 3.1-3.68] and 3.00 (IQR, 2.33-3.67), respectively, for all colonoscopy video samples. The automated rating revealed a strong correlation with the reviewers' rating (ρ coefficient = 0.65, P = 0.01). There was good correlation between the automated overall quality rating and the mean endoscopist withdrawal speed rating (Spearman r coefficient = 0.59, P = 0.03). There was no correlation between the automated overall quality rating and the mean endoscopist image quality rating (Spearman r coefficient = 0.41, P = 0.15). The results from a novel automated real-time colonoscopy quality feedback system strongly agreed with the endoscopists' quality assessments. Further study is required to validate this approach.
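The 1-5 automated rating described above (1 for 1-20% adequate visualization, 2 for 21-40%, and so on) can be sketched as a simple binning function; the handling of values below 1% is an assumption:

```python
# Sketch of the Colometer 1-5 rating rule described in the abstract:
# score = 1 when 1-20% of withdrawal time has adequate visualization,
# 2 for 21-40%, etc. Behaviour below 1% is an assumption.

def quality_rating(pct_adequate):
    """Map % of withdrawal time with adequate visualization to a 1-5 score."""
    if pct_adequate < 1:
        return 1
    return min(5, (int(pct_adequate) + 19) // 20)

# e.g. the 79% adequate visualization reported for withdrawals -> rating 4
```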
Exact solutions for kinetic models of macromolecular dynamics.
Chemla, Yann R; Moffitt, Jeffrey R; Bustamante, Carlos
2008-05-15
Dynamic biological processes such as enzyme catalysis, molecular motor translocation, and protein and nucleic acid conformational dynamics are inherently stochastic processes. However, when such processes are studied on a nonsynchronized ensemble, the inherent fluctuations are lost, and only the average rate of the process can be measured. With the recent development of methods of single-molecule manipulation and detection, it is now possible to follow the progress of an individual molecule, measuring not just the average rate but the fluctuations in this rate as well. These fluctuations can provide a great deal of detail about the underlying kinetic cycle that governs the dynamical behavior of the system. However, extracting this information from experiments requires the ability to calculate the general properties of arbitrarily complex theoretical kinetic schemes. We present here a general technique that determines the exact analytical solution for the mean velocity and for measures of the fluctuations. We adopt a formalism based on the master equation and show how the probability density for the position of a molecular motor at a given time can be solved exactly in Fourier-Laplace space. With this analytic solution, we can then calculate the mean velocity and fluctuation-related parameters, such as the randomness parameter (a dimensionless ratio of the diffusion constant and the velocity) and the dwell time distributions, both commonly used kinetic parameters in single-molecule measurements, which fully characterize the fluctuations of the system. Furthermore, we show that this formalism allows calculation of these parameters for a much wider class of general kinetic models than demonstrated with previous methods.
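The randomness parameter mentioned above can be estimated directly from single-molecule dwell times as r = (⟨t²⟩ − ⟨t⟩²)/⟨t⟩²; for a cycle of n identical irreversible rate-limiting steps, dwell times are gamma-distributed and r approaches 1/n. A simulated sketch (the gamma model is the standard textbook case, used here as an illustration):

```python
# Estimating the randomness parameter from simulated dwell times. For a
# kinetic cycle of n identical irreversible steps, dwell times are gamma
# distributed and r -> 1/n.

import numpy as np

def randomness_parameter(dwell_times):
    t = np.asarray(dwell_times, dtype=float)
    return t.var() / t.mean() ** 2

rng = np.random.default_rng(0)
dwells = rng.gamma(shape=4, scale=1.0, size=200_000)  # n = 4 hidden steps
r = randomness_parameter(dwells)  # ~0.25, i.e. 1/n
```

A Poisson stepper (n = 1, exponential dwell times) would give r = 1; r below 1 thus reveals hidden intermediate steps in the cycle.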
Antarctic Firn Compaction Rates from Repeat-Track Airborne Radar Data: I. Methods
NASA Technical Reports Server (NTRS)
Medley, B.; Ligtenberg, S. R. M.; Joughin, I.; Van Den Broeke, M. R.; Gogineni, S.; Nowicki, S.
2015-01-01
While measurements of ice-sheet surface elevation change are increasingly used to assess mass change, the processes that control the elevation fluctuations not related to ice-flow dynamics (e.g. firn compaction and accumulation) remain difficult to measure. Here we use radar data from the Thwaites Glacier (West Antarctica) catchment to measure the rate of thickness change between horizons of constant age over different time intervals: 2009-10, 2010-11 and 2009-11. The average compaction rate to approximately 25 m depth is 0.33 m a⁻¹, with the largest compaction rates near the surface. Our measurements indicate that the accumulation rate controls much of the spatiotemporal variation in the compaction rate, while the role of temperature is unclear due to a lack of measurements. Based on a semi-empirical, steady-state densification model, we find that surveying older firn horizons minimizes the potential bias resulting from the variable depth of the constant-age horizon. Our results suggest that the spatiotemporal variations in the firn compaction rate are an important consideration when converting surface elevation change to ice mass change. Compaction rates varied by up to 0.12 m a⁻¹ over distances of less than 6 km and were on average more than 20% larger during the 2010-11 interval than during 2009-10.
NASA Astrophysics Data System (ADS)
Pan, Ying; Yan, Shi-Wei; Li, Ruo-Zhu; Hu, Yi-Wen; Chang, Xue-Xiu
2017-01-01
Although the well-known antibiotic norfloxacin (NOR) is recognized as an important environmental pollutant, little is known about its impacts on ecological processes, particularly on species interactions. In this paper, we quantified Daphnia magna (Crustacea, Cladocera) responses in mortality rate at lethal NOR concentrations (0, 25, 50, 100, 200, 300 and 400 mg L⁻¹), and in heartbeat rate, swimming behavior and feeding rate (on the green alga Chlorella pyrenoidosa) at sublethal NOR concentrations (0, 25, 50 and 100 mg L⁻¹), to determine the effects of this antibiotic in plankton systems. In the 96-h-long lethal experiment, mortality rates of D. magna increased significantly with increasing NOR concentration and exposure time. In the sublethal experiments, heartbeat rate decreased, while the time ratio of vertical to horizontal swimming (TVH) and the duration of quiescence increased in D. magna individuals exposed to increasing NOR concentrations after 4 and 12 h of exposure. These changes collectively led to decreases in both average swimming ability and feeding rate, consistent with the positive relationship between average swimming ability and feeding rate. Overall, the results indicate that, by affecting zooplankton heartbeat rate and behavior, NOR decreased the feeding efficiency of D. magna even at low doses; therefore, it might seriously compromise ecosystem health and function.
Mechanical Properties of Lightweight Porous Magnesium Processed Through Powder Metallurgy
NASA Astrophysics Data System (ADS)
Zou, Ning; Li, Qizhen
2018-02-01
Porous magnesium (Mg) samples with various overall porosities (28.4 ± 1.8%, 35.5 ± 2.5%, 45.4 ± 1.9%, and 62.4 ± 2.2%) were processed through powder metallurgy and characterized to study their mechanical properties. Different porosities were obtained by utilizing different mass fractions of the space holder camphene. Camphene was removed by sublimation before sintering, which contributed to processing porous Mg with high purity and small average pore size. The average pore size increased from 5.2 µm to 15.1 µm as the porosity increased from 28.4 ± 1.8% to 62.4 ± 2.2%. Compressive stress-strain data showed that the strain hardening rate, yield strength, and ultimate compressive strength decreased with increasing porosity. The theoretical yield strength of porous Mg obtained using the Gibson-Ashby model agreed with the experimental data.
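The Gibson-Ashby model referenced above relates the plastic collapse strength of a cellular solid to its relative density. A minimal sketch, where the prefactor 0.3 and the solid-Mg yield strength are textbook-style assumptions rather than values from the paper:

```python
# Gibson-Ashby plastic-collapse scaling for a cellular solid:
# sigma_porous = C * sigma_solid * (relative density)^1.5.
# The prefactor C = 0.3 and sigma_solid = 100 MPa are assumptions
# used only to illustrate the trend, not values from the study.

def porous_yield_strength(porosity, sigma_solid_mpa, prefactor=0.3):
    relative_density = 1.0 - porosity
    return prefactor * sigma_solid_mpa * relative_density ** 1.5

# yield strength falls steeply as porosity rises from ~28% to ~62%
s_lo = porous_yield_strength(0.284, sigma_solid_mpa=100.0)  # ~18 MPa
s_hi = porous_yield_strength(0.624, sigma_solid_mpa=100.0)  # ~7 MPa
```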
[Relevance of the sentinel lymph node biopsy in breast multifocal and multicentric cancer].
Mosbah, R; Raimond, E; Pelissier, A; Hocedez, C; Graesslin, O
2015-05-01
The sentinel lymph node biopsy is a gold standard in the management of breast cancer. Its role in multifocal or multicentric tumors is still evolving. The aim of this study is to assess the feasibility and pertinence of sentinel lymph node biopsy in multifocal and multicentric tumors based on a systematic review of the literature. A systematic review was conducted by searching the electronic database PubMed using "sentinel lymph node biopsy", "breast cancer", "multifocal tumor", "multicentric tumor" and "multiple tumor" as keywords. We included original articles published between 2000 and 2014, in both French and English, studying the feasibility of sentinel lymph node biopsy in invasive, multicentric and/or multifocal breast cancer. The primary end points were the success rate and the false negative rate. Twenty-six articles were included in this literature review, covering 2212 cases (782 multifocal, 737 multicentric and 693 multiple tumors). The percentage of tumors whose stage was higher than T2 ranged from 0 to 86.3%. The average success rate was 83.1% and the average false negative rate was 8.2%. The false negative rate was less than 10% in 15 articles. The mean number of sentinel lymph nodes sampled was 2 (range: 1-9). The average rate of positive sentinel lymph nodes was 50.6%. The axillary recurrence rate was 0.5%. Despite the methodological biases of the studies included in this review of the literature, the false negative rate of sentinel node biopsy in multifocal and multicentric breast cancers is less than 10%, with a low rate of axillary recurrence. Despite the lack of randomized studies, this procedure can be routinely performed in accordance with a rigorous technical process.
42 CFR 447.255 - Related information.
Code of Federal Regulations, 2011 CFR
2011-10-01
... assurances described in § 447.253(a), the following information: (a) The amount of the estimated average... which that estimated average rate increased or decreased relative to the average payment rate in effect... and, to the extent feasible, long-term effect the change in the estimated average rate will have on...
Data on inelastic processes in low-energy potassium-hydrogen and rubidium-hydrogen collisions
NASA Astrophysics Data System (ADS)
Yakovleva, S. A.; Barklem, P. S.; Belyaev, A. K.
2018-01-01
Two sets of rate coefficients for low-energy inelastic potassium-hydrogen and rubidium-hydrogen collisions were computed for each collisional system based on two model electronic structure calculations, performed by the quantum asymptotic semi-empirical and the quantum asymptotic linear combinations of atomic orbitals (LCAO) approaches, followed by quantum multichannel calculations for the non-adiabatic nuclear dynamics. The rate coefficients for the charge transfer (mutual neutralization, ion-pair formation), excitation and de-excitation processes are calculated for all transitions between the five lowest lying covalent states and the ionic states for each collisional system for the temperature range 1000-10 000 K. The processes involving higher lying states have extremely low rate coefficients and, hence, are neglected. The two model calculations both single out the same partial processes as having large and moderate rate coefficients. The largest rate coefficients correspond to the mutual neutralization processes into the K(5s 2S) and Rb(4d 2D) final states and at temperature 6000 K have values exceeding 3 × 10-8 cm3 s-1 and 4 × 10-8 cm3 s-1, respectively. It is shown that both the semi-empirical and the LCAO approaches perform equally well on average and that both sets of atomic data have roughly the same accuracy. The processes with large and moderate rate coefficients are likely to be important for non-LTE modelling in atmospheres of F, G and K-stars, especially metal-poor stars.
Mutation-selection equilibrium in games with multiple strategies.
Antal, Tibor; Traulsen, Arne; Ohtsuki, Hisashi; Tarnita, Corina E; Nowak, Martin A
2009-06-21
In evolutionary games the fitness of individuals is not constant but depends on the relative abundance of the various strategies in the population. Here we study general games among n strategies in populations of large but finite size. We explore stochastic evolutionary dynamics under weak selection, but for any mutation rate. We analyze the frequency dependent Moran process in well-mixed populations, but almost identical results are found for the Wright-Fisher and Pairwise Comparison processes. Surprisingly simple conditions specify whether a strategy is more abundant on average than 1/n, or than another strategy, in the mutation-selection equilibrium. We find one condition that holds for low mutation rate and another condition that holds for high mutation rate. A linear combination of these two conditions holds for any mutation rate. Our results allow a complete characterization of n × n games in the limit of weak selection.
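The low- and high-mutation abundance conditions mentioned above are simple payoff-matrix averages. A hedged sketch, where the exact form of the formulas is an assumption based on my reading of this line of work: strategy k is favored (more abundant than 1/n) for low mutation when L_k > 0 and for high mutation when H_k > 0.

```python
# Sketch of the low- and high-mutation conditions for n-strategy games under
# weak selection. Treat the exact formulas as assumptions, not quotations.

import numpy as np

def low_mutation_condition(a, k):
    """L_k = (1/n) * sum_i (a_kk + a_ki - a_ik - a_ii)."""
    n = a.shape[0]
    return np.mean([a[k, k] + a[k, i] - a[i, k] - a[i, i] for i in range(n)])

def high_mutation_condition(a, k):
    """H_k: strategy k's mean payoff against a uniform population, minus the
    population mean payoff, i.e. (1/n) sum_i a_ki - (1/n^2) sum_ij a_ij."""
    return a[k].mean() - a.mean()

# Rock-paper-scissors is fully symmetric, so no strategy is favoured and
# both conditions vanish for every k:
rps = np.array([[0.0, -1.0, 1.0],
                [1.0, 0.0, -1.0],
                [-1.0, 1.0, 0.0]])
```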
40 CFR 63.7525 - What are my monitoring, installation, operation, and maintenance requirements?
Code of Federal Regulations, 2013 CFR
2013-07-01
... fossil fuel subcategory or the unit designed to burn heavy liquid subcategory and has an average annual heat input rate greater than 250 MMBtu per hour from solid fossil fuel and/or heavy liquid, and you... Major Sources: Industrial, Commercial, and Institutional Boilers and Process Heaters Testing, Fuel...
40 CFR 63.7525 - What are my monitoring, installation, operation, and maintenance requirements?
Code of Federal Regulations, 2014 CFR
2014-07-01
... fossil fuel subcategory or the unit designed to burn heavy liquid subcategory and has an average annual heat input rate greater than 250 MMBtu per hour from solid fossil fuel and/or heavy liquid, and you... Major Sources: Industrial, Commercial, and Institutional Boilers and Process Heaters Testing, Fuel...
40 CFR 463.24 - New source performance standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... usage flow rate for cleaning processes at a new source times the following pollutant concentrations (Subpart B [Cleaning water], concentrations used to calculate NSPS):
  Pollutant or pollutant property   Maximum for any 1 day (mg/l)   Maximum for monthly average (mg/l)
  BOD5                              49                             22
  Oil and Grease                    71                             17
  TSS                               117                            36
  pH ...
40 CFR 463.24 - New source performance standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... usage flow rate for cleaning processes at a new source times the following pollutant concentrations (Subpart B [Cleaning water], concentrations used to calculate NSPS):
  Pollutant or pollutant property   Maximum for any 1 day (mg/l)   Maximum for monthly average (mg/l)
  BOD5                              49                             22
  Oil and Grease                    71                             17
  TSS                               117                            36
  pH ...
Microbial biomass and productivity in seagrass beds
NASA Technical Reports Server (NTRS)
Moriarty, D. J.; Boon, P. I.; Hansen, J. A.; Hunt, W. G.; Poiner, I. R.; Pollard, P. C.; Skyring, G. W.; White, D. C.
1985-01-01
Different methods for measuring the rates of processes mediated by bacteria in sediments and the rates of bacterial cell production have been compared. In addition, net production of the seagrass Zostera capricorni and bacterial production have been compared and some interrelationships with the nitrogen cycle discussed. Seagrass productivity was estimated by measuring the plastochrone interval using a leaf stapling technique. The average productivity over four seasons was 1.28 ± 0.28 g C m⁻² day⁻¹ (mean ± standard deviation, n = 4). Bacterial productivity was measured five times throughout a year using the rate of tritiated thymidine incorporation into DNA. Average values were 33 ± 12 mg C m⁻² day⁻¹ for sediment and 23 ± 4 mg C m⁻² day⁻¹ for the water column (n = 5). Spatial variability between samples was greater than seasonal variation for both seagrass productivity and bacterial productivity. On one occasion, bacterial productivity was measured using the rate of ³²P incorporation into phospholipid. The values were comparable to those obtained with tritiated thymidine. The rate of sulfate reduction was 10 mmol SO₄²⁻ m⁻² day⁻¹. The rate of methanogenesis was low, at 5.6 mg CH₄ produced m⁻² day⁻¹. A comparison of C flux measured using rates of sulfate reduction and DNA synthesis indicated that anaerobic processes were predominant in these sediments. An analysis of microbial biomass and community structure, using techniques of phospholipid analysis, showed that bacteria were predominant members of the microbial biomass and that, of these, strictly anaerobic bacteria were the main components. Ammonia concentration in interstitial water varied from 23 to 71 micromoles. Estimates of the amount of ammonia required by seagrass showed that the ammonia pool would turn over about once per day. Rapid recycling of nitrogen by bacteria and bacterial grazers is probably important.
The good pharmacy practice on Einstein Program at Paraisópolis Community
de Oliveira, Lara Tânia de Assumpção Domingues Gonçalves; da Silva, Camila Pontes; Guedes, Maria das Vitorias; Sousa, Ana Célia de Oliveira; Sarno, Flávio
2016-01-01
ABSTRACT Objectives: To describe indicators and processes developed and implemented for pharmaceutical assistance at the Einstein Program at Paraisópolis Community pharmacy. Methods: This was a descriptive study of retrospective data from January 2012 to December 2015. Data were obtained from spreadsheets developed for monitoring the productivity and quality of care provided at the pharmacy. The evaluated variables were pharmaceutical assistance with prescriptions, pharmaceutical interventions, orientations (standard and pharmaceutical) and the pharmaceutical orientation rate. Results: The pharmacy assisted, on average, 2,308 prescriptions monthly, dispensing 4,871 items, including medications, materials and food supplements. Since March 2015, the pharmacist has analyzed virtually all prescriptions prior to dispensing. Over the period analyzed, monthly pharmaceutical interventions increased from 7 to 32 on average, and, although the number of standard orientations decreased, pharmaceutical orientations increased, raising the pharmaceutical orientation rate from 4% to 11%. Conclusion: The processes developed and implemented at the program pharmacy sought to follow good pharmacy practice and to help patients make the best use of their medications. PMID:27759833
Rock-weathering rates as functions of time
Colman, Steven M.
1981-01-01
The scarcity of documented numerical relations between rock weathering and time has led to a common assumption that rates of weathering are linear. This assumption has been strengthened by studies that have calculated long-term average rates. However, little theoretical or empirical evidence exists to support linear rates for most chemical-weathering processes, with the exception of congruent dissolution processes. The few previous studies of rock-weathering rates that contain quantitative documentation of the relation between chemical weathering and time suggest that the rates of most weathering processes decrease with time. Recent studies of weathering rinds on basaltic and andesitic stones in glacial deposits in the western United States also clearly demonstrate that rock-weathering processes slow with time. Some weathering processes appear to conform to exponential functions of time, such as the square-root time function for hydration of volcanic glass, which conforms to the theoretical predictions of diffusion kinetics. However, weathering of mineralogically heterogeneous rocks involves complex physical and chemical processes that generally can be expressed only empirically, commonly by way of logarithmic time functions. Incongruent dissolution and other weathering processes produce residues, which are commonly used as measures of weathering. These residues appear to slow movement of water to unaltered material and impede chemical transport away from it. If weathering residues impede weathering processes, then rates of weathering and rates of residue production are inversely proportional to some function of the residue thickness. This results in simple mathematical analogs for weathering that imply nonlinear time functions. The rate of weathering becomes constant only when an equilibrium thickness of the residue is reached.
Because weathering residues are relatively stable chemically, and because physical removal of residues below the ground surface is slight, many weathering features require considerable time to reach constant rates of change. For weathering rinds on volcanic stones in the western United States, this time is at least 0.5 m.y. © 1981.
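The residue-controlled slowdown described in this abstract can be made concrete with a minimal diffusion-style sketch (an illustration in the spirit of the paper, not taken from it): if the weathering rate is inversely proportional to the residue thickness R, then

```latex
\frac{dR}{dt} = \frac{k}{R}
\quad\Longrightarrow\quad
R(t) = \sqrt{2kt},
\qquad
\frac{dR}{dt} = \sqrt{\frac{k}{2t}} \;\propto\; t^{-1/2},
```

which recovers the square-root time dependence cited for volcanic-glass hydration and shows why the rate becomes effectively constant only once R approaches an equilibrium thickness at which removal balances production.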
Detection and rate discrimination of amplitude modulation in electrical hearing.
Chatterjee, Monita; Oberzut, Cherish
2011-09-01
Three experiments were designed to examine temporal envelope processing by cochlear implant (CI) listeners. In experiment 1, the hypothesis that listeners' modulation sensitivity would in part determine their ability to discriminate between temporal modulation rates was examined. Temporal modulation transfer functions (TMTFs) obtained in an amplitude modulation detection (AMD) task were compared to threshold functions obtained in an amplitude modulation rate discrimination (AMRD) task. Statistically significant nonlinear correlations were observed between the two measures. In experiment 2, results of loudness-balancing showed small increases in the loudness of modulated over unmodulated stimuli beyond a modulation depth of 16%. Results of experiment 3 indicated small but statistically significant effects of level-roving on the overall gain of the TMTF, but no impact of level-roving on the average shape of the TMTF across subjects. This suggested that level-roving simply increased the task difficulty for most listeners, but did not indicate increased use of intensity cues under more challenging conditions. Data obtained with one subject, however, suggested that the most sensitive listeners may derive some benefit from intensity cues in these tasks. Overall, results indicated that intensity cues did not play an important role in temporal envelope processing by the average CI listener. © 2011 Acoustical Society of America
Optical Vector Receiver Operating Near the Quantum Limit
NASA Astrophysics Data System (ADS)
Vilnrotter, V. A.; Lau, C.-W.
2005-05-01
An optical receiver concept for binary signals with performance approaching the quantum limit at low average-signal energies is developed and analyzed. A conditionally nulling receiver that reaches the quantum limit in the absence of background photons has been devised by Dolinar. However, this receiver requires ideal optical combining and complicated real-time shaping of the local field; hence, it tends to be difficult to implement at high data rates. A simpler nulling receiver that approaches the quantum limit without complex optical processing, suitable for high-rate operation, had been suggested earlier by Kennedy. Here we formulate a vector receiver concept that incorporates the Kennedy receiver with a physical beamsplitter, but it also utilizes the reflected signal component to improve signal detection. It is found that augmenting the Kennedy receiver with classical coherent detection at the auxiliary beamsplitter output, and optimally processing the vector observations, always improves on the performance of the Kennedy receiver alone, significantly so at low average-photon rates. This is precisely the region of operation where modern codes approach channel capacity. It is also shown that the addition of background radiation has little effect on the performance of the coherent receiver component, suggesting a viable approach for near-quantum-limited performance in high background environments.
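The performance gaps discussed above can be illustrated with the textbook error-probability formulas for binary phase-shift-keyed coherent states ±α with mean photon number N = |α|²; these are standard results for the ideal receivers named in the abstract, not the authors' vector-receiver analysis.

```python
import math

def helstrom(N):
    """Quantum (Helstrom) limit for discriminating coherent states +/-alpha."""
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * N)))

def kennedy(N):
    """Kennedy nulling receiver: displace one hypothesis to vacuum, then
    photon-count; an error occurs when the displaced state yields no photon."""
    return 0.5 * math.exp(-4.0 * N)

def homodyne(N):
    """Classical coherent (homodyne) detection of the same binary signals."""
    return 0.5 * math.erfc(math.sqrt(2.0 * N))
```

At high photon numbers the Kennedy receiver's error probability sits a factor of about 2 above the Helstrom limit, while homodyne detection is exponentially worse; it is at low average photon numbers, the regime emphasized in the abstract, that the remaining gap to the quantum limit is most significant.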
NASA Astrophysics Data System (ADS)
Renner, Maik; Hassler, Sibylle; Blume, Theresa; Weiler, Markus; Hildebrandt, Anke; Guderle, Marcus; Schymanski, Stan; Kleidon, Axel
2016-04-01
Roberts (1983) found that forest transpiration is relatively uniform across different climatic conditions and suggested that forest transpiration is a conservative process compensating for environmental heterogeneity. Here we test this hypothesis at a steep valley cross-section composed of European beech in the Attert basin in Luxembourg. We use sapflow, soil moisture, biometric and meteorological data from 6 sites along a transect to estimate site-scale transpiration rates. Despite opposing hillslope orientation, different slope angles and forest stand structures, we estimated relatively similar transpiration responses to atmospheric demand and seasonal transpiration totals. This similarity is related to a negative correlation between sap velocity and site-average sapwood area. At the south-facing sites with an old, even-aged stand structure and closed canopy layer, we observe significantly lower sap velocities but similar stand-average transpiration rates compared to the north-facing sites with open canopy structure, tall dominant trees and dense understorey. This suggests that plant hydraulic co-ordination allows for flexible responses to environmental conditions leading to similar transpiration rates close to the water and energy limits despite the apparent heterogeneity in exposition, stand density and soil moisture. References Roberts, J. (1983). Forest transpiration: A conservative hydrological process? Journal of Hydrology 66, 133-141.
Fitness cost of reassortment in human influenza
Lässig, Michael
2017-01-01
Reassortment, which is the exchange of genome sequence between viruses co-infecting a host cell, plays an important role in the evolution of segmented viruses. In the human influenza virus, reassortment happens most frequently between co-existing variants within the same lineage. This process breaks genetic linkage and fitness correlations between viral genome segments, but the resulting net effect on viral fitness has remained unclear. In this paper, we determine rate and average selective effect of reassortment processes in the human influenza lineage A/H3N2. For the surface proteins hemagglutinin and neuraminidase, reassortant variants with a mean distance of at least 3 nucleotides to their parent strains get established at a rate of about 10⁻² in units of the neutral point mutation rate. Our inference is based on a new method to map reassortment events from joint genealogies of multiple genome segments, which is tested by extensive simulations. We show that intra-lineage reassortment processes are, on average, under substantial negative selection that increases in strength with increasing sequence distance between the parent strains. The deleterious effects of reassortment manifest themselves in two ways: there are fewer reassortment events than expected from a null model of neutral reassortment, and reassortant strains have fewer descendants than their non-reassortant counterparts. Our results suggest that influenza evolves under ubiquitous epistasis across proteins, which produces fitness barriers against reassortment even between co-circulating strains within one lineage. PMID:29112968
A Self-Replication Model for Long Channelized Lava Flows on the Mars Plains
NASA Technical Reports Server (NTRS)
Baloga, S. M.; Glaze, L. S.
2008-01-01
A model is presented for channelized lava flows emplaced by a self-replicating, levee-building process over long distances on the plains of Mars. Such flows may exhibit morphologic evidence of stagnation, overspills, and upstream breakouts. However, these processes do not inhibit the formation and persistence of a prominent central channel that can often be traced for more than 100 km. The two central assumptions of the self-replication model are (1) the flow advances at the average upstream velocity of the molten core and (2) the fraction of the lava that travels faster than the average upstream velocity forms stationary margins in the advancing distal zone to preserve the self-replication process. For an exemplary 300 km long flow north of Pavonis Mons, the model indicates that 8 m of crust must have formed during emplacement, as determined from the channel and levee dimensions. When combined with independent thermodynamic estimates for the crustal growth rate, relatively narrow constraints are obtained for the flow rate (2250 m³ s⁻¹), emplacement duration (600 d), and the lava viscosity of the molten interior (10⁶ Pa s). Minor, transient overspills and breakouts increase the emplacement time by only a factor of 2. The primary difference between the prodigious channelized Martian flows and their smaller terrestrial counterparts is that high volumetric flow rates must have persisted for many hundreds of days on Mars, in contrast to a few hours or days on Earth.
Speyer, Barbara E; Pizzey, Arnold R; Abramov, Benjamin; Saab, Wael; Doshi, Alpesh; Sarna, Urvashi; Harper, Joyce C; Serhal, Paul
2015-01-01
The sperm chromatin structure assay (SCSA) has been proposed as a useful addition to the battery of tests routinely used to explore semen quality and hence to give an indication of the likelihood of a successful pregnancy. As usually performed at present, the assay yields two main sperm variables, the DNA fragmentation index (DFI) and the high DNA stainability (HDS). In the present study, 275 patients undergoing 215 in vitro fertilization (IVF) and 215 intracytoplasmic sperm injection (ICSI) cycles were studied with the purpose of defining the clinical significance of HDS in IVF and ICSI cycles. Using the Spearman correlation test, there were no statistically significant relationships between %HDS and fertilization rate, rate of embryo growth, blastocyst rate, implantation rate, or live birth rate. Rate of pregnancy loss showed a negative relationship, significant at the 0.05 level, which remains unexplained. It is not known whether the normal practice of using processed sperm for fertilization plays any part in this lack of a negative effect of HDS level upon the stages of the cycle. A total of 16 patients with HDS levels >28% had an average live birth rate of 47.8% and an average pregnancy loss of 8.7%, which compared favourably with the group of patients as a whole.
Klein, Thomas; Wieser, Wolfgang; Reznicek, Lukas; Neubauer, Aljoscha; Kampik, Anselm; Huber, Robert
2013-01-01
We analyze the benefits and problems of in vivo optical coherence tomography (OCT) imaging of the human retina at A-scan rates in excess of 1 MHz, using a 1050 nm Fourier-domain mode-locked (FDML) laser. Different scanning strategies enabled by MHz OCT line rates are investigated, and a simple multi-volume data processing approach is presented. In-vivo OCT of the human ocular fundus is performed at different axial scan rates of up to 6.7 MHz. High quality non-mydriatic retinal imaging over an ultra-wide field is achieved by a combination of several key improvements compared to previous setups. For the FDML laser, long coherence lengths and 72 nm wavelength tuning range are achieved using a chirped fiber Bragg grating in a laser cavity at 419.1 kHz fundamental tuning rate. Very large data sets can be acquired with sustained data transfer from the data acquisition card to host computer memory, enabling high-quality averaging of many frames and of multiple aligned data sets. Three imaging modes are investigated: Alignment and averaging of 24 data sets at 1.68 MHz axial line rate, ultra-dense transverse sampling at 3.35 MHz line rate, and dual-beam imaging with two laser spots on the retina at an effective line rate of 6.7 MHz.
Gearhardt, Ashley N; Rizk, Marianne T; Treat, Teresa A
2014-08-01
Craving and liking are related to eating-related problems, but less is known about the association of specific food characteristics (e.g., sugar, fat) with craving/liking. The relation of individual differences in eating behavior with these craving and liking patterns is also relatively unknown. We examine the nomothetic impact of sugar, fat and processing on food craving and liking and the moderation of these effects by idiographic factors (e.g., Body Mass Index [BMI], hunger). One hundred and five overweight and obese women completed craving and liking ratings on 180 foods that differed in levels of sugar, fat and processing. Food craving was linked positively to fat content, but negatively to sugar. Food liking was associated negatively with sugar content and processing level. Addictive-like eating predicted elevated overall food craving and liking, and increased craving and liking for processed foods. Attempted restriction efforts were unrelated to craving and liking. BMI was associated with less craving for fattier foods and lower liking for the average food. Hunger was associated with increased craving for the average food. These findings highlight the role of fat in cravings and differences in craving and liking based on BMI, loss of control over eating, and hunger. These findings are relevant to theories of problematic eating and the development of eating-related interventions. Copyright © 2014 Elsevier Ltd. All rights reserved.
DHMI: dynamic holographic microscopy interface
NASA Astrophysics Data System (ADS)
He, Xuefei; Zheng, Yujie; Lee, Woei Ming
2016-12-01
Digital holographic microscopy (DHM) is a powerful in-vitro biological imaging tool. In this paper, we report a fully automated off-axis digital holographic microscopy system, complete with a graphical user interface in the Matlab environment. The interface primarily includes Fourier-domain processing, phase reconstruction, aberration compensation and autofocusing. A variety of imaging operations, such as region-of-interest selection, a de-noising mode (filtering and averaging), low-frame-rate imaging for immediate reconstruction and a high-frame-rate imaging routine (~27 fps), are implemented to facilitate ease of use.
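The Fourier-domain processing step mentioned above, demodulating the off-axis spatial carrier, can be sketched as follows. This is a generic single-sideband reconstruction with hypothetical carrier and filter parameters, not the DHMI implementation: the hologram is Fourier-transformed, the +1-order sideband is cropped and re-centered, and an inverse transform returns the complex object field.

```python
import numpy as np

def reconstruct_off_axis(hologram, carrier, radius):
    """Minimal off-axis hologram reconstruction sketch.

    hologram : 2-D real interferogram
    carrier  : (fy, fx) spatial-carrier frequency in FFT-index units
    radius   : half-width of the square sideband crop
    Returns the complex object field; the phase map is np.angle(field).
    """
    H = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = hologram.shape[0] // 2, hologram.shape[1] // 2
    fy, fx = carrier
    # Crop the +1-order sideband around the carrier peak ...
    side = H[cy + fy - radius: cy + fy + radius,
             cx + fx - radius: cx + fx + radius]
    # ... and re-center it at zero frequency (carrier demodulation).
    filtered = np.zeros_like(H)
    filtered[cy - radius: cy + radius, cx - radius: cx + radius] = side
    return np.fft.ifft2(np.fft.ifftshift(filtered))
```

The crop radius acts as a low-pass filter on the object field, so it trades resolution against rejection of the DC term and the conjugate (−1) order.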
Study of Liquid Breakup Process in Solid Rocket Motor Nozzle
2016-02-16
... liquid film flow with the gas flow. The rate of the wave breakup was characterized by introducing breakup length, Ohnesorge number (Oh) and Weber number... liquid film that flows along the wall of a straight test channel while a relatively higher-speed gas moves over it. We have used an unsteady-flow... Reynolds-Averaged Navier-Stokes code (URANS) to investigate the interaction of the liquid film flow with the gas flow.
Estimating Perturbation and Meta-Stability in the Daily Attendance Rates of Six Small High Schools
NASA Astrophysics Data System (ADS)
Koopmans, Matthijs
This paper discusses the daily attendance rates in six small high schools over a ten-year period and evaluates how stable those rates are. “Stability” is approached from two vantage points: pulse models are fitted to estimate the impact of sudden perturbations and their reverberation through the series, and Autoregressive Fractionally Integrated Moving Average (ARFIMA) techniques are used to detect dependencies over the long range of the series. The analyses are meant to (1) exemplify the utility of time series approaches in educational research, which lacks a time series tradition, (2) discuss some time series features that seem to be particular to daily attendance rate trajectories such as the distinct downward pull coming from extreme observations, and (3) present an analytical approach to handle the important yet distinct patterns of variability that can be found in these data. The analysis also illustrates why the assumption of stability that underlies the habitual reporting of weekly, monthly and yearly averages in the educational literature is questionable, as it reveals dynamical processes (perturbation, meta-stability) that remain hidden in such summaries.
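The fractional-integration part of ARFIMA is the piece that captures the long-range dependence discussed above. A minimal Python sketch of the fractional difference operator (1 − B)^d (the generic textbook recursion, not the paper's estimation procedure):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Weights of the fractional difference operator (1 - B)^d, truncated at lag n.

    Long memory in ARFIMA(p, d, q) enters through these slowly decaying
    weights; 0 < d < 0.5 gives stationary long-range dependence.
    """
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        # Binomial-expansion recursion: w_k = w_{k-1} * (k - 1 - d) / k
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1 - B)^d to a series, filtering the long memory out of x."""
    w = frac_diff_weights(d, len(x) - 1)
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])
```

For 0 < d < 0.5 the weights decay hyperbolically (roughly as k^(−1−d)), which is what produces the slowly decaying autocorrelations that distinguish long memory in attendance-rate series from a short-memory ARMA process.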
Utility of an Algorithm to Increase the Accuracy of Medication History in an Obstetrical Setting.
Corbel, Aline; Baud, David; Chaouch, Aziz; Beney, Johnny; Csajka, Chantal; Panchaud, Alice
2016-01-01
In an obstetrical setting, inaccurate medication histories at hospital admission may result in failure to identify potentially harmful treatments for patients and/or their fetus(es). This prospective study was conducted to assess average concordance rates between (1) a medication list obtained with a one-page structured medication history algorithm developed for the obstetrical setting and (2) the medication list reported in medical records and obtained by open-ended questions based on standard procedures. Each list was scored against a best possible medication history, used as the reference (information obtained from interviews with patients, prescribers and community pharmacists), to yield a concordance rate. The algorithm-based method obtained a higher average concordance rate than the standard method, with 90.2% [CI95% 85.8-94.3] versus 24.6% [CI95% 15.3-34.4] concordance rates, respectively (p<0.01). Our algorithm-based method strongly enhanced the accuracy of the medication history in our obstetric population without using substantial resources. Its implementation is an effective first step in the medication reconciliation process, which has been recognized as a very important component of patients' drug safety.
Code of Federal Regulations, 2012 CFR
2012-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...
Code of Federal Regulations, 2013 CFR
2013-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...
Code of Federal Regulations, 2011 CFR
2011-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...
Code of Federal Regulations, 2014 CFR
2014-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...
Yang, Qingling; Zhang, Nan; Zhao, Feifei; Zhao, Wanli; Dai, Shanjun; Liu, Jinhao; Bukhari, Ihtisham; Xin, Hang; Niu, Wenbing; Sun, Yingpu
2015-07-01
The ends of eukaryotic chromosomes contain specialized chromatin structures called telomeres, the length of which plays a key role in early human embryonic development. Although the effect of sperm preparation techniques on major sperm characteristics, such as concentration, motility and morphology have been previously documented, the possible status of telomere length and its relation with sperm preparation techniques is not well-known for humans. The aim of this study was to investigate the role of density gradient centrifugation in the selection of spermatozoa with longer telomeres for use in assisted reproduction techniques in 105 samples before and after sperm processing. After density gradient centrifugation, the average telomere length of the sperm was significantly longer (6.51 ± 2.54 versus 5.16 ± 2.29, P < 0.01), the average motile sperm rate was significantly higher (77.9 ± 11.8 versus 44.6 ± 11.2, P < 0.01), but average DNA fragmentation rate was significantly lower (11.1 ± 5.9 versus 25.9 ± 12.9, P < 0.01) compared with raw semen. Additionally, telomere length was positively correlated with semen sperm count (rs = 0.58; P < 0.01). In conclusion, density gradient centrifugation is a useful technique for selection of sperm with longer telomeres. Copyright © 2015 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
Averaging, passage through resonances, and capture into resonance in two-frequency systems
NASA Astrophysics Data System (ADS)
Neishtadt, A. I.
2014-10-01
Applying small perturbations to an integrable system leads to its slow evolution. For an approximate description of this evolution the classical averaging method prescribes averaging the rate of evolution over all the phases of the unperturbed motion. This simple recipe does not always produce correct results, because of resonances arising in the process of evolution. The phenomenon of capture into resonance consists in the system starting to evolve in such a way as to preserve the resonance property once it has arisen. This paper is concerned with application of the averaging method to a description of evolution in two-frequency systems. It is assumed that the trajectories of the averaged system intersect transversally the level surfaces of the frequency ratio and that certain other conditions of general position are satisfied. The rate of evolution is characterized by a small parameter ε. The main content of the paper is a proof of the following result: outside a set of initial data with measure of order √ε the averaging method describes the evolution to within O(√ε |ln ε|) for periods of time of order 1/ε. This estimate is sharp. The exceptional set of measure √ε contains the initial data for phase points captured into resonance. A description of the motion of such phase points is given, along with a survey of related results on averaging. Examples of capture into resonance are presented for some problems in the dynamics of charged particles. Several open problems are stated. Bibliography: 65 titles.
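In the standard notation for a generic two-frequency system of this type (a schematic setup, not reproducing the paper's precise hypotheses), the slow variables I and fast phases φ = (φ₁, φ₂) evolve as

```latex
\dot I = \varepsilon f(I,\varphi), \qquad
\dot\varphi_i = \omega_i(I) + O(\varepsilon), \quad i = 1,2,
\qquad\text{averaged system:}\quad
\dot J = \varepsilon \langle f\rangle(J), \quad
\langle f\rangle(J) = \frac{1}{(2\pi)^2}\int_0^{2\pi}\!\!\int_0^{2\pi} f(J,\varphi)\, d\varphi_1\, d\varphi_2 .
```

The recipe can fail near resonances, where p ω₁(I) ≈ q ω₂(I) for integers p, q and the phase combination pφ₁ − qφ₂ ceases to be fast; the theorem quoted in the abstract bounds the resulting error by O(√ε |ln ε|) outside an exceptional set of measure of order √ε, which contains the captured trajectories.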
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao
Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s⁻¹. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.
Van Nguyen; Javaid, Abdul Q; Weitnauer, Mary Ann
2014-01-01
We introduce the Spectrum-averaged Harmonic Path (SHAPA) algorithm for estimation of heart rate (HR) and respiration rate (RR) with Impulse Radio Ultrawideband (IR-UWB) radar. Periodic movement of human torso caused by respiration and heart beat induces fundamental frequencies and their harmonics at the respiration and heart rates. IR-UWB enables capture of these spectral components and frequency domain processing enables a low cost implementation. Most existing methods of identifying the fundamental component either in frequency or time domain to estimate the HR and/or RR lead to significant error if the fundamental is distorted or cancelled by interference. The SHAPA algorithm (1) takes advantage of the HR harmonics, where there is less interference, and (2) exploits the information in previous spectra to achieve more reliable and robust estimation of the fundamental frequency in the spectrum under consideration. Example experimental results for HR estimation demonstrate how our algorithm eliminates errors caused by interference and produces 16% to 60% more valid estimates.
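The harmonic-exploiting idea in this abstract can be illustrated with a generic harmonic-summation spectral estimator (a simplified stand-in, not the published SHAPA algorithm; the sampling rate, band limits and harmonic count below are arbitrary): instead of trusting the largest spectral peak, each candidate fundamental is scored by the summed magnitude at its first few harmonics.

```python
import numpy as np

def harmonic_rate_estimate(x, fs, f_lo, f_hi, n_harm=3):
    """Estimate a periodic rate by harmonic summation over the spectrum.

    x     : real-valued signal (e.g. a radar slow-time channel)
    fs    : sampling rate in Hz
    f_lo, f_hi : physiologically plausible band for the fundamental
    n_harm: number of harmonics (including the fundamental) to sum
    """
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))   # windowed magnitude spectrum
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    candidates = freqs[(freqs >= f_lo) & (freqs <= f_hi)]

    def score(f0):
        # Sum spectral magnitude at f0, 2*f0, ..., n_harm*f0 (nearest bins).
        idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
        return spec[idx].sum()

    return max(candidates, key=score)
```

When interference suppresses or distorts the fundamental, the harmonics still vote for the correct rate, which is exactly the failure mode of naive peak-picking that the abstract describes.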
Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao
2017-10-03
Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s-1. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.
Thermal diffusivity and nuclear spin relaxation: a continuous wave free precession NMR study.
Venâncio, Tiago; Engelsberg, Mario; Azeredo, Rodrigo B V; Colnago, Luiz A
2006-07-01
Continuous wave free precession (CWFP) nuclear magnetic resonance is capable of yielding quantitative and easily obtainable information concerning the kinetics of processes that change the relaxation rates of the nuclear spins through the action of some external agent. In the present application, heat flow from a natural rubber sample to a liquid nitrogen thermal bath caused a large temperature gradient leading to a non-equilibrium temperature distribution. The ensuing local changes in the relaxation rates could be monitored by the decay of the CWFP signals and, from the decays, it was possible to ascertain the prevalence of a diffusive process and to obtain an average value for the thermal diffusivity.
Böhlke, J.K.; Harvey, J.W.; Voytek, M.A.
2004-01-01
We conducted an in-stream tracer experiment with Br and 15N-enriched NO3- to determine the rates of denitrification and related processes in a gaining NO3- -rich stream in an agricultural watershed in the upper Mississippi basin in September 2001. We determined reach-averaged rates of N fluxes and reactions from isotopic analyses of NO3-, NO2-, N2, and suspended particulate N in conjunction with other data in a 1.2-km reach by using a forward time-stepping numerical simulation that included groundwater discharge, denitrification, nitrification, assimilation, and air-water gas exchange with changing temperature. Denitrification was indicated by a systematic downstream increase in the δ15N values of dissolved N2. The reach-averaged rate of denitrification of surface-water NO3- indicated by the isotope tracer was approximately 120 ± 20 µmol m-2 h-1 (corresponding to zero- and first-order rate constants of 0.63 µmol L-1 h-1 and 0.009 h-1, respectively). The overall rate of NO3- loss by processes other than denitrification (between 0 and about 200 µmol m-2 h-1) probably was less than the denitrification rate but had a large relative uncertainty because the NO3- load was large and was increasing through the reach. The rates of denitrification and other losses would have been sufficient to reduce the stream NO3- load substantially in the absence of NO3- sources, but the losses were more than offset by nitrification and groundwater NO3- inputs at a combined rate of about 500-700 µmol m-2 h-1. Despite the importance of denitrification, the overall mass fluxes of N2 were dominated by discharge of denitrified groundwater and air-water gas exchange in response to changing temperature, whereas the flux of N2 attributed to denitrification was relatively small.
The in-stream isotope tracer experiment provided a sensitive direct reach-scale measurement of denitrification and related processes in a NO3- -rich stream where other mass-balance methods were not suitable because of insufficient sensitivity or offsetting sources and sinks. Despite the increasing NO3- load in the experimental reach, the isotope tracer data indicate that denitrification was a substantial permanent sink for N leaving this agricultural watershed during low-flow conditions.
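The quoted areal rate and the volumetric rate constants are linked through mean stream depth and NO3- concentration. The sketch below reproduces the paper's numbers; the depth (~0.19 m) and NO3- concentration (~70 µmol/L) are back-calculated assumptions, not values reported in the abstract.

```python
# Convert a reach-averaged areal denitrification rate into zero- and
# first-order volumetric rate constants. Depth and NO3- concentration
# are illustrative values back-calculated to match the quoted constants.
areal_rate = 120.0       # µmol N m^-2 h^-1 (from the tracer experiment)
depth_m = 0.19           # assumed mean stream depth, m
no3_umol_per_L = 70.0    # assumed NO3- concentration, µmol/L

litres_per_m2 = depth_m * 1000.0   # water volume over 1 m^2 of streambed
k0 = areal_rate / litres_per_m2    # zero-order constant, µmol L^-1 h^-1
k1 = k0 / no3_umol_per_L           # first-order constant, h^-1

print(round(k0, 2), round(k1, 3))  # ≈ 0.63 µmol/L/h and 0.009 h^-1
```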
When do glaciated landscapes form?
NASA Astrophysics Data System (ADS)
Koppes, M. N.
2015-12-01
Glacial erosion is a fundamental link between climate and the tectonic and surface processes that create topography. Mountain ranges worldwide have undergone large-scale modification due to the erosive action of ice masses, yet the mechanisms that control the timing of this modification and the rate at which ice erodes remain poorly understood. Available data report a wide range of erosion rates from individual ice masses over varying timescales, from the modern to the orogenic. Recent numerical modeling efforts have focused on replicating the processes that produce the geomorphic signatures of glacial landscapes. Central to these models is a simple index that relates erosion rate to ice dynamics. To provide a quantitative test of the links between glacial erosion, sliding and ice discharge, we examined explicitly the factors controlling modern glacier erosion rates across climatic regimes, from Patagonia to the Antarctic Peninsula (AP). We find that modern, basin-averaged erosion rates vary by three orders of magnitude, from 1 to >10 mm yr-1 in Patagonia down to 0.01 to <0.1 mm yr-1 in the AP, largely as a function of temperature and basal thermal regime. Erosion rates also increase non-linearly with both the sliding speed and the ice flux through the ELA, in accord with theories of glacial erosion. Notably, erosion rates decrease by over two orders of magnitude between temperate and polar glaciers with similar discharge rates. The difference in erosion rates between temperate and colder glaciers of similar shape and size is primarily related to the abundance of meltwater accessing the bed. Since all glaciers worldwide have experienced colder-than-current climatic conditions, the 100-fold decrease in long-term relative to modern erosion rates may in part reflect the temporal averaging of temperate and polar conditions over the lifecycle of these glaciers. Hence, climatic variation, more than the extent of ice cover or tectonic changes, controls the pace at which glaciers shape mountains.
Matthews, Jennifer L; Murphy, Joy M; Carmichael, Carrie; Yang, Huiping; Tiersch, Terrence; Westerfield, Monte; Varga, Zoltan M
2018-01-25
Sperm cryopreservation is a highly efficient method for preserving genetic resources. It extends the reproductive period of males and significantly reduces costs normally associated with maintenance of live animal colonies. However, previous zebrafish (Danio rerio) cryopreservation methods have produced variable outcomes and low post-thaw fertilization rates. To improve post-thaw fertilization rates after cryopreservation, we developed a new extender and cryoprotective medium (CPM), introduced quality assessment (QA), determined the optimal cooling rate, and improved the post-thaw in vitro fertilization process. We found that the hypertonic extender E400 preserved motility of sperm held on ice for at least 6 h. We implemented QA by measuring sperm cell densities with a NanoDrop spectrophotometer and sperm motility with computer-assisted sperm analysis (CASA). We developed a CPM, RMMB, which contains raffinose, skim milk, methanol, and bicine buffer. Post-thaw motility indicated that the optimal cooling rate in two types of cryogenic vials was between 10 and 15°C/min. Test thaws from this method produced average motility of 20% ± 13% and an average post-thaw fertilization rate of 68% ± 16%.
A high-efficiency real-time digital signal averager for time-of-flight mass spectrometry.
Wang, Yinan; Xu, Hui; Li, Qingjiang; Li, Nan; Huang, Zhengxu; Zhou, Zhen; Liu, Husheng; Sun, Zhaolin; Xu, Xin; Yu, Hongqi; Liu, Haijun; Li, David D-U; Wang, Xi; Dong, Xiuzhen; Gao, Wei
2013-05-30
Analog-to-digital converter (ADC)-based acquisition systems are widely applied in time-of-flight mass spectrometers (TOFMS) owing to their ability to record the signal intensity of all ions within the same pulse. However, as the conversion rate and resolution of the ADC increase, so does the data throughput that the acquisition system must sustain. It is therefore of considerable interest to develop a high-performance real-time acquisition system that eases this data-throughput limitation. We present in this work a high-efficiency real-time digital signal averager, consisting of a signal conditioner, a data conversion module and a signal processing module. Two optimization strategies are implemented using field programmable gate arrays (FPGAs) to enhance the efficiency of the real-time processing: a pipeline procedure reduces the time consumed by the accumulation strategy, and a high-efficiency transmission strategy based on a ping-pong procedure realizes continuous data transfer. The digital signal averager features good responsiveness, analog bandwidth and dynamic performance; the optimal effective number of bits reaches 6.7 bits. For a 32 µs record length, the averager achieves 100% efficiency at extraction frequencies below 31.23 kHz by modifying the number of accumulation steps. In unit time, the averager yields superior signal-to-noise ratio (SNR) compared with data accumulation in a computer. The digital signal averager is combined with a vacuum ultraviolet single-photon ionization time-of-flight mass spectrometer (VUV-SPI-TOFMS), and the efficiency of the real-time processing is tested by analyzing the volatile organic compounds (VOCs) from ordinary printed materials. In these experiments, 22 kinds of compounds are detected, and the dynamic range exceeds 3 orders of magnitude. Copyright © 2013 John Wiley & Sons, Ltd.
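The SNR payoff of on-board accumulation follows from basic statistics: averaging N repetitive transients keeps the coherent signal fixed while the noise standard deviation shrinks as 1/sqrt(N). A minimal simulation (synthetic peak and noise levels are assumptions, not the instrument's figures):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts, n_records, noise_std = 512, 100, 0.5

# A repetitive "mass peak" buried in noise, as in summed TOF transients.
t = np.arange(n_pts)
peak = np.exp(-0.5 * ((t - 256) / 4.0) ** 2)   # unit-amplitude Gaussian peak
records = peak + noise_std * rng.standard_normal((n_records, n_pts))

# Averaging 100 records: signal stays at ~1, noise drops ~10x (0.5 -> ~0.05).
avg = records.mean(axis=0)
residual_std = np.std(avg[:200])   # baseline region, far from the peak
```

With 100 accumulations the baseline noise falls by roughly a factor of ten, which is why summing in the FPGA before transfer yields better SNR per unit time than shipping raw transients to a computer.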
Astier-Peña, M P; Barrasa-Villar, I; García-Mata, J R; Aranaz-Andrés, J; Enriquez-Martín, N; Vela-Marquina, M L
2010-01-01
The experience and learning process of preparing a scientific conference programme, organising and conducting a conference ccompletes the quality circle with the quantitative and qualitative assessment of the process and results. The transmission of this experience and learning process through this paper will improve the performance of committees of future conference venues, partners and participants and collaborators. The method for performing this evaluation is the assessment of the activities of both the scientific and organising committees of the XXVI Conference of the Spanish Society of Quality Healthcare in October 2008 in Zaragoza. The methodology evaluated the observance of the timetable and tasks assigned to the committees in the Congress Manual of the society along with the presentation of final results of the congress concerning scientific participation and overall satisfaction. There were a total of 1211 communications with a rejection rate of 9.1%. Of the total, 577 communications were presented in oral format and 544 in poster format. Aragon was the community of origin of 24% of communications. By subject areas, those of most interest were patient safety, organisational and management processes, and patient perspectives. A total of 83 participants attended 7 of the 11 workshops offered. The average attendance for each workshop was 12 people. The response rate to the assessment of workshops questionnaire was 54.2% with an average score of 4 (scale of 1 to 5). A total of 1131 people attended the conference of which 17% (193) were SECA associates. Out of a total of 1075 overall satisfaction conference questionnaires distributed there was a response rate of 9.30% (100). The scientific content was assessed with an average score of 3.6 and the organization with 3.87, both on a total score of 5 points. 
According to the number of abstracts received, number of conferences, level of satisfaction with the scientific program and organisation, we can conclude that the XXVI Conference of the Society has been a success, although we are still in our continuous quality improvement circle that will make conferences even better. Copyright © 2010 SECA. Published by Elsevier Espana. All rights reserved.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
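For reference, the bound in question has the standard form below (a sketch in Gallager's usual notation; the symbols are the textbook definitions, not taken from this paper):

```latex
\bar{P}_e \;\le\; \exp\{-N\,E_r(R)\}, \qquad
E_r(R) \;=\; \max_{0 \le \rho \le 1}\bigl[\,E_0(\rho) - \rho R\,\bigr],
```

where $N$ is the block length, $R$ the transmission rate, and $E_0(\rho)$ the Gallager function of the channel. The paper's result concerns how tightly this exponent describes the ensemble-average error probability itself, rather than that of the best code.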
Sintering activation energy of MoSi2-WSi2-Si3N4 ceramic
NASA Astrophysics Data System (ADS)
Titov, D. D.; Lysenkov, A. S.; Kargin, Yu F.; Frolova, M. G.; Gorshkov, V. A.; Perevislov, S. N.
2018-04-01
The activation energy of the sintering process was calculated from dilatometric studies of the shrinkage of (Mo,W)Si2 + Si3N4 composite ceramic. The (Mo,W)Si2 powder, a solid-phase solution of 70 wt% MoSi2 and 30 wt% WSi2, was obtained by SHS at ISMAN RAS. The Si3N4 content ranged from 1 to 15 wt%. Sintering was carried out up to 1850°C in an Ar atmosphere at heating rates of 5, 10, 12 and 15°C/min in dilatometer tests. Using the differential kinetic analysis (Friedman) method, the activation energy of the sintering process of (Mo,W)Si2 + Si3N4 was calculated. A two-stage sintering process and a dependence of the activation energy on the Si3N4 content were observed. An average value of Q of 370 kJ/mol was obtained.
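Friedman's isoconversional method extracts Q from the Arrhenius slope of ln(dα/dt) against 1/T at a fixed degree of conversion α. The sketch below recovers a synthetic Q = 370 kJ/mol; the pre-exponential factor and the temperatures are arbitrary illustration values, not the paper's data.

```python
import numpy as np

R = 8.314        # gas constant, J mol^-1 K^-1
Q_true = 370e3   # J/mol, the paper's average activation energy
A = 1e9          # arbitrary pre-exponential factor, s^-1

# Shrinkage rates at a fixed conversion, reached at different temperatures
# in runs with different heating rates (synthetic, noise-free data).
T = np.array([1800.0, 1900.0, 2000.0, 2100.0])   # K
rate = A * np.exp(-Q_true / (R * T))             # dα/dt at fixed α

# Friedman plot: ln(rate) vs 1/T is a straight line with slope -Q/R.
slope, _ = np.polyfit(1.0 / T, np.log(rate), 1)
Q_est = -slope * R
```

In practice each heating-rate run contributes one (T, dα/dt) point per conversion level, and the fit is repeated across α to reveal the two-stage behaviour reported above.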
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorin, Thomas, E-mail: thomas.dorin@deakin.edu.au; Wood, Kathleen; Taylor, Adam
2016-02-15
A high strength low alloy steel composition has been melted and processed by two different routes: simulated direct strip casting and slow cooled ingot casting. The microstructures were examined with scanning and transmission electron microscopy, atom probe tomography and small angle neutron scattering (SANS). The formation of cementite (Fe3C), manganese sulphides (MnS) and niobium carbo-nitrides (Nb(C,N)) was investigated in both casting conditions. The sulphides were found to be significantly refined by the higher cooling rate, developing an average diameter of only 100 nm in the fast cooled sample, and a diameter too large to be measured with SANS in the slow cooled condition (> 1.1 μm). Slow cooling resulted in the development of classical Nb(C,N) precipitation, with an average diameter of 7.2 nm. However, after rapid cooling both the SANS and atom probe tomography data indicated that the Nb was retained in the matrix as a random solid solution. There was also some evidence that O, N and S are retained in solid solution at levels not found during conventional processing. - Highlights: • The influence of cooling rate on microstructure is investigated in a HSLA steel. • SANS, TEM and APT are used to characterise the sulphides and Nb(C,N) precipitates. • The slow cooling rate results in the formation of Nb(C,N) precipitates. • The fast cooling rate results in a microstructure supersaturated in Nb, C and N. • The sulphides are 100 nm in the fast cooled sample and > 1 μm in the slow cooled one.
Ruiter-Ligeti, Jacob; Agbo, Chioma; Dahan, Michael
2017-06-01
The objective of this retrospective study was to evaluate the effect of semen processing on computer analyzed semen parameters and pregnancy rates after intrauterine insemination (IUI). Over a two-year period, a total of 981 couples undergoing 2231 IUI cycles were evaluated and the freshly collected non-donor semen was analyzed before and after density gradient centrifugation (DGC). DGC led to significant increases in sperm concentration by 66±74 ×106/mL (P=0.0001), percentage of motile sperm by 24±22% (P=0.0001), concentration motile by 27±58 ×106/mL (P=0.0001), and forward sperm progression by 18±14 µm/s (P=0.0001). In 95% of cases, there was a decrease in the total motile sperm count (TMSC), with an average decrease of 50±124% compared to pre-processed samples (P=0.0001). Importantly, the decrease in TMSC did not negatively affect pregnancy rates (P=0.45). This study proves that DGC leads to significant increases in most sperm parameters, with the exception of TMSC. Remarkably, the decrease in TMSC did not affect the pregnancy rate. This should reassure clinicians when the TMSC is negatively affected by processing.
Observations on the predictive value of short-term stake tests
Stan Lebow; Bessie Woodward; Patricia Lebow
2008-01-01
This paper compares average ratings of test stakes after 3, 4, 5, and 7 years exposure to their subsequent ratings after 11 years. Average ratings from over 200 treatment groups exposed in plots in southern Mississippi were compared to average ratings of a reference preservative. The analysis revealed that even perfect ratings after three years were not a reliable...
Conversion of cardiac performance data in analog form for digital computer entry
NASA Technical Reports Server (NTRS)
Miller, R. L.
1972-01-01
A system is presented which will reduce analog cardiac performance data and convert the results to digital form for direct entry into a commercial time-shared computer. Circuits are discussed which perform the measurement and digital conversion of instantaneous systolic and diastolic parameters from the analog blood pressure waveform. Digital averaging over a selected number of heart cycles is performed on these measurements, as well as those of flow and heart rate. The determination of average cardiac output and peripheral resistance, including trends, is the end result after processing by digital computer.
Rogers, Katherine H; Biesanz, Jeremy C
2015-12-01
There are strong differences between individuals in the tendency to view the personality of others as similar to the average person. That is, some people tend to form more normatively accurate impressions than do others. However, the process behind the formation of normatively accurate first impressions is not yet fully understood. Given that the average individual's personality is highly socially desirable (Borkenau & Zaltauskas, 2009; Wood, Gosling & Potter, 2007), individuals may achieve high normative accuracy by viewing others as similar to the average person or by viewing them in an overly socially desirable manner. The average self-reported personality profile and social desirability, despite being strongly correlated, independently and strongly predict first impressions. Further, some individuals have a more accurate understanding of the average individual's personality than do others. Perceivers with more accurate knowledge about the average individual's personality rated the personality of specific others more normatively accurately (more similar to the average person), suggesting that individual differences in normative judgments include a component of accurate knowledge regarding the average personality. In contrast, perceivers who explicitly evaluated others more positively formed more socially desirable impressions, but not more normatively accurate impressions. (c) 2015 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Hellaby, Charles
2012-01-01
A new method for constructing exact inhomogeneous universes is presented, that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal, that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
Spheroidization of molybdenum powder by radio frequency thermal plasma
NASA Astrophysics Data System (ADS)
Liu, Xiao-ping; Wang, Kuai-she; Hu, Ping; Chen, Qiang; Volinsky, Alex A.
2015-11-01
To control the morphology and particle size of dense spherical molybdenum powder prepared by radio frequency (RF) plasma from irregular molybdenum powder as a precursor, plasma process parameters were optimized in this paper. The effects of the carrier gas flow rate and molybdenum powder feeding rate on the shape and size of the final products were studied. The molybdenum powder morphology was examined using high-resolution scanning electron microscopy. The powder phases were analyzed by X-ray diffraction. The tap density and apparent density of the molybdenum powder were investigated using a Hall flow meter and a Scott volumeter. The optimal process parameters for the spherical molybdenum powder preparation are 50 g/min powder feeding rate and 0.6 m3/h carrier gas rate. In addition, pure spherical molybdenum powder can be obtained from irregular powder, and the tap density is enhanced after plasma processing. The average size is reduced from 72 to 62 µm, and the tap density is increased from 2.7 to 6.2 g/cm3. Therefore, RF plasma is a promising method for the preparation of high-density and high-purity spherical powders.
Adaptive model predictive process control using neural networks
Buescher, K.L.; Baum, C.C.; Jones, R.D.
1997-08-19
A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.
Adaptive model predictive process control using neural networks
Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.
1997-01-01
A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.
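The two-rate scheme (fast input sampling, slower network state vector built from averaged inputs) amounts to block-averaging the fast control samples over each gap. A minimal sketch, with the rate ratio m as an assumed parameter rather than anything specified in the patent:

```python
import numpy as np

def gapped_average(u_fast, m):
    """Average fast-rate control samples in non-overlapping blocks of m,
    producing the slower-rate values used to build the MPC state vector."""
    n = (len(u_fast) // m) * m   # drop any incomplete trailing block
    return np.asarray(u_fast[:n], dtype=float).reshape(-1, m).mean(axis=1)

u_fast = np.arange(12.0)             # 12 fast-rate control samples
u_slow = gapped_average(u_fast, 4)   # -> [1.5, 5.5, 9.5]
```

Each slow-rate value summarizes one gapped period, so the network state vector sees smoothed inputs while the controller still issues commands at the fast rate.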
Towards a standard for the dynamic measurement of pressure based on laser absorption spectroscopy
Douglass, K O; Olson, D A
2016-01-01
We describe an approach for creating a standard for the dynamic measurement of pressure based on the measurement of fundamental quantum properties of molecular systems. From the linewidth and intensities of ro-vibrational transitions we plan on making an accurate determination of pressure and temperature. The goal is to achieve an absolute uncertainty for time-varying pressure of 5 % with a measurement rate of 100 kHz, which will in the future serve as a method for the traceable calibration of pressure sensors used in transient processes. To illustrate this concept we have used wavelength modulation spectroscopy (WMS), due to inherent advantages over direct absorption spectroscopy, to perform rapid measurements of carbon dioxide in order to determine the pressure. The system records the full lineshape profile of a single ro-vibrational transition of CO2 at a repetition rate of 4 kHz and with a systematic measurement uncertainty of 12 % for the linewidth measurement. A series of pressures were measured at a rate of 400 Hz (10 averages) and from these measurements the linewidth was determined with a relative uncertainty of about 0.5 % on average. The pressures measured using WMS have an average difference of 0.6 % from the absolute pressure measured with a capacitance diaphragm sensor. PMID:27881884
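The linewidth-to-pressure step rests on collisional (pressure) broadening being linear in pressure: the Lorentzian FWHM is 2γp, with γ the pressure-broadening half-width coefficient. A sketch of the inversion; the γ value and the measured FWHM below are rough illustrative figures, not the calibrated coefficients of the CO2 transition used in the paper.

```python
# Infer pressure from a measured Lorentzian FWHM, assuming purely
# collisional broadening: FWHM = 2 * gamma * p (gamma = HWHM per atm).
gamma_hwhm = 0.07   # cm^-1 atm^-1, illustrative CO2 broadening coefficient
fwhm_meas = 0.014   # cm^-1, hypothetical measured Lorentzian FWHM

p_atm = fwhm_meas / (2.0 * gamma_hwhm)
print(p_atm)  # ≈ 0.1 atm
```

In a real implementation the Doppler contribution (which yields temperature) must be separated from the collisional width, e.g. via a Voigt fit, before applying this relation.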
Predicting Psychotherapy Dropouts: A Multilevel Approach.
Kegel, Alexander F; Flückiger, Christoph
2015-01-01
The role of therapeutic processes in predicting premature termination of psychotherapy has been a particular focus of recent research. The purpose of this study was to contrast outpatients who completed therapy and those who dropped out with respect to their self-reported in-session experiences of self-esteem, mastery, clarification and the therapeutic alliance. The 296 patients with mixed disorders were treated with an integrative form of cognitive-behavioural therapy without pre-determined time limit (M = 20.2 sessions). Multilevel analyses indicated that patients who did not complete treatment reported, on average, lower levels of self-esteem, mastery and clarification and lower ratings of their therapeutic alliance in treatment in contrast to patients who completed therapy. Patient-reported change in self-esteem experiences over the course of treatment turned out to be the strongest predictor of dropout from psychotherapy or successful completion. When dropout occurred before the average treatment length was reached, patients reported fewer clarifying experiences as early as the first session and their ratings of the therapeutic alliance were characterized by an absence of positive development. Both of these aspects seem to be involved in patients' decisions to leave treatment early. The findings underscore the importance of the therapeutic process in understanding the mechanisms behind treatment dropout. Analyzes data from 296 patients at a private outpatient clinic in a routine practice setting (CBT). Completer/dropout definition: presence or absence of the measurement battery at post-assessment. Focuses on change in therapy processes by investigating post-session reports. Finds that positive change in self-esteem experiences is the most robust predictor of dropout, followed by ratings of clarification experiences and the global alliance.
In line with recent dropout research, these process indicators might help to detect therapeutic situations that are connected with psychotherapy dropouts. Copyright © 2014 John Wiley & Sons, Ltd.
Baudisch, M; Hemmer, M; Pires, H; Biegert, J
2014-10-15
The performance of potassium niobate (KNbO₃), MgO-doped periodically poled lithium niobate (MgO:PPLN), and potassium titanyl arsenate (KTA) were experimentally compared for broadband mid-wave infrared parametric amplification at a high repetition rate. The seed pulses, with an energy of 6.5 μJ, were amplified using 410 μJ pump energy at 1064 nm to a maximum pulse energy of 28.9 μJ at 3 μm wavelength and at a 160 kHz repetition rate in MgO:PPLN while supporting a transform limited duration of 73 fs. The high average powers of the interacting beams used in this study revealed average power-induced processes that limit the scaling of optical parametric amplification in MgO:PPLN; the pump peak intensity was limited to 3.8 GW/cm² due to nonpermanent beam reshaping, whereas in KNbO₃ an absorption-induced temperature gradient in the crystal led to permanent internal distortions in the crystal structure when operated above a pump peak intensity of 14.4 GW/cm².
Code of Federal Regulations, 2012 CFR
2012-07-01
....8, and shall conduct subsequent performance tests as requested by the Administrator, using the... Administrator when necessitated by process variables or other factors. (4) For Method 5 of appendix A of this... to the Administrator's satisfaction suitable methods to determine the average hourly heat input rate...
[Sociodemographic indicators of the Andean Pact countries].
1991-12-01
The Andean Pact, also known as the Cartagena Accord, was signed on May 26, 1969, with the goal of promoting the socioeconomic integration of the countries of the subregion (Venezuela, Colombia, Ecuador, Peru, and Bolivia). 1992 marked a new stage in the Andean Pact by the consolidation of the integration process with the establishment of the Andean Free Trade Zone, allowing the uninhibited flow of goods. The subregion's population was 91.8 million in 1990, the most populous being Colombia with 32.9 million people. 71.5% of the total population (65 million people) live in cities with a high level of urban growth. During the period of 1990-95 the average rate of population growth was the highest in Bolivia with 2.8% and the lowest in Colombia with 1.95%. By comparison, the rate of growth was 0.2% in western Europe and 0.7% in the US. According to projections of the UN, approximately 113 million people will be living in the subregion in the year 2000. The indices of life expectancy and infant mortality have improved in recent decades; however, they are still poor compared to those of developed countries. The highest rate of infant mortality was registered in Bolivia with 93/1000 live births, followed by Peru with 76/1000, Ecuador with 53/1000, Colombia with 37/1000, and Venezuela with 33/1000 live births. The average rate of European countries is 7/1000 live births. Life expectancy increased from an average of 50 years in 1950 to 65.4 years in 1990. In 1990, average life expectancy was 76 years in the US, indicating that there are significant differences in medical care and social security between the countries of the region and developed countries.
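The UN projection quoted above is consistent with simple compound growth. In the sketch below, the 2.1 %/yr figure is an assumed subregional average lying between the quoted national extremes (1.95 % for Colombia and 2.8 % for Bolivia), not a rate stated in the text.

```python
# Project the 1990 subregional population to 2000 under compound growth.
pop_1990_millions = 91.8
growth_rate = 0.021   # assumed average annual rate for the subregion
years = 10

pop_2000 = pop_1990_millions * (1.0 + growth_rate) ** years
print(round(pop_2000, 1))  # ≈ 113.0 million, matching the UN figure quoted
```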
NASA Astrophysics Data System (ADS)
Meichsner, Jürgen; Wegner, Thomas
2018-05-01
Inductively coupled RF plasmas (ICP) in oxygen at low pressure have been intensively studied as a molecular and electronegative model system in the last funding period of the Collaborative Research Centre 24 "Fundamentals of Complex Plasmas". The ICP configuration consists of a planar coil inside a quartz cylinder as dielectric barrier which is immersed in a large stainless steel vacuum chamber. In particular, the E-H mode transition has been investigated, combining experimental results from comprehensive plasma diagnostics as input for analytical rate equation calculation of a volume averaged global model. The averaged density was determined for electrons, negative ions O-, molecular oxygen ground state O2(X3 Σg-) and singlet metastable state O2(a1 Δg) from line-integrated measurements using 160 GHz Gaussian beam microwave interferometry coupled with laser photodetachment experiment and VUV absorption spectroscopy, respectively. Taking into account the relevant elementary processes and rate coefficients from literature together with the measured temperatures and averaged density of electrons, O2(X3 Σg-) and O2(a1 Δg) the steady state density was calculated for O(3P), O2(b1 Σg+), O(1D), O(1S), O3, O-, O2-, and O3-, respectively. The averaged density of negative ions O- from the rate equation calculation is compared with the measured one. The normalized source and loss rates are discussed for O(3P), O2(b1 Σg+) and O-. Contribution to the Topical Issue "Fundamentals of Complex Plasmas", edited by Jürgen Meichsner, Michael Bonitz, Holger Fehske, Alexander Piel.
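A volume-averaged global model of the kind used here balances production and loss terms for each species and solves for steady-state densities. The sketch below is a deliberately minimal one-species balance for O-; the rate coefficients and densities are hypothetical placeholders, not the measured values:

```python
def steady_state_O_minus(n_e, n_O2, k_att, k_det, k_mn, n_pos):
    """0-D particle balance for O-: production by dissociative attachment
    (k_att * n_e * n_O2) equals loss by electron detachment
    (k_det * n_e * n_minus) plus ion-ion mutual neutralization
    (k_mn * n_pos * n_minus); solve for n_minus."""
    production = k_att * n_e * n_O2
    loss_per_density = k_det * n_e + k_mn * n_pos
    return production / loss_per_density

# hypothetical values (densities in cm^-3, coefficients in cm^3 s^-1)
n_minus = steady_state_O_minus(n_e=1e10, n_O2=1e14, k_att=1e-10,
                               k_det=1e-9, k_mn=1e-7, n_pos=1e10)
```

The full model in the paper couples many such balances (O(3P), O2(b1 Σg+), ozone, and the negative ions) and closes them with the measured electron and metastable densities.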
Automatic user customization for improving the performance of a self-paced brain interface system.
Fatourechi, Mehrdad; Bashashati, Ali; Birch, Gary E; Ward, Rabab K
2006-12-01
Customizing the parameter values of brain interface (BI) systems by a human expert has the advantage of being fast and computationally efficient. However, as the number of users and EEG channels grows, this process becomes increasingly time consuming and exhausting. Manual customization also introduces inaccuracies in the estimation of the parameter values. In this paper, the performance of a self-paced BI system whose design parameter values were automatically user customized using a genetic algorithm (GA) is studied. The GA automatically estimates the shapes of movement-related potentials (MRPs), whose features are then extracted to drive the BI. Offline analysis of the data of eight subjects revealed that automatic user customization improved the true positive (TP) rate of the system by an average of 6.68% over that whose customization was carried out by a human expert, i.e., by visually inspecting the MRP templates. On average, the best improvement in the TP rate (an average of 9.82%) was achieved for four individuals with spinal cord injury. In this case, the visual estimation of the parameter values of the MRP templates was very difficult because of the highly noisy nature of the EEG signals. For four able-bodied subjects, for which the MRP templates were less noisy, the automatic user customization led to an average improvement of 3.58% in the TP rate. The results also show that the inter-subject variability of the TP rate is also reduced compared to the case when user customization is carried out by a human expert. These findings provide some primary evidence that automatic user customization leads to beneficial results in the design of a self-paced BI for individuals with spinal cord injury.
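The genetic-algorithm customization described can be sketched generically. The fitness function below is a stand-in for the paper's actual objective (the TP rate obtained from the estimated MRP templates), and all GA settings are illustrative:

```python
import random

def genetic_search(fitness, n_params, pop_size=20, generations=40,
                   mutation_scale=0.1, seed=0):
    """Minimal real-valued genetic algorithm: keep the fittest half of the
    population (truncation selection), then refill with blend-crossover
    children perturbed by Gaussian mutation. Maximizes `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([(x + y) / 2 + rng.gauss(0, mutation_scale)
                             for x, y in zip(a, b)])
        pop = elite + children
    return max(pop, key=fitness)

# stand-in fitness with a known optimum at (0.3, 0.7), mimicking
# "TP rate as a function of two template parameters"
best = genetic_search(lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2), 2)
```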
Temporal variation in the importance of a dominant consumer to stream nutrient cycling
Griffiths, Natalie A.; Hill, Walter
2014-06-19
Animal excretion can be a significant nutrient flux within ecosystems, where it supports primary production and facilitates microbial decomposition of organic matter. The effects of excretory products on nutrient cycling have been documented for various species and ecosystems, but temporal variation in these processes is poorly understood. We examined variation in excretion rates of a dominant grazing snail, Elimia clavaeformis, and its contribution to nutrient cycling, over the course of 14 months in a well-studied, low-nutrient stream (Walker Branch, east Tennessee, USA). Biomass-specific excretion rates of ammonium varied over twofold during the study, coinciding with seasonal changes in food availability (measured as gross primary production) and water temperature (multiple linear regression, R2 = 0.57, P = 0.053). The contribution of ammonium excretion to nutrient cycling varied with seasonal changes in both biological (that is, nutrient uptake rate) and physical (that is, stream flow) variables. On average, ammonium excretion accounted for 58% of stream water ammonium concentrations, 26% of whole-stream nitrogen demand, and 66% of autotrophic nitrogen uptake. Phosphorus excretion by Elimia was contrastingly low throughout the year, supplying only 1% of total dissolved phosphorus concentrations. The high average N:P ratio (89:1) of snail excretion likely exacerbated phosphorus limitation in Walker Branch. To fully characterize animal excretion rates and effects on ecosystem processes, multiple measurements through time are necessary, especially in ecosystems that experience strong seasonality.
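The percent-contribution figures above reduce to scaling a biomass-specific excretion rate up to an areal flux and dividing by the corresponding ambient pool or demand. A minimal sketch of that bookkeeping (the numbers are hypothetical placeholders, not the Walker Branch data):

```python
def excretion_pct_of_demand(biomass_g_m2, excretion_ug_g_h, demand_ug_m2_h):
    """Scale a biomass-specific NH4+ excretion rate (ug N / g / h) to an
    areal flux (ug N / m^2 / h) and express it as a percentage of
    whole-stream nitrogen demand."""
    areal_flux = biomass_g_m2 * excretion_ug_g_h
    return 100.0 * areal_flux / demand_ug_m2_h

# hypothetical snail biomass, per-gram excretion rate, and stream N demand
pct = excretion_pct_of_demand(biomass_g_m2=10.0, excretion_ug_g_h=1.3,
                              demand_ug_m2_h=50.0)
```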
Resnick, Cory M; Daniels, Kimberly M; Flath-Sporn, Susan J; Doyle, Michael; Heald, Ronald; Padwa, Bonnie L
2016-11-01
To determine the effects on time, cost, and complication rates of integrating physician assistants (PAs) into the procedural components of an outpatient oral and maxillofacial surgery practice. This is a prospective cohort study of patients from the Department of Plastic and Oral Surgery at Boston Children's Hospital who underwent removal of 4 impacted third molars with intravenous sedation in our outpatient facility. Patients were separated into the "no PA group" and PA group. Process maps were created to capture all activities from room preparation to patient discharge, and all activities were timed for each case. A time-driven activity-based costing method was used to calculate the average times and costs from the provider's perspective for each group. Complication rates were calculated during the periods for both groups. Descriptive statistics were calculated, and significance was set at P < .05. The total process time did not differ significantly between groups, but the average total procedure cost decreased by $75.08 after the introduction of PAs (P < .001). The time that the oral and maxillofacial surgeon was directly involved in the procedure decreased by an average of 19.2 minutes after the introduction of PAs (P < .001). No significant differences in postoperative complications were found. The addition of PAs into the procedural components of an outpatient oral and maxillofacial surgery practice resulted in decreased costs whereas complication rates remained constant. The increased availability of the oral and maxillofacial surgeon after the incorporation of PAs allows for more patients to be seen during a clinic session, which has the potential to further increase efficiency and revenue. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
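Time-driven activity-based costing, as used in this study, prices each mapped activity as its measured duration times the per-minute capacity cost of the personnel performing it. A minimal sketch (the activities, durations, and cost rates are hypothetical, not the study's figures):

```python
def tdabc_cost(activities):
    """Total cost = sum over process-map activities of
    duration (minutes) * capacity cost rate ($ per minute)."""
    return sum(minutes * rate for _name, minutes, rate in activities)

# hypothetical process maps: (activity, minutes, $/minute of that provider)
with_pa = [("room preparation", 10, 0.50), ("extraction (surgeon)", 15, 8.00),
           ("closure and monitoring (PA)", 20, 1.50), ("discharge", 10, 0.75)]
without_pa = [("room preparation", 10, 0.50), ("extraction (surgeon)", 35, 8.00),
              ("discharge", 10, 0.75)]
savings = tdabc_cost(without_pa) - tdabc_cost(with_pa)
```

Shifting surgeon minutes to a lower-cost provider reduces the total even when the overall process time is unchanged, which is the mechanism behind the reported $75.08 average saving.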
Cell-intrinsic mechanisms of temperature compensation in a grasshopper sensory receptor neuron
Roemschied, Frederic A; Eberhard, Monika JB; Schleimer, Jan-Hendrik; Ronacher, Bernhard; Schreiber, Susanne
2014-01-01
Changes in temperature affect biochemical reaction rates and, consequently, neural processing. The nervous systems of poikilothermic animals must have evolved mechanisms enabling them to retain their functionality under varying temperatures. Auditory receptor neurons of grasshoppers respond to sound in a surprisingly temperature-compensated manner: firing rates depend moderately on temperature, with average Q10 values around 1.5. Analysis of conductance-based neuron models reveals that temperature compensation of spike generation can be achieved solely relying on cell-intrinsic processes and despite a strong dependence of ion conductances on temperature. Remarkably, this type of temperature compensation need not come at an additional metabolic cost of spike generation. Firing rate-based information transfer is likely to increase with temperature and we derive predictions for an optimal temperature dependence of the tympanal transduction process fostering temperature compensation. The example of auditory receptor neurons demonstrates how neurons may exploit single-cell mechanisms to cope with multiple constraints in parallel. DOI: http://dx.doi.org/10.7554/eLife.02078.001 PMID:24843016
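The Q10 values quoted follow the standard temperature-coefficient definition, which can be written down directly:

```python
def q10(rate1, temp1, rate2, temp2):
    """Temperature coefficient Q10 = (R2 / R1) ** (10 / (T2 - T1)):
    the factor by which a rate changes per 10 degree C increase.
    Q10 = 1 means perfect temperature compensation."""
    return (rate2 / rate1) ** (10.0 / (temp2 - temp1))

# a firing rate rising from 100 to 150 spikes/s between 20 and 30 C
# corresponds to the moderate Q10 of about 1.5 reported for the receptors
value = q10(100.0, 20.0, 150.0, 30.0)
```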
Invasive advance of an advantageous mutation: nucleation theory.
O'Malley, Lauren; Basham, James; Yasi, Joseph A; Korniss, G; Allstadt, Andrew; Caraco, Thomas
2006-12-01
For sedentary organisms with localized reproduction, spatially clustered growth drives the invasive advance of a favorable mutation. We model competition between two alleles where recurrent mutation introduces a genotype with a rate of local propagation exceeding the resident's rate. We capture ecologically important properties of the rare invader's stochastic dynamics by assuming discrete individuals and local neighborhood interactions. To understand how individual-level processes may govern population patterns, we invoke the physical theory for nucleation of spatial systems. Nucleation theory discriminates between single-cluster and multi-cluster dynamics. A sufficiently low mutation rate, or a sufficiently small environment, generates single-cluster dynamics, an inherently stochastic process; a favorable mutation advances only if the invader cluster reaches a critical radius. For this mode of invasion, we identify the probability distribution of waiting times until the favored allele advances to competitive dominance, and we ask how the critical cluster size varies as propagation or mortality rates vary. Increasing the mutation rate or system size generates multi-cluster invasion, where spatial averaging produces nearly deterministic global dynamics. For this process, an analytical approximation from nucleation theory, called Avrami's Law, describes the time-dependent behavior of the genotype densities with remarkable accuracy.
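Avrami's law, invoked above for the multi-cluster regime, gives the globally averaged fraction of the system converted by nucleating and growing clusters. A minimal sketch (the rate constant below is an arbitrary illustrative value; the exponent n depends on dimension and growth mode):

```python
import math

def avrami_fraction(t, k, n):
    """Johnson-Mehl-Avrami-Kolmogorov law: X(t) = 1 - exp(-k * t ** n).
    For a constant nucleation rate and constant front speed in d = 2
    dimensions, n = d + 1 = 3."""
    return 1.0 - math.exp(-k * t ** n)

# conversion starts slowly, accelerates, then saturates toward 1
early, late = avrami_fraction(1.0, 0.01, 3), avrami_fraction(10.0, 0.01, 3)
```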
Charging and Transport Dynamics of a Flow-Through Electrode Capacitive Deionization System.
Qu, Yatian; Campbell, Patrick G; Hemmatifar, Ali; Knipe, Jennifer M; Loeb, Colin K; Reidy, John J; Hubert, Mckenzie A; Stadermann, Michael; Santiago, Juan G
2018-01-11
We present a study of the interplay among electric charging rate, capacitance, salt removal, and mass transport in "flow-through electrode" capacitive deionization (CDI) systems. We develop two models describing coupled transport and electro-adsorption/desorption which capture salt removal dynamics. The first model is a simplified, unsteady zero-dimensional volume-averaged model which identifies dimensionless parameters and figures of merit associated with cell performance. The second model is a higher fidelity area-averaged model which captures both spatial and temporal responses of charging. We further conducted an experimental study of these dynamics and considered two salt transport regimes: (1) an advection-limited regime and (2) a dispersion-limited regime. We use these data to validate the models. The study shows that, in the advection-limited regime, differential charge efficiency determines the salt adsorption at the early stage of the deionization process. Subsequently, charging transitions to a quasi-steady state where the salt removal rate is proportional to the applied current scaled by the inlet flow rate. In the dispersion-dominated regime, differential charge efficiency, cell volume, and diffusion rates govern adsorption dynamics and flow rate has little effect. In both regimes, the interplay among mass transport rate, differential charge efficiency, cell capacitance, and (electric) charging current governs salt removal in flow-through electrode CDI.
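The simplified zero-dimensional volume-averaged model can be sketched as a single mass balance integrated in time; the geometry, flow, current, and charge-efficiency values below are hypothetical placeholders:

```python
FARADAY = 96485.0  # C/mol

def effluent_concentration(c_in, flow_uL_s, volume_uL, current_mA,
                           charge_eff, dt_s, steps):
    """Forward-Euler integration of the 0-D balance
    dc/dt = (Q/V) * (c_in - c) - Lambda * I / (F * V):
    advective salt exchange minus current-driven electrosorption.
    c in mol/m^3 (= mM), volume in uL, current in mA."""
    exchange = flow_uL_s / volume_uL                       # 1/s
    removal = charge_eff * current_mA * 1e-3 / (FARADAY * volume_uL * 1e-9)
    c, history = c_in, []
    for _ in range(steps):
        c += (exchange * (c_in - c) - removal) * dt_s
        history.append(c)
    return history

conc = effluent_concentration(c_in=20.0, flow_uL_s=10.0, volume_uL=100.0,
                              current_mA=1.0, charge_eff=0.6,
                              dt_s=0.1, steps=2000)
```

At quasi-steady state the salt removed, c_in - c, is proportional to the applied current divided by the flow rate, matching the scaling described above.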
New approach to calibrating bed load samplers
Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.
1985-01-01
Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.
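Matching probability distribution functions, as used to build the composite calibration curves, amounts to pairing equal quantiles of the sampled and measured rate distributions. A minimal sketch (the rates below are toy numbers):

```python
def quantile_matching_curve(sampled_rates, measured_rates):
    """Pair equal-probability quantiles of two equally sized samples by
    sorting each; the (sampled, measured) pairs trace a calibration curve
    mapping sampler readings onto directly measured transport rates."""
    if len(sampled_rates) != len(measured_rates):
        raise ValueError("use equal sample sizes (or interpolate quantiles)")
    return list(zip(sorted(sampled_rates), sorted(measured_rates)))

# toy example: the sampler reads roughly 1.5x the measured rate
curve = quantile_matching_curve([3.1, 1.4, 6.3, 4.4], [2.0, 1.0, 4.0, 3.0])
```

Because it compares whole distributions rather than average rates, this construction is insensitive to the cyclic variations in instantaneous transport that the text identifies as the obstacle to efficiency-based calibration.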
Inhomogeneous diffusion and ergodicity breaking induced by global memory effects
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2016-11-01
We introduce a class of discrete random-walk models driven by global memory effects. At any time, the right-left transitions depend on the whole previous history of the walker, being defined by an urnlike memory mechanism. The characteristic function is calculated in an exact way, which allows us to demonstrate that the ensemble of realizations is ballistic. Asymptotically, each realization is equivalent to that of a biased Markovian diffusion process with transition rates that strongly differ from one trajectory to another. Using this "inhomogeneous diffusion" feature, the ergodic properties of the dynamics are analytically studied through the time-averaged moments. Even in the long-time regime, they remain random objects. While their average over realizations recovers the corresponding ensemble averages, the departure between time and ensemble averages is explicitly shown through their probability densities. For the density of the second time-averaged moment, the ergodic limit and the limit of infinite lag times do not commute. All these effects are induced by the memory mechanism. A generalized Einstein fluctuation-dissipation relation is also obtained for the time-averaged moments.
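One illustrative reading of such an urnlike memory mechanism is a Pólya-type walk in which the probability of stepping right equals the fraction of rightward steps taken so far; this is a sketch of the general idea, not the paper's exact transition rule:

```python
import random

def urn_memory_walk(steps, seed=1):
    """Random walk with global memory: the walker steps +1 with probability
    equal to the current fraction of past +1 steps (Polya-urn
    reinforcement). Each realization locks onto its own drift, so
    trajectories differ strongly from one another, the
    'inhomogeneous diffusion' feature described in the text."""
    rng = random.Random(seed)
    right = left = 1          # urn seeded with one ball of each type
    position, path = 0, []
    for _ in range(steps):
        if rng.random() < right / (right + left):
            right += 1
            position += 1
        else:
            left += 1
            position -= 1
        path.append(position)
    return path

path = urn_memory_walk(5000)
drift = path[-1] / len(path)   # this realization's asymptotic velocity
```

Re-running with different seeds yields different asymptotic drifts, so time averages computed along one trajectory need not match the ensemble average.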
The influence of droplet evaporation on fuel-air mixing rate in a burner
NASA Technical Reports Server (NTRS)
Komiyama, K.; Flagan, R. C.; Heywood, J. B.
1977-01-01
Experiments involving combustion of a variety of hydrocarbon fuels in a simple atmospheric pressure burner were used to evaluate the role of droplet evaporation in the fuel/air mixing process in liquid fuel spray flames. Both air-assist atomization and pressure atomization processes were studied; fuel/air mixing rates were determined on the basis of cross-section average oxygen concentrations for stoichiometric overall operation. In general, it is concluded that droplets act as point sources of fuel vapor until evaporation, when the fuel jet length scale may become important in determining nonuniformities of the fuel vapor concentration. In addition, air-assist atomizers are found to have short droplet evaporation times with respect to the duration of the fuel/air mixing process, while for the pressure jet atomizer the characteristic evaporation and mixing times are similar.
Chen, H L; Wang, J K; Zhang, L L; Wu, Z Y
2000-04-01
To determine and compare the content of general flavonoids in four kinds of differently-processed products of Epimedium acuminatum, the contents were measured by ultraviolet spectrophotometry. The contents were found in the following sequence: unprocessed product, clearly-fried product, alcohol-broiled product, salt-broiled product, sheep-fat-broiled product. The average recovery rate was 96.01%, with a 0.74% RSD (n = 5). Heating causes the content of general flavonoids in the processed products to decrease. These processed products are still often used in clinical treatment because the adjuvants feature certain coordinating and promoting functions. The study is to be pursued further.
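The recovery and RSD figures follow standard analytical definitions; a small sketch (the five replicate recoveries below are hypothetical):

```python
import statistics

def recovery_stats(recoveries_pct):
    """Average spike recovery (%) and relative standard deviation:
    RSD = 100 * sample standard deviation / mean."""
    mean = statistics.mean(recoveries_pct)
    rsd = 100.0 * statistics.stdev(recoveries_pct) / mean
    return mean, rsd

# hypothetical n = 5 replicate recoveries (%), for illustration only
avg_recovery, rsd = recovery_stats([95.2, 96.8, 95.9, 96.4, 95.7])
```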
NASA Astrophysics Data System (ADS)
Taheriniya, Shabnam; Parhizgar, Sara Sadat; Sari, Amir Hossein
2018-06-01
To study the alumina template pore size distribution as a function of the Al thin film grain size distribution, porous alumina templates were prepared by anodizing sputtered aluminum thin films. To control the grain size, the aluminum samples were sputtered at rates of 0.5, 1 and 2 Å/s with substrate temperatures of 25, 75 or 125 °C. All samples were anodized for 120 s in 1 M sulfuric acid solution kept at 1 °C while a 15 V potential was applied. The standard deviation of the size distribution is roughly 2 nm for samples deposited at room temperature at different rates, in both thin film and porous template form, but rises to approximately 4 nm with increasing substrate temperature. Samples with average grain sizes of 13, 14, 18.5 and 21 nm produce alumina templates with average pore sizes of 8.5, 10, 15 and 16 nm, respectively, which shows that the average grain size limits the average pore diameter in the resulting template. Lateral correlation length and grain boundary effects are other factors that affect the pore formation process and pore size distribution by limiting the initial current density.
Acephate and buprofezin residues in olives and olive oil.
Cabras, P; Angioni, A; Garau, V L; Pirisi, F M; Cabitza, F; Pala, M
2000-10-01
Field trials were carried out to study the persistence of acephate and buprofezin on olives. Two cultivars, pizz'e carroga and pendolino, with very large and small fruits respectively were used. After treatment, no difference was found between the two pesticide deposits on the olives. The disappearance rates, calculated as pseudo first order kinetics, were similar for both pesticides (on average 12 days). Methamidophos, the acephate metabolite, was always present on all olives, and in some pendolino samples it showed higher residues than the maximum residue limit (MRL). During washing, the first step of olive processing, the residue level of both pesticides on the olives did not decrease. After processing of the olives into oil, no residues of acephate or methamidophos were found in the olive oil, while the residues of buprofezin were on average four times higher than on olives.
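Pseudo-first-order disappearance implies exponential decay of the deposit. A minimal sketch, interpreting the quoted 12 days as the half-life (an assumption, since the abstract does not give the rate constant explicitly):

```python
import math

def residue_after(c0, half_life_days, t_days):
    """Pseudo-first-order decay: C(t) = C0 * exp(-k * t), k = ln 2 / t_half."""
    k = math.log(2.0) / half_life_days
    return c0 * math.exp(-k * t_days)

# with a 12-day half-life, one quarter of the initial deposit
# remains after 24 days
remaining_fraction = residue_after(1.0, 12.0, 24.0)
```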
Brennan, Julia M.; Bednarczyk, Robert A.; Richards, Jennifer L.; Allen, Kristen E.; Warraich, Gohar J.; Omer, Saad B.
2017-01-01
Objectives. To evaluate trends in rates of personal belief exemptions (PBEs) to immunization requirements for private kindergartens in California that practice alternative educational methods. Methods. We used California Department of Public Health data on kindergarten PBE rates from 2000 to 2014 to compare annual average increases in PBE rates between schools. Results. Alternative schools had an average PBE rate of 8.7%, compared with 2.1% among public schools. Waldorf schools had the highest average PBE rate of 45.1%, which was 19 times higher than in public schools (incidence rate ratio = 19.1; 95% confidence interval = 16.4, 22.2). Montessori and holistic schools had the highest average annual increases in PBE rates, slightly higher than Waldorf schools (Montessori: 8.8%; holistic: 7.1%; Waldorf: 3.6%). Conclusions. Waldorf schools had exceptionally high average PBE rates, and Montessori and holistic schools had higher annual increases in PBE rates. Children in these schools may be at higher risk for spreading vaccine-preventable diseases if trends are not reversed. PMID:27854520
Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra
2016-08-05
In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze if it is possible to make the manual documentation process more efficient by using methods of natural language processing for multiclass classification of free-text diagnostic reports to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus consisting of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, and its manual annotation of relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnoses paragraph was extracted from the clinical report of one third randomly selected patients of the multiple myeloma research database from Heidelberg University Hospital (in total 737 selected patients). An EDC system was set up and two data entry specialists independently performed manual documentation of at least nine specific data elements for multiple myeloma characterization. Both data entries were compared and assessed by a third specialist and an annotated text corpus was created. A framework was constructed, consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total 15 different pipelines were examined and assessed by a ten-fold cross-validation, reiterated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing the approximate randomization test was used.
The created annotated corpus consists of 737 different diagnosis paragraphs with a total number of 865 coded diagnoses. The dataset is publicly available in the supplementary online files for training and testing of further NLP methods. Both classifiers showed low average error rates (MEC: 1.05; SVM: 0.84) and high F1-scores (MEC: 0.89; SVM: 0.92). However, the results varied widely depending on the classified data element. Preprocessing methods increased this effect and had a significant impact on the classification, both positive and negative. The automatic diagnosis splitter increased the average error rate significantly, even if the F1-score decreased only slightly. The low average error rates and high average F1-scores of each pipeline demonstrate the suitability of the investigated NLP methods. However, it was also shown that there is no best practice for an automatic classification of data elements from free-text diagnostic reports.
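The average F1-scores and error rates used as quality indicators follow the usual definitions; a small sketch on toy multiclass labels (the class names are illustrative, not the registry's actual data elements):

```python
def f1_and_error(y_true, y_pred, positive):
    """Per-class F1 = 2PR/(P+R) and overall error rate for a multiclass
    task, treating `positive` as the class of interest (one-vs-rest)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    error_rate = sum(1 for t, p in pairs if t != p) / len(pairs)
    return f1, error_rate

# toy labels for illustration
truth = ["MM", "MGUS", "MM", "MM", "MGUS"]
preds = ["MM", "MM", "MM", "MGUS", "MGUS"]
f1, err = f1_and_error(truth, preds, positive="MM")
```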
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soudackov, Alexander V.; Hammes-Schiffer, Sharon
2015-11-21
Rate constant expressions for vibronically nonadiabatic proton transfer and proton-coupled electron transfer reactions are presented and analyzed. The regimes covered include electronically adiabatic and nonadiabatic reactions, as well as high-frequency and low-frequency proton donor-acceptor vibrational modes. These rate constants differ from previous rate constants derived with the cumulant expansion approach in that the logarithmic expansion of the vibronic coupling in terms of the proton donor-acceptor distance includes a quadratic as well as a linear term. The analysis illustrates that inclusion of this quadratic term within the cumulant expansion framework may significantly impact the rate constants at high temperatures for proton transfer interfaces with soft proton donor-acceptor modes that are associated with small force constants and weak hydrogen bonds. The effects of the quadratic term may also become significant in these regimes when using the vibronic coupling expansion in conjunction with a thermal averaging procedure for calculating the rate constant. In this case, however, the expansion of the coupling can be avoided entirely by calculating the couplings explicitly for the range of proton donor-acceptor distances sampled. The effects of the quadratic term for weak hydrogen-bonding systems are less significant for more physically realistic models that prevent the sampling of unphysical short proton donor-acceptor distances. Additionally, the rigorous relation between the cumulant expansion and thermal averaging approaches is clarified. In particular, the cumulant expansion rate constant includes effects from dynamical interference between the proton donor-acceptor and solvent motions and becomes equivalent to the thermally averaged rate constant when these dynamical effects are neglected.
This analysis identifies the regimes in which each rate constant expression is valid and thus will be important for future applications to proton transfer and proton-coupled electron transfer in chemical and biological processes.
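The thermal-averaging route mentioned above can be sketched by Boltzmann-weighting a distance-dependent coupling over a classical harmonic donor-acceptor mode. The functional form follows the linear-plus-quadratic expansion discussed in the text, but all parameter values below are illustrative placeholders:

```python
import math

def thermally_averaged_coupling_sq(v0, alpha, beta, force_const, kT, n=2001):
    """Classical thermal average <|V(R)|^2> over a harmonic proton
    donor-acceptor mode: V = v0 * exp(-alpha*dR - beta*dR**2) with
    dR = R - R0 (linear plus quadratic terms of the logarithmic coupling
    expansion), Boltzmann-weighted by U(dR) = 0.5 * force_const * dR**2,
    on a uniform grid spanning +/- 5 thermal widths."""
    half_width = 5.0 * math.sqrt(kT / force_const)
    num = den = 0.0
    for i in range(n):
        dr = -half_width + 2.0 * half_width * i / (n - 1)
        w = math.exp(-0.5 * force_const * dr * dr / kT)
        v = v0 * math.exp(-alpha * dr - beta * dr * dr)
        num += w * v * v
        den += w
    return num / den

# dimensionless illustrative parameters (soft mode, strong distance dependence)
avg_v2 = thermally_averaged_coupling_sq(v0=1.0, alpha=10.0, beta=5.0,
                                        force_const=100.0, kT=1.0)
```

The same loop can instead average couplings computed explicitly at each sampled distance, avoiding the expansion entirely, as the text notes.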
NASA Astrophysics Data System (ADS)
Lee, Minhee; Wang, Sookyun; Kim, Seyoon; Park, Jinyoung
2015-04-01
Lab scale experiments were performed to investigate the property changes of sandstone slabs and cores resulting from the scCO2-rock-groundwater reaction for 180 days under CO2 sequestration conditions (100 bar and 50 °C). The geochemical reactions, including the surface roughness change of minerals in the slab, resulting from dissolution and secondary mineral precipitation in the sandstone reservoir of the Gyeongsang basin, Korea, were reproduced in laboratory scale experiments, and the relationship between the geochemical reaction and the physical rock property change was derived in consideration of successful subsurface CO2 sequestration. The use of the surface roughness (SRrms) change rate and the physical property change rate to quantify the scCO2-rock-groundwater reaction is a novel approach in the study of subsurface CO2 sequestration. From the results of SPM (Scanning Probe Microscope) analyses, the SRrms for each sandstone slab was calculated at different reaction times. The average SRrms increased more than 3.5 times during the first 90 days of reaction and remained steady after 90 days, suggesting that the surface weathering of sandstone occurs early after CO2 injection into the subsurface reservoir. The average porosity of sandstone cores increased by 8.8% and the average density decreased by 0.5% during the 90 days of reaction, and these values changed only slightly after 90 days. The average P and S wave velocities of sandstone cores also decreased by 10% during the 90 days of reaction. The physical rock property changes during the geochemical reaction followed a logarithmic trend correlated with the logarithmic increase in SRrms, suggesting that the physical property change of reservoir rocks caused by scCO2 injection comes directly from the geochemical reaction process.
Results suggested that long-term estimates of the physical property changes of reservoir rocks at a CO2 injection site could be obtained by extrapolating the SRrms and rock property change rates acquired from laboratory scale experiments. This will also be useful for selecting a suitable CO2 injection site from the viewpoint of safety.
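The SRrms quantity tracked above is the standard root-mean-square roughness, computable directly from scanned heights; the line scans below are toy numbers chosen so the roughness grows roughly 3.5-fold, echoing the reported trend:

```python
import math

def rms_roughness(heights):
    """SR_rms = sqrt(mean((z_i - mean(z)) ** 2)) over the scanned heights."""
    mean_z = sum(heights) / len(heights)
    return math.sqrt(sum((z - mean_z) ** 2 for z in heights) / len(heights))

# toy SPM line scans (nm): dissolution pitting roughens the surface
before_reaction = [10.0, 10.2, 9.8, 10.1, 9.9]
after_reaction = [10.0, 10.6, 9.3, 10.5, 9.6]
roughening = rms_roughness(after_reaction) / rms_roughness(before_reaction)
```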
Sulter, A M; Wit, H P
1996-11-01
Glottal volume velocity waveform characteristics of 224 subjects, categorized in four groups according to gender and vocal training, were determined, and their relations to sound-pressure level, fundamental frequency, intra-oral pressure, and age were analyzed. Subjects phonated at three intensity conditions. The glottal volume velocity waveforms were obtained by inverse filtering the oral flow. Glottal volume velocity waveforms were parameterized with flow-based (minimum flow, ac flow, average flow, maximum flow declination rate) and time-based parameters (closed quotient, closing quotient, speed quotient), as well as with derived parameters (vocal efficiency and glottal resistance). Higher sound-pressure levels, intra-oral pressures, and flow-parameter values (ac flow, maximum flow declination rate) were observed, when compared with previous investigations. These higher values might be the result of the specific phonation tasks (stressed /ae/ vowel in a word and a sentence) or filtering processes. Few statistically significant (p < 0.01) differences in parameters were found between untrained and trained subjects [the maximum flow declination rate and the closing quotient were higher in trained women (p < 0.001), and the speed quotient was higher in trained men (p < 0.005)]. Several statistically significant parameter differences were found between men and women [minimum flow, ac flow, average flow, maximum flow declination rate, closing quotient, glottal resistance (p < 0.001), and closed quotient (p < 0.005)]. Significant effects of intensity condition were observed on ac flow, maximum flow declination rate, closing quotient, and vocal efficiency in women (p < 0.005), and on minimum flow, ac flow, average flow, maximum flow declination rate, closed quotient, and vocal efficiency in men (p < 0.01).
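The flow-based and time-based parameters listed can be read directly off a sampled glottal flow waveform. A minimal sketch on a toy single-cycle waveform (the 5%-of-ac threshold used here to mark the closed phase is a simplifying assumption; published criteria vary):

```python
def glottal_parameters(flow, dt):
    """Basic inverse-filtered-flow parameters: minimum flow, ac flow
    (peak-to-peak), average flow, maximum flow declination rate (largest
    decline per unit time), and closed quotient (fraction of the cycle
    with flow within 5% of the ac flow above the minimum)."""
    minimum = min(flow)
    ac = max(flow) - minimum
    average = sum(flow) / len(flow)
    mfdr = max((flow[i] - flow[i + 1]) / dt for i in range(len(flow) - 1))
    closed = sum(1 for f in flow if f <= minimum + 0.05 * ac) / len(flow)
    return {"min_flow": minimum, "ac_flow": ac, "avg_flow": average,
            "mfdr": mfdr, "closed_quotient": closed}

# toy one-cycle waveform (arbitrary flow units sampled every 0.5 ms)
cycle = [0.0, 0.0, 0.0, 0.0, 0.1, 0.4, 0.8, 1.0, 0.6, 0.1]
params = glottal_parameters(cycle, dt=0.0005)
```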
Fluctuation Dynamics of Exchange Rates on Indian Financial Market
NASA Astrophysics Data System (ADS)
Sarkar, A.; Barat, P.
Here we investigate the scaling behavior and the complexity of the average daily exchange rate returns of the Indian Rupee against four foreign currencies, namely the US Dollar, Euro, Great Britain Pound, and Japanese Yen. Our analysis revealed that the average daily exchange rate return of the Indian Rupee against the US Dollar exhibits persistent scaling behavior and follows a Lévy stable distribution. By contrast, the average daily exchange rate returns of the other three foreign currencies show randomness and follow a Gaussian distribution. Moreover, the complexity of the average daily exchange rate return of the Indian Rupee against the US Dollar is less than that of the other three exchange rate returns.
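The persistence analysis summarized above can be illustrated with a minimal rescaled-range (R/S) sketch for estimating a Hurst exponent; this is a generic textbook estimator, not the authors' exact procedure, and the simulated white-noise input is purely illustrative:

```python
import numpy as np

def hurst_rs(returns, min_chunk=8):
    """Estimate the Hurst exponent of a return series by rescaled-range
    (R/S) analysis. H > 0.5 suggests persistent scaling; H near 0.5 is
    consistent with an uncorrelated random walk."""
    n = len(returns)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = returns[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()      # range of cumulative deviations
            s = chunk.std(ddof=0)
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    # slope of log(R/S) versus log(window size) estimates H
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(0)
h = hurst_rs(rng.normal(size=4096))   # white noise: H should sit near 0.5
```

Finite samples bias the raw R/S estimate slightly above 0.5 for white noise, which is why persistence claims are usually checked against surrogate data.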
Safety of clinical and non-clinical decision makers in telephone triage: a narrative review.
Wheeler, Sheila Q; Greenberg, Mary E; Mahlmeister, Laura; Wolfe, Nicole
2015-09-01
Patient safety is a persistent problem in telephone triage research; however, studies have not differentiated between clinicians' and non-clinicians' respective safety. Currently, four groups of decision makers perform aspects of telephone triage: clinicians (physicians, nurses) and non-clinicians (emergency medical dispatchers (EMDs) and clerical staff). Using studies published between 2002 and 2012, we applied Donabedian's structure-process-outcome model to examine the groups' systems for evidence of system completeness (a minimum measure of structure and quality). We defined system completeness as the presence of a decision maker and four additional components: guidelines, documentation, training, and standards. Defining safety as appropriate referrals (AR) (right time, right place, with the right person), we measured each group's corresponding AR rate percentages (outcomes). We analyzed each group's respective decision-making process as a safe match to the telephone triage task, based on each group's system structure completeness, process, and AR rates (outcome). Studies uniformly noted system component presence: nurses (2-4), physicians (1), EMDs (2), clerical staff (1). Nurses had the highest average appropriate referral rates (91%); physicians averaged 82%. Clerical staff had no system and did not perform telephone triage by standard definitions; EMDs may represent the use of the wrong system. Telephone triage appears least safe after hours, when decision makers with the least complete systems (physicians, clerical staff) typically manage calls. At minimum, telephone triage decision makers should be clinicians; however, clinicians' safety calls for improvement. With improved training, standards, and CDSS quality, the 24/7 clinical call center has the potential to represent the national standard. © The Author(s) 2015.
Ward, G.A.; Smith, T. J.; Whelan, K.R.T.; Doyle, T.W.
2006-01-01
Physiological processes and local-scale structural dynamics of mangroves are relatively well studied. Regional-scale processes, however, are not as well understood. Here we provide long-term data on trends in structure and forest turnover at a large scale, following hurricane damage in mangrove ecosystems of South Florida, U.S.A. Twelve mangrove vegetation plots were monitored at periodic intervals between October 1992 and March 2005. Mangrove forests of this region are defined by a -1.5 scaling relationship between mean stem diameter and stem density, mirroring self-thinning theory for mono-specific stands. This relationship is reflected in tree size frequency scaling exponents which, through time, have exhibited trends toward a community average that is indicative of full spatial resource utilization. These trends, together with an asymptotic standing biomass accumulation, indicate that coastal mangrove ecosystems do adhere to size-structured organizing principles as described for upland tree communities. Regenerative dynamics differ between areas inside and outside of the primary wind path of Hurricane Andrew, which occurred in 1992. Forest turnover rates, however, are steady through time, suggesting that ecological, more so than structural, factors control forest productivity. In agreement, the relative mean rate of biomass growth exhibits an inverse relationship with the seasonal range of porewater salinities. The ecosystem average in forest scaling relationships may provide a useful investigative tool for mangrove community biomass relationships, as well as a robust indicator of general ecosystem health for use in mangrove forest ecosystem management and restoration. © Springer 2006.
Development of the mare regolith: some model considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quaide, W.; Oberbeck, V.
1975-05-01
Mare regolith is fragmental debris of variable thickness that lies upon fractured bedrock. Its origin by impact comminution of primarily local basaltic rocks is widely accepted, but the consequences of such an origin are not fully appreciated. This investigation uses results obtained in an earlier Monte Carlo study by Oberbeck et al. (1973) to shed light on those consequences by evaluating regolith growth and mixing as a function of time. Results reported are for average cases and must be used with caution. The study demonstrates that regolith growth is self-regulated and has the same trend and nearly the same terminal growth rates whatever the history of bombardment: rapid initial accumulation followed by diminishing rates of growth. Mixing and all other processes investigated are growth regulated. Mixing increases as growth slows, but never to the extent that the regolith is homogenized. Because the average regolith is never homogenized, products of growth-regulated processes are preserved in the stratigraphy. Differences in material properties are to be expected in vertical sections of the regolith; the model, however, is not sufficiently refined to permit prediction of all possible trends. It does indicate that deeper levels contain thinner depositional units, lesser quantities of meteoritic and exotic components, and more debris derived from shallow levels in the mare basalts than material in near-surface layers. Additionally, neutron fluence production is regulated by the growth process, but because rates of growth do not differ much over the last aeon, whatever the total age or early bombardment history, values of surface fluence may be similar in many areas whatever their age.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iwahana, Go; Uchida, Masao; Liu, Lin
Thermokarst is the process of ground subsidence caused by either the thawing of ice-rich permafrost or the melting of massive ground ice. The consequences of permafrost degradation associated with thermokarst for surface ecology, landscape evolution, and hydrological processes have been of great scientific interest and social concern. Part of a tundra patch affected by wildfire in northern Alaska (27.5 km²) was investigated here, using remote sensing and in situ surveys to quantify and understand permafrost thaw dynamics after surface disturbances. A two-pass differential InSAR technique using L-band ALOS-PALSAR has been shown capable of capturing thermokarst subsidence triggered by a tundra fire at a spatial resolution of tens of meters, with supporting evidence from field data and optical satellite images. We have introduced a calibration procedure, comparing burned and unburned areas for InSAR subsidence signals, to remove the noise due to seasonal surface movement. In the first year after the fire, an average subsidence rate of 6.2 cm/year (vertical) was measured. Subsidence in the burned area continued over the following two years, with decreased rates. The mean rate of subsidence observed in our interferograms (from 24 July 2008 to 14 September 2010) was 3.3 cm/year, a value comparable to that estimated from field surveys at two plots on average (2.2 cm/year) for the six years after the fire. These results suggest that this InSAR-measured ground subsidence is caused by the development of thermokarst, a thawing process supported by surface change observations from high-resolution optical images and in situ ground level surveys.
Cherif, Alhaji; Barley, Kamal
2010-01-01
Quantification of historical sociological processes has recently gained attention among theoreticians in the effort to provide a solid theoretical understanding of the behaviors and regularities present in socio-political dynamics. Here we present a reliability theory of polity processes with emphasis on the individual political dynamics of African countries. We found that the structural properties of polity failure rates successfully capture the risk of political vulnerability and instability, in which , , , and of the countries with monotonically increasing, unimodal, U-shaped, and monotonically decreasing polity failure rates, respectively, have high levels of state fragility indices. The quasi-U-shaped relationship between average polity duration and regime type corroborates historical precedents and explains the stability of autocracies and democracies. PMID:21206911
Noise shaping in populations of coupled model neurons.
Mar, D J; Chow, C C; Gerstner, W; Adams, R W; Collins, J J
1999-08-31
Biological information-processing systems, such as populations of sensory and motor neurons, may use correlations between the firings of individual elements to obtain lower noise levels and a systemwide performance improvement in the dynamic range or the signal-to-noise ratio. Here, we implement such correlations in networks of coupled integrate-and-fire neurons using inhibitory coupling and demonstrate that this can improve the system dynamic range and the signal-to-noise ratio in a population rate code. The improvement can surpass that expected for simple averaging of uncorrelated elements. A theory that predicts the resulting power spectrum is developed in terms of a stochastic point-process model in which the instantaneous population firing rate is modulated by the coupling between elements.
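The baseline the coupled populations are compared against is plain averaging of uncorrelated elements, whose noise variance falls roughly as 1/N. A minimal numpy sketch of that uncorrelated baseline (illustrative numbers only, not the paper's integrate-and-fire model):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 1.0
n_elements, n_trials = 64, 2000

# each element reports the signal plus independent unit-variance noise
readings = signal + rng.normal(size=(n_trials, n_elements))
pooled = readings.mean(axis=1)     # population average, one value per trial

var_single = readings[:, 0].var()  # noise variance of one element (~1)
var_pooled = pooled.var()          # drops roughly by 1/N for uncorrelated noise
```

Inhibitory coupling introduces negative correlations between elements, which is how the population rate code can beat this 1/N scaling.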
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other phenomena in the atmosphere. Extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events that directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a mapping process based on GIS with information about the current weather status at certain coordinates of each region and the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. To obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. Forecasting error is calculated by mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error value for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error rate, the better the accuracy.
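The mean-square-error figure of merit used above can be sketched as follows; the forecast and observation values are hypothetical, not taken from the study:

```python
def mean_squared_error(forecast, observed):
    """Mean square error between a forecast series and observations."""
    assert len(forecast) == len(observed)
    return sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(forecast)

# hypothetical 7-day minimum-temperature forecast vs. observations (deg C)
forecast = [22.1, 22.8, 23.0, 22.5, 21.9, 22.4, 23.1]
observed = [22.0, 23.0, 22.6, 22.4, 22.3, 22.5, 22.9]
err = mean_squared_error(forecast, observed)
```

Small MSE values like the 0.28 and 0.15 reported above indicate forecasts that stay close to observations on average, since errors are squared before averaging.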
McCartan, L.; Morse, D.E.; Plunkert, P.A.; Sibley, S.F.
2004-01-01
The average annual growth rate of real gross domestic product (GDP) from the third quarter of 2001 through the second quarter of 2003 in the United States was about 2.6 percent. GDP growth rates in the third and fourth quarters of 2003 were about 8 percent and 4 percent, respectively. The upward trends in many sectors of the U.S. economy in 2003, however, were shared by few of the mineral materials industries. Annual output declined in most nonfuel mining and mineral processing industries, although there was an upward turn toward yearend as prices began to increase.
Kitaev, E M; Pimenova, M N
1980-12-01
Rat oocytes extracted from rat ovaries and cultivated for 42-46 hours were compared with ovulated oocytes by chromosomal aberration rate. The chromosomal aberration rate in the population of "follicular" oocytes averaged 8.2%, whereas in ovulated oocytes it did not exceed 1.8%. Analysis of the chromosomal aberrations depending on the phase of the estrous cycle suggests that the main portion of chromosomal aberrations in cultivated oocytes occurs during the physiological process of follicular atresia.
Kinetics of formation of nanoparticles from first group metal carboxylates
NASA Astrophysics Data System (ADS)
Solov'ev, M. E.; Irzhak, T. F.; Irzhak, V. I.
2015-09-01
A kinetic model of the formation of metal nanoparticles via reduction of their carboxylates under conditions of clustering is proposed. It is found that the kinetics of the process is characterized by an induction period in carboxylate consumption and by almost linear growth of the average size of nanoparticles with conversion. It is shown that as the rate of ternary associate formation grows, the maximum rate of nanoparticle formation grows, the induction period becomes longer, and the particle size decreases. At the same time, the product is characterized by a narrow size distribution.
NASA Technical Reports Server (NTRS)
Estes, N. A. 3rd; Michaud, G.; Zipes, D. P.; El-Sherif, N.; Venditti, F. J.; Rosenbaum, D. S.; Albrecht, P.; Wang, P. J.; Cohen, R. J.
1997-01-01
This investigation was performed to evaluate the feasibility of detecting repolarization alternans with the heart rate elevated by a bicycle exercise protocol. Sensitive spectral signal-processing techniques are able to detect beat-to-beat alternation of the amplitude of the T wave that is not visible on the standard electrocardiogram. Previous animal and human investigations using atrial or ventricular pacing have demonstrated that T-wave alternans is a marker of vulnerability to ventricular arrhythmias. Using a spectral analysis technique incorporating noise-reduction signal-processing software, we evaluated electrical alternans at rest and with the heart rate elevated during a bicycle exercise protocol. In this study we defined optimal criteria for electrical alternans to separate patients with inducible arrhythmias from those without. Alternans and signal-averaged electrocardiographic results were compared with vulnerability to ventricular arrhythmias as defined by induction of sustained ventricular tachycardia or fibrillation at electrophysiologic evaluation. In 27 patients, alternans recorded at rest and with exercise had a sensitivity of 89%, specificity of 75%, and overall clinical accuracy of 80% (p < 0.003). In this patient population the signal-averaged electrocardiogram was not a significant predictor of arrhythmia vulnerability. This is the first study to report that repolarization alternans can be detected with the heart rate elevated by a bicycle exercise protocol. Alternans measured using this technique is an accurate predictor of arrhythmia inducibility.
Memorial Hermann: high reliability from board to bedside.
Shabot, M Michael; Monroe, Douglas; Inurria, Juan; Garbade, Debbi; France, Anne-Claire
2013-06-01
In 2006 the Memorial Hermann Health System (MHHS), which includes 12 hospitals, began applying principles embraced by high reliability organizations (HROs). Three factors support its HRO journey: (1) aligned organizational structure with transparent management systems and compressed reporting processes; (2) Robust Process Improvement (RPI) with high-reliability interventions; and (3) cultural establishment, sustainment, and evolution. The Quality and Safety strategic plan contains three domains, each with a specific set of measures that provide goals for performance: (1) "Clinical Excellence;" (2) "Do No Harm;" and (3) "Saving Lives," as measured by the Serious Safety Event rate. MHHS uses a uniform approach to performance improvement--RPI, which includes Six Sigma, Lean, and change management, to solve difficult safety and quality problems. The 9 acute care hospitals provide multiple opportunities to integrate high-reliability interventions and best practices across MHHS. For example, MHHS partnered with the Joint Commission Center for Transforming Healthcare in its inaugural project to establish reliable hand hygiene behaviors, which improved MHHS's average hand hygiene compliance rate from 44% to 92% currently. Soon after compliance exceeded 85% at all 12 hospitals, the average rate of central line-associated bloodstream and ventilator-associated pneumonias decreased to essentially zero. MHHS's size and diversity require a disciplined approach to performance improvement and systemwide achievement of measurable success. The most significant cultural change at MHHS has been the expectation for 100% compliance with evidence-based quality measures and 0% incidence of patient harm.
Le, Nicole K; García-Molina, Carla; Kumar, Ambuj; Griffin, Lisa; Dayicioglu, Deniz
2017-01-01
The tissue expansion process is performed after mastectomy to increase the submuscular space in preparation for placement of a permanent breast implant. The process is often believed to be painful by patients, who are often intimidated by the prospect of mechanically stretching their skin and muscle. This study aims to quantify the pain experienced by patients and determine the different pain management techniques used. We used a case series approach in which patients undergoing the serial tissue expansion process were asked to rate their pain and anxiety on a scale from 1 to 10, using a questionnaire and the visual analog scale. Pain was rated during and after the expansion procedure, and patients were also surveyed to find the most commonly used and most effective pain management technique. Patients typically reported very little pain during and after the procedure, with average pain ratings of 0.4 to 2.5 out of 10. The pain did not last, on average, longer than 1 day. Furthermore, the most widely used and most helpful pain medication was ibuprofen. During the tissue expansion procedure, the mean anxiety level was 0.64 (1.3). The findings show that tissue expansion is a relatively low-pain procedure and is not a contraindication to undergoing breast reconstruction. Ibuprofen, a mild treatment with few side effects, was efficacious in pain relief, though most patients required no pain relief.
Recursive processes in self-affirmation: intervening to close the minority achievement gap.
Cohen, Geoffrey L; Garcia, Julio; Purdie-Vaughns, Valerie; Apfel, Nancy; Brzustoski, Patricia
2009-04-17
A 2-year follow-up of a randomized field experiment previously reported in Science is presented. A subtle intervention to lessen minority students' psychological threat related to being negatively stereotyped in school was tested in an experiment conducted three times with three independent cohorts (N = 133, 149, and 134). The intervention, a series of brief but structured writing assignments focusing students on a self-affirming value, reduced the racial achievement gap. Over 2 years, the grade point average (GPA) of African Americans was, on average, raised by 0.24 grade points. Low-achieving African Americans were particularly benefited. Their GPA improved, on average, 0.41 points, and their rate of remediation or grade repetition was less (5% versus 18%). Additionally, treated students' self-perceptions showed long-term benefits. Findings suggest that because initial psychological states and performance determine later outcomes by providing a baseline and initial trajectory for a recursive process, apparently small but early alterations in trajectory can have long-term effects. Implications for psychological theory and educational practice are discussed.
Shah, Ajit
2009-07-01
Suicides may be misclassified as accidental deaths in countries with strict legal definitions of suicide, with cultural and religious factors leading to poor registration of suicide and stigma attached to suicide. The concordance between four different definitions of suicide was evaluated by examining the relationship between pure suicide and accidental death rates, gender differences, age-associated trends, and potential distal risk and protective factors, by conducting secondary analysis of the latest World Health Organisation data on elderly death rates. The four definitions of suicide were: (i) one-year pure suicide rates; (ii) one-year combined suicide rates (pure suicide rates combined with accidental death rates); (iii) five-year average pure suicide rates; and (iv) five-year average combined suicide rates (pure suicide rates combined with accidental death rates). The predicted negative correlation between pure suicide and accidental death rates was not observed. Gender differences were similar for all four definitions of suicide. There was a highly significant concordance in the findings of age-associated trends between one-year pure and combined suicide rates, one-year and five-year average pure suicide rates, and five-year average pure and combined suicide rates. There was poor concordance between pure and combined suicide rates for both one-year and five-year average data for the 14 potential distal risk and protective factors, but the concordance between one-year and five-year average pure suicide rates was highly significant. The use of one-year pure suicide rates in cross-national ecological studies examining gender differences, age-associated trends, and potential distal risk and protective factors is likely to be practical, pragmatic, and resource-efficient.
Mutti-Packer, Seema; Hodgins, David C; El-Guebaly, Nady; Casey, David M; Currie, Shawn R; Williams, Robert J; Smith, Garry J; Schopflocher, Don P
2017-06-01
The objective of the current study was to examine the possible temporal associations between alcohol misuse and problem gambling symptomatology from adolescence through to young adulthood. Parallel-process latent growth curve modeling was used to examine the trajectories of alcohol misuse and symptoms of problem gambling over time. Data were from a sample of adolescents recruited for the Leisure, Lifestyle, and Lifecycle Project in Alberta, Canada (n = 436), which included 4 assessments over 5 years. There was an average decline in problem gambling symptoms followed by an accelerating upward trend as the sample reached the legal age to gamble. There was significant variation in the rate of change in problem gambling symptoms over time; not all respondents followed the same trajectory. There was an average increase in alcohol misuse over time, with significant variability in baseline levels of use and the rate of change over time. The unconditional parallel process model indicated that higher baseline levels of alcohol misuse were associated with higher baseline levels of problem gambling symptoms. In addition, higher baseline levels of alcohol misuse were associated with steeper declines in problem gambling symptoms over time. However, these between-process correlations did not retain significance when covariates were added to the model, indicating that one behavior was not a risk factor for the other. The lack of mutual influence in the problem gambling symptomatology and alcohol misuse processes suggest that there are common risk factors underlying these two behaviors, supporting the notion of a syndrome model of addiction. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Enhancement of biogas production by co-digestion of potato pulp with cow manure in a CSTR system.
Sanaei-Moghadam, Akbar; Abbaspour-Fard, Mohammad Hossein; Aghel, Hasan; Aghkhani, Mohammad Hossein; Abedini-Torghabeh, Javad
2014-08-01
The anaerobic digestion (AD) process is a well-established method of generating energy from organic wastes, from both environmental and economic perspectives. The purpose of the present study is to evaluate energy production from potato wastes by incorporating cow manure into the process. First, a laboratory pilot for one-stage biogas production was designed and built according to the continuously stirred tank reactor (CSTR) system. The setup was able to automatically control the environmental conditions of the process, including temperature and the duration and rate of stirring. The AD experiment was performed exclusively on co-digestion of potato peel (PP) and cow manure (CM) at three mixing ratios, 20:80, 50:50, and 80:20 (PP:CM), with 0:100 as the control treatment, based on volatile solid (VS) weight and without adding initial inoculum. After a hydraulic retention time (HRT) of 50 days, averages of 193, 256, 348, and 149 norm liters (LN) of methane (kg VS)(-1) were produced for the respective mixing ratios. Statistical analysis shows that these gas productions are significantly different. The average energy determined from the produced methane was about 2.8 kWh (kg VS)(-1), implying a significant energy production potential. The average chemical oxygen demand (COD) removal across treatments was about 61%, showing that wastewater with a high organic load can be treated substantially by the employed pilot. The energy efficiency of 92% also showed optimum control of the process by the pilot.
Heilweil, Victor M.; Solomon, D. Kip; Gardner, Philip M.; Stonestrom, David A.; Constantz, Jim; Ferré, Ty P.A.; Leake, Stanley A.
2007-01-01
Permeable bedrock aquifers in arid regions of the southwestern United States are being used increasingly as a source of water for rapidly growing populations, yet in many areas little is known about recharge processes and amounts available for sustainable development. Environmental tracers were used in this study to investigate infiltration and recharge to the Navajo Sandstone at Sand Hollow in the eastern Mojave Desert of southwestern Utah. Average annual precipitation is about 210 millimeters per year. Tracers included bromide, chloride, deuterium, oxygen-18, and tritium. The basin-wide average recharge rate, based on ground-water chloride mass balance, is about 8 millimeters per year, or 4 percent of precipitation. However, infiltration and recharge are highly variable spatially within Sand Hollow. Recharge primarily occurs both as focused infiltration of runoff from areas of outcropping bedrock and as direct infiltration beneath coarse surficial soils. Locations with higher rates generally have lower vadose-zone and ground-water chloride concentrations, smaller vadose-zone oxygen-18 evaporative shifts, and higher ground-water tritium concentrations. Infiltration rates estimated from vadose-zone tritium concentrations at borehole sites within Sand Hollow range from 1 to more than 57 millimeters per year; rates calculated from average vadose-zone chloride concentrations between land surface and the bottom of the chloride bulge range from 0 to 9 millimeters per year; rates calculated from average vadose-zone chloride concentrations below the chloride bulge range from 0.5 to 15 millimeters per year; and rates calculated from ground-water chloride concentrations range from 3 to 60 millimeters per year. A two-end-member deuterium-mixing model indicates that about 85 percent of ground-water recharge in Sand Hollow occurs in the 50 percent of the basin covered by coarser soils and bedrock. 
Vadose-zone chloride concentrations at individual boreholes represent as much as 12,000 years of accumulation, whereas vadose-zone tritium has only been accumulating during the past 50 years. Environmental tracers at Sand Hollow indicate the possibility of a cyclical recharge pattern from higher infiltration rates earlier in the Holocene to lower rates later in the Holocene, back again to higher infiltration rates during the past 50 years.
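The basin-wide chloride mass-balance estimate quoted above rests on the steady-state relation R = P × Cl_p / Cl_gw, equating the chloride flux delivered by precipitation with the flux carried to the water table. A minimal sketch, with chloride concentrations chosen for illustration to reproduce the reported ~8 mm/yr (they are not the study's measured values):

```python
def recharge_chloride_mass_balance(precip_mm_yr, cl_precip_mg_l, cl_gw_mg_l):
    """Chloride mass-balance recharge estimate: at steady state the
    chloride flux in precipitation (P * Cl_p) equals the flux reaching
    the water table (R * Cl_gw), so R = P * Cl_p / Cl_gw."""
    return precip_mm_yr * cl_precip_mg_l / cl_gw_mg_l

# illustrative numbers only: P ~ 210 mm/yr as reported, with hypothetical
# precipitation and groundwater chloride concentrations
r = recharge_chloride_mass_balance(210.0, 0.4, 10.5)   # ~8 mm/yr
```

Because the groundwater chloride concentration sits in the denominator, higher-recharge locations show lower chloride concentrations, consistent with the spatial pattern described in the abstract.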
Accuracy improvement of the ice flow rate measurements on Antarctic ice sheet by DInSAR method
NASA Astrophysics Data System (ADS)
Shiramizu, Kaoru; Doi, Koichiro; Aoyama, Yuichi
2015-04-01
DInSAR (Differential Interferometric Synthetic Aperture Radar) is an effective tool for measuring the flow rate of slowly flowing ice streams on the Antarctic ice sheet with high resolution. In flow rate measurement by the DInSAR method, a Digital Elevation Model (DEM) is used at two stages of the estimation process: first to remove topographic fringes from InSAR images, and then to project the obtained displacements along the Line-Of-Sight (LOS) direction onto the actual flow direction. ASTER-GDEM, widely used for InSAR processing of polar-region data, has many errors, especially in inland ice sheet areas; these errors yield irregular flow rates and directions. The quality of the DEM therefore has a substantial influence on ice flow rate measurement. In this study, we created a new DEM (resolution 10 m; hereinafter referred to as PRISM-DEM) based on ALOS/PRISM images, and compared PRISM-DEM with ASTER-GDEM. The study area is around Skallen, 90 km south of Syowa Station, in the southern part of Sôya Coast, East Antarctica. For making DInSAR images, we used 13 pairs of ALOS/PALSAR data (Path 633, Rows 571-572) observed during the period from November 23, 2007 through January 16, 2011. PRISM-DEM covering the PALSAR scene was created from nadir and backward view images of ALOS/PRISM (observation date: 2009/1/18) by applying stereo processing with digital mapping equipment; the automatically created primary DEM was then corrected manually to make the final DEM. The number of irregular values of actual ice flow rate was reduced by applying PRISM-DEM compared with ASTER-GDEM. Additionally, an averaged displacement of approximately 0.5 cm was obtained with PRISM-DEM over the outcrop area, where no crustal displacement is considered to have occurred during the recurrence period of ALOS/PALSAR (46 days), while an averaged displacement of approximately 1.65 cm was observed with ASTER-GDEM.
Since displacements over outcrop area are considered to be apparent ones, the average could be a measure of flow rate estimation accuracy by DInSAR. Therefore, it is concluded that the accuracy of the ice flow rate measurement can be improved by using PRISM-DEM. In this presentation, we will show the results of the estimated flow rate of ice streams in the region of interest, and discuss the additional accuracy improvement of this method.
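The LOS-to-flow projection step described above can be sketched as a simple vector operation; the look geometry and flow vector below are hypothetical, not the study's parameters:

```python
import math

def los_to_flow(d_los, los_unit, flow_unit):
    """Project a line-of-sight displacement onto an assumed flow direction:
    d_flow = d_los / (los . flow). The projection becomes unstable when the
    flow direction is nearly perpendicular to the LOS."""
    dot = sum(a * b for a, b in zip(los_unit, flow_unit))
    if abs(dot) < 0.1:
        raise ValueError("flow nearly perpendicular to LOS; projection unstable")
    return d_los / dot

# hypothetical geometry: radar looking 34 deg off-nadir, flow along a
# DEM-derived downslope unit vector (east, north, up components)
inc = math.radians(34.0)
los = (math.sin(inc), 0.0, -math.cos(inc))
flow = (0.94, 0.23, -0.25)
d_flow = los_to_flow(0.05, los, flow)   # 5 cm of LOS motion
```

This is where DEM quality enters the second time: the flow direction is derived from the DEM surface, so DEM errors corrupt `flow` and hence the projected rate even when the interferometric phase is clean.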
Family Support and Early Childhood Education and Care in Cyprus: Existing Policies and Challenges
ERIC Educational Resources Information Center
Rentzou, Konstantina
2018-01-01
Although wide-ranging family support policies are available to Cypriot families, Cyprus is among the countries with the least developed ECEC systems, and the process of addressing ECEC deficits is slow. Although female employment rates are slightly below the EU averages, there is a gap in the availability of childcare and an underinvestment in public…
Falat, Lukas; Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with application of quantitative soft computing prediction models into financial area as reliable and accurate prediction models can be very helpful in management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. Authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate genetic algorithm as an optimizing technique for adapting parameters of ANN which is then compared with standard backpropagation and backpropagation combined with K-means clustering algorithm. Finally, the authors find out that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making the bad decision in decision-making process. PMID:26977450
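The moving-average error-correction idea described above can be sketched independently of the RBF network; the base forecasts below are a toy series with a constant bias, not the USD/CAD data:

```python
def ma_corrected_forecast(base_preds, actuals, window=5):
    """Correct a base model's one-step forecasts by adding the moving
    average of its recent residuals: a sketch of the error-part
    enhancement described for the hybrid network."""
    corrected, residuals = [], []
    for pred, actual in zip(base_preds, actuals):
        # bias estimate from residuals observed so far
        recent = residuals[-window:]
        bias = sum(recent) / len(recent) if recent else 0.0
        corrected.append(pred + bias)
        residuals.append(actual - pred)   # residual known after the fact
    return corrected

# toy exchange-rate series where the base model is consistently 0.1 low
actuals = [1.30, 1.31, 1.32, 1.33, 1.34, 1.35]
base = [a - 0.1 for a in actuals]
out = ma_corrected_forecast(base, actuals)
```

With a systematic bias like this, the correction locks on after one step; on real data the gain depends on how much structure the base model's residuals retain.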
Kiepper, B H; Merka, W C; Fletcher, D L
2008-12-01
Experiments were conducted to compare the effects of tertiary microscreen gap size on the proximate composition and rate of recovery of particulate matter from poultry processing wastewater (PPW). A high-speed vibratory screen was installed within the wastewater treatment area of a southeast US broiler slaughter plant after the existing primary and secondary mechanical rotary screens. Microscreen panels with nominal gap size openings of 212, 106 and 45 μm were investigated. The particulate matter samples recovered were subjected to proximate analysis to determine percent moisture, fat, protein, crude fiber and ash. The average percent wet-weight moisture (%WW) content for all samples was 79.1. The average percent dry matter (%DM) fat, protein, crude fiber and ash were 63.5, 17.5, 4.8 and 1.5, respectively. The mean concentration of total solids (TS) recovered from all microscreen runs was 668 mg/L, which represents a potential additional daily offal recovery rate of 12.1 metric tons (MT) per 3.78 million L (1.0 million gallons US) of PPW. There was no significant difference in the performance of the three microscreen gap sizes with regard to proximate composition or mass of particulate matter recovered.
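The stated recovery rate can be sanity-checked with a short calculation. This assumes, as the figures suggest, that the 12.1 MT value is wet weight reconstructed from the dry total solids concentration and the 79.1% moisture content:

```python
# Back-of-envelope check (assumption: the "12.1 MT" figure is wet-weight offal,
# reconstructed from the dry TS concentration and the 79.1% moisture figure).
ts_mg_per_L = 668            # mean total solids recovered
volume_L = 3.78e6            # 1.0 million US gallons of PPW
moisture_frac = 0.791        # average wet-weight moisture of recovered samples

dry_kg = ts_mg_per_L * 1e-6 * volume_L          # mg/L -> kg/L, times volume
wet_kg = dry_kg / (1 - moisture_frac)           # scale dry matter to wet weight
print(round(wet_kg / 1000, 1))                  # ~12.1 metric tons
```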
Importance sampling studies of helium using the Feynman-Kac path integral method
NASA Astrophysics Data System (ADS)
Datta, S.; Rejcek, J. M.
2018-05-01
In the Feynman-Kac path integral approach the eigenvalues of a quantum system can be computed using the Wiener measure, which is based on Brownian particle motion. In our previous work on such systems we have observed that the Wiener process converges slowly in numerical work for dimensions greater than two, because almost all trajectories escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, we apply it to the Feynman-Kac (FK) path integrals for the ground and first few excited-state energies of He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging used in the past. The best previous calculations from variational computations report precisions of 10^-16 Hartrees, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10^-6 Hartrees or more.
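The basic Feynman-Kac estimator can be illustrated with a toy 1D harmonic oscillator (our own minimal example, not the paper's helium calculation or its importance-sampled GFK measure): the Wiener average Z(T) = ⟨exp(-∫₀ᵀ V(W_t) dt)⟩ decays like exp(-E₀T) at large T, so the ground-state energy falls out of a log-ratio of Z at two times.

```python
import numpy as np

# Toy Feynman-Kac sketch for V(x) = x^2/2 (exact ground-state energy 0.5).
rng = np.random.default_rng(0)
n_paths, n_steps, dt = 10000, 300, 0.01          # paths of total length T = 3.0
steps = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
paths = np.cumsum(steps, axis=1)                 # Brownian trajectories from x = 0
v_int = np.cumsum(0.5 * paths**2 * dt, axis=1)   # running integral of V along each path

z = np.exp(-v_int).mean(axis=0)                  # Z(T) on the time grid
i1, i2 = 149, 299                                # indices for T1 = 1.5, T2 = 3.0
e0 = -(np.log(z[i2]) - np.log(z[i1])) / ((i2 - i1) * dt)
print(e0)  # Monte Carlo estimate, close to the exact value 0.5
```

For a fixed finite T the estimate carries a small bias from excited states in addition to Monte Carlo noise, which is exactly the slow-convergence issue the GFK importance sampling is designed to mitigate.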
Xiang, Jianping; Damiano, Robert J.; Lin, Ning; Snyder, Kenneth V.; Siddiqui, Adnan H.; Levy, Elad I.; Meng, Hui
2016-01-01
Object: Flow diversion via the Pipeline Embolization Device (PED) represents the most recent advancement in endovascular therapy of intracranial aneurysms. This exploratory study aims at a proof of concept for an advanced device-modeling tool in conjunction with computational fluid dynamics (CFD) to evaluate flow modification effects of the PED in real treatment cases. Methods: We performed computational modeling of three PED-treated complex aneurysm cases. Case I had a fusiform vertebral aneurysm treated with a single PED. Case II had a giant internal carotid artery (ICA) aneurysm treated with 2 PEDs. Case III consisted of two tandem ICA aneurysms (a and b) treated by a single PED. Our recently developed high-fidelity virtual stenting (HiFiVS) technique was used to recapitulate the clinical deployment process of PEDs in silico for these three cases. Pre- and post-treatment aneurysmal hemodynamics were analyzed using CFD simulation. Changes in aneurysmal flow velocity, inflow rate, and wall shear stress (WSS) (quantifying flow reduction) and turnover time (quantifying stasis) were calculated and compared with clinical outcome. Results: In Case I (occluded within the first 3 months), the aneurysm experienced the most drastic aneurysmal flow reduction after PED placement: the aneurysmal average velocity, inflow rate and average WSS were decreased by 76.3%, 82.5% and 74.0%, respectively, while the turnover time was increased to 572.1% of its pre-treatment value. In Case II (occluded at 6 months), aneurysmal average velocity, inflow rate and average WSS were decreased by 39.4%, 38.6%, and 59.1%, respectively, and turnover time increased to 163.0%. In Case III, Aneurysm III-a (occluded at 6 months) experienced decreases of 38.0%, 28.4%, and 50.9% in aneurysmal average velocity, inflow rate and average WSS, respectively, and an increase to 139.6% in turnover time, quite similar to Case II.
Surprisingly, the adjacent Aneurysm III-b experienced more substantial flow reduction (decreases of 77.7%, 53.0%, and 84.4% in average velocity, inflow rate and average WSS, respectively, and an increase to 213.0% in turnover time) than Aneurysm III-a, which qualitatively agreed with the angiographic observation at 3-month follow-up. However, Aneurysm III-b remained patent at both 6 months and 9 months. A closer examination of the vascular anatomy of Case III revealed blood draining to the ophthalmic artery off Aneurysm III-b, which may have prevented its complete thrombosis. Conclusion: This proof-of-concept study demonstrates that HiFiVS modeling of flow diverter deployment enables detailed characterization of hemodynamic alteration by PED placement. Post-treatment aneurysmal flow reduction may be correlated with aneurysm occlusion outcome. However, predicting aneurysm treatment outcome by flow diverters also requires consideration of other factors, including vascular anatomy. PMID:26090829
Practical remarks on the heart rate and saturation measurement methodology
NASA Astrophysics Data System (ADS)
Kowal, M.; Kubal, S.; Piotrowski, P.; Staniec, K.
2017-05-01
A surface reflection-based method for measuring heart rate and saturation has been introduced as one having a significant advantage over legacy methods in that it lends itself for use in special applications where a person’s mobility is of prime importance (e.g. during a miner’s work), excluding the use of traditional clips. Then, a complete ATmega1281-based microcontroller platform has been described for performing the computational tasks of signal processing and wireless transmission. In the next section remarks are provided regarding the basic signal processing rules, beginning with raw voltage samples of converted optical signals, their acquisition, storage and smoothing. This section ends with practical remarks demonstrating an exponential dependence between the minimum measurable heart rate and the readout resolution at different sampling frequencies for different cases of averaging depth (in bits). The following section is devoted strictly to the heart rate and hemoglobin oxygenation (saturation) measurement with the use of the presented platform, referenced to measurements obtained with a stationary certified pulse oximeter.
Forde, C G; van Kuijk, N; Thaler, T; de Graaf, C; Martin, N
2013-01-01
The modern food supply is often dominated by a large variety of energy dense, softly textured foods that can be eaten quickly. Previous studies suggest that particular oral processing characteristics such as large bite size and lack of chewing activity contribute to the low satiating efficiency of these foods. To better design meals that promote greater feelings of satiation, we need an accurate picture of the oral processing characteristics of a range of solid food items that could be used to replace softer textures during a normal hot meal. The primary aim of this study was to establish an accurate picture of the oral processing characteristics of a set of solid savoury meal components. The secondary aim was to determine the associations between oral processing characteristics, food composition, sensory properties, and expected satiation. In a within-subjects design, 15 subjects consumed 50 g of 35 different savoury food items over 5 sessions. The 35 foods represented various staples, vegetables and protein-rich foods such as meat and fish. Subjects were video-recorded during consumption, and measures included observed number of bites, number of chews, number of swallows and derived measures such as chewing rate, eating rate, bite size, and oral exposure time. Subjects rated expected satiation for a standard 200 g portion of each food using a 100 mm scale, and the sensory differences between foods were quantified using descriptive analysis with a trained sensory panel. Statistical analysis focussed on the oral processing characteristics and associations between nutritional, sensory and expected satiation parameters of each food. Average number of chews for 50 g of food varied from 27 for mashed potatoes to 488 for tortilla chips. Oral exposure time was highly correlated with the total number of chews, and varied from 27 s for canned tomatoes to 350 s for tortilla chips. Chewing rate was relatively constant, with an overall average of approximately 1 chew/s.
Differences in oral processing were not correlated with any macronutrients specifically. Expected satiation was positively related to protein and to the sensory attributes chewiness and saltiness. Foods that were consumed in smaller bites were chewed more and for longer, and were expected to impart higher satiation. This study shows a large and reliable variation in oral exposure time, number of required chews before swallowing and expected satiation across a wide variety of foods. We conclude that bite size and oral-sensory exposure time could contribute to higher satiation within a meal for equal calories. Copyright © 2012 Elsevier Ltd. All rights reserved.
Adaptive WTA with an analog VLSI neuromorphic learning chip.
Häfliger, Philipp
2007-03-01
In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
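The rate-based limit described above can be sketched with a toy simulation (a deliberate simplification we assume, not the CAVIAR chip's actual circuit rule): when spike timing is averaged over a long window, a coincidence-counting Hebbian update approaches the classical rate rule dw = η · r_in · r_out · T.

```python
import numpy as np

# Toy demonstration: independent Poisson input/output spike trains, binned.
rng = np.random.default_rng(1)
eta, T, dt = 0.001, 100.0, 0.001
bins = int(T / dt)
r_in, r_out = 40.0, 15.0                       # mean firing rates in Hz

s_in = rng.poisson(r_in * dt, bins)            # input spike counts per bin
s_out = rng.poisson(r_out * dt, bins)          # output spike counts per bin

dw_rate = eta * r_in * r_out * T               # classical rate-based Hebbian change
dw_spike = eta * np.sum(s_in * s_out) / dt     # spike-coincidence estimate of the same
print(dw_rate, dw_spike)                       # agree up to Poisson sampling noise
```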
Precise Point Positioning with Partial Ambiguity Fixing.
Li, Pan; Zhang, Xiaohong
2015-06-10
Reliable and rapid ambiguity resolution (AR) is the key to fast precise point positioning (PPP). We propose a modified partial ambiguity resolution (PAR) method, in which an elevation and standard deviation criterion are first used to remove the low-precision ambiguity estimates for AR. Subsequently the success rate and ratio-test are simultaneously used in an iterative process to increase the possibility of finding a subset of decorrelated ambiguities which can be fixed with high confidence. One can apply the proposed PAR method to try to achieve an ambiguity-fixed solution when full ambiguity resolution (FAR) fails. We validate this method using data from 450 stations during DOY 021 to 027, 2012. Results demonstrate the proposed PAR method can significantly shorten the time to first fix (TTFF) and increase the fixing rate. Compared with FAR, the average TTFF for PAR is reduced by 14.9% for static PPP and 15.1% for kinematic PPP. Besides, using the PAR method, the average fixing rate can be increased from 83.5% to 98.2% for static PPP, from 80.1% to 95.2% for kinematic PPP respectively. Kinematic PPP accuracy with PAR can also be significantly improved, compared to that with FAR, due to a higher fixing rate.
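The subset-shrinking idea can be sketched as follows; the function names, the toy ratio test, and the threshold are illustrative assumptions, not the authors' exact algorithm, which also uses elevation screening and decorrelation.

```python
# Hypothetical sketch of partial ambiguity resolution: drop the least precise
# float ambiguities until the remaining subset passes a ratio test.
def partial_ambiguity_fix(ambs, ratio_test, min_subset=4, threshold=3.0):
    """ambs: list of (name, float_value, std).
    ratio_test: callable giving the ratio statistic for a candidate subset."""
    subset = sorted(ambs, key=lambda a: a[2])             # most precise first
    while len(subset) >= min_subset:
        if ratio_test(subset) >= threshold:
            return [(n, round(v)) for n, v, _ in subset]  # fix to nearest integer
        subset = subset[:-1]                              # drop the worst remaining
    return None                                           # fall back to float solution

# Toy ratio test: pretend the statistic improves once noisy ambiguities are removed.
demo = [("N1", 3.02, 0.01), ("N2", -5.98, 0.02), ("N3", 7.45, 0.30),
        ("N4", 1.01, 0.015), ("N5", -2.03, 0.02)]
fake_ratio = lambda s: 4.0 if all(std < 0.1 for _, _, std in s) else 1.2
print(partial_ambiguity_fix(demo, fake_ratio))
```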
Being Sherlock Holmes: Can we sense empathy from a brief sample of behaviour?
Wu, Wenjie; Sheppard, Elizabeth; Mitchell, Peter
2016-02-01
Mentalizing (otherwise known as 'theory of mind') involves a special process that is adapted for predicting and explaining the behaviour of others (targets) based on inferences about targets' beliefs and character. This research investigated how well participants made inferences about an especially apposite aspect of character, empathy. Participants were invited to make inferences of self-rated empathy after watching or listening to an unfamiliar target for a few seconds telling a scripted joke (or answering questions about him/herself, or reading aloud a paragraph of promotional material). Across three studies, participants were good at identifying targets with low and high self-rated empathy but not good at identifying those who were average. Such inferences, especially of high self-rated empathy, seemed to be based mainly on cues in the target's behaviour, presented either in a video, a still photograph or an audio track. However, participants were not as effective in guessing which targets had low or average self-rated empathy from a still photograph showing a neutral pose or from an audio track. We conclude with discussion of the scope and the adaptive value of this inferential ability. © 2016 The British Psychological Society.
Kontosic, I; Vukelić, M; Pancić, M; Kunisek, J
1994-12-01
Physical work load was estimated in a female conveyor-belt worker in a bottling plant. Estimation was based on continuous measurement and on calculation of average heart rate values in three-minute and one-hour periods and during the total measuring period. The thermal component of the heart rate was calculated by means of the corrected effective temperature, for the one-hour periods. The average heart rate at rest was also determined. The work component of the heart rate was calculated by subtraction of the resting heart rate and the heart rate measured at 50 W, using a regression equation. The average estimated gross energy expenditure during the work was 9.6 +/- 1.3 kJ/min corresponding to the category of light industrial work. The average estimated oxygen uptake was 0.42 +/- 0.06 L/min. The average performed mechanical work was 12.2 +/- 4.2 W, i.e. the energy expenditure was 8.3 +/- 1.5%.
Effect of body mass and clothing on decomposition of pig carcasses.
Matuszewski, Szymon; Konwerski, Szymon; Frątczak, Katarzyna; Szafałowicz, Michał
2014-11-01
Carcass mass and carcass clothing are factors of potential high forensic importance. In casework, corpses differ in mass and kind or extent of clothing; hence, a question arises whether methods for post-mortem interval estimation should take these differences into account. Unfortunately, effects of carcass mass and clothing on specific processes in decomposition and related entomological phenomena are unclear. In this article, simultaneous effects of these factors are analysed. The experiment followed a complete factorial block design with four levels of carcass mass (small carcasses 5-15 kg, medium carcasses 15.1-30 kg, medium/large carcasses 35-50 kg, large carcasses 55-70 kg) and two levels of carcass clothing (clothed and unclothed). Pig carcasses (N = 24) were grouped into three blocks, which were separated in time. Generally, carcass mass revealed significant and frequently large effects in almost all analyses, whereas carcass clothing had only minor influence on some phenomena related to the advanced decay. Carcass mass differently affected particular gross processes in decomposition. Putrefaction was more efficient in larger carcasses, which manifested itself through earlier onset and longer duration of bloating. On the other hand, active decay was less efficient in these carcasses, with relatively low average rate, resulting in slower mass loss and later onset of advanced decay. The average rate of active decay showed a significant, logarithmic increase with an increase in carcass mass, but only in these carcasses on which active decay was driven solely by larval blowflies. If a blowfly-driven active decay was followed by active decay driven by larval Necrodes littoralis (Coleoptera: Silphidae), which was regularly found in medium/large and large carcasses, the average rate showed only a slight and insignificant increase with an increase in carcass mass. 
These results indicate that lower efficiency of active decay in larger carcasses is a consequence of a multi-guild and competition-related pattern of this process. Pattern of mass loss in large and medium/large carcasses was not sigmoidal, but rather exponential. The overall rate of decomposition was strongly, but not linearly, related to carcass mass. In a range of low mass decomposition rate increased with an increase in mass, then at about 30 kg, there was a distinct decrease in rate, and again at about 50 kg, the rate slightly increased. Until about 100 accumulated degree-days larger carcasses gained higher total body scores than smaller carcasses. Afterwards, the pattern was reversed; moreover, differences between classes of carcasses enlarged with the progress of decomposition. In conclusion, current results demonstrate that cadaver mass is a factor of key importance for decomposition, and as such, it should be taken into account by decomposition-related methods for post-mortem interval estimation.
Sun, Xiaojie; Zhang, Hongxia; Cheng, Zhaowen
2017-08-01
High concentrations of nitrate-nitrogen (NO₃⁻-N) derived from the ex situ nitrification phase can inhibit methane production during ex situ nitrification and in situ denitrification bioreactor landfill operation. A combined process comprising ex situ simultaneous nitrification-denitrification (SND) in an aged refuse bioreactor (ARB) and in situ denitrification in a fresh refuse bioreactor (FRB) was conducted to reduce the negative effect of high concentrations of NO₃⁻-N. Ex situ SND can be achieved because the NO₃⁻-N concentration can be reduced while the removal rate of ammonium-nitrogen (NH₄⁺-N) remains largely unchanged when the ventilation rate of ARB-A2 is controlled. The average NO₃⁻-N concentrations of effluent were 470 mg/L in the ex situ nitrification ARB-A1 and 186 mg/L in the ex situ SND ARB-A2. The average NH₄⁺-N removal rates of ARB-A1 and ARB-A2 were 98% and 94%, respectively. Based on the experimental data from week 4 to week 30, it is predicted that the NH₄⁺-N concentration in FRB-F1 of the ex situ nitrification and in situ denitrification process would reach 25 mg/L after 63 weeks, and after about 40 weeks for FRB-F2 of the ex situ SND and in situ denitrification process. Ex situ SND and in situ denitrification can improve the methane production of FRB-F2. The lag phase of methane production for FRB-F2 was 11 weeks, significantly shorter than the 15-week phase of FRB-F1 in the ex situ nitrification and in situ denitrification process. A seven-week stabilization phase was required to increase the methane content from 5% to 50% for FRB-F2. Methane content in FRB-F1 did not reach 50% but peaked at 45% after 20 weeks. Copyright © 2017 Elsevier Ltd. All rights reserved.
Statistical process control charts for monitoring military injuries.
Schuh, Anna; Canham-Chervak, Michelle; Jones, Bruce H
2017-12-01
An essential aspect of an injury prevention process is surveillance, which quantifies and documents injury rates in populations of interest and enables monitoring of injury frequencies, rates and trends. To drive progress towards injury reduction goals, additional tools are needed. Statistical process control charts, a methodology that has not been previously applied to Army injury monitoring, capitalise on existing medical surveillance data to provide information to leadership about injury trends necessary for prevention planning and evaluation. Statistical process control Shewhart u-charts were created for 49 US Army installations using quarterly injury medical encounter rates, 2007-2015, for active duty soldiers obtained from the Defense Medical Surveillance System. Injuries were defined according to established military injury surveillance recommendations. Charts display control limits three standard deviations (SDs) above and below an installation-specific historical average rate determined using 28 data points, 2007-2013. Charts are available in Army strategic management dashboards. From 2007 to 2015, Army injury rates ranged from 1254 to 1494 unique injuries per 1000 person-years. Installation injury rates ranged from 610 to 2312 injuries per 1000 person-years. Control charts identified four installations with injury rates exceeding the upper control limits at least once during 2014-2015, rates at three installations exceeded the lower control limit at least once and 42 installations had rates that fluctuated around the historical mean. Control charts can be used to drive progress towards injury reduction goals by indicating statistically significant increases and decreases in injury rates. Future applications to military subpopulations, other health outcome metrics and chart enhancements are suggested. Published by the BMJ Publishing Group Limited. 
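The Shewhart u-chart construction described above can be sketched directly; the historical rates and the exposure below are invented for illustration, not the Army data.

```python
import math

# u-chart for an injury rate: center line from historical quarterly rates,
# with 3-SD limits for a new quarter of exposure n (thousands of person-years).
def u_chart_limits(historical_rates, person_years_thousands):
    u_bar = sum(historical_rates) / len(historical_rates)   # historical average rate
    half_width = 3 * math.sqrt(u_bar / person_years_thousands)
    return u_bar - half_width, u_bar, u_bar + half_width

# Assumed example: 28 historical quarters averaging 1350 injuries per 1000
# person-years, and a new quarter with 8000 person-years of exposure.
hist = [1350.0] * 28
lcl, cl, ucl = u_chart_limits(hist, 8.0)
print(round(lcl, 1), cl, round(ucl, 1))
```

A quarter whose observed rate falls outside (lcl, ucl) signals a statistically meaningful increase or decrease rather than routine fluctuation.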
NASA Astrophysics Data System (ADS)
Sadeghimeresht, E.; Markocsan, N.; Nylén, P.
2016-12-01
Selection of the thermal spray process is the most important step toward a proper coating solution for a given application, as important coating characteristics such as adhesion and microstructure are highly dependent on it. In the present work, a process-microstructure-properties-performance correlation study was performed to identify the main characteristics and corrosion performance of coatings produced by different thermal spray techniques: high-velocity air fuel (HVAF), high-velocity oxy fuel (HVOF), and atmospheric plasma spraying (APS). Previously optimized HVOF and APS process parameters were used to deposit Ni, NiCr, and NiAl coatings and compare them with HVAF-sprayed coatings with randomly selected process parameters. As the HVAF process showed the best coating characteristics and corrosion behavior, a few process parameters, such as feed rate and standoff distance (SoD), were investigated to systematically optimize the HVAF coatings in terms of low porosity and high corrosion resistance. The Ni and NiAl coatings with lower porosity and better corrosion behavior were obtained at an average SoD of 300 mm and a feed rate of 150 g/min. The NiCr coating sprayed at a SoD of 250 mm and a feed rate of 75 g/min showed the highest corrosion resistance among all investigated samples.
Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis G; Atkins, David C; Narayanan, Shrikanth S
2015-01-01
The technology for evaluating patient-provider interactions in psychotherapy-observational coding-has not changed in 70 years. It is labor-intensive, error prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally-derived empathy ratings were evaluated against human ratings for each provider. Computationally-derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies.
Description of waste pretreatment and interfacing systems dynamic simulation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garbrick, D.J.; Zimmerman, B.D.
1995-05-01
The Waste Pretreatment and Interfacing Systems Dynamic Simulation Model was created to investigate the required pretreatment facility processing rates for both high-level and low-level waste so that the vitrification of tank waste can be completed according to the milestones defined in the Tri-Party Agreement (TPA). In order to achieve this objective, the processes upstream and downstream of the pretreatment facilities must also be included. The simulation model starts with retrieval of tank waste and ends with vitrification for both low-level and high-level wastes. This report describes the results of three simulation cases: one based on suggested average facility processing rates, one with facility rates determined so that approximately 6 new DSTs are required, and one with facility rates determined so that approximately no new DSTs are required. It appears, based on the simulation results, that reasonable facility processing rates can be selected so that no new DSTs are required by the TWRS program. However, this conclusion must be viewed with respect to the modeling assumptions, described in detail in the report. Also included in the report, in an appendix, are results of two sensitivity cases: one with glass plant water recycle streams recycled versus not recycled, and one employing the TPA SST retrieval schedule versus a more uniform SST retrieval schedule. Both recycling and the retrieval schedule appear to have a significant impact on overall tank usage.
NASA Astrophysics Data System (ADS)
Wiryanta, I. K. E. H.; Adiaksa, I. M. A.
2018-01-01
The purpose of this research was to investigate the temperature performance of a tube-and-fin car radiator experimentally and numerically. The experimental setup consisted of a reservoir water tank, a heater, a pump to circulate hot water to the radiator, and a cooling fan. The hot-water mass flow rate was 0.486 kg/s, and the cooling air velocity of the fan was 1 m/s. The heat transfer rate and the effectiveness of the radiator were investigated. The results showed that the heat transfer rate rejected by the radiator tended to increase over time, with an average of 3974.3 W. The maximum heat transfer rate was 4680 W, obtained at 6 minutes. The effectiveness of the radiator (ε) also tended to increase over time, with an average of ε = 0.3; the maximum effectiveness of 0.35 was obtained at 12 minutes. The numerical research was conducted using the CFD method: the geometry and meshing were created in ANSYS Workbench and the post-processing was done in Fluent. The simulation results agreed with the experimental data. The air-side temperatures of the radiator are about 45 °C.
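The effectiveness figure can be illustrated with a rough ε = Q/Q_max calculation; the inlet temperatures and the air-side capacity rate below are assumptions chosen for illustration, since the abstract does not report them.

```python
# Rough effectiveness check: epsilon = Q / Q_max, Q_max = C_min * (T_hot_in - T_cold_in).
m_dot_hot, cp_water = 0.486, 4186.0   # kg/s, J/(kg*K) -- hot-water side
C_hot = m_dot_hot * cp_water          # ~2034 W/K, water-side capacity rate
Q_avg = 3974.3                        # W, average heat transfer rate from the study

# Assumed values: the air side is the minimum-capacity stream with C_min ~ 165 W/K,
# and inlet temperatures of 110 C (water) and 30 C (air).
C_min = 165.0
q_max = C_min * (110.0 - 30.0)        # maximum possible heat transfer, 13200 W
print(round(Q_avg / q_max, 2))        # effectiveness on the order of 0.3
```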
NASA Astrophysics Data System (ADS)
Kononenko, Taras V.; Freitag, Christian; Sovyk, Dmitry N.; Lukhter, Alexander B.; Skvortsov, Konstantin V.; Konov, Vitaly I.
2018-04-01
Percussion drilling of a Ti-based alloy with 8 ps laser pulses was investigated with emphasis on the influence of the pulse repetition rate (1-300 kHz) on the drilling process. Heat accumulation is found to be responsible for simultaneous significant rise of the average drilling rate, the occurrence of a rim around the hole entrance, as well as noticeable surface oxidation. This can be observed as soon as the repetition rate exceeds a certain critical value, which depends on the pulse energy applied. Dramatic rise of the recast layer thickness inside the growing hole was revealed for the limited range of the repetition rates around the lower boundary of the heat accumulation regime. Possible origins of this phenomenon are discussed.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the Postal Service files its notice of rate adjustment and dividing the sum by 12 (Recent Average... values immediately preceding the Recent Average and dividing the sum by 12 (Base Average). Finally, the full year limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1...
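The calculation this fragment describes can be worked through with invented monthly index values: average the most recent 12 values, average the 12 values before those, then divide and subtract 1.

```python
# Worked example of the full year limitation (index values are invented).
cpi = [100 + 0.2 * i for i in range(24)]      # 24 monthly index values

base_average = sum(cpi[:12]) / 12             # older 12 months (Base Average)
recent_average = sum(cpi[12:]) / 12           # most recent 12 months (Recent Average)
full_year_limitation = recent_average / base_average - 1
print(round(full_year_limitation * 100, 2), "%")
```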
A not so happy day after all: excess death rates on birthdays in the U.S.
Peña, Pablo A
2015-02-01
This study estimates average excess death rates on and around birthdays, and explores differences between birthdays falling on weekends and birthdays falling on weekdays. Using records from the U.S. Social Security Administration for 25 million people who died during the period from 1998 to 2011, average excess death rates are estimated controlling for seasonality of births and deaths. The average excess death rate on birthdays is 6.7% (p < 0.0001). No evidence is found of dips in average excess death rates in a ±10 day neighborhood around birthdays that could offset the spikes on birthdays. Significant differences are found between age groups and between weekend and weekday birthdays. Younger people have greater average excess death rates on birthdays, reaching up to 25.4% (p < 0.0001) for ages 20-29. Younger people also show the largest differences between average excess death rates on weekend birthdays and weekday birthdays, reaching up to 64.5 percentage points (p = 0.0063) for ages 1-9. Over the 13-year period analyzed, the estimated excess deaths on birthdays are 4590. Copyright © 2014 Elsevier Ltd. All rights reserved.
Birdwell, Justin; Cook, Robert L; Thibodeaux, Louis J
2007-03-01
Resuspension of contaminated sediment can lead to the release of toxic compounds to surface waters, where they are more bioavailable and mobile. Because the timeframe of particle resettling during such events is shorter than that needed to reach equilibrium, a kinetic approach is required for modeling the release process. Owing to the current inability of common theoretical approaches to predict site-specific release rates, empirical algorithms incorporating the phenomenological assumption of biphasic (fast and slow) release dominate the descriptions of nonpolar organic chemical release in the literature. Two first-order rate constants and one fraction are sufficient to characterize practically all of the data sets studied. These rate constants were compared to theoretical model parameters and functionalities, including chemical properties of the contaminants and physical properties of the sorbents, to determine whether the trends incorporated into the hindered diffusion model are consistent with the parameters used in curve fitting. The results did not correspond to the parameter dependence of the hindered diffusion model. Desorption rate constants, for either fast or slow release, showed no dependence on K(OC) or aqueous solubility over six and seven orders of magnitude, respectively. The same was observed for aqueous diffusivity and sediment fraction organic carbon. The distribution of kinetic rate constant values was approximately log-normal, ranging from 0.1 to 50 d-1 for the fast release (average approximately 5 d-1) and 0.0001 to 0.1 d-1 for the slow release (average approximately 0.03 d-1). The implications of these findings for laboratory studies, theoretical desorption process mechanisms, and water quality modeling needs are presented and discussed.
Dykes, Thomas M; Bhargavan-Chatfield, Mythreyi; Dyer, Raymond B
2015-02-01
To establish 3 performance benchmarks for intravenous contrast extravasation during CT examinations (extravasation frequency, distribution of extravasation volumes, and severity of injury), and to evaluate the effectiveness of implementing practice quality improvement (PQI) methodology in improving performance on these 3 benchmarks. The Society of Abdominal Radiology and the ACR developed a registry collecting data on contrast extravasation events. The project includes a PQI initiative allowing for process improvement. As of December 2013, a total of 58 radiology practices have participated in this project, and 32 practices have completed the 2-cycle PQI. There were a total of 454,497 contrast-enhanced CT exams and 1,085 extravasation events. The average extravasation rate is 0.24%, and the median extravasation rate is 0.21%. Most extravasations (82.9%) were between 10 mL and 99 mL. The majority of injuries, 94.6%, were mild in severity, with 4.7% moderate and 0.8% severe. Data from practices that completed the PQI process showed a change in the average extravasation rate from 0.28% in the first 6 months to 0.23% in the second 6 months, and the median extravasation rate dropped from 0.25% to 0.16%; neither change was statistically significant. The distribution of extravasation volumes and the severity of injury did not change between the first and second measurement periods. National performance benchmarks for contrast extravasation rate, distribution of volumes of extravasate, and distribution of severity of injury are established through this multi-institutional practice registry. The application of PQI failed to have a statistically significant positive impact on any of the 3 benchmarks. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Rust, Nicole C.; DiCarlo, James J.
2012-01-01
While popular accounts suggest that neurons along the ventral visual processing stream become increasingly selective for particular objects, this appears at odds with the fact that inferior temporal cortical (IT) neurons are broadly tuned. To explore this apparent contradiction, we compared processing in two ventral stream stages (V4 and IT) in the rhesus macaque monkey. We confirmed that IT neurons are indeed more selective for conjunctions of visual features than V4 neurons, and that this increase in feature conjunction selectivity is accompanied by an increase in tolerance (“invariance”) to identity-preserving transformations (e.g. shifting, scaling) of those features. We report here that V4 and IT neurons are, on average, tightly matched in their tuning breadth for natural images (“sparseness”), and that the average V4 or IT neuron will produce a robust firing rate response (over 50% of its peak observed firing rate) to ~10% of all natural images. We also observed that sparseness was positively correlated with conjunction selectivity and negatively correlated with tolerance within both V4 and IT, consistent with selectivity-building and invariance-building computations that offset one another to produce sparseness. Our results imply that the conjunction-selectivity-building and invariance-building computations necessary to support object recognition are implemented in a balanced fashion to maintain sparseness at each stage of processing. PMID:22836252
39 CFR 3010.22 - Calculation of less than annual limitation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Postal Service files its notice of rate adjustment and dividing the sum by 12 (Recent Average). The partial year limitation is then calculated by dividing the Recent Average by the Recent Average from the most recent previous notice of rate adjustment (Previous Recent Average) applicable to each affected...
39 CFR 3010.22 - Calculation of less than annual limitation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Postal Service files its notice of rate adjustment and dividing the sum by 12 (Recent Average). The partial year limitation is then calculated by dividing the Recent Average by the Recent Average from the most recent previous notice of rate adjustment (Previous Recent Average) applicable to each affected...
39 CFR 3010.22 - Calculation of less than annual limitation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Postal Service files its notice of rate adjustment and dividing the sum by 12 (Recent Average). The partial year limitation is then calculated by dividing the Recent Average by the Recent Average from the most recent previous notice of rate adjustment (Previous Recent Average) applicable to each affected...
Nonintrusive performance measurement of a gas turbine engine in real time
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeSilva, Upul P.; Claussen, Heiko
Performance of a gas turbine engine is monitored by computing a mass flow rate through the engine. Acoustic time-of-flight measurements are taken between acoustic transmitters and receivers in the flow path of the engine. The measurements are processed to determine average speeds of sound and gas flow velocities along those lines-of-sound. A volumetric flow rate in the flow path is computed using the gas flow velocities together with a representation of the flow path geometry. A gas density in the flow path is computed using the speeds of sound and a measured static pressure. The mass flow rate is calculated from the gas density and the volumetric flow rate.
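The per-path processing and the final step can be sketched compactly. The reciprocal transit-time relations below are the standard formulas for a single acoustic path (the abstract does not give its exact equations, so this is an illustrative sketch; function names and example values are ours):

```python
def path_speeds(path_length_m, t_downstream_s, t_upstream_s):
    """Standard reciprocal time-of-flight relations for one acoustic path:
    speed of sound c and along-path gas velocity v from the two transit times."""
    c = (path_length_m / 2.0) * (1.0 / t_downstream_s + 1.0 / t_upstream_s)
    v = (path_length_m / 2.0) * (1.0 / t_downstream_s - 1.0 / t_upstream_s)
    return c, v

def mass_flow_rate(density_kg_m3, volumetric_flow_m3_s):
    """Final step described in the abstract: mass flow (kg/s) =
    gas density (kg/m^3) x volumetric flow rate (m^3/s)."""
    return density_kg_m3 * volumetric_flow_m3_s

# Illustration: a 1 m path with c = 340 m/s and v = 40 m/s gives
# transit times of 1/380 s (downstream) and 1/300 s (upstream).
c, v = path_speeds(1.0, 1.0 / 380.0, 1.0 / 300.0)
print(round(c, 3), round(v, 3))   # 340.0 40.0
print(mass_flow_rate(0.8, 500.0))  # 400.0
```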
NASA Astrophysics Data System (ADS)
Wahyudi, A'an. J.; Wada, Shigeki; Aoki, Masakazu; Hama, Takeo
2015-06-01
Gaetice depressus is one of the most dominant macrozoobenthos species on the boulder shores of intertidal coastal ecosystems in Japan. Previous studies have recorded its high density and biomass, and it is consequently thought to be one of the more important species in the organic matter flow of boulder shores, especially through the food web. In this study, some taxonomic problems related to G. depressus were addressed, and the autecology of G. depressus and its role in ecological processes of the intertidal ecosystem, such as organic matter flow, were investigated. A literature review was conducted to clarify the taxonomic description, resolve inconsistencies in the scientific name, and establish the life history. Seasonal changes in density, morphological pattern, and population structure were determined from data obtained at Ebisu Island, Japan. The role of G. depressus was then assessed by estimating the intake and emittance fluxes of organic carbon and nitrogen through the ingestion and egestion processes on the boulder shores of Ebisu Island. A feeding rate experiment using the catch-release-recapture method was conducted to estimate the intake flux, and a defecation rate experiment, in which individuals of G. depressus were caught and incubated in the laboratory, was conducted to estimate the emittance flux. The feeding rate, measured as the speed of diet consumption by G. depressus, was about 12.6 mg ind-1 h-1. Given the average density, the intake flux through feeding is estimated at 25.2 mgC m-2 h-1 and 2.6 mgN m-2 h-1. G. depressus egested fecal pellets at a rate of 5.4 mg ind-1 h-1, and the average emittance flux through fecal pellet egestion is estimated at 5.6 mgC m-2 h-1 and 0.7 mgN m-2 h-1.
It can therefore be estimated that about 25% of the organic matter in the diet is egested as fecal pellets, meaning that about 75% of the intake flux of organic carbon and nitrogen is assimilated by G. depressus. The intake flux also drives a high turnover of primary producers: the total population of G. depressus is estimated to consume about 18.4% of the primary producers on average throughout the year, giving a turnover time of the primary producers through G. depressus consumption of about five days.
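The ~25% egestion figure can be cross-checked directly from the fluxes reported in the abstract (a simple ratio; the numbers are from the abstract, the variable names are ours):

```python
# Intake and emittance fluxes reported in the abstract (mg m^-2 h^-1)
intake_C, intake_N = 25.2, 2.6
egested_C, egested_N = 5.6, 0.7

frac_C = egested_C / intake_C  # fraction of carbon intake egested
frac_N = egested_N / intake_N  # fraction of nitrogen intake egested
print(round(frac_C, 2), round(frac_N, 2))  # 0.22 0.27 -> roughly 25% egested
```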
The Safe Yield and Climatic Variability: Implications for Groundwater Management.
Loáiciga, Hugo A
2017-05-01
Methods for calculating the safe yield are evaluated in this paper using a high-quality and long historical data set of groundwater recharge, discharge, extraction, and precipitation in a karst aquifer. Consideration is given to the role that climatic variability has on the determination of a climatically representative period with which to evaluate the safe yield. The methods employed to estimate the safe yield are consistent with its definition as a long-term average extraction rate that avoids adverse impacts on groundwater. The safe yield is a useful baseline for groundwater planning; yet, it is herein shown that it is not an operational rule that works well under all climatic conditions. This paper shows that due to the nature of dynamic groundwater processes it may be most appropriate to use an adaptive groundwater management strategy that links groundwater extraction rates to groundwater discharge rates, thus achieving a safe yield that represents an estimated long-term sustainable yield. An example of the calculation of the safe yield of the Edwards Aquifer (Texas) demonstrates that it is about one-half of the average annual recharge. © 2016, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Li, Jingwei; Bai, Xiaolong; Li, Yanlei; Ban, Boyuan; Chen, Jian
2015-12-01
The effect of Ga addition on alloy macrostructure, morphology, and recovery rate of primary Si during the Al-Si-Ga solvent refining process of silicon was studied in this work. The addition of Ga to the Al-Si alloy changes the morphology of the primary Si. The average plate thickness of the primary Si increases with increasing Ga content. With increasing Ga content, the average plate length of the primary Si crystals becomes larger when the Ga content is less than 5% in the Al-30%Si-xGa alloy, but smaller when the Ga content exceeds 5%. The Al-Si-Ga alloys consist of three phases: primary Si, GaxAl1-x, and (α-Al+Si+β-Ga) eutectic. (111) is the preferred growth surface of the plate-like primary Si. The recovery rate of the primary Si increases with increasing Ga content; when the Ga content is increased to 20% in the Al-30%Si-xGa alloy, the recovery rate of the primary Si rises to 50.41%, higher than that of the Al-30%Si alloy.
Cadmium concentrations in tobacco and tobacco smoke
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherer, G.; Barkemeyer, H.
The amount of cadmium in tobacco depends on the variety and origin of the plant as well as on the analytical method used to determine cadmium. In the literature, cadmium concentrations in tobacco of between 0.5 and 5 ppm are reported. Modern German cigarette tobacco contains about 0.5-1.5 micrograms cadmium/cigarette. Of importance for the smoker is the amount of the metal in the mainstream smoke. The cadmium level in the mainstream smoke of modern cigarettes is reduced by means of filters and other construction features. The average Cd value of German filter cigarettes is less than 0.1 microgram/cigarette in mainstream smoke. An average daily intake of about 1 microgram cadmium by smoking 20 cigarettes can be calculated on the basis of an experimentally determined pulmonary retention rate of 50%. Pulmonary resorption rates relevant to cadmium uptake by smoking are discussed. It can be assumed that cadmium uptake from smoking modern cigarettes has been reduced because of modifications in tobacco processing and cigarette construction in the last few decades.
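The 1 microgram/day estimate follows directly from the figures in the abstract (values from the abstract; variable names are ours):

```python
cigarettes_per_day = 20
cd_mainstream_ug_per_cig = 0.1  # upper bound for German filter cigarettes
pulmonary_retention = 0.5       # experimentally determined retention rate, 50%

# Daily retained cadmium dose from smoking 20 cigarettes
daily_intake_ug = cigarettes_per_day * cd_mainstream_ug_per_cig * pulmonary_retention
print(round(daily_intake_ug, 2))  # 1.0, matching the abstract's ~1 microgram/day
```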
Mesoscale elucidation of laser-assisted chemical deposition of Sn nanostructured electrodes
NASA Astrophysics Data System (ADS)
Liu, Zhixiao; Deng, Biwei; Cheng, Gary J.; Deng, Huiqiu; Mukherjee, Partha P.
2015-06-01
Nanostructured tin (Sn) is a promising high-capacity electrode for improved performance in lithium-ion batteries for electric vehicles. In this work, Sn nanoisland growth for nanostructured electrodes assisted by pulse laser irradiation has been investigated based on a mesoscale modeling formalism. The influence of pertinent processing conditions, such as pulse duration, heating/cooling rates, and atom flux, on the Sn nanostructure formation is specifically considered. The interaction between the adsorbed atom and the substrate, represented by the adatom diffusion barrier, is carefully studied. It is found that the diffusion barrier predominantly affects the distribution of Sn atoms. For both α-Sn and β-Sn, the average coordination number is larger than 3 when the diffusion barrier equals 0.15 eV, and it decreases as the diffusion barrier increases. The substrate temperature, which is determined by the heating/cooling rates and the pulse duration, can also affect the formation of Sn nanoislands. For α-Sn, when low heating/cooling rates are applied, nanoislands cannot form if the diffusion barrier is larger than 0.35 eV.
Bedrock river erosion measurements and modelling along a river of the Frontal Himalaya
NASA Astrophysics Data System (ADS)
Lave, Jerome; Dubille, Matthieu
2017-04-01
River incision is a key process in mountain denudation and therefore in landscape evolution models. Despite its importance, most incision models for mountain rivers rely on simplified or rather empirical relations, and generally consider only annual average values of water discharge and sediment flux. In contrast, very few studies consider mechanistic models at the timescale of a flood and try to bridge the gap between experimental or theoretical approaches and long-term river incision studies. In this contribution, we present observations made during 7 monsoon seasons of fluvial bedrock erosion along the Bakeya river across the Frontal Himalaya in Central Nepal. Along its lower gorge, this river incises an alternation of indurated sandstone and less resistant claystone at Holocene rates larger than 10 mm/yr. More importantly, its upper drainage mostly runs through non-cohesive conglomerate, which, in this specific setting, allows the bedload characteristics and instantaneous fluxes to be estimated, a prerequisite for testing mechanistic models of fluvial erosion. During the study period, we monitored and documented channel bank erosion in order to understand the amplitude of the erosion processes and their occurrence in relation to hydrology, and to test time-integrated models of erosion. Besides hydrologic monitoring, erosion measurements were threefold: (1) at the scale of the whole monsoon, plucking and block removal by repeated photo surveys of a 400 m long channel reach; (2) detailed microtopographic surveys of channel bedrock elevation along a few sandstone bars to document their abrasion; (3) real-time measurement of fluvial bedrock wear to document erosion timing using a new erosion sensor. Results indicate that: 1.
Erosion is highly dependent on rock resistance, but on average block detachment and removal is a more efficient process than bedrock attrition, and operates at a rate that permits channel bank downcutting to keep pace with the Holocene uplift rate. 2. Both block detachment and attrition clearly increase with fluvial shear stress, but non-linearly, in particular through the existence of a minimum threshold. As a result, bank erosion occurs during only a few hours per year, during short and very high flood events, which questions the use of average discharge (or drainage area) in many bedrock erosion models. We then propose a semi-physical model of sandstone bar abrasion based on discharge history (HEC-RAS modelling), a Rouse suspension model, and experimental measurements of the dependence of abrasion rate on impacting particle size. This model predicts well the timing and amplitude of both real-time and monsoon-averaged abrasion along the surveyed sandstone bars. This first validation of a bank erosion model opens broad perspectives for future work on channel bottom incision modelling using physical models of erosion and their time- and gravel-size integration, with the objective of introducing more physical rules into landscape evolution models.
NASA Astrophysics Data System (ADS)
Teitelbaum, Heshel; Caridade, Pedro J. S. B.; Varandas, António J. C.
2004-06-01
Classical trajectory calculations using the MERCURY/VENUS code have been carried out on the H+O2 reactive system using the DMBE-IV potential energy surface. The vibrational quantum number and the temperature were selected over the ranges v=0 to 15, and T=300 to 10 000 K, respectively. All other variables were averaged. Rate constants were determined for the energy transfer process, H+O2(v)-->H+O2(v''), for the bimolecular exchange process, H+O2(v)-->OH(v')+O, and for the dissociative process, H+O2(v)-->H+O+O. The dissociative process appears to be a mere extension of the process of transferring large amounts of energy. State-to-state rate constants are given for the exchange reaction, and they are in reasonable agreement with previous results, while the energy transfer and dissociative rate constants have never been reported previously. The lifetime distributions of the HO2 complex, calculated as a function of v and temperature, were used as a basis for determining the relative contributions of various vibrational states of O2 to the thermal rate coefficients for recombination at various pressures. This novel approach, based on the complex's ability to survive until it collides in a secondary process with an inert gas, is used here for the first time. Complete falloff curves for the recombination of H+O2 are also calculated over a wide range of temperatures and pressures. The combination of the two separate studies results in pressure- and temperature-dependent rate constants for H+O2(v)(+Ar)⇄HO2(+Ar). It is found that, unlike the exchange reaction, vibrational and rotational-translational energy are liabilities in promoting recombination.
Scaling of number, size, and metabolic rate of cells with body size in mammals.
Savage, Van M; Allen, Andrew P; Brown, James H; Gillooly, James F; Herman, Alexander B; Woodruff, William H; West, Geoffrey B
2007-03-13
The size and metabolic rate of cells affect processes from the molecular to the organismal level. We present a quantitative, theoretical framework for studying relationships among cell volume, cellular metabolic rate, body size, and whole-organism metabolic rate that helps reveal the feedback between these levels of organization. We use this framework to show that average cell volume and average cellular metabolic rate cannot both remain constant with changes in body size because of the well known body-size dependence of whole-organism metabolic rate. Based on empirical data compiled for 18 cell types in mammals, we find that many cell types, including erythrocytes, hepatocytes, fibroblasts, and epithelial cells, follow a strategy in which cellular metabolic rate is body size dependent and cell volume is body size invariant. We suggest that this scaling holds for all quickly dividing cells, and conversely, that slowly dividing cells are expected to follow a strategy in which cell volume is body size dependent and cellular metabolic rate is roughly invariant with body size. Data for slowly dividing neurons and adipocytes show that cell volume does indeed scale with body size. From these results, we argue that the particular strategy followed depends on the structural and functional properties of the cell type. We also discuss consequences of these two strategies for cell number and capillary densities. Our results and conceptual framework emphasize fundamental constraints that link the structure and function of cells to that of whole organisms.
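The incompatibility of body-size-invariant cell volume and body-size-invariant cellular metabolic rate follows from a two-line allometric argument. A sketch in standard notation (the symbols are ours; the 3/4-power scaling is the well-known whole-organism relation the authors invoke):

```latex
B = N_c\, b_c \propto M^{3/4}, \qquad M \propto N_c\, v_c \quad (\text{constant tissue density}),
```

where $B$ is whole-organism metabolic rate, $N_c$ the number of cells, $b_c$ the average cellular metabolic rate, $v_c$ the average cell volume, and $M$ body mass. If $v_c$ is invariant, then $N_c \propto M$ and hence $b_c \propto M^{-1/4}$; if instead $b_c$ is invariant, then $N_c \propto M^{3/4}$ and $v_c \propto M^{1/4}$. Both quantities therefore cannot remain constant simultaneously.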
Holder, Simon J; Achilleos, Mariliz; Jones, Richard G
2006-09-27
In this communication, we demonstrate that polymerization in a chiral solvent can affect the molecular weight distribution of the product by perturbing the balance of the P and M helical screw senses of the growing chains. Specifically, for the Wurtz-type synthesis of polymethylphenylsilane (PMPS) in either (R)- or (S)-limonene, the weight-average molecular weight of the products (average Mw = 80 000) was twice that of PMPS synthesized in (R/S)-limonene (average Mw = 39 200). Perturbation of the helical segmentation along the polymer chains leads to a reduction in the rate of occurrence of a key termination step. This is the first time that a chiral solvent has been demonstrated to have such an effect on a polymerization process, affecting molecular weight parameters rather than tacticity.
All solid state mid-infrared dual-comb spectroscopy platform based on QCL technology
NASA Astrophysics Data System (ADS)
Hugi, Andreas; Geiser, Markus; Villares, Gustavo; Cappelli, Francesco; Blaser, Stephane; Faist, Jérôme
2015-01-01
We develop a spectroscopy platform for industrial applications based on semiconductor quantum cascade laser (QCL) frequency combs. The platform's key features are an unmatched combination of a bandwidth of 100 cm-1, a resolution of 100 kHz, acquisition times of tens to hundreds of μs, and size and robustness, opening doors to previously unreachable markets. The sensor can be built extremely compact and robust, since the laser source is an all-electrically pumped semiconductor optical frequency comb and no mechanical elements are required. However, the parallel acquisition of dual-comb spectrometers comes at the price of enormous data rates. For system scalability, robustness, and optical simplicity we use free-running QCL combs, so no complicated optical locking mechanisms are required. To reach high signal-to-noise ratios, we develop an algorithm based on a combination of coherent and non-coherent averaging, specifically optimized for free-running, small-footprint, and therefore high-repetition-rate comb sources. As a consequence, our system generates data rates of up to 3.2 GB/s, which must be reduced by several orders of magnitude in real time to be useful for spectral fitting algorithms. We present the development of a data-treatment solution that reaches a single-channel throughput of 22% on a standard laptop computer; on a state-of-the-art desktop computer, the throughput increases to 43%. Combined with a data-acquisition board, this forms a stand-alone data processing unit, allowing real-time industrial process observation and continuous averaging to achieve the highest signal fidelity.
The spreading time in SIS epidemics on networks
NASA Astrophysics Data System (ADS)
He, Zhidong; Van Mieghem, Piet
2018-03-01
In a Susceptible-Infected-Susceptible (SIS) process, we investigate the spreading time Tm, the time at which the number of infected nodes in the metastable state is first reached, starting from the outbreak of the epidemic. We observe that the spreading time Tm resembles a lognormal distribution, though with different deep tails, for both Markovian and non-Markovian infection processes, which implies that the spreading time can be very long with relatively high probability. In addition, we show that a stronger virus, with a higher effective infection rate τ or earlier timing of infection attempts, does not always lead to a shorter average spreading time E[Tm]. We numerically demonstrate that the average spreading time E[Tm] in the complete graph and the star graph scales logarithmically with the network size N for a fixed fraction of infected nodes in the metastable state.
NASA Astrophysics Data System (ADS)
Obrosova, N. K.; Shananin, A. A.
2015-04-01
A production model allowing for a working capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by an attempt to analyze the functioning problems of poorly competitive macroeconomic structures. The model is formalized as a Bellman equation, for which a closed-form solution is found. The stochastic process of product stock variations is proved to be ergodic, and its final probability distribution is found. Expressions for the average production load and the average product stock are derived by analyzing this stochastic process. A system of model equations relating the model variables to official statistical parameters is derived. The model is identified using data from the Fiat and KAMAZ companies. The influence of the credit interest rate on the assessment of the firm's market value and on the production load level is analyzed using comparative statics methods.
NASA Astrophysics Data System (ADS)
Bondurant, A. C.; Arp, C. D.; Jones, B. M.; Shur, Y.; Daanen, R. P.
2017-12-01
Thermokarst lakes are a dominant landform shaping landscapes and impacting permafrost on the Arctic Coastal Plain (ACP) of northern Alaska, a region of continuous permafrost. Here lakes cover greater than 20% of the landscape and drained lake basins cover an additional 50 to 60% of the landscape. The formation, expansion, and drainage of thaw lakes have been described by some researchers as part of a natural cycle that has reworked the ACP landscape during the Holocene. Yet the factors and processes controlling contemporary thermokarst lake expansion remain poorly described. This study focuses on the factors controlling expansion rates of thermokarst lakes in three ACP regions that vary in landscape history, ground-ice content, and lake morphology (i.e. size and depth), as well as evaluating changes through time. Through the use of historical aerial imagery, satellite imagery, and field observations, this study identifies the controlling factors at multiple spatial and temporal scales to better understand the processes relating to thermokarst lake expansion. A study of 35 lakes across the ACP shows regional differences in expansion rate related to permafrost ice content, ranging from an average expansion rate of 0.62 m/yr where ice content is highest (~86%) to 0.16 m/yr where ice content is lowest (45%-71%). A subset of these lakes analyzed over multiple time periods shows increasing rates of erosion, with average rates 37% higher over the period 1979-2002 (0.73 m/yr) compared with 1948-1979 (0.53 m/yr). These increased rates of erosion have important implications for the regional hydrologic cycle and localized permafrost degradation. Predicting how thermokarst lakes will behave locally and on a landscape scale is increasingly important for managing habitat and water resources and informing models of land-climate interactions in the Arctic.
The Validity of Peer Review in a General Medicine Journal
Jackson, Jeffrey L.; Srinivasan, Malathi; Rea, Joanna; Fletcher, Kathlyn E.; Kravitz, Richard L.
2011-01-01
All the opinions in this article are those of the authors and should not be construed to reflect, in any way, those of the Department of Veterans Affairs. Background Our study purpose was to assess the predictive validity of reviewer quality ratings and editorial decisions in a general medicine journal. Methods Submissions to the Journal of General Internal Medicine (JGIM) between July 2004 and June 2005 were included. We abstracted JGIM peer review quality ratings, verified the publication status of all articles, and calculated an impact factor for published articles (Rw) by dividing the 3-year citation rate by the average for this group of papers; an Rw > 1 indicates a greater than average impact. Results Of 507 submissions, 128 (25%) were published in JGIM, 331 were rejected (128 with review), and 48 were either not resubmitted after revision was requested or were withdrawn by the author. Of 331 rejections, 243 were published elsewhere. Articles published in JGIM had a higher citation rate than those published elsewhere (Rw: 1.6 vs. 1.1, p = 0.002). Reviewer ratings of article quality had good internal consistency, and reviewer recommendations markedly influenced publication decisions. There was no quality rating cutpoint that accurately distinguished high- from low-impact articles. There was a stepwise increase in Rw for articles rejected without review, rejected after review, or accepted by JGIM (Rw 0.60 vs. 0.87 vs. 1.56, p < 0.0005). However, there was low agreement between reviewers on quality ratings and publication recommendations. The editorial publication decision accurately discriminated high- and low-impact articles in 68% of submissions. We found evidence of better accuracy with a greater number of reviewers. Conclusions The peer review process largely succeeds in selecting high impact articles and dispatching lower impact ones, but the process is far from perfect.
While the inter-rater reliability between individual reviewers is low, the accuracy of sorting is improved with a greater number of reviewers. PMID:21799867
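The relative impact measure Rw described above (each paper's 3-year citation rate divided by the cohort average) can be sketched in a few lines (function and variable names are ours):

```python
def relative_impact(citations_3yr):
    """Rw for each paper: its 3-year citation count divided by the cohort mean.
    Rw > 1 indicates above-average impact within this group of papers."""
    cohort_mean = sum(citations_3yr) / len(citations_3yr)
    return [c / cohort_mean for c in citations_3yr]

# Illustration with three hypothetical papers
print(relative_impact([10, 20, 30]))  # [0.5, 1.0, 1.5]
```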
Poland, Michael P.
2014-01-01
Differencing digital elevation models (DEMs) derived from TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) synthetic aperture radar imagery provides a measurement of elevation change over time. On the East Rift Zone (ERZ) of Kīlauea Volcano, Hawai‘i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data—a parameter that can be difficult to measure across the entirety of an ~100 km2 lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m3/s, which is about half the long-term average rate over the course of Kīlauea's 1983–present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.
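The core computation (summing elevation change over the flow area and dividing by elapsed time) can be sketched with a hypothetical grid. This is an illustrative simplification of the method described, with names and values of ours; among other things it ignores the vesicularity correction needed for a true dense-rock-equivalent rate:

```python
def time_averaged_discharge(dem_old, dem_new, pixel_area_m2, dt_seconds):
    """Time-averaged lava discharge rate (m^3/s) from two co-registered DEMs:
    sum positive elevation changes x pixel area, divided by elapsed time."""
    volume_m3 = 0.0
    for row_old, row_new in zip(dem_old, dem_new):
        for z_old, z_new in zip(row_old, row_new):
            dz = z_new - z_old
            if dz > 0:  # count emplaced lava only, ignore subsidence/erosion
                volume_m3 += dz * pixel_area_m2
    return volume_m3 / dt_seconds

# Illustration: a 2x2 grid where two pixels gained 1 m of lava,
# 10 m^2 pixels, over 10 s -> 20 m^3 / 10 s = 2.0 m^3/s
dem_old = [[0.0, 0.0], [0.0, 0.0]]
dem_new = [[1.0, 0.0], [1.0, -0.5]]
print(time_averaged_discharge(dem_old, dem_new, 10.0, 10.0))  # 2.0
```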
Dynamics of newly established elk populations
Sargeant, G.A.; Oehler, M.W.
2007-01-01
The dynamics of newly established elk (Cervus elaphus) populations can provide insights about maximum sustainable rates of reproduction, survival, and increase. However, data used to estimate rates of increase typically have been limited to counts and rarely have included complementary estimates of vital rates. Complexities of population dynamics cannot be understood without considering population processes as well as population states. We estimated pregnancy rates, survival rates, age ratios, and sex ratios for reintroduced elk at Theodore Roosevelt National Park, North Dakota, USA; combined vital rates in a population projection model; and compared model projections with observed elk numbers and population ratios. Pregnancy rates in January (early in the second trimester of pregnancy) averaged 54.1% (SE = 5.4%) for subadults and 91.0% (SE = 1.7%) for adults, and 91.6% of pregnancies resulted in recruitment at 8 months. Annual survival rates of adult females averaged 0.96 (95% CI = 0.94-0.98) with hunting included and 0.99 (95% CI = 0.97-0.99) with hunting excluded from calculations. Our fitted model explained 99.8% of past variation in population estimates and represents a useful new tool for short-term management planning. Although we found no evidence of temporal variation in vital rates, variation in population composition caused substantial variation in projected rates of increase (??=1.20-1.36). Restoring documented hunter harvests and removals of elk by the National Park Service led to a potential rate of ?? = 1.26. Greater rates of increase substantiated elsewhere were within the expected range of chance variation, given our model and estimates of vital rates. Rates of increase realized by small elk populations are too variable to support inferences about habitat quality or density dependence.
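A minimal stage-structured projection built only from the point estimates quoted in the abstract illustrates how vital rates combine into a rate of increase. The matrix structure, the 50:50 calf sex ratio, and the use of adult survival for all stages are my assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

# Female-only, three-stage (calf, subadult, adult) annual projection sketch.
preg_sub, preg_ad = 0.541, 0.910   # January pregnancy rates from the abstract
recruit = 0.916                    # fraction of pregnancies recruited at 8 months
surv = 0.99                        # adult female survival, hunting excluded
female_frac = 0.5                  # assumed calf sex ratio

A = np.array([
    [0.0,  preg_sub * recruit * female_frac, preg_ad * recruit * female_frac],
    [surv, 0.0,                              0.0],
    [0.0,  surv,                             surv],
])

# The dominant eigenvalue of the projection matrix is the asymptotic
# annual rate of increase (lambda).
lam = float(np.max(np.abs(np.linalg.eigvals(A))))
print(f"asymptotic rate of increase lambda ~ {lam:.2f}")
```

With these point estimates the sketch lands in the same general range as the rates of increase reported in the abstract, though it is not the authors' model.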
NASA Astrophysics Data System (ADS)
Sherr, Barry F.; Sherr, Evelyn B.
2003-04-01
Community metabolism (respiration and production) and bacterial activity were assessed in the upper water column of the central Arctic Ocean during the SHEBA/JOIS ice camp experiment, October 1997-September 1998. In the upper 50 m, decrease in integrated dissolved oxygen (DO) stocks over a period of 124 d in mid-winter suggested a respiration rate of ~3.3 nM O2 h-1 and a carbon demand of ~4.5 gC m-2. Increase in 0-50 m integrated stocks of DO during summer implied a net community production of ~20 gC m-2. Community respiration rates were directly measured via rate of decrease in DO in whole seawater during 72-h dark incubation experiments. Incubation-based respiration rates were on average 3-fold lower during winter (11.0±10.6 nM O2 h-1) compared to summer (35.3±24.8 nM O2 h-1). Bacterial heterotrophic activity responded strongly, without noticeable lag, to phytoplankton growth. Rate of leucine incorporation by bacteria (a proxy for protein synthesis and cell growth) increased ~10-fold, and the cell-specific rate of leucine incorporation ~5-fold, from winter to summer. Rates of production of bacterial biomass in the upper 50 m were, however, low compared to other oceanic regions, averaging 0.52±0.47 ngC l-1 h-1 during winter and 5.1±3.1 ngC l-1 h-1 during summer. Total carbon demand based on respiration experiments averaged 2.4±2.3 mgC m-3 d-1 in winter and 7.8±5.5 mgC m-3 d-1 in summer. Estimated bacterial carbon demand based on bacterial productivity and an assumed 10% gross growth efficiency was much lower, averaging about 0.12±0.12 mgC m-3 d-1 in winter and 1.3±0.7 mgC m-3 d-1 in summer. Our estimates of bacterial activity during summer were an order of magnitude less than rates reported from a summer 1994 study in the central Arctic Ocean, implying significant inter-annual variability of microbial processes in this region.
Universal Temporal Profile of Replication Origin Activation in Eukaryotes
NASA Astrophysics Data System (ADS)
Goldar, Arach
2011-03-01
The complete and faithful transmission of the eukaryotic genome to daughter cells requires the timely duplication of the mother cell's DNA. DNA replication starts at multiple chromosomal positions called replication origins. From each activated replication origin, two replication forks progress in opposite directions and duplicate the mother cell's DNA. While it is widely accepted that in eukaryotic organisms replication origins are activated in a stochastic manner, little is known about the sources of the observed stochasticity, which is often attributed to cell-to-cell variability in entry into S phase. We extract from a growing Saccharomyces cerevisiae population the average rate of origin activation in a single cell by combining single-molecule measurements with a numerical deconvolution technique. We show that the temporal profile of the rate of origin activation in a single cell is similar to the one extracted from a replicating cell population. Based on this observation, we exclude population variability as the source of the observed stochasticity in origin activation. We confirm that the rate of origin activation increases in the early stage of S phase and decreases in the later stage. The population-average activation rate extracted from single-molecule analysis is in perfect accordance with the activation rate extracted from published micro-array data, confirming the homogeneity and genome-scale invariance of the dynamics of the replication process. All these observations point toward a possible role of replication forks in controlling the rate of origin activation.
20 CFR 10.216 - How is the pay rate for COP calculated?
Code of Federal Regulations, 2011 CFR
2011-04-01
... for COP purposes is equal to the employee's regular “weekly” pay (the average of the weekly pay over... period of appointment), the weekly pay rate is an average of the weekly earnings, established by dividing... or every week of the year (or period of appointment), the weekly pay rate is the average weekly...
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 10 2011-04-01 2011-04-01 false Definition of weighted average exchange rate. 1... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Export Trade Corporations § 1.989(b)-1 Definition of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Provisions § 89.204 Averaging. (a) Requirements for Tier 1 engines rated at or above 37 kW. A manufacturer... credits obtained through trading. (b) Requirements for Tier 2 and later engines rated at or above 37 kW and Tier 1 and later engines rated under 37 kW. A manufacturer may use averaging to offset an emission...
Southern Appalachian hillslope erosion rates measured by soil and detrital radiocarbon in hollows
Hales, T.C.; Scharer, K.M.; Wooten, R.M.
2012-01-01
Understanding the dynamics of sediment generation and transport on hillslopes provides important constraints on the rate of sediment output from orogenic systems. Hillslope sediment fluxes are recorded by organic material found in the deposits infilling unchanneled convergent topographic features called hollows. This study describes the first hollow infilling rates measured in the southern Appalachian Mountains. Infilling rates (and bedrock erosion rates) were calculated from the vertical distribution of radiocarbon ages at two sites in the Coweeta drainage basin, western North Carolina. At each site we dated paired charcoal and silt soil organic matter samples from five different horizons. Paired radiocarbon samples were used to bracket the age of the soil material in order to capture the range of complex soil forming processes and deposition within the hollows. These dates constrain hillslope erosion rates of between 0.051 and 0.111 mm yr-1. These rates are up to 4 times higher than spatially-averaged rates for the Southern Appalachian Mountains, making creep processes one of the most efficient erosional mechanisms in this mountain range. Our hillslope erosion rates are consistent with those of forested mountain ranges in the western United States, suggesting that the mechanisms (dominantly tree throw) driving creep erosion in both the western United States and the Southern Appalachian Mountains are equally effective. © 2011 Elsevier B.V.
Optimization of an enhanced ceramic micro-filter for concentrating E.coli in water
NASA Astrophysics Data System (ADS)
Zhang, Yushan; Guo, Tianyi; Xu, Changqing; Hong, Lingcheng
2017-02-01
Lower limits of detection (LOD) are increasingly required for rapid bacteria detection and analysis in clinical practice and daily life. A critical pre-conditioning step for these applications is bacterial concentration, especially at low pathogen levels. Sample volume can be largely reduced with an efficient pre-concentration process. Approaches such as hollow-fiber ultra-filtration and electrokinetic techniques have been applied to bacterial concentration, but since none of these methods provides a stable recovery efficiency, bacterial concentration remains challenging. Ceramic micro-filters can be used to concentrate bacteria, with the cross-flow system keeping the bacteria in suspension. Comparable bacterial harvesting using ultra-filtration showed an average recovery efficiency of 43% [1], and other studies achieved recovery rates greater than 50% [2]. In this study, an enhanced ceramic micro-filter with 0.14 μm pore size was proposed and demonstrated to optimize the concentration of E.coli. A high recovery rate (mean value >90%) and a high volumetric concentration ratio (>100) were achieved. Known quantities (10^4 to 10^6 CFU/ml) of E.coli cells were spiked into different amounts of phosphate buffered saline (0.1 to 1 L) and then concentrated to a final retentate of 5 ml to 10 ml. An average recovery efficiency of 95.3% with a standard deviation of 5.6% was achieved when the volumetric concentration ratio was 10. No significant loss of recovery rate was observed when the volumetric concentration ratio reached up to 100. The effects of multiple parameters on E.coli recovery rate were also studied. The results indicate that the optimized ceramic micro-filtration system can successfully concentrate E.coli cells in water with an average recovery rate of 90.8%.
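The two figures of merit used in this abstract, volumetric concentration ratio and recovery efficiency, reduce to simple ratios. The volumes and cell counts below are illustrative values, not data from the study.

```python
# Bookkeeping for a pre-concentration step: feed volume reduced to a small
# retentate, with some fraction of the spiked cells recovered.
def volumetric_concentration_ratio(v_feed_ml: float, v_retentate_ml: float) -> float:
    # how many-fold the sample volume was reduced
    return v_feed_ml / v_retentate_ml

def recovery_efficiency(cfu_retentate: float, cfu_feed: float) -> float:
    # fraction of the spiked cells ending up in the concentrated retentate
    return cfu_retentate / cfu_feed

vcr = volumetric_concentration_ratio(1000.0, 10.0)   # 1 L feed -> 10 ml retentate
eff = recovery_efficiency(9.5e5, 1.0e6)              # 9.5e5 of 1e6 spiked CFU recovered
print(vcr, eff)
```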
Rainier Mesa CAU Infiltration Model using INFILv3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levitt, Daniel G.; Kwicklis, Edward M.
This presentation outlines: (1) Model Inputs - DEM, precipitation, air temperature, soil properties, surface geology, vegetation; (2) Model Pre-processing - runoff routing and sinks, slope and azimuth, soil Ksat reduction with slope (to mitigate the bathtub-ring effect), soil-bedrock interface permeabilities; (3) Model Calibration - ET using PEST, chloride mass balance data, streamflow using PEST; (4) Model Validation - streamflow data not used for calibration; (5) Uncertainty Analysis; and (6) Results. Conclusions are: (1) Average annual infiltration rates = 11 to 18 mm/year for the RM domain; (2) Average annual infiltration rates = 7 to 11 mm/year for the SM domain; (3) ET = 70% of precipitation for both domains; (4) Runoff = 8-9% for RM and 22-24% for SM - the apparently high average runoff is caused by the truncation of the lower-elevation portions of watersheds where much of the infiltration of runoff waters would otherwise occur; (5) Model results are calibrated to measured ET, CMB data, and streamflow observations; (6) Model results are validated using streamflow observations discovered after model calibration was complete; (7) Use of soil Ksat reduction with slope to mitigate the bathtub-ring effect was successful (based on calibration results); and (8) The soil-bedrock K_interface is an innovative approach.
The role of global cloud climatologies in validating numerical models
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1993-01-01
The purpose of this work is to estimate sampling errors of area-time averaged rain rate due to temporal sampling by satellites. In particular, sampling errors are estimated for the proposed low-inclination-orbit satellite of the Tropical Rainfall Measuring Mission (TRMM) (35 deg inclination and 350 km altitude), one of the sun-synchronous polar orbiting satellites of the NOAA series (98.89 deg inclination and 833 km altitude), and two simultaneous sun-synchronous polar orbiting satellites, each assumed to carry a perfect passive microwave sensor for direct rainfall measurements. The estimate is obtained from a study of the satellite orbits and the autocovariance function of the area-averaged rain rate time series. A model based on an exponential fit of the autocovariance function is used for the actual calculations. Varying visiting intervals and the fraction of the averaging area covered on each satellite visit are taken into account in the model. The data are generated by a General Circulation Model (GCM) with a diurnal cycle and parameterized convective processes. A special run of the GCM was made at NASA/GSFC in which the rainfall and precipitable water fields were retained globally for every hour of the run for a whole year.
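The role of the exponential autocovariance fit can be sketched directly: for a stationary process with autocovariance C(tau) = sigma^2 exp(-|tau|/T), the variance of a time average built from discrete satellite visits is the double sum of the covariance over all sample pairs. The parameter values below are illustrative, not TRMM or NOAA orbit numbers.

```python
import numpy as np

# Variance of the mean of N equally spaced samples of a stationary process
# with exponential autocovariance (decorrelation time tau_h, in hours).
def sampling_variance(n_samples: int, dt_h: float, tau_h: float,
                      sigma2: float = 1.0) -> float:
    t = np.arange(n_samples) * dt_h
    lags = np.abs(t[:, None] - t[None, :])
    cov = sigma2 * np.exp(-lags / tau_h)
    return cov.sum() / n_samples**2

# Same total observing span (720 h), two revisit intervals: shorter revisit
# intervals give a smaller sampling variance for the time average.
v_12h = sampling_variance(n_samples=60, dt_h=12.0, tau_h=10.0)
v_3h = sampling_variance(n_samples=240, dt_h=3.0, tau_h=10.0)
print(v_12h, v_3h)
```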
Gas-liquid Phase Distribution and Void Fraction Measurements Using the MRI
NASA Technical Reports Server (NTRS)
Daidzic, N. E.; Schmidt, E.; Hasan, M. M.; Altobelli, S.
2004-01-01
We used a permanent-magnet MRI system to estimate the integral and spatially- and/or temporally-resolved void-fraction distributions and flow patterns in gas-liquid two-phase flows. Air was introduced at the bottom of a stagnant liquid column using an accurate and programmable syringe pump. Air flow rates were varied between 1 and 200 ml/min. The cylindrical non-conducting test tube in which the two-phase flow was measured was placed in a 2.67 kGauss MRI with an MRT spectrometer/imager. A roughly linear relationship was obtained between the integral void fraction, obtained by volume-averaging the spatially-resolved signals, and the air flow rate in the upward direction. The time-averaged, spatially-resolved void fraction was also obtained for the quasi-steady flow of air in a stagnant liquid column. No great accuracy is claimed, as this was an exploratory proof-of-concept experiment. Preliminary results show that MRI, a non-invasive and non-intrusive experimental technique, can indeed provide a wealth of qualitative and quantitative data and is especially well suited for averaged transport processes in adiabatic and diabatic multi-phase and/or multi-component flows.
Yamaguchi, Hironori; Tsurita, Giichirou; Ueno, Shoogo; Watanabe, Soichi; Wake, Kanako; Taki, Masao; Nagawa, Hirokazu
2003-05-01
This study sought to clarify the effects of exposure to electromagnetic waves (EMW) used in cellular phones on learning and memory processes. Sprague-Dawley rats were exposed for either 1 h daily for 4 days or for 4 weeks to a pulsed 1439 MHz time division multiple access (TDMA) field in a carousel type exposure system. At the brain, average specific absorption rate (SAR) was 7.5 W/kg, and the whole body average SAR was 1.7 W/kg. Other subjects were exposed at the brain average SAR of 25 W/kg and the whole body average SAR of 5.7 W/kg for 45 min daily for 4 days. Learning and memory were evaluated by reversal learning in a food rewarded T-maze, in which rats learned the location of food (right or left) by using environmental cues. The animals exposed to EMW with the brain average SAR of 25 W/kg for 4 days showed statistically significant decreases in the transition in number of correct choices in the reversal task, compared to sham exposed or cage control animals. However, rats exposed to the brain average SAR of 7.5 W/kg for either 4 days or for 4 weeks showed no T-maze performance impairments. Intraperitoneal temperatures, as measured by a fiber optic thermometer, increased in the rats exposed to the brain average SAR of 25 W/kg but remained the same for the brain average SAR of 7.5 W/kg. The SAR of a standard cellular phone is restricted to a maximum of 2 W/kg averaged over 10 g tissue. These results suggest that the exposure to a TDMA field at levels about four times stronger than emitted by cellular phones does not affect the learning and memory processes when there are no thermal effects. Copyright 2003 Wiley-Liss, Inc.
He, Ning; Sun, Hechun; Dai, Miaomiao
2014-05-01
To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, extent of drug degradation, number of humidity and temperature levels, humidity and temperature range, and average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates were more accurate and precise when controlling the extent of drug degradation, changing the humidity and temperature range, or setting the average temperature closer to room temperature. Compared with isothermal experiments at constant humidity, the proposed method saves time, labor, and materials.
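As a simplified illustration of extracting kinetic parameters from rate measurements at several temperatures, the sketch below fits an Arrhenius line to initial degradation rates. It assumes plain first-order Arrhenius kinetics with synthetic, noise-free rates generated from chosen "true" parameters; it is not the authors' humidity-coupled protocol or vitamin C data.

```python
import numpy as np

# Arrhenius: ln k = ln A - Ea / (R T). Fitting ln k against 1/T recovers
# Ea from the slope and ln A from the intercept.
R = 8.314                         # gas constant, J mol^-1 K^-1
Ea_true, lnA_true = 85e3, 28.0    # chosen "true" parameters (synthetic)
T = np.array([313.15, 323.15, 333.15, 343.15])   # temperatures, K
lnk = lnA_true - Ea_true / (R * T)               # synthetic initial rates

slope, intercept = np.polyfit(1.0 / T, lnk, 1)
Ea_est = -slope * R
print(f"Ea ~ {Ea_est/1000:.1f} kJ/mol, ln A ~ {intercept:.1f}")
```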
[Perforated duodenal ulcer: benefits and risks of laparoscopic repair].
Lunevicius, Raimundas; Morkevicius, Matas
2004-01-01
Laparoscopic perforated duodenal ulcer repair is a minimally invasive technique. Just like any other type of surgery, the laparoscopic approach carries operative risks in itself. The primary goal of this article is to describe the possible risk factors in laparoscopic duodenal ulcer repair. The secondary goal is to clarify the benefits of laparoscopic surgery. The Medline/PubMed database was used; 73 articles were analyzed and evaluated. Six retrospective and nine prospective studies are summarized. The retrospective studies' results are as follows: total complication rate 7-28% (average 16%); conversion rate 6-30% (average 18%); postoperative mortality rate 0-20% (average 6%); and hospital stay 6-17 days (average 8 days). The results of the prospective studies are the following: total complication rate lower, 5-25% (average 11%); conversion rate lower, 0-27% (average 14%); postoperative mortality lower, 0-10% (average 3%); and hospital stay shorter, 4-10 days (average 6 days). The differences are not significant, but the results are better than in the retrospective studies. The risk factors were identical. Shock, delayed presentation (>24 hours), confounding medical condition, age >70 years, American Society of Anesthesiology grade III-IV, and Boey score: all of the above should be considered preoperative risk factors for laparoscopic repair. Inadequate ulcer localization, large perforation size (>6 mm diameter according to some, >10 mm according to others), and ulcers with friable edges are also considered laparoscopic repair risk factors; each of these factors independently is an indication for open repair.
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, T. L.; Lau, William K. M. (Technical Monitor)
2002-01-01
A characteristic feature of rainfall statistics is that they in general depend on the space and time scales over which rain data are averaged. As a part of an earlier effort to determine the sampling error of satellite rain averages, a space-time model of rainfall statistics was developed to describe the statistics of gridded rain observed in GATE. The model allows one to compute the second moment statistics of space- and time-averaged rain rate, which can be fitted to satellite or rain gauge data to determine the four model parameters appearing in the precipitation spectrum: an overall strength parameter, a characteristic length separating the long and short wavelength regimes, a characteristic relaxation time for decay of the autocorrelation of the instantaneous local rain rate, and a certain 'fractal' power law exponent. For area-averaged instantaneous rain rate, this exponent governs the power law dependence of these statistics on the averaging length scale L predicted by the model in the limit of small L. In particular, the variance of rain rate averaged over an L × L area exhibits a power law singularity as L → 0. In the present work the model is used to investigate how the statistics of area-averaged rain rate over the tropical Western Pacific, measured with ship-borne radar during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean Atmosphere Response Experiment) and gridded on a 2 km grid, depend on the size of the spatial averaging scale. Good agreement is found between the data and predictions from the model over a wide range of averaging length scales.
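The qualitative scale dependence discussed above, variance of the area average growing as the averaging box L shrinks, can be demonstrated on a synthetic correlated random field standing in for gridded radar rain rates. The field and its spectrum below are arbitrary illustrations, not TOGA COARE data or the model's fitted spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)

def block_average_variance(field: np.ndarray, L: int) -> float:
    # Variance of means over non-overlapping L x L blocks.
    n = field.shape[0] // L
    blocks = field[:n * L, :n * L].reshape(n, L, n, L).mean(axis=(1, 3))
    return float(blocks.var())

# Build a smoothly correlated field by low-pass filtering white noise
# in Fourier space with an (arbitrary) power-law amplitude spectrum.
N = 256
k = np.fft.fftfreq(N)
kk = np.sqrt(k[:, None]**2 + k[None, :]**2)
spectrum = np.zeros_like(kk)
mask = kk > 0
spectrum[mask] = kk[mask] ** -1.5
field = np.fft.ifft2(np.fft.fft2(rng.normal(size=(N, N))) * spectrum).real

sizes = [2, 4, 8, 16, 32]
variances = [block_average_variance(field, L) for L in sizes]
# Variance of the area average falls as the averaging scale L grows,
# the discrete analogue of the small-L singularity described above.
print(dict(zip(sizes, variances)))
```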
Powell, Joshua; Luh, Jeanne; Coronell, Orlando
2014-01-01
We studied the volume-averaged chlorine (Cl) uptake into the bulk region of the aromatic polyamide active layer of a reverse osmosis membrane upon exposure to free chlorine. Volume-averaged measurements were obtained using Rutherford backscattering spectrometry with samples prepared at a range of free chlorine concentrations, exposure times, and mixing, rinsing, and pH conditions. Our volume-averaged measurements complement previous studies that have quantified Cl uptake at the active layer surface (top ≈ 7 nm) and advance the mechanistic understanding of Cl uptake by aromatic polyamide active layers. Our results show that surface Cl uptake is representative of and underestimates volume-averaged Cl uptake under acidic conditions and alkaline conditions, respectively. Our results also support that (i) under acidic conditions, N-chlorination followed by Orton rearrangement is the dominant Cl uptake mechanism with N-chlorination as the rate-limiting step; (ii) under alkaline conditions, N-chlorination and dechlorination of N-chlorinated amide links by hydroxyl ion are the two dominant processes; and (iii) under neutral pH conditions, the rates of N-chlorination and Orton rearrangement are comparable. We propose a kinetic model that satisfactorily describes Cl uptake under acidic and alkaline conditions, with the largest discrepancies between model and experiment occurring under alkaline conditions at relatively high chlorine exposures.
NASA Astrophysics Data System (ADS)
Thigpen, R.; Ashley, K. T.; Law, R. D.; Mako, C. A.
2017-12-01
In natural systems, two key observations indicate that major strain discontinuities such as faults and shear zones should play a fundamental role in orogenic thermal evolution: (1) Large faults and shear zones often separate components of the composite orogen that have experienced broadly different thermal and deformational histories, and (2) quantitative metamorphic and diffusional studies indicate that heating rates are much faster and the duration of peak conditions much shorter in natural collisional systems than those predicted by numerical continuum deformation models. Because heat transfer processes such as conduction usually operate at much slower time scales than rates of other tectonic processes, thermal evolution is often transient and thus can be strongly influenced by tectonic disturbances that occur at rates much faster than thermal relaxation. Here, we use coupled thermal-mechanical finite element models of thrust faults to explore how fault slip rate may fundamentally influence the thermal evolution of individual footwall and hanging wall thrust slices. The model geometry involves a single crustal-scale thrust with a dip of 25° that is translated up the ramp at average velocities of 20, 35, and 50 km Myr-1, interpreted to represent average to relatively high slip rates observed in many collisional systems. Boundary conditions include crustal radioactive heat production, basal mantle heat flow, and surface erosion rates that are a function of thrust rate and subsequent topography generation. In the models, translation of the hanging wall along the crustal-scale detachment results in erosion, exhumation, and retrograde metamorphism of the emerging hanging wall topography and coeval burial, 'hot iron' heating, and prograde metamorphism of the thrust footwall. Thrust slip rates of 20, 35, and 50 km Myr-1 yield maximum footwall heating rates ranging from 55-90 °C Myr-1 and maximum hanging wall cooling rates of 138-303 °C Myr-1.
These relatively rapid heating rates explain, in part, the presence of chemical diffusion profiles in metamorphic minerals that are indicative of high heating rates. Additionally, the relatively high cooling rates explain preservation of chemical zoning, as rapid cooling prevents diffusive profiles from being substantially modified during exhumation.
Cancer of the gallbladder-Chilean statistics.
Villanueva, Luis
2016-01-01
Chile has the world's highest rates of incidence of and death from cancer of the gallbladder and biliary ducts. The problem is most acute in the southern provinces. These areas combine low average income with low educational attainment and a high rate of obesity compared with the rest of Chile. Genetic factors related to the processes of lithogenesis, which are more common among the Mapuche, may also contribute. This population sub-group could benefit from special government programmes for early diagnosis and treatment of lithiasic disease and for the management of risk factors such as obesity. In this way, the mortality rate of gallbladder cancer could be reduced.
Zou, Haiming; Wang, Yan
2017-07-01
A new process of an electrolysis cell (EC) coupled with a microbial fuel cell (MFC) was developed here, and its feasibility for methyl red (MR) wastewater treatment and simultaneous electricity generation was assessed. Results indicate that excellent MR removal and electricity production performance was achieved: the decolorization and COD removal efficiencies were 100% and 89.3%, respectively, and a cell voltage output of 0.56 V was generated. Electrolysis voltage showed a positive influence on decolorization rate (DR) but also caused a rapid decrease in current efficiency (CE). Although a low COD removal rate of 38.5% was found in the EC system, the biodegradability of the MR solution was significantly enhanced, with an averaged DR of 85.6%. Importantly, the COD removal rate in the EC-MFC integrated process showed a 50.8% improvement compared with the single EC system. The results obtained here provide a promising alternative for azo dye wastewater treatment and power production. Copyright © 2017 Elsevier Ltd. All rights reserved.
The tangled bank of amino acids
Pollock, David D.
2016-01-01
Abstract The use of amino acid substitution matrices to model protein evolution has yielded important insights into both the evolutionary process and the properties of specific protein families. In order to make these models tractable, standard substitution matrices represent the average results of the evolutionary process rather than the underlying molecular biophysics and population genetics, treating proteins as a set of independently evolving sites rather than as an integrated biomolecular entity. With advances in computing and the increasing availability of sequence data, we now have an opportunity to move beyond current substitution matrices to more interpretable mechanistic models with greater fidelity to the evolutionary process of mutation and selection and the holistic nature of the selective constraints. As part of this endeavour, we consider how epistatic interactions induce spatial and temporal rate heterogeneity, and demonstrate how these generally ignored factors can reconcile standard substitution rate matrices and the underlying biology, allowing us to better understand the meaning of these substitution rates. Using computational simulations of protein evolution, we can demonstrate the importance of both spatial and temporal heterogeneity in modelling protein evolution. PMID:27028523
23 CFR Appendix D to Part 1240 - Determination of National Average Seat Belt Use Rate
Code of Federal Regulations, 2010 CFR
2010-04-01
... 23 Highways 1 2010-04-01 2010-04-01 false Determination of National Average Seat Belt Use Rate D Appendix D to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION AND FEDERAL HIGHWAY... BASED ON SEAT BELT USE RATES Pt. 1240, App. D Appendix D to Part 1240—Determination of National Average...
Suboptimal compliance with evidence-based guidelines in patients with traumatic brain injuries.
Shafi, Shahid; Barnes, Sunni A; Millar, D; Sobrino, Justin; Kudyakov, Rustam; Berryman, Candice; Rayan, Nadine; Dubiel, Rosemary; Coimbra, Raul; Magnotti, Louis J; Vercruysse, Gary; Scherer, Lynette A; Jurkovich, Gregory J; Nirula, Raminder
2014-03-01
Evidence-based management (EBM) guidelines for severe traumatic brain injuries (TBIs) were promulgated decades ago. However, the extent of their adoption into bedside clinical practices is not known. The purpose of this study was to measure compliance with EBM guidelines for management of severe TBI and its impact on patient outcome. This was a retrospective study of blunt TBI (11 Level I trauma centers, study period 2008-2009, n = 2056 patients). Inclusion criteria were an admission Glasgow Coma Scale score ≤ 8 and a CT scan showing TBI, excluding patients with nonsurvivable injuries-that is, head Abbreviated Injury Scale score of 6. The authors measured compliance with 6 nonoperative EBM processes (endotracheal intubation, resuscitation, correction of coagulopathy, intracranial pressure monitoring, maintaining cerebral perfusion pressure ≥ 50 cm H2O, and discharge to rehabilitation). Compliance rates were calculated for each center using multivariate regression to adjust for patient demographics, physiology, injury severity, and TBI severity. The overall compliance rate was 73%, and there was wide variation among centers. Only 3 centers achieved a compliance rate exceeding 80%. Risk-adjusted compliance was worse than average at 2 centers, better than average at 1, and the remainder were average. Multivariate analysis showed that increased adoption of EBM was associated with a reduced mortality rate (OR 0.88; 95% CI 0.81-0.96, p < 0.005). Despite widespread dissemination of EBM guidelines, patients with severe TBI continue to receive inconsistent care. Barriers to adoption of EBM need to be identified and mitigated to improve patient outcomes.
InSAR Detection and Field Evidence for Thermokarst after a Tundra Wildfire, Using ALOS-PALSAR
Iwahana, Go; Uchida, Masao; Liu, Lin; ...
2016-03-08
Thermokarst is the process of ground subsidence caused by either the thawing of ice-rich permafrost or the melting of massive ground ice. The consequences of permafrost degradation associated with thermokarst for surface ecology, landscape evolution, and hydrological processes have been of great scientific interest and social concern. Part of a tundra patch affected by wildfire in northern Alaska (27.5 km2) was investigated here, using remote sensing and in situ surveys to quantify and understand permafrost thaw dynamics after surface disturbances. A two-pass differential InSAR technique using L-band ALOS-PALSAR has been shown capable of capturing thermokarst subsidence triggered by a tundra fire at a spatial resolution of tens of meters, with supporting evidence from field data and optical satellite images. We have introduced a calibration procedure, comparing burned and unburned areas for InSAR subsidence signals, to remove the noise due to seasonal surface movement. In the first year after the fire, an average subsidence rate of 6.2 cm/year (vertical) was measured. Subsidence in the burned area continued over the following two years, with decreased rates. The mean rate of subsidence observed in our interferograms (from 24 July 2008 to 14 September 2010) was 3.3 cm/year, a value comparable to that estimated from field surveys at two plots on average (2.2 cm/year) for the six years after the fire. These results suggest that this InSAR-measured ground subsidence is caused by the development of thermokarst, a thawing process supported by surface change observations from high-resolution optical images and in situ ground level surveys.
Cox, Louis A; Popken, Douglas A; Ricci, Paolo F
2013-08-01
Recent studies have indicated that reducing particulate pollution would substantially reduce average daily mortality rates, prolonging lives, especially among the elderly (age ≥ 75). These benefits are projected by statistical models of significant positive associations between fine particulate matter (PM2.5) levels and daily mortality rates. We examine the empirical correspondence between changes in average PM2.5 levels and temperatures from 1999 to 2000, and corresponding changes in average daily mortality rates, in each of 100 U.S. cities in the National Mortality and Morbidity Air Pollution Study (NMMAPS) database, which has extensive PM2.5, temperature, and mortality data for those 2 years. Increases in average daily temperatures appear to significantly reduce average daily mortality rates, as expected from previous research. Unexpectedly, reductions in PM2.5 do not appear to cause any reductions in mortality rates. PM2.5 and mortality rates are both elevated on cold winter days, creating a significant positive statistical relation between their levels, but we find no evidence that reductions in PM2.5 concentrations cause reductions in mortality rates. For all concerned, it is crucial to use causal relations, rather than statistical associations, to project the changes in human health risks due to interventions such as reductions in particulate air pollution. Copyright © 2013 Elsevier Inc. All rights reserved.
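The change-on-change comparison the authors describe reduces, per city, to correlating year-over-year deltas rather than levels; a minimal sketch (hypothetical Python, not the NMMAPS analysis code):

```python
import numpy as np

def change_on_change_correlation(pm_1999, pm_2000, mort_1999, mort_2000):
    """Pearson correlation of per-city year-over-year changes in average PM2.5
    with per-city changes in average daily mortality (one delta pair per city).
    Correlating changes rather than levels avoids the spurious association
    created when both series are elevated on cold winter days."""
    d_pm = np.asarray(pm_2000, dtype=float) - np.asarray(pm_1999, dtype=float)
    d_mort = np.asarray(mort_2000, dtype=float) - np.asarray(mort_1999, dtype=float)
    return float(np.corrcoef(d_pm, d_mort)[0, 1])
```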
29 CFR 548.306 - Average earnings for year or quarter year preceding the current quarter.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PAY Interpretations Authorized Basic Rates § 548.306 Average earnings for year or quarter year... regular rates of pay during the current quarter year, and (ii) such average hourly remuneration during the...
NASA Astrophysics Data System (ADS)
Sembodo, Bregas Siswahjono Tatag; Sulistyo, Hary; Sediawan, Wahyudi Budi; Fahrurrozi, Mohammad
2018-02-01
Lignocellulosic biomass has recently received serious attention as an energy source that can replace fossil fuels. Corncob is a lignocellulosic biomass waste that can be further processed into bio-oil through a thermochemical liquefaction process. Bio-oil is expected to be further processed into fuel oil. In this research the effect of Na2CO3 catalyst weight on the yield of bio-oil was investigated. The composition of the bio-oil produced in this process was analyzed by GC-MS. Bio-oil formation rates were analyzed through the development of a mathematical model. The first model assumed an isothermal process, while the second did not. Both models were found to provide a good fit to the experimental data. The average reaction rate constant was obtained from the isothermal model, while the activation energy and collision factor were obtained from the non-isothermal model. The reaction rate increased with the addition of Na2CO3 (0-0.5 g) as catalyst to the 250 mL system, with the activation energy decreasing from 1964.265 J/mol to 1029.994 J/mol. The GC-MS analysis showed that the bio-oil contained ester compounds, phenolic compounds, cyclic compounds, heterocyclic compounds, and poly-alcohol compounds.
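The temperature dependence behind the reported activation energies follows the standard Arrhenius law; a minimal sketch of recovering Ea from rate constants at two temperatures (illustrative Python using the generic Arrhenius form, not the authors' specific liquefaction model):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_constant(A, Ea, T):
    """Arrhenius rate constant: k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

def activation_energy(k1, T1, k2, T2):
    """Invert ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1) for Ea (J/mol)."""
    return -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)
```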
Begy, R; Cosma, C; Timar, A
2009-08-01
This work presents a first estimation of the sedimentation rate for the Red Lake (Romania). The sediment accumulation rates were determined by two well-known methods for recent sediment dating: the (210)Pb and (137)Cs methods. Both techniques used the gamma emission of the above-mentioned radionuclides. The (210)Pb and (137)Cs concentrations in the sediment were measured using a gamma spectrometer with an HPGe detector, Gamma-X type. Activities ranging from 41 ± 7 to 135 ± 34 Bq/kg were found for (210)Pb and from 3 ± 0.5 to 1054 ± 150 Bq/kg for (137)Cs. The sediment profile indicates an acceleration in sedimentation rate over the last 18 years. Thus, the sedimentation process for the Red Lake can be divided into two periods: the last 18 years and the period before that. Using the Constant Rate of (210)Pb Supply method, values between 0.18 ± 0.04 and 1.85 ± 0.5 g/cm² per year (0.32 ± 0.08 and 2.83 ± 0.7 cm/year) were obtained. Considering both periods, an average sedimentation rate of 0.87 ± 0.17 g/cm² per year (1.17 cm/year) was calculated. Given an average lake depth of 5.41 m and the sedimentation rate estimated for the last 18 years, it can be estimated that the lake will disappear in 195 years.
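The Constant Rate of Supply (CRS) dating method used here rests on two standard relations; a minimal sketch (illustrative Python; variable names are ours, not the authors'):

```python
import math

LAMBDA_PB210 = math.log(2) / 22.3  # 210Pb decay constant, 1/yr (half-life 22.3 yr)

def crs_age(total_inventory, inventory_below_depth):
    """CRS age of a layer: t = (1/lambda) * ln(A(0) / A(z)), where A(0) is the
    total unsupported 210Pb inventory and A(z) the inventory below depth z."""
    return math.log(total_inventory / inventory_below_depth) / LAMBDA_PB210

def crs_mass_accumulation(inventory_below_depth, unsupported_activity):
    """Mass accumulation rate at depth z: r = lambda * A(z) / C(z), where C(z)
    is the unsupported 210Pb activity at that depth (g/cm^2 per year)."""
    return LAMBDA_PB210 * inventory_below_depth / unsupported_activity
```

For example, a layer below which half the total inventory remains has a CRS age of exactly one half-life, 22.3 years.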
Petrich, Nicholas T.; Spak, Scott N.; Carmichael, Gregory R.; Hu, Dingfei; Martinez, Andres; Hornbuckle, Keri C.
2013-01-01
Passive air samplers (PAS) including polyurethane foam (PUF) are widely deployed as an inexpensive and practical way to sample semi-volatile pollutants. However, concentration estimates from PAS rely on constant empirical mass transfer rates, which add unquantified uncertainties to concentrations. Here we present a method for modeling hourly sampling rates for semi-volatile compounds from hourly meteorology using first-principle chemistry, physics, and fluid dynamics, calibrated from depuration experiments. This approach quantifies and explains observed effects of meteorology on variability in compound-specific sampling rates and analyte concentrations; simulates nonlinear PUF uptake; and recovers synthetic hourly concentrations at a reference temperature. Sampling rates are evaluated for polychlorinated biphenyl congeners at a network of Harner model samplers in Chicago, Illinois during 2008, finding simulated average sampling rates within analytical uncertainty of those determined from loss of depuration compounds, and confirming quasi-linear uptake. Results indicate hourly, daily, and interannual variability in sampling rates, sensitivity to temporal resolution in meteorology, and predictable volatility-based relationships between congeners. We quantify the importance of each simulated process to sampling rates and mass transfer and assess the uncertainty contributed by advection, molecular diffusion, volatilization, and flow regime within the PAS, finding that PAS chamber temperature contributes the greatest variability to total process uncertainty (7.3%). PMID:23837599
Graham, Emily B.; Knelman, Joseph E.; Schindlbacher, Andreas; Siciliano, Steven; Breulmann, Marc; Yannarell, Anthony; Beman, J. M.; Abell, Guy; Philippot, Laurent; Prosser, James; Foulquier, Arnaud; Yuste, Jorge C.; Glanville, Helen C.; Jones, Davey L.; Angel, Roey; Salminen, Janne; Newton, Ryan J.; Bürgmann, Helmut; Ingram, Lachlan J.; Hamer, Ute; Siljanen, Henri M. P.; Peltoniemi, Krista; Potthast, Karin; Bañeras, Lluís; Hartmann, Martin; Banerjee, Samiran; Yu, Ri-Qing; Nogaro, Geraldine; Richter, Andreas; Koranda, Marianne; Castle, Sarah C.; Goberna, Marta; Song, Bongkeun; Chatterjee, Amitava; Nunes, Olga C.; Lopes, Ana R.; Cao, Yiping; Kaisermann, Aurore; Hallin, Sara; Strickland, Michael S.; Garcia-Pausas, Jordi; Barba, Josep; Kang, Hojeong; Isobe, Kazuo; Papaspyrou, Sokratis; Pastorelli, Roberta; Lagomarsino, Alessandra; Lindström, Eva S.; Basiliko, Nathan; Nemergut, Diana R.
2016-01-01
Microorganisms are vital in mediating the earth’s biogeochemical cycles; yet, despite our rapidly increasing ability to explore complex environmental microbial communities, the relationship between microbial community structure and ecosystem processes remains poorly understood. Here, we address a fundamental and unanswered question in microbial ecology: ‘When do we need to understand microbial community structure to accurately predict function?’ We present a statistical analysis investigating the value of environmental data and microbial community structure independently and in combination for explaining rates of carbon and nitrogen cycling processes within 82 global datasets. Environmental variables were the strongest predictors of process rates but left 44% of variation unexplained on average, suggesting the potential for microbial data to increase model accuracy. Although only 29% of our datasets were significantly improved by adding information on microbial community structure, we observed improvement in models of processes mediated by narrow phylogenetic guilds via functional gene data, and conversely, improvement in models of facultative microbial processes via community diversity metrics. Our results also suggest that microbial diversity can strengthen predictions of respiration rates beyond microbial biomass parameters, as 53% of models were improved by incorporating both sets of predictors compared to 35% by microbial biomass alone. Our analysis represents the first comprehensive analysis of research examining links between microbial community structure and ecosystem function. Taken together, our results indicate that a greater understanding of microbial communities informed by ecological principles may enhance our ability to predict ecosystem process rates relative to assessments based on environmental variables and microbial physiology. PMID:26941732
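The statistic behind "models improved by adding microbial predictors" can be sketched as a nested-model gain in explained variance (illustrative Python; the study's actual analysis across 82 datasets is more involved):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

def microbial_gain(env, microbial, y):
    """Gain in explained variance of a process rate y when community-structure
    predictors are added to an environment-only model."""
    both = np.column_stack([env, microbial])
    return r_squared(both, y) - r_squared(env, y)
```

A positive gain corresponds to the cases the authors report where adding community structure improved the model.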
Projecting 21st century coastal cliff retreat in Southern California
NASA Astrophysics Data System (ADS)
Limber, P. W.; Barnard, P.; Erikson, L. H.; Vitousek, S.
2016-12-01
In California, sea level is expected to rise over 1 m by 2100, with extreme projections approaching 3 m. Sea level rise (SLR) increases the frequency, severity, and duration of wave impacts on coastal cliffs, potentially accelerating cliff retreat rates. To assess the future risk to cliff-top infrastructure, densely populated Southern California cities like Los Angeles and San Diego require estimates of coastal retreat over long time (multi-decadal) and large spatial (>100 km) scales. We developed a suite of eight coastal cliff retreat models, ranging in complexity from empirical 1-D representations of cliff response to wave impacts to more intricate 2-D process-based models integrated with artificial neural networks. The ensemble produces a comprehensive estimate of time-averaged coastal cliff retreat with uncertainty, is applicable to different geological environments, and is flexible in application depending on processing power, available data, and/or available time (e.g. if processing power and time are limited, the fast 1-D models can be used as a 'rapid assessment' tool). Global-to-local nested wave models provided the hindcasts (1980-2010) and forecasts (2010-2100) used to force the models, and waves were applied in combination with eight SLR scenarios ranging from 0.25 m to 2 m. In the more detailed models, tides, non-tidal residuals, and storm surge were included for the hindcast and forecast periods. For model calibration, a new automated cliff edge extraction routine was used to estimate historical cliff retreat rates from LiDAR data. Initial model application to Southern California suggests that 1 m of SLR during the 21st century will cause cliff retreat rates to increase on average by over 50% relative to historical rates. Model results also demonstrate how small-scale, episodic cliff failure events can coalesce through time into spatially uniform, long-term cliff retreat signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newell, J; Miller, D; Stone, M
The Savannah River National Laboratory (SRNL) was tasked to provide an assessment of the downstream impacts to the Defense Waste Processing Facility (DWPF) of decisions regarding the implementation of Al-dissolution to support sludge mass reduction and processing. Based on future sludge batch compositional projections from the Liquid Waste Organization's (LWO) sludge batch plan, assessments have been made with respect to the ability to maintain comparable projected operating windows for sludges with and without Al-dissolution. As part of that previous assessment, candidate frits were identified to provide insight into melt rate for average sludge batches representing the with and without Al-dissolution flowsheets. Initial melt rate studies using the melt rate furnace (MRF) were performed using five frits each for Cluster 2 and Cluster 4 compositions, representing the average without and with Al-dissolution cases, respectively. It was determined, however, that the REDOX endpoint (Fe²⁺/ΣFe for the glass) for Clusters 2 and 4 resulted in an overly oxidized feed, which negatively affected the initial melt rate tests. After the sludge was adjusted to a more reduced state, additional testing was performed with frits that contained both high and low concentrations of sodium and boron oxides. These frits were selected strictly based on the ability to ascertain compositional trends in melt rate and did not necessarily satisfy any acceptability criteria for DWPF processing. The melt rate data are in general agreement with historical trends observed at SRNL and during processing of SB3 (Sludge Batch 3) and SB4 in DWPF. When MAR acceptability criteria were applied, Frit 510 was seen to have the highest melt rate at 0.67 in/hr for Cluster 2 (without Al-dissolution), which is compositionally similar to SB4. For Cluster 4 (with Al-dissolution), which is compositionally similar to SB3, Frit 418 had the highest melt rate at 0.63 in/hr.
Based on these data, there appears to be a slight advantage for the Frit 510 based system without Al-dissolution relative to the Frit 418 based system with Al-dissolution. Though the without-Al-dissolution scenario suggests a slightly higher melt rate with Frit 510, several points must be taken into consideration: (1) The MRF does not have the ability to assess liquid feeds and, thus, rheology impacts. Instead, the MRF is a 'static' test bed in which a mass of dried melter feed (SRAT product plus frit) is placed in an 'isothermal' furnace for a period of time to assess melt rate. These conditions, although historically effective in terms of identifying candidate frits for specific sludge batches and mapping out melt rate versus waste loading trends, do not allow for assessments of the potential impact of feed rheology on melt rate. That is, if the rheological properties of the slurried melter feed resulted in mounding of the feed in the melter (i.e., the melter feed was thick and did not flow across the cold cap), melt rate and/or melter operations (i.e., surges) could be negatively impacted. This could affect one or both flowsheets. (2) Waste throughput factors were not determined for Frit 510 and Frit 418 over multiple waste loadings. In order to provide insight into the mission life versus canister count question, one needs to define the maximum waste throughput for both flowsheets. Due to funding limitations, the melt rate testing only evaluated melt rate at a fixed waste loading. (3) DWPF will be processing SB5 through their facility in mid-November 2008. Insight into the overarching questions of melt rate, waste throughput, and mission life can be obtained directly from the facility. It is recommended that processing of SB5 through the facility be monitored closely and that these data be used as input into the decision-making process on whether to implement Al-dissolution for future sludge batches.
Average of delta: a new quality control tool for clinical laboratories.
Jones, Graham R D
2016-01-01
Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
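The average-of-delta statistic itself is simple to compute; a minimal sketch (hypothetical Python; the paper's spreadsheet model additionally simulates analytical imprecision and biological variation, which is omitted here):

```python
def average_of_delta(paired_results, n=10):
    """Mean of the n most recent delta values, where each delta is the current
    result minus the previous result for the same patient. A drift in this
    average away from its expected value flags a change in assay performance."""
    deltas = [curr - prev for prev, curr in paired_results[-n:]]
    return sum(deltas) / len(deltas)
```

In a stable assay the expected average of delta is near zero (for analytes without systematic within-patient trends), so a persistent offset suggests an added bias.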
NASA Technical Reports Server (NTRS)
Sellers, Piers
2012-01-01
Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method (use of a grid-averaged soil wetness value) can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
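The binned-pdf integration can be sketched in a few lines (illustrative Python; the bin layout and rate function are placeholders, not the paper's scheme). Because the rate functions are nonlinear, the weighted sum over bins generally differs from evaluating the function at the grid-mean wetness:

```python
import numpy as np

def grid_flux(bin_wetness, bin_fractions, rate_fn):
    """Area-integrated rate for a grid cell: evaluate the nonlinear rate
    function at each soil-wetness bin and weight by the fraction of the
    grid area falling in that bin."""
    rates = np.array([rate_fn(w) for w in bin_wetness])
    return float(np.sum(rates * np.asarray(bin_fractions)))
```

For example, with rate_fn(w) = w² and two equal-area bins at wetness 0 and 1, the integrated rate is 0.5, whereas applying the function to the grid-mean wetness of 0.5 would give 0.25, which is exactly the kind of error the single-value method introduces.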
Fractures of the talus: experience of two level 1 trauma centers.
Elgafy, H; Ebraheim, N A; Tile, M; Stephen, D; Kase, J
2000-12-01
Fifty-eight patients with 60 talar fractures were retrospectively reviewed. There were 39 men and 19 women. The average age was 32 years (range, 14-74). Eighty-six percent of the patients had multiple injuries. The most common mechanism of injury was a motor vehicle accident. Twenty-seven (45%) of the fractures were neck fractures, 22 (36.7%) process fractures, and 11 (18.3%) body fractures. Forty-eight fractures had operative treatment and 12 had non-operative management. The average follow-up period was 30 months (range, 24-65). Thirty-two fractures (53.3%) developed subtalar arthritis. Two patients had subsequent subtalar fusion. Fifteen fractures (25%) developed ankle arthritis. None of these patients required ankle fusion. Fractures of the body of the talus were associated with the highest incidence of degenerative joint disease of both the subtalar and ankle joints. Ten fractures (16.6%) developed avascular necrosis (AVN), only one of which had subsequent slight collapse. Avascular necrosis occurred mostly after Hawkins Type 3 and 2 fractures of the talar neck. Three rating scores were used in this series to assess the outcome: the American Orthopedic Foot and Ankle Society (AOFAS) Ankle-Hindfoot Score, the Maryland Foot Score, and the Hawkins Evaluation Criteria. The outcome was different with every rating system. However, the outcomes with the AOFAS Ankle-Hindfoot Score and the Hawkins Evaluation Criteria were almost equivalent. Assessment with the three rating scores showed that the process fractures had the best results, followed by the neck and then the body fractures.
Will Commodity Properties Affect Seller's Creditworthy: Evidence in C2C E-commerce Market in China
NASA Astrophysics Data System (ADS)
Peng, Hui; Ling, Min
This paper finds that the credit rating level shows significant differences among different sub-commodity markets in E-commerce, which provides room for sellers to obtain a higher credit rating by entering businesses with a higher average credit level before committing fraud. In order to study the influence of commodity properties on credit rating, this paper analyzes how commodity properties affect average credit rating through the degree of information asymmetry, the returns and costs of fraud, credibility perception, and fraud tolerance. The empirical study shows that delivery, average trading volume, average price, and complaint possibility have decisive impacts on credit performance; brand market share, the degree of standardization, and the degree of imitation also have a relatively less significant effect on credit rating. Finally, this paper suggests that important commodity properties should be introduced to modify the reputation system, to prevent credit rating arbitrage behavior where sellers move into low-rating commodities after being assigned a high credit rating.
Laser technologies for ultrasensitive groundwater dating using long-lived isotopes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backus, Sterling
In this phase I work, we propose to construct and demonstrate a 103 nm laser based on resonantly enhanced and phase-matched fifth harmonic generation in hollow waveguides driven by a high-power, low-cost and compact ultrafast fiber laser (Figure 4). This VUV laser source can potentially produce >100 milliwatts of VUV light at 103 nm with pulse repetition rates of 100 kHz to 100 MHz, ideal for the above-mentioned applications. This technology is state-of-the-art and potentially compact, fieldable, low-cost, and of broad interest for a variety of science and technology applications. Laser-based VUV sources in the past have exhibited low repetition rate, low efficiency, and low beam quality, and are based on expensive laser sources. Our approach is to combine ultrafast fiber laser drive technology, ultrafast pulses, and our proven waveguide technology to create a high repetition rate, high average power VUV source for producing a high yield of metastable krypton. At KMLabs we have been offering EUV light sources employing the high harmonic generation (HHG) process driven by high-power femtosecond lasers for >5 years now. Recently, we have developed much smaller scale (briefcase size), but still high average power, femtosecond fiber laser sources to supply other markets, and create new ones. By combining these new laser sources with our patented waveguide frequency upconversion technology, we expect to be able to obtain >20 mW average power initially, with potentially much higher powers depending on wavelength, in an affordable VUV product. For comparison, our current EUV light sources based on Ti:sapphire generate an average power of ~5 µW (albeit at a shorter 29 nm wavelength), and we are aware of one other supplier that has developed a VUV (112 nm) light source with ~10-20 µW power.
Repaying in Kind: Examination of the Reciprocity Effect in Faculty and Resident Evaluations.
Gardner, Aimee K; Scott, Daniel J
Although the reciprocity hypothesis (that trainees have a tendency to modify evaluations based on the grades they receive from instructors) has been documented in other fields, very little work has examined this phenomenon in the surgical residency environment. The purpose of this study was to investigate the extent to which lenient-grading faculty receive higher evaluations from surgery residents. Evaluation data from 2 consecutive academic years were collected retrospectively at a large university-based General Surgery residency program. Monthly faculty evaluations of residents (15 items) and resident evaluations of faculty (8 items; 1 = never demonstrates, 10 = always demonstrates) were included. Correlation and regression analyses were conducted with SPSS version 22 (IBM; Chicago, IL). A total of 2274 faculty assessments and 1480 resident assessments were included in this study, representing 2 years of evaluations for 32 core faculty members responsible for completing all resident evaluations and 68 PGY1-5 general surgery residents. Faculty (63% men, 13.5 ± 9.8 years out of training) represented 5 different divisions (general surgery, surgical oncology, transplant, trauma critical care, and vascular) within the general surgery department. Faculty received an average of 71.1 ± 33.9 evaluations from residents over the course of 2 years. The average rating of faculty teaching by residents was 9.5 ± 0.4. Residents received an average of 21.8 ± 0.5 evaluations with average ratings of 4.2 ± 0.4. Correlation analyses indicated a positive relationship between the average rating received from residents and the number of years since faculty completed training (r = 0.44, p = 0.01). Additionally, a significant relationship emerged between ratings received from residents and ratings given to residents (r = 0.40, p = 0.04). 
Regression analyses indicated that when both variables (years since training, ratings given to residents) were included in the model, only ratings given to residents remained a significant predictor of evaluation ratings received from residents (F(1,32) = 4.40, p = 0.04), with an R² of 0.16. Sex or division affiliation did not account for any unique variance. These findings suggest that a reciprocity effect exists between surgery faculty and resident evaluations. This effect warrants further exploration, such that efforts to mitigate the risks of providing inaccurate assessments may be developed. Providing trainees with accurate assessments is particularly important given the high-stakes use of these data for milestones, promotion, and graduation purposes, which currently do not account for this reciprocity effect. Results suggest that there is a reciprocity effect in the faculty and resident evaluation process. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Customer care. Patient satisfaction in the prehospital setting.
Doering, G T
1998-09-01
The focus of the study was to prioritize six emergency medical service treatment factors in terms of their impact upon patient satisfaction in the prehospital setting. The six treatment areas analyzed were: EMS response time; medical care provided on scene; explanation of care by the provider; the provider's ability to reduce patient anxiety; the provider's ability to meet the patient's non-medical needs; and the level of courtesy/politeness shown by the EMS provider toward the patient. Telephone interviews were conducted with both patients and bystanders to obtain their perception of how well the system met their needs. The study analyzed how the six issues were rated and then evaluated the impact an individual's low score in a category had on that person's overall rating of the service provided. The overall satisfaction rating is not a calculated score, but an overall score specified by the respondent. The effect each issue had on the respondent's overall rating was determined by averaging the overall ratings for a category's low scorers, averaging the overall ratings for high scorers and then measuring the difference. Results of the study indicate that the factor with the greatest negative impact on patient satisfaction came from a perceived lack of crew courtesy and politeness. Respondents who indicated a fair to poor score in this category decreased their overall score by 60.2%. Ratings in other categories yielded the following results: When respondents rated the response time as fair to poor, their average overall rating showed an 18.4% decrease. When respondents rated the quality of medical care as fair to poor, their average overall rating showed a decrease of 22.6%. When the crew's ability to explain what was happening to the patient was rated as fair to poor, the average overall score dropped 33.6%. When the EMTs' and medics' ability to reduce the patient's anxiety was rated fair to poor, average overall score declined by 32.6%.
Finally, when the crew's ability to satisfy a patient's non-medical needs was rated as fair to poor, the average overall score diminished by 37.4%.
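The impact measure described, the gap between the average overall rating of a category's low scorers and its high scorers, can be sketched as (hypothetical Python; the threshold and data layout are ours, not the study's):

```python
import numpy as np

def category_impact(overall_ratings, category_scores, low_threshold):
    """Percent decrease in average overall satisfaction for respondents who
    scored this category low (<= low_threshold) versus those who scored it
    high, mirroring the low-scorer/high-scorer comparison in the study."""
    overall = np.asarray(overall_ratings, dtype=float)
    cat = np.asarray(category_scores, dtype=float)
    low_mean = overall[cat <= low_threshold].mean()
    high_mean = overall[cat > low_threshold].mean()
    return (high_mean - low_mean) / high_mean * 100.0
```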
Xiao, Bo; Imel, Zac E.; Georgiou, Panayiotis G.; Atkins, David C.; Narayanan, Shrikanth S.
2015-01-01
The technology for evaluating patient-provider interactions in psychotherapy, observational coding, has not changed in 70 years. It is labor-intensive, error-prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods, with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks, including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of the computationally derived empathy ratings was evaluated against human ratings for each provider. Computationally derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and an F-score (the harmonic mean of precision and recall) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies. PMID:26630392
Code of Federal Regulations, 2014 CFR
2014-07-01
... dividing the sum by 12 (Recent Average). The partial year limitation is then calculated by dividing the Recent Average by the Recent Average from the most recent previous notice of rate adjustment (Previous Recent Average) applicable to each affected class of mail and subtracting 1 from the quotient. The result...
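The calculation the regulation describes can be written out directly; the rate data below are invented for illustration:

```python
def partial_year_limitation(recent_12_months, previous_recent_average):
    """Average the most recent 12 months of data (the Recent Average),
    divide by the Previous Recent Average, and subtract 1 from the
    quotient, as the regulation describes."""
    recent_average = sum(recent_12_months) / 12
    return recent_average / previous_recent_average - 1

# Hypothetical rate data for one class of mail
limitation = partial_year_limitation([1.02] * 12, 1.00)  # about 0.02, i.e., 2%
```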
Kriegeskorte, Nikolaus
2012-01-01
The two major functions of a scientific publishing system are to provide access to and evaluation of scientific papers. While open access (OA) is becoming a reality, open evaluation (OE), the other side of the coin, has received less attention. Evaluation steers the attention of the scientific community and thus the very course of science. It also influences the use of scientific findings in public policy. The current system of scientific publishing provides only journal prestige as an indication of the quality of new papers and relies on a non-transparent and noisy pre-publication peer-review process, which delays publication by many months on average. Here I propose an OE system, in which papers are evaluated post-publication in an ongoing fashion by means of open peer review and rating. Through signed ratings and reviews, scientists steer the attention of their field and build their reputation. Reviewers are motivated to be objective, because low-quality or self-serving signed evaluations will negatively impact their reputation. A core feature of this proposal is a division of powers between the accumulation of evaluative evidence and the analysis of this evidence by paper evaluation functions (PEFs). PEFs can be freely defined by individuals or groups (e.g., scientific societies) and provide a plurality of perspectives on the scientific literature. Simple PEFs will use averages of ratings, weighting reviewers (e.g., by H-index), and rating scales (e.g., by relevance to a decision process) in different ways. Complex PEFs will use advanced statistical techniques to infer the quality of a paper. Papers with initially promising ratings will be more deeply evaluated. The continual refinement of PEFs in response to attempts by individuals to influence evaluations in their own favor will make the system ungameable. 
OA and OE together have the power to revolutionize scientific publishing and usher in a new culture of transparency, constructive criticism, and collaboration. PMID:23087639
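As a toy illustration of the simplest kind of PEF the proposal mentions, here is an H-index-weighted average of signed ratings; the ratings and weights are hypothetical:

```python
def simple_pef(ratings, reviewer_h_indices):
    """A minimal paper evaluation function: the mean of reviewers'
    ratings, weighted by each reviewer's H-index."""
    total_weight = sum(reviewer_h_indices)
    return sum(r * h for r, h in zip(ratings, reviewer_h_indices)) / total_weight

# Two signed ratings on a 0-10 scale; the senior reviewer counts for more
score = simple_pef(ratings=[8, 6], reviewer_h_indices=[30, 10])  # 7.5
```

More complex PEFs would replace this weighted mean with a statistical model of paper quality, as the proposal suggests.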
NASA Astrophysics Data System (ADS)
Goebel, T.; Aminzadeh, F.
2015-12-01
The seismogenic response to induced pressure changes provides insight into the proximity to failure of faults close to injection sites. Here, we examine possible seismicity rate changes in response to wastewater disposal and enhanced oil recovery operations in hydrocarbon basins in California and Oklahoma. We test whether a statistically significant rate increase exists within these areas and determine the corresponding timing and location based on nonparametric modeling of background seismicity rates. Annual injection volumes increased monotonically since ~2001 in California and ~1998 in Oklahoma. While OK experienced a recent surge in seismic activity which exceeded the 95% confidence limit of a stationary Poisson process in ~2010, seismicity in CA showed no increase in background rates between 1980 and 2014. A systematic analysis of frequency-magnitude-distributions (FMDs) of likely induced earthquakes in OK indicates that FMDs are depleted in large-magnitude events. Seismicity in CA hydrocarbon basins, on the other hand, shows Gutenberg-Richter type FMDs and b~1. Moreover, the earthquakes and injection operations occur preferentially in distinct areas in CA, whereas in OK earthquakes occur closer to injection wells than expected from a random uniform process. To test whether injection operations may be responsible for the strongly different seismicity characteristics in CA and OK, we compare overall well density, wellhead pressures, peak and cumulative rates as well as injection depths. We find that average injection rates, pressures and volumes are comparable between CA and OK and that injection occurs on average 0.5 km deeper in CA than in OK. Thus, the operational parameters tested here cannot easily explain the vastly different seismogenic response to injection operations in CA and OK, and may only be of secondary importance for the resulting earthquake activity. 
The potential to induce earthquakes by fluid injection operations is likely controlled by the specific geologic setting and stress state on nearby faults.
Lei, Ge; Ren, Hongqiang; Ding, Lili; Wang, Feifei; Zhang, Xingsong
2010-08-01
A full-scale combined biological system is used for the treatment of treated wastewater discharged from a pharmaceutical industrial park. This treated water is rich in NH4+-N (average 86.4 mg/L), with a low COD/NH4+-N ratio (average 3.4) and a low BOD5/COD ratio (average 0.24), and pH varying from 7.16 to 7.78. The final effluent of the combined treatment process was stably below 100 mg/L COD and 20 mg/L NH4+-N at organic loading rates of 4954 kg COD/d and 92.5 kg NH4+-N/d. It was found that the BOD5/COD ratio could be raised from 0.24 to 0.35, and that the production of total VFAs accounted for 9.57% of the total COD via the hydrolysis/acidification treatment. The MBBR and the oxidation ditch account for 35.4% and 60.7% of NH4+-N removal, and 30.2% and 61.5% of COD removal, respectively, of the total treatment process. PCR-DGGE was used for microbial community analysis of the MBBR and the oxidation ditch. (c) 2010. Published by Elsevier Ltd. All rights reserved.
Everts, A J; Wulf, D M; Everts, A K R; Nath, T M; Jennings, T D; Weaver, A D
2010-10-01
Cooking yield, cooked pH, purge loss, moisture, lipid oxidation, external and internal color, break strength and elongation distance were assessed for pale (PALE), average (AVG) and dark (DARK) inside hams injected with either a control cure solution (CON) or a BPI-processing technology cure solution (BPT). Following enhancement, muscles were chunked, vacuum tumbled, smoked and cooked to 66 °C. Cooked ham pH was 6.49 for DARK, 6.40 for AVG, and 6.30 for PALE (P<0.0001). Cooked pH was higher (P<0.0001) for BPT than CON. Cooked ham moisture content was higher (P<0.0001) for BPT hams than CON hams (74.83 vs. 74.11%), but BPT did not significantly influence cooking yield or lipid oxidation. Consumers (n=150) of diverse demographics rated hams for appearance and taste. Results indicated that BPI-processing technology improved the visual appearance of hams made from pale, average, and dark muscles and improved the eating quality of hams made from pale muscles. Copyright (c) 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hulsman, P.; Bogaard, T.; Savenije, H. H. G.
2016-12-01
In hydrology and water resources management, discharge is the main time series for model calibration. Rating curves are needed to derive discharge from continuously measured water levels. However, assuring their quality is demanding due to dynamic changes and problems in accurately deriving discharge at high flows. This is valid everywhere, but even more so in the African socio-economic context. To cope with these uncertainties, this study proposes to use water levels instead of discharge data for calibration. Uncertainties in rainfall measurements, especially spatial heterogeneity, also need to be considered. In this study, the semi-distributed rainfall-runoff model FLEX-Topo was applied to the Mara River Basin. In this model seven sub-basins were distinguished, along with four hydrological response units, each with a unique model structure based on the expected dominant flow processes. Parameter and process constraints were applied to exclude unrealistic results. To calibrate the model, water levels were back-calculated from modelled discharges using cross-section data and the Strickler formula, with k·S^1/2 as the calibration parameter, and compared to measured water levels. The model simulated the water depths well for the entire basin and the Nyangores sub-basin in the north. However, the calibrated and observed rating curves differed significantly at the basin outlet, probably due to uncertainties in the measured discharge, whereas at Nyangores they were almost identical. To assess the effect of rainfall uncertainties on the hydrological model, the representative rainfall in each sub-basin was estimated with three different methods: 1) single station, 2) average precipitation, 3) areal sub-division using Thiessen polygons. All three methods gave on average similar results, but method 1 resulted in more flashy responses, method 2 dampened the water levels due to averaging of the rainfall, and method 3 was a combination of both. 
In conclusion, in the case of unreliable rating curves, water level data can be used instead and a new rating curve can be calibrated. The effect of rainfall uncertainties on the hydrological model was insignificant.
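The back-calculation step, from modelled discharge to water level via the Strickler formula, can be sketched as follows; the rectangular cross-section, the parameter values and the bisection inversion are assumptions for illustration, not details of the study:

```python
def strickler_discharge(h, k, b, slope):
    """Strickler formula Q = k * A * R^(2/3) * S^(1/2) for a rectangular
    channel of width b (m) and water depth h (m)."""
    area = b * h
    radius = area / (b + 2 * h)  # hydraulic radius = A / wetted perimeter
    return k * area * radius ** (2 / 3) * slope ** 0.5

def water_level_from_discharge(q, k, b, slope, h_max=20.0, tol=1e-6):
    """Invert the rating relation by bisection: find h such that Q(h) = q.
    Q increases monotonically with h, so bisection converges."""
    lo, hi = 0.0, h_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if strickler_discharge(mid, k, b, slope) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

In the study the lumped factor k·S^1/2 was the calibrated quantity; here k and S are kept separate purely for readability.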
Relative radiological risks derived from different TENORM wastes in Malaysia.
Ismail, B; Teng, I L; Muhammad Samudi, Y
2011-11-01
In Malaysia technologically enhanced naturally occurring radioactive materials (TENORM) wastes are mainly the product of the oil and gas industry and mineral processing. Among these TENORM wastes are tin tailings, tin slag, gypsum and oil sludge. Mineral processing and the oil and gas industry produce large volumes of TENORM waste that have become a radiological concern to the authorities. A study was carried out to assess the radiological risk to workers at these disposal sites and landfills, as well as to members of the public should these areas be developed for future land use. Radiological risk was assessed based on the magnitude of radiation hazard, effective dose rates and excess cancer risks. Effective dose rates and excess cancer risks were estimated using the RESRAD 6.4 computer code. All data on the activity concentrations of NORM in wastes and sludges used in this study were obtained from the Atomic Energy Licensing Board, Malaysia, collected over a period of between 5 and 10 y. Results obtained showed a wide range in the total activity concentrations (TAC) of nuclides in the TENORM wastes. With the exception of tin slag and tin tailing-based TENORM wastes, all other TENORM wastes have TAC values comparable to that of Malaysian soil. Occupational effective dose rates estimated in all landfill areas were lower than the 20 mSv/y permissible dose limit. The average excess cancer risk coefficient was estimated to be 2.77×10⁻³ risk per mSv. The effective dose rates for residents living on gypsum and oil sludge-based TENORM waste landfills were estimated to be lower than the permissible dose limit for members of the public, and comparable to that of average Malaysian ordinary soils. The average excess cancer risk coefficient was estimated to be 3.19×10⁻³ risk per mSv. 
Results obtained suggest that gypsum and oil sludge-based TENORM wastes should be exempted from any radiological regulatory control and should be considered radiologically safe for future land use.
Quality control process improvement of flexible printed circuit board by FMEA
NASA Astrophysics Data System (ADS)
Krasaephol, Siwaporn; Chutima, Parames
2018-02-01
This research focuses on quality control process improvement of Flexible Printed Circuit Board (FPCB) production, centred on model 7-Flex, using the Failure Mode and Effect Analysis (FMEA) method to decrease the proportion of defective finished goods found at the final inspection process. Because defective units are found only at final inspection, scrap may escape to customers. The problem stems from a quality control process that is not efficient enough to filter out defective products in-process, because there is no In-Process Quality Control (IPQC) or sampling inspection in the process. Therefore, the quality control process has to be improved by setting inspection gates and IPQCs at critical processes in order to filter out defective products. The critical processes are identified by the FMEA method. IPQC is used to detect defective products and reduce the chance of defective finished goods escaping to customers. Reducing the proportion of defective finished goods also decreases scrap cost, because finished goods incur a higher scrap cost than work in-process. Moreover, defective products found during the process can reveal abnormal processes, so engineers and operators can solve problems in a timely manner. The improved quality control was implemented on the 7-Flex production lines from July 2017 to September 2017. The results show decreases in the average proportion of defective finished goods and in the average Customer Manufacturers Lot Reject Rate (%LRR of CMs) of 4.5% and 4.1%, respectively. Furthermore, the cost saving from this quality control process equals 100K Baht.
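FMEA conventionally ranks critical processes by the risk priority number, RPN = severity × occurrence × detection. The failure modes and scores below are hypothetical, not taken from the 7-Flex analysis:

```python
def risk_priority_number(severity, occurrence, detection):
    """Classic FMEA ranking: RPN = S * O * D, each rated on a 1-10 scale."""
    return severity * occurrence * detection

# Hypothetical FPCB failure modes, ranked to choose where to place IPQC gates
failure_modes = {
    "open circuit after etching": (8, 5, 6),
    "coverlay misalignment": (5, 4, 3),
    "plating void": (9, 3, 7),
}
ranked = sorted(failure_modes.items(),
                key=lambda kv: risk_priority_number(*kv[1]),
                reverse=True)
# The highest-RPN processes would receive inspection gates first
```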
ERIC Educational Resources Information Center
Bloom, Howard S.; Unterman, Rebecca
2013-01-01
In 2002, New York City embarked on an ambitious and wide-ranging series of education reforms. At the heart of its high school reforms were three interrelated changes: the institution of a district wide high school choice process for all rising ninth-graders, the closure of 31 large, failing high schools with an average graduation rate of 40…
Method of deposition by molecular beam epitaxy
Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.
1995-01-01
A method is described for reproducibly controlling layer thickness and varying layer composition in an MBE deposition process. In particular, the present invention includes epitaxially depositing a plurality of layers of material on a substrate with a plurality of growth cycles whereby the average of the instantaneous growth rates for each growth cycle and from one growth cycle to the next remains substantially constant as a function of time.
Method of deposition by molecular beam epitaxy
Chalmers, S.A.; Killeen, K.P.; Lear, K.L.
1995-01-10
A method is described for reproducibly controlling layer thickness and varying layer composition in an MBE deposition process. In particular, the present invention includes epitaxially depositing a plurality of layers of material on a substrate with a plurality of growth cycles whereby the average of the instantaneous growth rates for each growth cycle and from one growth cycle to the next remains substantially constant as a function of time. 9 figures.
Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D
2016-03-01
The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they are not periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, and that, therefore, the time-invariance assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariance assumption in jittered stimulation sequences. The proposed method (Split-IRSA) is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines for this technique are presented in this paper. The results of this study show that Split-IRSA performs adequately and that both fast and slow mechanisms of adaptation influence evoked-response morphology; both mechanisms should therefore be considered when time-invariance is assumed. The significance of these findings is discussed. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
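A jittered (non-periodic) stimulation sequence of the kind IRSA-type methods require can be generated as in the following sketch; the uniform jitter distribution and the parameter values are illustrative assumptions:

```python
import random

def jittered_stimulation_sequence(n_stimuli, mean_isi_ms, jitter_ms, seed=0):
    """Stimulus onset times (ms) whose inter-stimulus intervals are drawn
    uniformly from [mean - jitter, mean + jitter], so the sequence is not
    periodic, as required for deconvolving overlapping fast-rate AEPs."""
    rng = random.Random(seed)  # seeded for a reproducible sequence
    onsets, t = [], 0.0
    for _ in range(n_stimuli):
        onsets.append(t)
        t += rng.uniform(mean_isi_ms - jitter_ms, mean_isi_ms + jitter_ms)
    return onsets
```

Split-IRSA would then average only the sweeps meeting a predefined criterion (for example, preceded by a short or by a long interval), which is what lets it test the time-invariance assumption.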
Ye, Naihao; Li, Demao; Chen, Limei; Zhang, Xiaowen; Xu, Dong
2010-01-01
Seaweed has attracted considerable attention as a potential biofuel feedstock. The pyrolytic and kinetic characteristics of maize straw and the seaweed Ulva pertusa were studied and compared using heating rates of 10, 30 and 50 °C min−1 under an inert atmosphere. The activation energy and pre-exponential factors were calculated by the Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS) and Popescu methods. The kinetic mechanism was deduced by the Popescu method. The results indicate that there are three stages to the pyrolysis: dehydration, primary devolatilization and residual decomposition. There were significant differences in average activation energy, thermal stability, final residuals and reaction rates between the two materials. The primary devolatilization stage of U. pertusa can be described by the Avrami-Erofeev equation (n = 3), whereas that of maize straw can be described by the Mampel power law (n = 2). The average activation energies of maize straw and U. pertusa were 153.0 and 148.7 kJ mol−1, respectively. The pyrolysis of U. pertusa would therefore proceed more easily than that of maize straw, and co-firing of the two biomasses may require less external heat input and improve process stability. There were minor kinetic compensation effects between the pre-exponential factors and the activation energy. PMID:20844751
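The FWO method mentioned above estimates the activation energy from the slope of ln(β) against 1/T at a fixed conversion; under Doyle's approximation the slope equals −1.052·Ea/R. A minimal sketch, with made-up temperatures:

```python
from math import log

R = 8.314  # gas constant, J mol^-1 K^-1

def fwo_activation_energy(heating_rates, temps_K):
    """Flynn-Wall-Ozawa estimate: least-squares slope of ln(beta) versus
    1/T at a fixed conversion, with slope = -1.052 * Ea / R."""
    x = [1.0 / T for T in temps_K]
    y = [log(b) for b in heating_rates]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return -slope * R / 1.052  # activation energy in J/mol

# Hypothetical temperatures (K) at 50% conversion for 10, 30 and 50 C/min
ea = fwo_activation_energy([10.0, 30.0, 50.0], [570.0, 585.0, 593.0])
```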
Feature Detection of Curve Traffic Sign Image on The Bandung - Jakarta Highway
NASA Astrophysics Data System (ADS)
Naseer, M.; Supriadi, I.; Supangkat, S. H.
2018-03-01
Unsealed roadsides and problems with the road surface are common causes of road crashes, particularly when combined with curves. The curve traffic sign is an important component for giving early warning to drivers, especially in high-speed traffic such as on the highway. Traffic sign detection has become a very interesting research area, and this paper discusses the detection of curve traffic signs. Two types of curve sign are considered, namely the curve turning to the left and the curve turning to the right, and all data samples used are curve signs recorded on the Bandung - Jakarta Highway. Feature detection of the curve signs uses the Speeded-Up Robust Features (SURF) method, where the detected scene image is 800×450. Of 45 curve-turn-to-the-right images, the system detected the feature well in 35 images, a success rate of 77.78%, while of the 45 curve-turn-to-the-left images, the system detected the feature well in 34 images, a success rate of 75.56%, so the average accuracy of the detection process is 76.67%. The average time for the detection process is 0.411 seconds.
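The reported accuracies are simple detection success rates, which can be checked directly:

```python
def success_rate(detected, total):
    """Percent of images in which the sign's features were detected."""
    return 100.0 * detected / total

right = success_rate(35, 45)         # about 77.78% for right-curve signs
left = success_rate(34, 45)          # about 75.56% for left-curve signs
overall = success_rate(35 + 34, 90)  # about 76.67%, the paper's average
```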
Perspectives on the changing healthcare system: teaching systems-based practice to medical residents
Martinez, Johanna; Phillips, Erica; Fein, Oliver
2013-01-01
Purpose The Accreditation Council for Graduate Medical Education restructured its accreditation system to be based on educational outcomes in six core competencies. Systems-based practice (SBP) is one of the six core competencies. The purpose of this report is to describe Weill Cornell Medical College's Internal Medicine Residency program curriculum for SBP, a course entitled Perspectives on the Changing Healthcare System (POCHS), and its evaluation process. Methods To examine potential outcomes of the POCHS curriculum, an evaluation was conducted, examining participants': (1) knowledge gain; (2) course ratings; and (3) qualitative feedback. Results On average, there was a 19 percentage point increase in knowledge test scores for all three cohorts. The course was rated highly overall, receiving an average of 4.6 on a 1–5 scale. Lastly, the qualitative comments supported that the material is needed and valued. Conclusion The POCHS course and its evaluation process support the view that systems-based practice is crucial to residency education. The course is designed not only to educate residents about the current health care system but also to enable them to think critically about the risks and benefits of changes to it. POCHS provides a framework for teaching and assessing this competency and can serve as a template for other residency programs looking to create or restructure their SBP curriculum. PMID:24001523
Hotspot of accelerated sea-level rise on the Atlantic coast of North America
Sallenger, Asbury H.; Doran, Kara S.; Howd, Peter A.
2012-01-01
Climate warming does not force sea-level rise (SLR) at the same rate everywhere. Rather, there are spatial variations of SLR superimposed on a global average rise. These variations are forced by dynamic processes, arising from circulation and variations in temperature and/or salinity, and by static equilibrium processes, arising from mass redistributions changing gravity and the Earth's rotation and shape. These sea-level variations form unique spatial patterns, yet there are very few observations verifying predicted patterns or fingerprints. Here, we present evidence of recently accelerated SLR in a unique 1,000-km-long hotspot on the highly populated North American Atlantic coast north of Cape Hatteras and show that it is consistent with a modelled fingerprint of dynamic SLR. Between 1950–1979 and 1980–2009, SLR rate increases in this northeast hotspot were ~ 3–4 times higher than the global average. Modelled dynamic plus steric SLR by 2100 at New York City ranges with Intergovernmental Panel on Climate Change scenario from 36 to 51 cm (ref. 3); lower emission scenarios project 24–36 cm (ref. 7). Extrapolations from data herein range from 20 to 29 cm. SLR superimposed on storm surge, wave run-up and set-up will increase the vulnerability of coastal cities to flooding, and beaches and wetlands to deterioration.
NASA Astrophysics Data System (ADS)
Liang, Yu; Zhou, Liying; Huang, Haomin; Xu, Mingfei; Guo, Mei; Chen, Xin
2018-01-01
A GDI system is installed on an F188 single-cylinder, air-cooled, direct injection diesel engine and used for ethanol injection, with the injection timing controlled by the crank angle signal collected by an AVL angle encoder. The injected ethanol amounts to half of the thermal equivalent of the original diesel fuel. A 3D combustion model is established for the ethanol-diesel dual direct injection engine. Diesel is injected from the original fuel injection system, with a fuel supply advance angle of 20°CA. The ethanol is injected into the cylinder during the compression process, and diesel injection begins after the completion of ethanol injection. Ethanol injection starting points of 240°CA, 260°CA, 280°CA, 300°CA and 319.4°CA were simulated and analyzed. Because the timing of ethanol injection differs, the distribution and concentration gradients of the ethanol mixture in the cylinder differ when the diesel ignites the mixture, resulting in non-uniform ignition distribution and flame propagation rates and thus affecting the combustion process. The results show that, when ethanol is injected at 319.4°CA, the combustion heat release rate and the pressure rise rate during the initial stage are the highest, and the maximum combustion pressure, with a relatively advanced phase, is also the highest. With later initial ethanol injection, the average temperature in the cylinder rises faster during the initial combustion period. For initial injection at 319.4°CA, the average temperature in the cylinder is the highest, followed by injection at 240°CA. In the post-combustion stage, earlier ethanol injection results in a higher average cylinder temperature and more complete fuel combustion. Injection of ethanol at 319.4°CA produces the earliest and highest NOx emissions.
Arrangement Analysis of Leaves Optimized on Photon Flux Density or Photosynthetic Rate
NASA Astrophysics Data System (ADS)
Obara, Shin'ya; Tanno, Itaru
Clarifying plant evolutionary processes may yield information useful in engineering. Accordingly, an analysis algorithm that investigates the optimal arrangement of plant leaves was developed. In the developed algorithm, the Monte Carlo method is introduced and sunlight is simulated. Moreover, the arrangement optimization of leaves is analyzed using a Genetic Algorithm (GA). The number of light quanta reaching the leaves (photon flux density), or their average photosynthetic rate, was set as the objective function, and leaf models of a dogwood and a ginkgo tree were analyzed. The number of leaf models was set between two and four, and the position of each leaf was expressed in terms of the angle of direction, elevation angle, rotation angle, and the representative length of the branch of a leaf. The chromosome model introduced into the GA consists of information concerning the position of the leaf. Based on the analysis results, the characteristics of the leaves of an actual plant could be simulated by ensuring the algorithm had multiple constraint conditions. The optimal arrangement of leaves differs depending on whether the photon flux density or the average photosynthetic rate is maximized. Furthermore, leaf form was shown to affect the optimal arrangement of leaves and to have a significant influence on the photosynthetic rate.
Towards a sensorimotor aesthetics of performing art.
Calvo-Merino, B; Jola, C; Glaser, D E; Haggard, P
2008-09-01
The field of neuroaesthetics attempts to identify the brain processes underlying aesthetic experience, including but not limited to beauty. Previous neuroaesthetic studies have focussed largely on paintings and music, while performing arts such as dance have been less studied. Nevertheless, increasing knowledge of the neural mechanisms that represent the bodies and actions of others, and which contribute to empathy, make a neuroaesthetics of dance timely. Here, we present the first neuroscientific study of aesthetic perception in the context of the performing arts. We investigated brain areas whose activity during passive viewing of dance stimuli was related to later, independent aesthetic evaluation of the same stimuli. Brain activity of six naïve male subjects was measured using fMRI, while they watched 24 dance movements, and performed an irrelevant task. In a later session, participants rated each movement along a set of established aesthetic dimensions. The ratings were used to identify brain regions that were more active when viewing moves that received high average ratings than moves that received low average ratings. This contrast revealed bilateral activity in the occipital cortices and in right premotor cortex. Our results suggest a possible role of visual and sensorimotor brain areas in an automatic aesthetic response to dance. This sensorimotor response may explain why dance is widely appreciated in so many human cultures.
Passive dosimetry aboard the Mir Orbital Station: external measurements.
Benton, E R; Benton, E V; Frank, A L
2002-10-01
This paper reports results from the first measurements made on the exterior of a LEO spacecraft of mean dose equivalent rate and average quality factor as functions of shielding depth for shielding less than 1 g/cm2 Al equivalent. Two sets of measurements were made on the outside of the Mir Orbital Station; one near solar maximum in June 1991 and one near solar minimum in 1997. Absorbed dose was measured using stacks of TLDs. The LET spectrum from charged particles of LET∞,H2O ≥ 5 keV/μm was measured using stacks of CR-39 PNTDs. Results from the TLD and PNTD measurements at a given shielding depth were combined to yield mean total dose rate, mean dose equivalent rate, and average quality factor. Measurements made near solar maximum tend to be greater than those made during solar minimum. Both mean dose rate and mean dose equivalent rate decrease by nearly four orders of magnitude within the first g/cm2 of shielding, illustrating the attenuation of both trapped electrons and low-energy trapped protons. In order to overcome problems with detector saturation after standard chemical processing, measurement of the LET spectrum in the least shielded CR-39 PNTD layer (0.005 g/cm2 Al) was carried out using an atomic force microscope. © 2002 Elsevier Science Ltd. All rights reserved.
Passive dosimetry aboard the Mir Orbital Station: external measurements
NASA Technical Reports Server (NTRS)
Benton, E. R.; Benton, E. V.; Frank, A. L.
2002-01-01
This paper reports results from the first measurements made on the exterior of a LEO spacecraft of mean dose equivalent rate and average quality factor as functions of shielding depth for shielding less than 1 g/cm2 Al equivalent. Two sets of measurements were made on the outside of the Mir Orbital Station; one near solar maximum in June 1991 and one near solar minimum in 1997. Absorbed dose was measured using stacks of TLDs. The LET spectrum from charged particles of LET∞,H2O ≥ 5 keV/μm was measured using stacks of CR-39 PNTDs. Results from the TLD and PNTD measurements at a given shielding depth were combined to yield mean total dose rate, mean dose equivalent rate, and average quality factor. Measurements made near solar maximum tend to be greater than those made during solar minimum. Both mean dose rate and mean dose equivalent rate decrease by nearly four orders of magnitude within the first g/cm2 of shielding, illustrating the attenuation of both trapped electrons and low-energy trapped protons. In order to overcome problems with detector saturation after standard chemical processing, measurement of the LET spectrum in the least shielded CR-39 PNTD layer (0.005 g/cm2 Al) was carried out using an atomic force microscope. © 2002 Elsevier Science Ltd. All rights reserved.
Perry, Chris T; Murphy, Gary N; Kench, Paul S; Edinger, Evan N; Smithers, Scott G; Steneck, Robert S; Mumby, Peter J
2014-12-07
Coral cover has declined rapidly on Caribbean reefs since the early 1980s, reducing carbonate production and reef growth. Using a cross-regional dataset, we show that widespread reductions in bioerosion rates, a key carbonate cycling process, have accompanied carbonate production declines. Bioerosion by parrotfish, urchins, endolithic sponges and microendoliths collectively averages 2 G (where G = kg CaCO3 m⁻² yr⁻¹) (range 0.96-3.67 G). This rate is at least 75% lower than that reported from Caribbean reefs prior to their shift towards their present degraded state. Despite chronic overfishing, parrotfish are the dominant bioeroders, but erosion rates are reduced from averages of approximately 4 to 1.6 G. Urchin erosion rates have declined further and are functionally irrelevant to bioerosion on most reefs. These changes demonstrate a fundamental shift in Caribbean reef carbonate budget dynamics. To date, reduced bioerosion rates have partially offset carbonate production declines, limiting the extent to which more widespread transitions to negative budget states have occurred. However, given the poor prognosis for coral recovery in the Caribbean and reported shifts to coral community states dominated by slower calcifying taxa, a continued transition from production- to bioerosion-controlled budget states, which will increasingly threaten reef growth, is predicted. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Willinger, Ulrike; Hergovich, Andreas; Schmoeger, Michaela; Deckert, Matthias; Stoettner, Susanne; Bunda, Iris; Witting, Andrea; Seidler, Melanie; Moser, Reinhilde; Kacena, Stefanie; Jaeckle, David; Loader, Benjamin; Mueller, Christian; Auff, Eduard
2017-05-01
Humour processing is a complex information-processing task that depends on cognitive and emotional aspects which presumably influence frame-shifting and conceptual blending, the mental operations that underlie humour processing. The aim of the current study was to find distinctive groups of subjects with respect to black humour processing, intellectual capacities, mood disturbance and aggressiveness. A total of 156 adults rated black humour cartoons and completed measurements of verbal and nonverbal intelligence, mood disturbance and aggressiveness. Cluster analysis yielded three groups with the following properties: (1) moderate black humour preference and moderate comprehension; average nonverbal and verbal intelligence; low mood disturbance and moderate aggressiveness; (2) low black humour preference and moderate comprehension; average nonverbal and verbal intelligence; high mood disturbance and high aggressiveness; and (3) high black humour preference and high comprehension; high nonverbal and verbal intelligence; no mood disturbance and low aggressiveness. Age and gender did not differ significantly; differences in education level were found. Black humour preference and comprehension are positively associated with higher verbal and nonverbal intelligence as well as higher levels of education. Emotional instability and higher aggressiveness apparently lead to decreased levels of pleasure when dealing with black humour. These results support the hypothesis that humour processing involves cognitive as well as affective components and suggest that these variables influence the execution of frame-shifting and conceptual blending in the course of humour processing.
The Actual (Un)usefulness of the Bologna System in Medical Education.
Masic, Izet; Begic, Edin
2016-04-01
The Faculty of Medicine, University of Sarajevo, officially began operating on 22 November 1944 and is the oldest faculty in the medical field in Bosnia and Herzegovina. Two systems of organizing the teaching process currently run in parallel: the old system and the Bologna system. The aim was to analyze the implementation of the Bologna system and assess its justification. Questionnaire responses from a total of 459 students were analyzed (197 who had studied under the old system and 262, spanning four generations, who studied under the Bologna system). The students filled out a questionnaire in which they evaluated the teaching process. Students' opinions about the quality of medical education were measured on a modified Likert scale. Students of the old system are older than students of the Bologna process, whose average age is increasing from generation to generation given the growing number of students who repeat a year. All students of the old system had repeated an academic year once or several times (p < 0.05). Analysis of average grades showed a statistically significant difference (p < 0.05): students in the Bologna system had higher averages than students who studied under the old system. The predominance of female students in both systems was significant (p < 0.05). Of 33 questions about satisfaction with classes, 15 received higher average ratings from students of the Bologna system. The Bologna system showed a slight improvement in evaluations of the quality of the educational process (teachers, methods, effects).
The only significant progress was in ratings of the degree of computerization of the educational process: general records on enrolled students (old system vs Bologna system: 3.44 vs 3.63), records of attendance (3.47 vs 3.73), obtaining certificates (3.08 vs 3.84), method of exam registration (2.98 vs 3.71), method of practical exam (3.06 vs 3.36) and method of theoretical exam (3.01 vs 3.14). The fact that no question received an average rating of 4 on the Likert scale points to real problems of education in the medical field. In a relatively large sample (four generations of students), no true benefit or progress of the Bologna system over the old system was demonstrated. The Bologna system has eased the administrative burden on students through the computerization of faculties, but the old problems and old questions about the organization and delivery of the teaching process have not been eliminated.
Firm profitability and the network of organizational capabilities
NASA Astrophysics Data System (ADS)
Wagner, Friedrich; Milaković, Mishael; Alfarano, Simone
2010-11-01
A Laplace distribution for firm profit rates (or returns on assets) can be obtained through the sum of many independent shocks if the number of shocks is Poisson distributed. Interpreting this as a linear chain of events, we generalize the process to a hierarchical network structure. The hierarchical model reproduces the observed distributional patterns of firm profitability, which crucially depend on the life span of firms. While the profit rates of long-lived firms obey a symmetric Laplace distribution, short-lived firms display a different behavior depending on whether they are capable of generating positive profits or not. Successful short-lived firms exhibit a symmetric yet more leptokurtic pdf than long-lived firms. Our model suggests that these firms are more dynamic in their organizational capabilities, but on average also face more risk than long-lived firms. Finally, short-lived firms that fail to generate positive profits have the most leptokurtic distribution among the three classes, and on average lose slightly more than their total assets within a year.
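The generating mechanism described above is easy to simulate: draw a Poisson number of shocks per firm and sum them (a compound Poisson). The parameter values below are illustrative, not estimates from firm data; the point is that the resulting profit-rate distribution is leptokurtic (heavier-tailed than a Gaussian), in the direction of a Laplace.

```python
import numpy as np

# Compound Poisson sketch of the profit-rate mechanism: each firm receives
# N ~ Poisson(lam) independent Gaussian shocks and its "profit rate" is
# their sum. Parameters are illustrative assumptions.
rng = np.random.default_rng(42)
n_firms = 20_000
lam = 5.0                                        # mean number of shocks per firm
n_shocks = rng.poisson(lam, size=n_firms)
profits = np.array([rng.normal(0.0, 1.0, n).sum() for n in n_shocks])

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (0 for a Gaussian, 3 for Laplace)."""
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3.0

k = excess_kurtosis(profits)   # theory for this model: 3 / lam = 0.6
```

For Gaussian shocks the model's excess kurtosis is 3/λ, so fewer shocks per firm (smaller λ, as for short-lived firms) means heavier tails, consistent with the pattern the abstract describes.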
Safak, Ilgar; List, Jeffrey; Warner, John C.; Kumar, Nirnimesh
2017-01-01
Long-term decadal-scale shoreline change is an important parameter for quantifying the stability of coastal systems. Decadal-scale coastal change is controlled by processes that occur on short time scales (such as storms) and by long-term processes (such as prevailing waves). The ability to predict decadal-scale shoreline change is not well established, and the fundamental physical processes controlling this change are not well understood. Here we investigate the processes that create large-scale long-term shoreline change along the Outer Banks of North Carolina, an uninterrupted 60 km stretch of coastline, using both observations and a numerical modeling approach. Shoreline positions for a 24-yr period were derived from aerial photographs of the Outer Banks. Analysis of the shoreline position data showed that, although variable, the shoreline eroded an average of 1.5 m/yr throughout this period. The modeling approach uses a three-dimensional hydrodynamics-based numerical model coupled to a spectral wave model and simulates the full 24-yr time period on a spatial grid running on a short (second-scale) time-step to compute the sediment transport patterns. The observations and the model results show similar magnitudes (O(10⁵ m³/yr)) and patterns of alongshore sediment fluxes. Both the observed and the modeled alongshore sediment transport rates change more rapidly in the northern part of the section, due to the continuously curving coastline and possible effects of alongshore variations in shelf bathymetry. The southern section, which has a relatively uniform orientation, shows less rapid changes in transport rates. Alongshore gradients of the modeled sediment fluxes are translated into shoreline change rates that agree with observations in some locations but diverge in others. Differences between observations and model results are potentially influenced by geologic framework processes not included in the model.
Both the observations and the model results show higher rates of erosion (∼−1 m/yr) averaged over the northern half of the section, as compared to the southern half, where the observed and modeled average net shoreline changes are smaller (<0.1 m/yr). The model indicates accretion in some shallow embayments, whereas observations indicate erosion in these locations. Further analysis identifies that the magnitude of net alongshore sediment transport is strongly dominated by events associated with high wave energy. However, both large- and small-wave events cause shoreline change of the same order of magnitude, because it is the gradients in transport, not their magnitude, that control shoreline change. Results also indicate that the alongshore momentum balance is not a simple balance between wave breaking and bottom stress, but also includes horizontal vortex force, horizontal advection and pressure-gradient terms that contribute to long-term alongshore sediment transport. As a comparison with a simpler approach, an empirical formulation for alongshore sediment transport is used. The empirical estimates capture the effect of the breaking term in the hydrodynamics-based model; however, the other processes accounted for in the hydrodynamics-based model improve the agreement with the observed alongshore sediment transport.
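The statement that gradients in transport, not its magnitude, control shoreline change is the familiar one-line continuity relation; it can be sketched with illustrative numbers (the flux profile and closure depth below are assumptions, not the study's values).

```python
import numpy as np

# One-line shoreline continuity sketch: shoreline change is driven by the
# alongshore gradient of the sediment flux Q via
#     dy/dt = -(1/D) * dQ/dx,    D = closure depth.
# The sinusoidal flux profile and depth are illustrative placeholders.
x = np.linspace(0.0, 60_000.0, 601)           # 60 km of coast, 100 m spacing
Q = 1.0e5 * (1.0 + 0.3 * np.sin(2.0 * np.pi * x / 30_000.0))   # m^3/yr
D = 8.0                                        # assumed closure depth (m)
dQdx = np.gradient(Q, x)                       # m^3/yr per m of coast
dydt = -dQdx / D                               # shoreline change rate, m/yr

# A spatially uniform flux moves the same O(10^5 m^3/yr) of sand through
# every transect but produces zero shoreline change:
uniform = -np.gradient(np.full_like(x, 1.0e5), x) / D
```

Even though Q is everywhere O(10⁵ m³/yr), the resulting shoreline change is only O(1 m/yr), set entirely by where the flux converges or diverges.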
ALSEP data processing: how we processed Apollo lunar seismic data
NASA Technical Reports Server (NTRS)
Latham, G. V.; Nakamura, Y.; Dorman, H. J.
1979-01-01
The Apollo lunar seismic station network gathered data continuously at a rate of 3 × 10⁸ bits per day for nearly eight years, until its termination in September 1977. The data were processed and analyzed using a PDP-15 minicomputer. On average, 1500 long-period seismic events were detected yearly. Automatic event detection and identification schemes proved unsuccessful because of occasional high noise levels and, above all, the risk of overlooking unusual natural events. The processing procedures finally settled on consist of first plotting all the data on a compressed time scale, visually picking events from the plots, transferring event data to separate sets of tapes, and performing detailed analyses on the latter. Many problems remain, especially in the automatic processing of extraterrestrial seismic signals.
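As a back-of-envelope check of the data volume these figures imply:

```python
# Rough volume of the Apollo seismic dataset from the abstract's figures
# (3e8 bits/day for roughly eight years; leap days ignored).
bits_per_day = 3e8
days = 8 * 365
total_bits = bits_per_day * days           # ~8.8e11 bits
total_gigabytes = total_bits / 8 / 1e9     # ~110 GB in modern terms
```

About a hundred gigabytes in total, which explains why visual scanning of compressed-time-scale plots on a minicomputer was a workable, if laborious, strategy.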
Wang, Wen J; He, Hong S; Thompson, Frank R; Spetich, Martin A; Fraser, Jacob S
2018-09-01
Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological traits and environmental heterogeneity affect species distribution shifts. We used a species-specific, spatially explicit forest dynamics model, LANDIS PRO, which incorporates site-scale tree species demography and competition, landscape-scale dispersal and disturbances, and regional-scale abiotic controls, to simulate the distribution shifts of four representative tree species with distinct biological traits in the central hardwood forest region of the United States. Our results suggested that biological traits (e.g., dispersal capacity, maturation age) were important for determining tree species distribution shifts. Environmental heterogeneity, on average, reduced shift rates by 8% compared to perfect environmental conditions. The average distribution shift rates ranged from 24 to 200 m yr⁻¹ under climate change scenarios, implying that many tree species may not be able to keep up with climate change because of limited dispersal capacity, long generation times, and environmental heterogeneity. We suggest that climate-distribution models should include species demographic processes (e.g., fecundity, dispersal, colonization), biological traits (e.g., dispersal capacity, maturation age), and environmental heterogeneity (e.g., habitat fragmentation) to improve future predictions of species distribution shifts in response to changing climates. Copyright © 2018 Elsevier B.V. All rights reserved.
Influential factors of formation kinetics of flocs produced by water treatment coagulants.
Wu, Chunde; Wang, Lin; Hu, Bing; Ye, Jian
2013-05-01
The growth rate and size of flocs are of great importance in water treatment, especially in the coagulation process. The floc formation kinetics and the coagulation efficiency of a synthetic water were investigated using an on-line continuous optical photometric dispersion analyzer and analysis of water quality. Experimental conditions such as alum dosage, coagulation pH, stirring intensity and initial turbidity were extensively examined. The photometric dispersion analyzer results showed that coagulation of kaolin suspensions with two coagulants (alum and polyaluminium chloride) can be treated as a two-phase process with slow and rapid growth periods. Operating conditions with higher coagulant doses and appropriate pH and average shear rate were particularly advantageous. The rate of overall floc growth was mainly determined by a combination of hydraulic and water quality conditions such as pH and turbidity. Zeta potential measurements indicate that polyaluminium chloride exhibited higher charge-neutralizing ability than alum and achieved lower turbidities for equivalent Al dosages. Under the same operating conditions, alum showed a higher growth rate but smaller floc size.
Topographic expression of active faults in the foothills of the Northern Apennines
NASA Astrophysics Data System (ADS)
Picotti, Vincenzo; Ponza, Alessio; Pazzaglia, Frank J.
2009-09-01
Active faults that rupture the Earth's surface leave an imprint on the topography that is recognized using a combination of geomorphic and geologic metrics, including triangular facets, the shape of mountain fronts, the drainage network, and incised river valleys with inset terraces. We document the presence of a network of active, high-angle extensional faults, collectively embedded in the actively shortening mountain front of the Northern Apennines, that possess unique geomorphic expressions. We measure the strain rate for these structures and find that they have a constant throw-to-length ratio. We demonstrate the necessary and sufficient conditions for triangular facet development in the footwalls of these faults and argue that rock type exerts the strongest control. The slip rates of these faults range from 0.1 to 0.3 mm/yr, which is similar to the average rate of river incision and mountain front unroofing determined by corollary studies. The faults are a near-surface manifestation of deeper crustal processes that are actively uplifting rocks and growing topography at a rate commensurate with the surface processes that are eroding the mountain front to base level.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Sugar Loan Program § 1435.101 Loan rates. (a) The national average loan rate for raw cane sugar produced from domestically grown sugarcane is: 18... for the 2012 crop year. (b) The national average loan rate for refined beet sugar from domestically...
Code of Federal Regulations, 2011 CFR
2011-01-01
... AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Sugar Loan Program § 1435.101 Loan rates. (a) The national average loan rate for raw cane sugar produced from domestically grown sugarcane is: 18... for the 2012 crop year. (b) The national average loan rate for refined beet sugar from domestically...
Code of Federal Regulations, 2013 CFR
2013-01-01
... AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Sugar Loan Program § 1435.101 Loan rates. (a) The national average loan rate for raw cane sugar produced from domestically grown sugarcane is: 18... for the 2012 crop year. (b) The national average loan rate for refined beet sugar from domestically...
Code of Federal Regulations, 2014 CFR
2014-01-01
... AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Sugar Loan Program § 1435.101 Loan rates. (a) The national average loan rate for raw cane sugar produced from domestically grown sugarcane is: 18... for the 2012 crop year. (b) The national average loan rate for refined beet sugar from domestically...
Code of Federal Regulations, 2012 CFR
2012-01-01
... AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Sugar Loan Program § 1435.101 Loan rates. (a) The national average loan rate for raw cane sugar produced from domestically grown sugarcane is: 18... for the 2012 crop year. (b) The national average loan rate for refined beet sugar from domestically...
5 CFR 532.241 - Analysis of usable wage survey data.
Code of Federal Regulations, 2010 CFR
2010-01-01
... average rates identified and computed under paragraph (a) of this section. (ii) The frequency payline... PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.241 Analysis of usable wage survey data. (a)(1) The lead agency shall compute a weighted average rate for each appropriated fund survey job having at...
Chambers, Chester G; Dada, Maqbool; Elnahal, Shereef; Terezakis, Stephanie; DeWeese, Theodore; Herman, Joseph; Williams, Kayode A
2016-10-18
We examine interactions among 3 factors that affect patient waits and the use of overtime in outpatient clinics: clinic congestion, patient punctuality and physician processing rates. We hypothesise that the first 2 factors affect physician processing rates, and that this adaptive physician behaviour serves to reduce waiting times and the use of overtime. 2 urban academic clinics and an affiliated suburban clinic in metropolitan Baltimore, Maryland, USA. Appointment times, patient arrival times, start of service and physician processing times were collected for 105 visits at a low-volume suburban clinic (clinic 1), 264 visits at a medium-volume academic clinic (clinic 2) and 22 266 visits at a high-volume academic clinic (clinic 3) over 3 distinct spans of time. Data from the first clinic were previously used to document an intervention to influence patient punctuality, which included a policy that tardy patients were rescheduled. Clinicians' processing times were gathered, conditioned on whether the patient or clinician was tardy, to test the first hypothesis. Probability distributions of patient unpunctuality were developed preintervention and postintervention for the clinic in which the intervention took place, and these data were used to seed a discrete-event simulation. Average physician processing times differ conditioned on tardiness at clinic 1 with p=0.03, at clinic 2 with p=10⁻⁵ and at clinic 3 with p=10⁻⁷. Within the simulation, the adaptive physician behaviour degrades system performance by increasing waiting times, the probability of overtime and the average amount of overtime used. Each of these changes is significant at the p<0.01 level. Processing times differed for patients in different states in all 3 settings studied. When present, this can be verified using data commonly collected. Ignoring these behaviours leads to faulty conclusions about the efficacy of efforts to improve clinic flow. Published by the BMJ Publishing Group Limited.
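A toy single-server model can make the state-dependent processing-rate idea concrete. Everything below (the schedule, the punctuality range, the service-time distribution and the 15% speed-up under congestion) is an illustrative assumption, not the study's discrete-event simulation; note that in this simple sketch speeding up under congestion shortens waits, whereas the study's richer model found that the adaptive behaviour degraded performance, so the sketch illustrates only the modelling mechanism, not the reported result.

```python
import random

def run_clinic(adaptive, n_patients=200, slot=20.0, seed=1):
    """FCFS single-server clinic; returns the average patient wait (minutes).

    If `adaptive`, the physician's mean service time drops 15% whenever the
    current patient had to wait (a crude congestion signal).
    """
    rng = random.Random(seed)
    t = 0.0                # time the physician becomes free
    total_wait = 0.0
    for i in range(n_patients):
        arrival = i * slot + rng.uniform(-10.0, 10.0)   # unpunctual arrivals
        start = max(t, arrival)
        congested = start > arrival                     # patient waited
        mean_service = 18.0 * (0.85 if (adaptive and congested) else 1.0)
        t = start + rng.expovariate(1.0 / mean_service)
        total_wait += start - arrival
    return total_wait / n_patients

wait_fixed = run_clinic(adaptive=False)
wait_adaptive = run_clinic(adaptive=True)
```

Because both runs share one random seed, the adaptive run's service times are never longer than the fixed run's, so here adaptivity can only reduce waits; capturing the degradation the study reports requires the richer state-dependent behaviour of their seeded simulation.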
Huang, Qunfang; Lu, Yuqi
2015-07-27
The Yangtze River Delta (YRD) has experienced rapid urbanization and dramatic economic development since 1978, and the Yangtze River Delta urban agglomeration (YRDUA) has become one of the three largest urban agglomerations in China. We present evidence of a significant urban heat island (UHI) effect on climate warming based on an analysis of the impacts of the urbanization rate, urban population, and land use changes on the warming rates of the daily average, minimum (nighttime) and maximum (daytime) air temperature in the YRDUA, using observational data from 41 meteorological stations. The effect of the UHI on climate warming shows large spatial variability. The average warming rates of the average air temperature of huge cities, megalopolises, large cities, medium-sized cities, and small cities are 0.483, 0.314 ± 0.030, 0.282 ± 0.042, 0.225 ± 0.044 and 0.179 ± 0.046 °C/decade over the period 1957-2013, respectively. The average warming rates of huge cities and megalopolises are significantly higher than those of medium-sized and small cities, indicating that the UHI has a significant effect on climate warming (t-test, p < 0.05). Significantly positive correlations are found between the urbanization rate, population, built-up area and the warming rate of average air temperature (p < 0.001). The average warming rate of average air temperature attributable to urbanization is 0.124 ± 0.074 °C/decade in the YRDUA. Urbanization has a measurable effect on the observed climate warming in the YRD, aggravating global climate warming.
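A crude way to see the scale of the UHI signal in the reported rates is to treat the small-city class as a rural-like baseline and difference the other classes against it. This is our simplification for illustration, not the study's attribution method (which reports an average urban contribution of 0.124 °C/decade).

```python
# Warming rates by city class from the abstract, in deg C/decade.
rates = {"huge": 0.483, "megalopolis": 0.314, "large": 0.282,
         "medium": 0.225, "small": 0.179}

# Excess warming over the small-city baseline: a rough, illustrative
# proxy for the UHI contribution in each class.
baseline = rates["small"]
uhi_excess = {city: round(rate - baseline, 3) for city, rate in rates.items()}
# huge cities: 0.304 deg C/decade above the baseline; the study's reported
# average urban contribution of 0.124 falls inside this 0-0.304 spread.
```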
Does Avicennia germinans expansion alter salt marsh nitrogen removal capacity?
NASA Astrophysics Data System (ADS)
Tatariw, C.; Kleinhuizen, A.; Rajan, S.; Flournoy, N.; Sobecky, P.; Mortazavi, B.
2017-12-01
Plant species expansion poses risks to ecosystem services through alterations to plant-microbiome interactions associated with changes to key microbial drivers such as organic carbon (C) substrates, nitrogen (N) availability, and rhizosphere-associated microbial communities. In the northern Gulf of Mexico (GOM), warming winter temperatures associated with climate change have promoted Avicennia germinans (black mangrove) expansion into salt marshes. To date, there is limited knowledge regarding the effects of mangrove expansion on vital ecosystem services such as N cycling in the northern GOM. We designed a field-based study to determine the potential effects of mangrove expansion on salt marsh N biogeochemical cycling in the Spartina alterniflora dominated Chandeleur Islands (LA, USA). We used a combination of process rate measurements and metadata to: (1) determine the impact of mangrove expansion on salt marsh denitrification and dissimilatory nitrate reduction to ammonium (DNRA), with the goal of quantifying losses or gains in ecosystem services; and (2) identify the mechanisms driving changes in ecosystem services to improve predictions about the impacts of mangrove expansion on salt marsh functional resiliency. The pneumatophore root structure of A. germinans is efficient at delivering oxygen (O2) to sediment, which can promote coupled nitrification-denitrification and decrease sulfide inhibition. We hypothesized that increased sediment O2, when coupled with cooler soil temperatures caused by plant shading, would favor denitrification over DNRA. An increase in sediment O2, as well as the higher N content of A. germinans litter, would also result in a shift in the microbial community. Initial findings indicated that the denitrification pathway dominates over DNRA regardless of vegetation type, with average denitrification rates of 30.1 µmol N kg⁻¹ h⁻¹ versus average DNRA rates of 8.5 µmol N kg⁻¹ h⁻¹.
However, neither denitrification nor DNRA rates differed between vegetation types. Additional results of the study will relate process rate measurements to sediment physicochemical characteristics and rhizosphere-associated microbial communities from both vegetation types to identify drivers of long-term change in ecosystem services associated with mangrove expansion.
A Novel Mental Health Crisis Service - Outcomes of Inpatient Data.
Morrow, R; McGlennon, D; McDonnell, C
2016-01-01
Northern Ireland has high mental health needs and a rising suicide rate. Our area has suffered a 32% reduction in inpatient beds, consistent with the national drive towards community-based treatment. Taking these factors into account, a new Mental Health Crisis Service was developed incorporating a high-fidelity Crisis Response Home Treatment Team (CRHTT), an Acute Day Care facility and two inpatient wards. The aim was to provide alternatives to inpatient admission. The new service would facilitate transition between inpatient and community care while decreasing bed occupancy and increasing treatment in the community. All services and processes were reviewed to assess deficiencies in current care. There was extensive consultation with internal and external stakeholders and process mapping using the COBRAs framework as a basis for the service improvement model. The project team set the service criteria and reviewed progress. In the original service model, the average inpatient occupancy rate was 106.6%, the admission rate was 48 patients per month and the total length of stay was 23.4 days. After introducing the inpatient consultant hospital model, the average occupancy rate decreased to 90%, admissions to 43 per month and total length of stay to 22 days. These figures decreased further to 83% occupancy, 32 admissions per month and a total length of stay of 12 days after CRHTT initiation. The Crisis Service is still being evaluated, but the model has so far provided safe alternatives to inpatient care. Involvement with patients, carers and all multidisciplinary teams is maximised to improve the quality and safety of care. Innovative ideas, including a structured weekly timetable and regular interface meetings, have improved communication and allowed additional time for patient care.
A novel compact heat exchanger using gap flow mechanism.
Liang, J S; Zhang, Y; Wang, D Z; Luo, T P; Ren, T Q
2015-02-01
A novel, compact gap-flow heat exchanger (GFHE) using a heat-transfer fluid (HTF) was developed. The detailed design of the coaxial GFHE structure, which forms the annular gap passage for the HTF, is presented. Computational fluid dynamics simulations were used in the design to determine the impacts of gap width and HTF flow rate on GFHE performance. A comparative study of the GFHE heating rate, with gap widths ranging from 0.1 to 1.0 mm and HTF flow rates ranging from 100 to 500 ml/min, was carried out. Results show that a narrower gap passage and a higher HTF flow rate yield a higher average heating rate in the GFHE. However, as a compromise between the GFHE heating rate and the HTF pressure drop along the gap, a 0.4 mm gap width is preferred. A test loop was also set up to evaluate the GFHE capability experimentally. The test results show that, using a 0.4 mm gap width and a 500 ml/min HTF flow rate, the maximum heating rate in the working chamber of the as-made GFHE can reach 18 °C/min, and the average temperature change rates in the heating and cooling phases of the thermal cycle test were 6.5 and 5.4 °C/min, respectively. These temperature change rates satisfy the requirements of IEC 60068-2-14:2009 and show that the GFHE developed in this work has sufficient heat exchange capacity to serve as a compact heat exchanger in small-volume desktop thermal fatigue test apparatus.
Kinetic study of anaerobic digestion of fruit-processing wastewater in immobilized-cell bioreactors.
Borja, R; Banks, C J
1994-08-01
The kinetics of the anaerobic digestion of a fruit-processing wastewater [chemical oxygen demand (COD) = 5.1 g/l] were investigated. Laboratory experiments were carried out in bioreactors containing supports of different chemical composition and features, namely bentonite and zeolite (aluminum silicates), sepiolite and saponite (magnesium silicates) and polyurethane foam, to which the microorganisms responsible for the process adhered. The influence of the support medium on the kinetics was compared with a control digester with suspended biomass. Assuming the overall anaerobic digestion process conforms to first-order kinetics, the specific rate constant, K0, was determined for each of the experimental reactors. The average values obtained were: 0.080 h⁻¹ (bentonite); 0.103 h⁻¹ (zeolite); 0.180 h⁻¹ (sepiolite); 0.198 h⁻¹ (saponite); 0.131 h⁻¹ (polyurethane); and 0.037 h⁻¹ (control). The results indicate that the support used to immobilize the microorganisms had a marked influence on the digestion process; the results were significant at the 95% confidence level. Methanogenic activity increased linearly with COD, with the saponite and sepiolite supports showing the highest values. The methane yield coefficient was 270 ml of methane (at standard temperature and pressure)/g of COD. The average elimination of COD was 89.5%.
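Given first-order kinetics, the reported rate constants translate directly into treatment times: COD(t) = COD0·exp(−K0·t), so the time to 90% COD removal is t90 = ln(10)/K0.

```python
import math

# First-order decay sketch using the abstract's specific rate constants K0
# (h^-1): the time to remove 90% of the COD is t90 = ln(10) / K0.
K0 = {"bentonite": 0.080, "zeolite": 0.103, "sepiolite": 0.180,
      "saponite": 0.198, "polyurethane": 0.131, "control": 0.037}
t90 = {support: math.log(10) / k for support, k in K0.items()}
# saponite reaches 90% removal in ~11.6 h; the suspended-biomass control
# needs ~62 h, illustrating the benefit of cell immobilization.
```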
Supplements in human islet culture: human serum albumin is inferior to fetal bovine serum.
Avgoustiniatos, Efstathios S; Scott, William E; Suszynski, Thomas M; Schuurman, Henk-Jan; Nelson, Rebecca A; Rozak, Phillip R; Mueller, Kate R; Balamurugan, A N; Ansite, Jeffrey D; Fraga, Daniel W; Friberg, Andrew S; Wildey, Gina M; Tanaka, Tomohiro; Lyons, Connor A; Sutherland, David E R; Hering, Bernhard J; Papas, Klearchos K
2012-01-01
Culture of human islets before clinical transplantation or distribution for research purposes is standard practice. At the time the Edmonton protocol was introduced, clinical islet manufacturing did not include culture, and human serum albumin (HSA), instead of fetal bovine serum (FBS), was used during other steps of the process to avoid the introduction of xenogeneic material. When culture was subsequently introduced, HSA was also used for medium supplementation instead of FBS, which was typically used for research islet culture. The use of HSA as a culture supplement was not evaluated before this implementation. We performed a retrospective analysis of 103 high-purity islet preparations (76 research preparations, all with FBS culture supplementation, and 27 clinical preparations, all with HSA supplementation) for oxygen consumption rate per DNA content (OCR/DNA; a measure of viability) and diabetes reversal rate in diabetic nude mice (a measure of potency). After 2-day culture, research preparations exhibited an average OCR/DNA 51% higher (p < 0.001) and an average diabetes reversal rate 54% higher (p < 0.05) than clinical preparations, despite 87% of the research islet preparations having been derived from research-grade pancreata that are considered of lower quality. In a prospective paired study on islets from eight research preparations, OCR/DNA was, on average, 27% higher with FBS supplementation than with HSA supplementation (p < 0.05). We conclude that the quality of clinical islet preparations can be improved when culture is performed in media supplemented with serum instead of albumin.
[Optimization of solid-phase extraction for enrichment of toxic organic compounds in water samples].
Zhang, Ming-quan; Li, Feng-min; Wu, Qian-yuan; Hu, Hong-ying
2013-05-01
A concentration method for the enrichment of toxic organic compounds in water samples was developed based on combined solid-phase extraction (SPE) to reduce impurities and improve recoveries of target compounds. Each stage of the SPE method was evaluated to identify the sources of impurities. Based on analysis of Waters Oasis HLB cartridges run without water samples, the eluent of the SPE sorbent after dichloromethane and acetone contributed 85% of the impurities in the SPE process. To reduce impurities from the SPE sorbent, the sorbents were Soxhlet-extracted for 24 hours with dichloromethane, followed by acetone and finally methanol, and the results showed that impurities were reduced significantly. In addition to Soxhlet extraction, six prevalent types of SPE sorbent were used to adsorb 40 target compounds, whose lg K(ow) values ranged from 1.46 to 8.1, and their recovery rates were compared. Waters Oasis HLB showed the best recoveries for most of the common test compounds among the three styrene-divinylbenzene (SDB) polymer sorbents, averaging 77%. Furthermore, Waters SepPak AC-2 provided good recoveries for pesticides among the three activated carbon sorbents, with average recovery rates reaching 74%. Therefore, Waters Oasis HLB and Waters SepPak AC-2 were combined to obtain better recovery, and the average recovery rate of this new SPE method for the 40 tested compounds was 87%.
Variable mass pendulum behaviour processed by wavelet analysis
NASA Astrophysics Data System (ADS)
Caccamo, M. T.; Magazù, S.
2017-01-01
The present work highlights how, in order to characterize the motion of a variable mass pendulum, wavelet analysis can be an effective tool in furnishing information on the time evolution of the oscillation spectral content. In particular, the wavelet transform is applied to process the motion of a hung funnel that loses fine sand at an exponential rate; it is shown how, in contrast to the Fourier transform which furnishes only an average frequency value for the motion, the wavelet approach makes it possible to perform a joint time-frequency analysis. The work is addressed at undergraduate and graduate students.
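The joint time-frequency analysis the abstract advocates can be sketched with a minimal, numpy-only Morlet wavelet transform. This is a generic illustration, not the authors' code: the sampling rate, frequency grid, and wavelet parameter `w0` are arbitrary choices for demonstration.

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet.

    Returns |coefficients| of shape (len(freqs), len(signal)), i.e. the
    magnitude of the signal's content at each frequency, at each instant.
    """
    out = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                 # scale with centre frequency f
        tau = np.arange(-4 * s, 4 * s, 1 / fs)   # wavelet support (+/- 4 sigma)
        wavelet = np.exp(1j * w0 * tau / s) * np.exp(-(tau / s) ** 2 / 2)
        wavelet /= np.sqrt(s)                    # rough amplitude normalization
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet), mode="same"))
    return out
```

Applied to a signal whose frequency drifts in time (as in the sand-losing pendulum), the ridge of maximal magnitude tracks the instantaneous frequency, which a plain Fourier transform would smear into a single averaged peak.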
Powerless fluxes and forces, and change of scale in irreversible thermodynamics
NASA Astrophysics Data System (ADS)
Ostoja-Starzewski, M.; Zubelewicz, A.
2011-08-01
We show that the dissipation function of linear processes in continuum thermomechanics may be treated as the average of the statistically fluctuating dissipation rate on either coarse or small spatial scales. The first case involves thermodynamic orthogonality due to Ziegler, while the second one involves powerless forces in a general solution of the Clausius-Duhem inequality according to Poincaré and Edelen. This formulation is demonstrated using the example of parabolic versus hyperbolic heat conduction. The existence of macroscopic powerless heat fluxes is traced here to the hidden dissipative processes at lower temporal and spatial scales.
Moisture processes accompanying convective activity
NASA Technical Reports Server (NTRS)
Sienkiewicz, M. E.; Scoggins, J. R.
1982-01-01
A moisture budget analysis was performed on data collected during the AVE 7 (May 2 to 3, 1978) and AVE-SESAME 1 (April 10 to 11, 1979) experiments. Local rates of change of moisture were compared with the average moisture divergence over the same time period. Results were presented as contoured plots in the horizontal and as vertical cross sections. These results were used to develop models of the distribution of moisture processes in the vicinity of convective areas in two layers representing lower and middle tropospheric conditions. Good correspondence was found between the residual term of the moisture budget and actual precipitation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B., E-mail: pritychenko@bnl.go; Mughaghab, S.F.; Sonzogni, A.A.
We have calculated the Maxwellian-averaged cross sections and astrophysical reaction rates of the stellar nucleosynthesis reactions (n, {gamma}), (n, fission), (n, p), (n, {alpha}), and (n, 2n) using the ENDF/B-VII.0, JEFF-3.1, JENDL-3.3, and ENDF/B-VI.8 evaluated nuclear reaction data libraries. These four major nuclear reaction libraries were processed under the same conditions for Maxwellian temperatures (kT) ranging from 1 keV to 1 MeV. We compare our current calculations of the s-process nucleosynthesis nuclei with previous data sets and discuss the differences between them and the implications for nuclear astrophysics.
Recombination processes in quantum well lasers with superlattice barriers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blood, P.; Fletcher, E.D.; Foxon, C.T.
1989-12-04
Spontaneous emission spectra from GaAs quantum well lasers grown by molecular beam epitaxy show that the radiative recombination rate in (AlAs)(GaAs) superlattice barriers is greater than in alloy barriers of the same average composition ({ital x}=0.25), due to the reduction in effective band gap caused by superlattice effects. Measurements of emission spectra as functions of temperature show that these radiative processes account for a significant part of the temperature variation of the threshold current, and we estimate that the nonradiative lifetime in the superlattice barriers is an order of magnitude longer than in alloy barriers grown under similar conditions.
Bouallagui, Hassib; Rachdi, Boutheina; Gannoun, Hana; Hamdi, Moktar
2009-06-01
Anaerobic co-digestion of fruit and vegetable waste (FVW) and abattoir wastewater (AW) was investigated using anaerobic sequencing batch reactors (ASBRs). The effects of hydraulic retention time (HRT) and temperature variations on digester performance were examined. At HRTs of both 20 and 10 days, biogas production for co-digestion was greater thanks to the improved balance of nutrients. The highest specific gas productions for the different digestion processes were 0.56, 0.61 and 0.85 l g(-1) of total volatile solids (TVS) removed for the digesters treating AW, FVW and AW + FVW, respectively. At an HRT of 20 days, biogas production rates from thermophilic digesters were on average 28.5, 44.5 and 25% higher than from mesophilic AW, FVW and AW + FVW digestion, respectively. At an HRT of 10 days, however, the results showed a decrease in biogas production rate for the AW and AW + FVW digestion processes, due to the high amount of free ammonia at high organic loading rate (OLR).
Waste battery treatment options: comparing their environmental performance.
Briffaerts, K; Spirinckx, C; Van der Linden, A; Vrancken, K
2009-08-01
Waste consumer batteries are recycled using different routes based on hydrometallurgical and pyrometallurgical processes. Two hydrometallurgical and two pyrometallurgical treatment scenarios are compared starting from an average composition of Belgian waste batteries. The environmental performance is compared using life cycle analysis (LCA). The recycling rate is studied through mass balance calculation. Each treatment scenario results in a specific recycling rate. The environmental impact and benefits also vary between the treatment options. There is no such thing as a typical hydrometallurgical or pyrometallurgical treatment. When applying a hydrometallurgical treatment scenario, the focus lies on zinc and iron recycling. When allowing manganese recycling, the energy demand of the hydrometallurgical process increases considerably. Both pyrometallurgical options recycle zinc, iron and manganese. According to the LCA, none of the treatment scenarios performs generally better or worse than the others. Each option has specific advantages and disadvantages. The Batteries Directive 2006/66/EC sets out a recycling rate of 50% for consumer waste batteries. Based on metal recycling alone, the mass balances show that the target is difficult to obtain.
Vagedes, Jan; Bialkowski, Anja; Wiechers, Cornelia; Poets, Christian F.; Dietz, Klaus
2014-01-01
Objective: The number of desaturations determined in recordings of pulse oximeter saturation (SpO2) primarily depends on the time over which values are averaged. As the averaging time in pulse oximeters is not standardized, it varies considerably between centers. To make SpO2 data comparable, it is thus desirable to have a formula that allows conversion between desaturation rates obtained using different averaging times for various desaturation levels and minimal durations. Methods: Oxygen saturation was measured for 170 hours in 12 preterm infants, with a mean of 65 desaturations <90% per hour of arbitrary duration, using a pulse oximeter in a 2-4 s averaging mode. Using 7 different averaging times between 3 and 16 seconds, the raw red-to-infrared data were reprocessed to determine the number of desaturations (D). The whole procedure was carried out for 7 different minimal desaturation durations (≥1, ≥5, ≥10, ≥15, ≥20, ≥25, ≥30 s) below SpO2 threshold values of 80%, 85% or 90% to arrive at a conversion formula. The formula was validated by splitting the infants into two groups of six children each, using each group in turn as the training set and the other as the test set. Results: Based on the linear relationship found between the logarithm of the desaturation rate and the logarithm of the averaging time, the conversion formula is D2 = D1 (T2/T1)^c, where D2 is the desaturation rate for the desired averaging time T2, D1 is the desaturation rate for the original averaging time T1, and the exponent c depends on the desaturation threshold and the minimal desaturation duration. The median error when applying this formula was 2.6%. Conclusion: This formula enables the conversion of desaturation rates between different averaging times for various desaturation thresholds and minimal desaturation durations. PMID:24489887
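The conversion formula can be applied in a few lines. Note that the exponent used below is purely illustrative; the paper determines c empirically per desaturation threshold and minimal duration, and those fitted values are not reproduced here.

```python
def convert_desaturation_rate(d1: float, t1: float, t2: float, c: float) -> float:
    """Convert a desaturation rate d1 (events/h), measured with averaging time
    t1 (s), to the rate expected with averaging time t2 (s):
        D2 = D1 * (T2 / T1) ** c
    The exponent c depends on the desaturation threshold and minimal duration.
    """
    return d1 * (t2 / t1) ** c

# Illustrative exponent only (c = -0.8 is NOT one of the paper's fitted values):
# with a negative c, a longer averaging time yields fewer detected desaturations.
rate_16s = convert_desaturation_rate(65.0, 4.0, 16.0, c=-0.8)
```

The power-law form follows directly from the reported linearity of log(rate) versus log(averaging time).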
Graham, Emily B.; Knelman, Joseph E.; Schindlbacher, Andreas; ...
2016-02-24
In this study, microorganisms are vital in mediating the earth’s biogeochemical cycles; yet, despite our rapidly increasing ability to explore complex environmental microbial communities, the relationship between microbial community structure and ecosystem processes remains poorly understood. Here, we address a fundamental and unanswered question in microbial ecology: ‘When do we need to understand microbial community structure to accurately predict function?’ We present a statistical analysis investigating the value of environmental data and microbial community structure independently and in combination for explaining rates of carbon and nitrogen cycling processes within 82 global datasets. Environmental variables were the strongest predictors of process rates but left 44% of variation unexplained on average, suggesting the potential for microbial data to increase model accuracy. Although only 29% of our datasets were significantly improved by adding information on microbial community structure, we observed improvement in models of processes mediated by narrow phylogenetic guilds via functional gene data, and conversely, improvement in models of facultative microbial processes via community diversity metrics. Our results also suggest that microbial diversity can strengthen predictions of respiration rates beyond microbial biomass parameters, as 53% of models were improved by incorporating both sets of predictors compared to 35% by microbial biomass alone. Our analysis represents the first comprehensive analysis of research examining links between microbial community structure and ecosystem function. Taken together, our results indicate that a greater understanding of microbial communities informed by ecological principles may enhance our ability to predict ecosystem process rates relative to assessments based on environmental variables and microbial physiology.
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
NASA Technical Reports Server (NTRS)
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
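The abstract does not spell out how burn rates were normalized to a common pressure. A conventional approach, sketched here purely as an assumption, is Saint-Robert's (Vieille's) law, r = a·p^n, under which a rate measured at pressure p maps to a reference pressure p_ref by the ratio (p_ref/p)^n; the exponent value below is a typical composite-propellant figure, not one taken from the report.

```python
def normalize_burn_rate(r: float, p: float, p_ref: float, n: float = 0.35) -> float:
    """Normalize a burn rate r measured at chamber pressure p to a common
    reference pressure p_ref, assuming Vieille's law r = a * p**n.

    n = 0.35 is a typical pressure exponent for composite propellants and is
    an illustrative assumption, not a value from the report.
    """
    return r * (p_ref / p) ** n
```

With rates expressed at one common pressure, within-mix dispersion can be compared across motors whose nozzle throats produced slightly different chamber pressures.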
Measurement systems and indices of miners' exposure to radon daughter products in the air of mines.
Domański, T
1990-01-01
This paper presents a classification of measurement systems that may be used for the assessment of miners' exposure to radiation in mines. The following systems are described and characterized: the Air Sampling System (ASS), the Environmental Control System (ECS), the Individual Dosimetry System (IDS), the Stream Monitoring System (SMS) and the Exhaust Monitoring System (EMS). Indices for evaluating miners' working environments, or for assessing individual or collective miners' exposure, were selected and defined. These are: average expected concentration (CAE), average observed concentration (CAO), average expected exposure cumulation rate (EEXP), average observed exposure cumulation rate (EOBS), and average effective exposure cumulation rate (EEFF). Mathematical formulae for determining all these indicators, according to the type of measurement system used in particular mines, are presented. The reliability of exposure assessment under the different measurement systems, as well as the role of a possible reference system, are discussed.
Predicting online ratings based on the opinion spreading process
NASA Astrophysics Data System (ADS)
He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo
2015-10-01
Predicting users' online ratings is a long-standing challenge and has drawn much attention. In this paper, we present a rating prediction method that combines a user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion one user transfers to another based on the primitive user-item rating matrix. The proposed method produces a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion according to the degree of both the opinion sender and receiver. Numerical results for the Movielens and Netflix data sets show that this algorithm is more accurate than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method further boosts prediction accuracy as measured by Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). In the optimal cases, the algorithmic accuracy (MAE and RMSE) is improved over the item-average method by 11.26% and 8.84% on Movielens, and by 13.49% and 10.52% on Netflix, respectively.
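For context, the standard user-based collaborative filtering baseline that the paper improves upon can be sketched as follows. This is only the cosine-similarity baseline; the paper's contribution replaces this similarity with one derived from an opinion-diffusion process with a tunable exponent λ, which is not reproduced here.

```python
import numpy as np

def predict_ratings(R: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """User-based CF baseline: cosine similarity between users' rating vectors,
    then a similarity-weighted average of the other users' ratings per item.

    R:    (users x items) rating matrix, 0 where unrated.
    mask: same shape, 1.0 where rated, 0.0 otherwise.
    """
    Rm = R * mask
    norms = np.linalg.norm(Rm, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    S = Rm @ Rm.T / (norms * norms.T)   # cosine similarity between users
    np.fill_diagonal(S, 0.0)            # a user is not their own neighbour
    den = S @ mask                      # total similarity of raters per item
    den[den == 0] = 1.0
    return (S @ Rm) / den               # weighted average of neighbours' ratings
```

Swapping `S` for a diffusion-based similarity matrix is the only structural change the paper's method requires of this baseline.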
Zuo, Chun-Guang; Liu, Xia-Jun; Wang, Xin-Hu; Wang, Jian-shun
2013-01-01
To discuss the therapeutic effects of atlantoaxial pedicle screw system fixation in the treatment of atlantoaxial instability. From June 2003 to March 2010, 32 patients with atlantoaxial instability were treated by atlantoaxial pedicle screw system fixation, including 21 males and 11 females with an average age of 42.5 years (range, 28 to 66 years). Among them, 18 cases were odontoid process fractures, 7 were congenital dissociated odontoid process, 4 were Jefferson fractures combined with odontoid fractures, and 3 were atlantoaxial instability caused by rheumatoid arthritis. All patients suffered from atlantoaxial subluxation and atlantoaxial instability. The preoperative JOA score ranged from 4 to 14 (mean, 9.1 +/- 0.3). All patients underwent imaging examinations, including cervical X-rays (with dynamic views), spiral CT 3D reconstruction and/or MRI, from which the position and angle of pedicle screw implantation and the screw length were measured. Skull traction was applied. Under general anesthesia, the pedicle screws were implanted, and reduction and bone fusion were performed under direct vision; the bone graft was fixed between the posterior arch of the atlas and the lamina of the axis. A total of 128 atlantoaxial pedicle screws were implanted in the 32 patients. No patient sustained injury to the spinal cord, nerve roots or vertebral artery. All patients were followed up for 6 to 48 months (average, 16 months). Postoperatively, the JOA score ranged from 11 to 17 (average, 15.9 +/- 0.2), an improvement rate of 86.1%. The odontoid process fractures healed completely, all bone grafts fused, and no internal fixation loosened or broke. Atlantoaxial pedicle screw system fixation is an effective method for treating atlantoaxial instability. The method has many advantages: it provides rigid, short-segment fixation, is safe and simple, and has a high fusion rate. It is worthy of clinical application.
5 CFR 550.707 - Computation of severance pay fund.
Code of Federal Regulations, 2011 CFR
2011-01-01
... pay for standby duty regularly varies throughout the year, compute the average standby duty premium...), compute the weekly average percentage, and multiply that percentage by the weekly scheduled rate of pay in... hours in a pay status (excluding overtime hours) and multiply that average by the hourly rate of basic...
Rate equations for nitrogen molecules in ultrashort and intense x-ray pulses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ji -Cai; Berrah, Nora; Cederbaum, Lorenz S.
Here, we study theoretically the quantum dynamics of nitrogen molecules (N2) exposed to intense and ultrafast x-rays at a wavelength of 1.1 nm (1100 eV photon energy) from the Linac Coherent Light Source (LCLS) free electron laser. Molecular rate equations are derived to describe the intertwined photoionization, decay, and dissociation processes occurring for N2. This model complements our earlier phenomenological approaches, the single-atom, symmetric-sharing, and fragmentation-matrix models of 2012 (J. Chem. Phys. 136 214310). Our rate equations are used to obtain the effective pulse energy at the sample and the time scale for the dissociation of the metastable dication N2(2+). This leads to very good agreement between the theoretically and experimentally determined ion yields and, consequently, the average charge states. The effective pulse energy is found to decrease with shortening pulse duration. This variation, together with a change in the molecular fragmentation pattern and frustrated absorption (an effect that reduces absorption of x-rays due to (double) core hole formation), is the cause of the drop of the average charge state with shortening LCLS pulse duration discovered previously.
Ghelani, Karen; Sidhu, Robindra; Jain, Umesh; Tannock, Rosemary
2004-11-01
Reading comprehension is a very complex task that requires different cognitive processes and reading abilities over the life span. There are fewer studies of reading comprehension relative to investigations of word reading abilities. Reading comprehension difficulties, however, have been identified in two common and frequently overlapping childhood disorders: reading disability (RD) and attention-deficit/hyperactivity disorder (ADHD). The nature of reading comprehension difficulties in these groups remains unclear. The performance of four groups of adolescents (RD, ADHD, comorbid ADHD and RD, and normal controls) was compared on reading comprehension tasks as well as on reading rate and accuracy tasks. Adolescents with RD showed difficulties across most reading tasks, although their comprehension scores were average. Adolescents with ADHD exhibited adequate single word reading abilities. Subtle difficulties were observed, however, on measures of text reading rate and accuracy as well as on silent reading comprehension, but scores remained in the average range. The comorbid group demonstrated similar difficulties to the RD group on word reading accuracy and on reading rate but experienced problems on only silent reading comprehension. Implications for reading interventions are outlined, as well as the clinical relevance for diagnosis.
Xue, Peng; Wu, Yafeng; Guo, Jinhong; Kang, Yuejun
2015-04-01
Circulating tumor cells (CTCs), which are shed from the primary tumor site and transported to distant organs, are considered the major cause of metastasis. Various techniques have been applied for CTC isolation and enumeration; however, there remains great demand to improve the sensitivity of CTC capture, and it is still challenging to elute the captured cells efficiently from the device for further biomolecular and cellular analyses. In this study, we fabricate a dual-functional chip integrating a herringbone structure and a micropost array to achieve CTC capture and elution through EpCAM-based immunoreaction. The Hep3B tumor cell line is selected as the model of CTCs for processing with this device. The results demonstrate that as few as 10 Hep3B cells per mL of sample volume can be captured, with a capture efficiency of 80% on average. Moreover, the elution rate of the captured Hep3B cells reaches 69.4% on average for cell numbers ranging from 1 to 100. These results demonstrate that this device exhibits dual functions with considerably high capture and elution rates, indicating its promising capability for cancer diagnosis and therapeutics.
Mutation rate estimation for 15 autosomal STR loci in a large population from Mainland China.
Zhao, Zhuo; Zhang, Jie; Wang, Hua; Liu, Zhi-Peng; Liu, Ming; Zhang, Yuan; Sun, Li; Zhang, Hui
2015-09-01
Short tandem repeats (STRs) are well known as a type of powerful genetic marker and are widely used in studying human population genetics. Compared with conventional genetic markers, the mutation rate of STRs is higher. Additionally, mutations at STR loci do not lead to genetic inconsistencies between the genotypes of parents and children; therefore, the analysis of STR mutation is better suited to assessing population mutation. In this study, we focused on 15 autosomal STR loci. DNA samples from a total of 42,416 unrelated healthy individuals (19,037 trios) from the population of Mainland China, collected between Jan 2012 and May 2014, were successfully investigated. The allele frequencies, paternal mutation rates, maternal mutation rates and average mutation rates were determined. Furthermore, we also investigated the relationships between paternal age, maternal age, area, time of pregnancy and the average mutation rate. We found that the paternal mutation rate was higher than the maternal mutation rate, and that the paternal, maternal, and average mutation rates correlated positively with paternal age, maternal age and time of pregnancy, respectively. Additionally, the average mutation rate in coastal areas was higher than that in inland areas.
Riggs, Stanley R.; Ames, Dorothea V.
2007-01-01
The effect of storms on long-term dynamics of barrier islands was evaluated on Core Banks, a series of barrier islands that extend from Cape Lookout to Ocracoke Inlet in the Cape Lookout National Seashore, North Carolina. Shoreline and elevation changes were determined by comparing 77 profiles and associated reference markers established by the U.S. Army Corps of Engineers (USACE) on Core Banks from June 1960 to July 1962 to a follow-up survey by Godfrey and Godfrey (G&G) in 1971 and a survey by the Department of Geology at East Carolina University (ECU) in 2001, in which 57 of the original 77 profiles were located. Evaluation of the baseline data associated with the USACE study supplies an important record of barrier island response to two specific storm events: Hurricane Donna in September 1960 and the Ash Wednesday extra-tropical cyclone in March 1962. The 1962 USACE survey was followed by 9 years characterized by no major storms; this low-energy period was captured by the G&G survey in 1971. The G&G survey was followed by 22 years characterized by occasional small to moderate storms. Starting in 1993, however, and continuing through 1999, the North Carolina coast experienced a major increase in storm activity, with seven major hurricanes impacting Core Banks. Both the USACE 1960–1962 and G&G 1962–1971 surveys produced short-term data sets that reflected very different sets of weather conditions. The ECU 2001 survey data were then compared with the USACE 1960 survey data to develop a long-term (41 years) data set for shoreline erosion on Core Banks. Those resulting long-term data were compared with the long-term (52 years) data sets by the North Carolina Division of Coastal Management (NCDCM) from 1940–1992 and 1946–1998; a strong positive correlation and very similar rates of average annual erosion resulted.
However, the ECU and NCDCM long-term data sets did not correlate with either of the USACE and G&G short-term survey data and had very different average annual erosion rates. The average annual long-term rate of shoreline erosion for all of Core Banks and for both the ECU 1960–2001 and the NCDCM 1946–1998 surveys was -5 feet per year (ft/yr). These long-term rates of shoreline recession are in strong contrast with the short-term, storm-dominated rates of shoreline erosion for all of Core Banks developed by the USACE 1960–1961 and USACE 1961–1962 surveys, which have average annual erosion rates of -40 ft/yr and -26 ft/yr, respectively, and range from -226 feet (ft) to +153 ft. The combined short-term, storm-dominated shoreline erosion rate for the USACE surveys (1960–1962) was -36 ft/yr. In contrast, the average annual short-term, non-stormy period G&G 1962–1971 survey demonstrated shoreline accretion for all of Core Banks with an average annual rate of +12 ft/yr. In general, North Core Banks has higher erosion and accretion rates than South Core Banks. In the 1961 survey, the USACE installed 231 reference markers (RM-0 is closest to the ocean and RM-2 is farthest from the ocean) along the 77 profiles, as well as 33 reference markers labeled RM-4, RM-6, and RM-8 in the wider portions of the islands. The G&G survey recovered a total of 141 reference markers (61 percent), and the ECU survey recovered a total of 83 reference markers (36 percent) of the RM-0, RM-1, and RM-2 markers. The average ground elevation measured by the USACE in 1961 was RM-0 = +5.8 ft, RM-1 = +5.2 ft, and RM-2 = +4.8 ft. The G&G 1970 survey measured average ground elevations of RM-0 = +6.7 ft, RM-1 = +6.4 ft, and RM-2 = +6.1 ft, and the average ground elevation measured by ECU in 2001 was RM-0 = +10.1 ft, RM-1 = +9.1 ft, and RM-2 = +8.5 ft. The latter numbers represent approximately an overall 72-percent increase in island elevation from 1961 to 2001. 
Based on aerial photographic time-slice analyses, it is hypothesized that this increase in island elevation occurred during the post-1962 period with storm overwash systematically raising the island elevation through time, which in turn led to decreased numbers of overwash events. The latter processes and responses in turn led to a substantial increase in vegetative growth on the barrier island, as well as submerged aquatic vegetation on the back-barrier sand shoals. Integration of the USACE, G&G, ECU, and NCDCM shoreline erosion data for Core Banks shows several important points about shoreline recession. (1) The ECU and NCDCM data sets demonstrate that there is an ongoing net, long-term, but small-scale shoreline recession associated with Core Banks; (2) the USACE short-term data sets demonstrate that processes associated with individual storm events or sets of events produce extremely large-scale changes that include both erosion and accretion; (3) the short-term, non-stormy period data set of G&G demonstrates that if given enough time between storm events, barriers can rebuild to their pre-storm period conditions; and (4) the post-storm response generally tends to approach the pre-storm location, but rarely reaches it before the next storm or stormy period sets in. The result is the net long-term change documented by both the ECU 1960–2001 and NCDCM 1946–1998 Core Banks data sets that resulted in erosion rates ranging from 0 to -30 ft/yr with net annual average recession rates of -5 ft/yr. Analysis and comparison of these data sets supply important information for understanding the dynamics and responses of barrier island systems through time. In addition, the results of the present study on Core Banks supply essential process-response information that can be used to design and implement management plans for the Cape Lookout and Cape Hatteras National Seashores and for other seashores in the U.S. National Park Service system.
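The end-point rate arithmetic used throughout these surveys is straightforward; a minimal sketch (with hypothetical shoreline positions, not the surveys' raw data) of how an average annual change rate is derived:

```python
def annual_rate(pos_start_ft, pos_end_ft, years):
    """Average annual shoreline change in ft/yr.

    Negative values indicate erosion (recession); positive, accretion.
    """
    return (pos_end_ft - pos_start_ft) / years

# Hypothetical shoreline positions (ft) relative to a fixed reference marker.
long_term = annual_rate(0.0, -205.0, 41)   # a 41-year span, ECU-style
storm_term = annual_rate(0.0, -72.0, 2)    # a 2-year storm-dominated span
```

With these illustrative inputs, the long-term rate works out to -5 ft/yr and the storm-dominated rate to -36 ft/yr, reproducing the contrast in magnitude described above.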
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-26
... DEPARTMENT OF AGRICULTURE Food and Nutrition Service Child and Adult Care Food Program: National Average Payment Rates, Day Care Home Food Service Payment Rates, and Administrative Reimbursement Rates for Sponsoring Organizations of Day Care Homes for the Period July 1, 2011 Through June 30, 2012...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-26
... DEPARTMENT OF AGRICULTURE Food and Nutrition Service Child and Adult Care Food Program: National Average Payment Rates, Day Care Home Food Service Payment Rates, and Administrative Reimbursement Rates for Sponsoring Organizations of Day Care Homes for the Period July 1, 2013 Through June 30, 2014...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-20
... DEPARTMENT OF AGRICULTURE Food and Nutrition Service Child and Adult Care Food Program: National Average Payment Rates, Day Care Home Food Service Payment Rates, and Administrative Reimbursement Rates for Sponsoring Organizations of Day Care Homes for the Period July 1, 2011 Through June 30, 2012...
40 CFR 60.2916 - What operating limits must I meet and by when?
Code of Federal Regulations, 2013 CFR
2013-07-01
... intermittent units, maximum charge rate is the average charge rate measured during the most recent performance... charge rate is the charge rate measured during the most recent performance test demonstrating compliance... calculated as the average pressure drop across the wet scrubber measured during the most recent performance...
40 CFR 60.2916 - What operating limits must I meet and by when?
Code of Federal Regulations, 2014 CFR
2014-07-01
... intermittent units, maximum charge rate is the average charge rate measured during the most recent performance... charge rate is the charge rate measured during the most recent performance test demonstrating compliance... calculated as the average pressure drop across the wet scrubber measured during the most recent performance...
Social Norms Shift Preferences for Healthy and Unhealthy Foods
Templeton, Emma M.; Stanton, Michael V.; Zaki, Jamil
2016-01-01
This research investigated whether people change their food preferences and eating behavior in response to health-based social norms. One hundred twenty participants rated a series of healthy and unhealthy food images. After each rating, participants sometimes viewed a rating that ostensibly represented the average rating of previous participants. In fact, these average ratings were manipulated to convey a particular social norm. Participants either saw average ratings that favored healthy foods, favored unhealthy foods, or did not see any average ratings. Participants then re-rated those same food images after approximately ten minutes and again three days later. After the norm manipulation, participants were given the chance to take as many M&Ms as they wanted. Participants exposed to a healthy social norm consistently reported lower preferences for unhealthy foods as compared to participants in the other two conditions. This preference difference persisted three days after the social norm manipulation. However, health-based social norm manipulations did not influence the amount of M&Ms participants took. Although health-based social norm manipulations can influence stated food preferences, in this case they did not influence subsequent eating behavior. PMID:27861518
NASA Astrophysics Data System (ADS)
Callahan, R. P.; Riebe, C. S.; Ferrier, K.
2017-12-01
For more than two decades, cosmogenic nuclides have been used to quantify catchment-wide erosion rates averaged over tens of thousands of years. These rates have been used as baselines for comparison with sediment yields averaged over decades, leading to insights on how human activities such as deforestation and agriculture have influenced the production and delivery of sediment to streams and oceans. Here we present new data from the southern Sierra Nevada, California, where sediment yields have been measured over the last ten years using sediment trapping and gauging methods. Cosmogenic nuclides measured in stream sediment reveal erosion rates that are between 13 and 400 (average = 94) times faster than erosion rates inferred from annual accumulations in sediment traps. We show that the discrepancy can be explained by extremely low sediment trapping efficiency, which leads to bias in the short-term rates due to incomplete capture of suspended sediment. Thus, after accounting for this bias, the short-term rates roughly agree with the long-term rates, despite intensive timber harvesting in the study catchments over the last century. This differs from results obtained in similar forested granitic catchments of Idaho, where long-term rates are more than ten times greater than short-term rates because large, rare events do not contribute to the short-term averages. Our analysis of a global database indicates that both the magnitude and sign of differences between short- and long-term average erosion rates are difficult to predict, even when the history of land use is known.
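A minimal sketch of the kind of comparison described above, pairing long-term (cosmogenic) with short-term (sediment-yield) rates per catchment and correcting the latter for an assumed trapping efficiency; all numbers are illustrative placeholders, not the study's data:

```python
# Paired long-term (cosmogenic) and short-term (sediment-yield) erosion
# rates per catchment. Units are arbitrary but must match within each pair.
long_term = [50.0, 120.0, 80.0]
short_term = [2.0, 0.3, 4.0]

ratios = [lt / st for lt, st in zip(long_term, short_term)]
mean_ratio = sum(ratios) / len(ratios)

def corrected_short_term(measured, trap_efficiency):
    """Undo the low bias from incomplete capture: true rate ~= measured / e."""
    return measured / trap_efficiency
```

A trapping efficiency of, say, 0.1 implies the measured short-term rate understates the true rate by a factor of ten, which is the sense in which low efficiency can explain large long-to-short-term ratios.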
Riverbed Hydrologic Exchange Dynamics in a Large Regulated River Reach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Tian; Bao, Jie; Huang, Maoyi
Hydrologic exchange flux (HEF) is an important hydrologic component in river corridors that includes both bidirectional (hyporheic) and unidirectional (gaining/losing) surface water-groundwater exchanges. Quantifying HEF rates in a large regulated river is difficult due to the large spatial domains, complexity of geomorphologic features and subsurface properties, and the great stage variations created by dam operations at multiple time scales. In this study, we developed a method that combined numerical modeling and field measurements for estimating HEF rates across the river bed in a 7-km long reach of the highly regulated Columbia River. A high-resolution computational fluid dynamics (CFD) modeling framework was developed and validated by field measurements and other modeling results to characterize the HEF dynamics across the river bed. We found that about 85% of the time from 2008-2014 the river was losing water, with an annual average net HEF rate across the river bed (Qz) of -2.3 m3 s−1 (negative indicating downwelling). June was the only month in which the river gained water, with a monthly averaged Qz of 0.8 m3 s−1. We also found that daily dam operations increased the hourly gross gaining and losing rates over an average year by 8% and 2%, respectively. By investigating the HEF feedbacks at various time scales, we suggest that dam operations could reduce the HEF at the seasonal time scale by decreasing seasonal flow variations, while enhancing the HEF at the sub-daily time scale by generating high-frequency discharge variations. These changes could generate significant impacts on biogeochemical processes in the hyporheic zone.
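The net/gross decomposition of the exchange flux described above can be sketched as follows; the hourly series here is illustrative, not the study's model output:

```python
# Decompose a riverbed exchange-flux time series (m^3/s, negative =
# downwelling/losing) into net, gross gaining, and gross losing averages,
# plus the fraction of time the river is losing water.
def flux_summary(q):
    n = len(q)
    net = sum(q) / n
    gaining = sum(v for v in q if v > 0) / n   # gross upwelling average
    losing = sum(v for v in q if v < 0) / n    # gross downwelling average
    losing_fraction = sum(1 for v in q if v < 0) / n
    return net, gaining, losing, losing_fraction

# Illustrative hourly fluxes for one short window.
q = [-3.0, -2.0, 1.0, -4.0, 0.5, -2.5, -3.5, -1.5]
net, gain, lose, frac = flux_summary(q)
```

By construction the gross gaining and losing averages sum to the net average, which is why dam-driven sub-daily variability can inflate both gross components while leaving the net nearly unchanged.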
Rainfall measurement from the opportunistic use of an Earth-space link in the Ku band
NASA Astrophysics Data System (ADS)
Barthès, L.; Mallet, C.
2013-08-01
The present study deals with the development of a low-cost microwave device devoted to the measurement of average rain rates observed along Earth-satellite links, the latter being characterized by a tropospheric path length of a few kilometres. The ground-based power measurements, which are made using the Ku-band television transmissions from several different geostationary satellites, are based on the principle that the atmospheric attenuation produced by rain encountered along each transmission path can be used to determine the path-averaged rain rate. This kind of device could be very useful in hilly areas where radar data are not available or in urban areas where such devices could be directly placed in homes using residential TV antennas. The major difficulty encountered with this technique is that of retrieving rainfall characteristics in the presence of many other causes of received signal fluctuation, produced by atmospheric scintillation, variations in atmospheric composition (water vapour concentration, cloud water content) or satellite transmission parameters (variations in emitted power, satellite pointing). In order to conduct a feasibility study with such a device, a measurement campaign was carried out over a period of five months close to Paris. The present paper proposes an algorithm based on an artificial neural network, used to identify dry and rainy periods and to model received signal variability resulting from effects not related to rain. When the altitude of the rain layer is taken into account, the rain attenuation can be inverted to obtain the path-averaged rain rate. The rainfall rates obtained from this process are compared with co-located rain gauges and radar measurements taken throughout the full duration of the campaign, and the most significant rainfall events are analysed.
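The retrieval ultimately rests on inverting rain attenuation for path-averaged rain rate. The study itself uses a neural network for dry/rain classification and baseline removal, but the underlying attenuation-rain relation is commonly modeled with a power law A = k R^a L; a sketch in which the coefficients k and a are placeholder assumptions (in practice they depend on frequency and polarization):

```python
# Invert the standard power-law attenuation model A = k * R**a * L for the
# path-averaged rain rate R (mm/h), given excess attenuation A (dB) over a
# rainy path of effective length L (km). k and a are placeholders here.
def rain_rate_from_attenuation(A_dB, L_km, k=0.03, a=1.1):
    if A_dB <= 0:
        return 0.0          # no excess attenuation -> no rain retrieved
    return (A_dB / (k * L_km)) ** (1.0 / a)
```

Taking the rain-layer altitude into account, as the abstract notes, amounts to choosing L_km as the rainy portion of the slant path rather than the full link length.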
Assessing Multivariate Constraints to Evolution across Ten Long-Term Avian Studies
Teplitsky, Celine; Tarka, Maja; Møller, Anders P.; Nakagawa, Shinichi; Balbontín, Javier; Burke, Terry A.; Doutrelant, Claire; Gregoire, Arnaud; Hansson, Bengt; Hasselquist, Dennis; Gustafsson, Lars; de Lope, Florentino; Marzal, Alfonso; Mills, James A.; Wheelwright, Nathaniel T.; Yarrall, John W.; Charmantier, Anne
2014-01-01
Background In a rapidly changing world, it is of fundamental importance to understand processes constraining or facilitating adaptation through microevolution. As different traits of an organism covary, genetic correlations are expected to affect evolutionary trajectories. However, only limited empirical data are available. Methodology/Principal Findings We investigate the extent to which multivariate constraints affect the rate of adaptation, focusing on four morphological traits often shown to harbour large amounts of genetic variance and considered to be subject to limited evolutionary constraints. Our data set includes unique long-term data for seven bird species and a total of 10 populations. We estimate population-specific matrices of genetic correlations and multivariate selection coefficients to predict evolutionary responses to selection. Using Bayesian methods that facilitate the propagation of errors in estimates, we compare (1) the rate of adaptation based on predicted response to selection when including genetic correlations with predictions from models where these genetic correlations were set to zero and (2) the multivariate evolvability in the direction of current selection to the average evolvability in random directions of the phenotypic space. We show that genetic correlations on average decrease the predicted rate of adaptation by 28%. Multivariate evolvability in the direction of current selection was systematically lower than average evolvability in random directions of space. These significant reductions in the rate of adaptation and reduced evolvability were due to a general nonalignment of selection and genetic variance, notably orthogonality of directional selection with the size axis along which most (60%) of the genetic variance is found. Conclusions These results suggest that genetic correlations can impose significant constraints on the evolution of avian morphology in wild populations. 
This could have important impacts on evolutionary dynamics and hence population persistence in the face of rapid environmental change. PMID:24608111
NASA Astrophysics Data System (ADS)
González, D.; Braza, V.; Utrilla, A. D.; Gonzalo, A.; Reyes, D. F.; Ben, T.; Guzman, A.; Hierro, A.; Ulloa, J. M.
2017-10-01
A procedure to quantitatively analyse, in a statistical way, the relationship between the wetting layer (WL) and the quantum dots (QDs) as a whole is proposed. As we show in the manuscript, it allows determining not only the proportion of deposited InAs held in the WL but also the average In content inside the QDs. First, the amount of InAs deposited is measured for calibration in three different WL structures without QDs by two methodologies: strain mappings in high-resolution transmission electron microscopy images and compositional mappings with ChemiSTEM x-ray energy spectrometry. The area under the average profiles obtained by both methodologies emerges as the best parameter to quantify the amount of InAs in the WL, in agreement with high-resolution x-ray diffraction results. Second, the effect of three different GaAs capping layer (CL) growth rates on the decomposition of the QDs is evaluated. The CL growth rate has a strong influence on the QD volume as well as the WL characteristics. Slower CL growth rates produce an In enrichment of the WL compared to faster ones, together with a diminution of the QD height. In addition, assuming that the QD density does not change with the different CL growth rates, an estimation of the average In content inside the QDs is given. The high Ga/In intermixing during the decomposition of buried QDs not only triggers a reduction of the QD height but, above all, a greater impoverishment of the In content inside the QDs, thereby modifying the two most important parameters that determine the optical properties of these structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaels, R.A.; Kleinman, M.T.
1999-07-01
Twenty-four-hour airborne particle mass levels permissible under the NAAQS have been associated with mortality and morbidity in communities, motivating reconsideration of the standard. Reports of shorter-term mechanisms of toxic action exerted by airborne PM and PM constituents are emerging. The mechanisms are diverse, but have in common a short time frame of toxic action, from minutes to hours. In view of documented PM excursions also lasting minutes to hours, this study inquires whether such short-term mechanisms might contribute to explaining daily morbidity and mortality. Toxicology experiments have demonstrated the harmfulness of brief exposure to PM levels in the range of observed excursions. This suggests that toxicological processes initiated by short-term inhalation of PM may exert clinically important effects, and that weak associations of 24-hour-average particle mass with mortality and morbidity may represent artifacts of stronger, shorter-term associations whose full magnitude remains to be quantified. In one study, the area of lung surface developing lesions was elevated in rats breathing the same four-hour dose of aerosols when the four-hour average rate of aerosol delivery included a short-term (five-minute) burst fifty percent above the average dose rate. Elevations were observed with each of two aerosols tested. The magnitude of the effect was higher with one of the two aerosols, whose dose rate included four excursions rather than just one. Particulate matter inhaled or instilled intratracheally has produced morbidity in animals, including apnea and electrophysiological effects in dogs. Other studies reveal that PM can kill rats via electrophysiological and possibly other mechanisms. PM has also adversely affected asthmatic people in controlled clinical settings during exercise or, in one study, at rest.
Moore, Laura J.; Griggs, Gary B.
2002-01-01
Quantification of cliff retreat rates for the southern half of Santa Cruz County, CA, USA, located within the Monterey Bay National Marine Sanctuary, using the softcopy/geographic information system (GIS) methodology results in average cliff retreat rates of 7–15 cm/yr between 1953 and 1994. The coastal dunes at the southern end of Santa Cruz County migrate seaward and landward through time and display net accretion between 1953 and 1994, which is partially due to development. In addition, three critically eroding segments of coastline with high average erosion rates ranging from 20 to 63 cm/yr are identified as erosion ‘hotspots’. These locations include: Opal Cliffs, Depot Hill and Manresa. Although cliff retreat is episodic, spatially variable at the scale of meters, and the factors affecting cliff retreat vary along the Santa Cruz County coastline, there is a compensation between factors affecting retreat such that over the long-term the coastline maintains a relatively smooth configuration. The softcopy/GIS methodology significantly reduces errors inherent in the calculation of retreat rates in high-relief areas (e.g. erosion rates generated in this study are generally correct to within 10 cm) by removing errors due to relief displacement. Although the resulting root mean squared error for erosion rates is relatively small, simple projections of past erosion rates are inadequate to provide predictions of future cliff position. Improved predictions can be made for individual coastal segments by using a mean erosion rate and the standard deviation as guides to future cliff behavior in combination with an understanding of processes acting along the coastal segments in question. This methodology can be applied on any high-relief coast where retreat rates can be measured.
NASA Astrophysics Data System (ADS)
Taha, Z.; Rahim, MF Abdul; Mamat, R.
2017-10-01
The injection characteristics of a direct injector affect the mixture formation and combustion processes. In addition, the injector considered here is converted from gasoline operation for CNG application. The mass flow rate of the CNG direct injector was therefore measured by independently testing a single injector on a test bench. The first case investigated the effect of CNG injection pressure, and the second case evaluated the effect of the pulse-width of the injection duration. An analytical model was also developed to predict the mass flow rate of the injector. The injector was operated in a choked condition in both the experiments and the simulation studies. In Case 1, the mass flow rate through the injector was shown to vary linearly with injection pressure. Over the tested injection pressures of 20 bar to 60 bar, the resulting mass flow rates are in the range of 0.4 g/s to 1.2 g/s, which meet the theoretical flow rate required by the engine. However, in Case 2, the average mass flow rate at short injection durations was lower than recorded in Case 1. At an injection pressure of 50 bar, the average mass flow rates for Case 2 and Case 1 are 0.7 g/s and 1.1 g/s, respectively. Also, the measured mass flow rate at short injection durations fluctuated in the range of 0.2 g/s - 1.3 g/s without any noticeable trend. The injector model was able to predict the trend of the mass flow rate at different injection pressures, but unable to track the fluctuating behaviour at short injection durations.
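For a choked injector, an analytical model of the kind mentioned above is commonly the isentropic choked-nozzle equation, in which mass flow is linear in upstream pressure. A sketch with assumed gas properties, discharge coefficient, and orifice area (placeholders, not the paper's calibrated values):

```python
import math

# Choked (sonic) nozzle mass-flow model: m_dot is linear in upstream
# pressure p0 when the flow is choked. gamma and R approximate methane;
# cd and the orifice area are assumed placeholders.
def choked_mass_flow(p0_pa, t0_k, area_m2, gamma=1.32, R=518.3, cd=0.8):
    """Choked mass flow rate in kg/s."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area_m2 * p0_pa * math.sqrt(gamma / (R * t0_k)) * term

m20 = choked_mass_flow(20e5, 300.0, 1.0e-6)   # 20 bar upstream
m40 = choked_mass_flow(40e5, 300.0, 1.0e-6)   # 40 bar upstream
```

Doubling the upstream pressure doubles the predicted flow, which is the linear pressure dependence observed in Case 1; the model's inability to capture the short-duration fluctuations is consistent with it being a steady-state relation.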
Excess Entropy Production in Quantum System: Quantum Master Equation Approach
NASA Astrophysics Data System (ADS)
Nakajima, Satoshi; Tokura, Yasuhiro
2017-12-01
For open systems described by the quantum master equation (QME), we investigate the excess entropy production under quasistatic operations between nonequilibrium steady states. The average entropy production is composed of the time integral of the instantaneous steady entropy production rate and the excess entropy production. We propose to define the average entropy production rate using the average energy and particle currents, which are calculated by full counting statistics with the QME. The excess entropy production is given by a line integral in the control parameter space, and its integrand is called the Berry-Sinitsyn-Nemenman (BSN) vector. In the weakly nonequilibrium regime, we show that the BSN vector is described by $\ln \breve{\rho}_0$ and $\rho_0$, where $\rho_0$ is the instantaneous steady state of the QME and $\breve{\rho}_0$ is that of the QME obtained by reversing the sign of the Lamb shift term. If the system Hamiltonian is non-degenerate or the Lamb shift term is negligible, the excess entropy production approximately reduces to the difference between the von Neumann entropies of the system. Additionally, we point out that the expression of the entropy production obtained in the classical Markov jump process is different from our result and show that these are approximately equivalent only in the weakly nonequilibrium regime.
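In the non-degenerate or negligible-Lamb-shift case, the excess entropy production reduces to a difference of von Neumann entropies. A minimal numerical sketch of the entropy S(rho) = -Tr(rho ln rho) itself (the density matrices below are illustrative two-level examples, not states from the paper):

```python
import numpy as np

# Von Neumann entropy S(rho) = -Tr(rho ln rho), evaluated via the
# eigenvalues of the (Hermitian) density matrix.
def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # convention: 0 * ln 0 = 0
    return float(-np.sum(evals * np.log(evals)))

rho_pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state: S = 0
rho_mixed = np.eye(2) / 2.0                     # maximally mixed: S = ln 2
```

The excess entropy production in that limit is then approximately S(rho_final) - S(rho_initial) for the instantaneous steady states at the endpoints of the quasistatic operation.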
Cohen-Schotanus, Janke; Schönrock-Adema, Johanna; Schmidt, Henk G
2010-01-01
A well-known problem with student surveys is too low a response rate. Experience with predicting electoral outcomes, which requires much smaller sample sizes, inspired us to adopt a similar approach to course evaluation. We expected that having respondents estimate the average opinions of their peers would require fewer respondents for comparable outcomes than collecting their own opinions. Two course evaluation studies were performed among successive first-year medical students (N = 380 and 450, respectively). Study 1: Half the cohort gave opinions on nine questions, while the other half predicted the average outcomes. A prize was offered for the three best predictions (motivational remedy). Study 2: Half the cohort gave opinions, a quarter made predictions without a prize, and a quarter made predictions with the previous year's results as prior knowledge (cognitive remedy). The numbers of respondents required for stable outcomes were determined by an iterative process. Differences between the numbers of respondents required and between average scores were analysed with ANOVA. In both studies, the prediction conditions required significantly fewer respondents (p < 0.001) for comparable outcomes. The informed prediction condition required the fewest respondents (N < 20). Problems with response rates can be reduced by asking respondents to predict evaluation outcomes rather than give their own opinions.
NASA Astrophysics Data System (ADS)
Voeikov, Vladimir L.; Buravleva, Ekaterina; Bulargina, Yulia; Gurfinkel, Youri I.
2001-10-01
An automatic device for high-temporal-resolution monitoring of erythrocyte sedimentation in blood was designed. The position of the boundary between red blood cells and plasma is registered every 30 s in several pipettes simultaneously, with ±10 μm precision. Data are processed by a PC and presented as velocity-time curves (ESR-grams) and as curves describing the time evolution of the boundary position. ESR-grams demonstrate the non-monotonic character of erythrocyte sedimentation in blood. Blood from a particular donor in a stable physiological state, taken on different days, is characterized by similar ESR-grams. Pathological deviations from a normal physiological state are reflected in a shortening of each process stage and an increase in the average sedimentation rate. Intravenous infusion of some medical preparations may lead either to improvement (prolonging of the macrokinetic stages, decreasing sedimentation rate) or to worsening of the studied parameters, depending on the individual. Slight dilution of blood with saline in vitro led, as a rule, to a decrease in sedimentation rate and an improvement in the microkinetic parameters of the process. Addition of highly diluted hydrogen peroxide to patients' blood samples resulted in improved sedimentation kinetics. ESR-graphy may widen the opportunities of practical medicine in diagnostics, prognosis, and drug therapy.
Plasma processing of large curved surfaces for superconducting rf cavity modification
Upadhyay, J.; Im, Do; Popović, S.; ...
2014-12-15
Plasma-based surface modification of niobium is a promising alternative to wet etching of superconducting radio frequency (SRF) cavities. The development of the technology based on Cl2/Ar plasma etching has to address several crucial parameters which influence the etching rate and surface roughness and, eventually, determine cavity performance. These include the dependence of the process on the frequency of the RF generator, gas pressure, power level, the driven (inner) electrode configuration, and the chlorine concentration in the gas mixture during plasma processing. To demonstrate surface layer removal in the asymmetric non-planar geometry, we are using a simple cylindrical cavity with 8 ports symmetrically distributed over the cylinder. The ports are used for diagnosing the plasma parameters and as holders for the samples to be etched. The etching rate is highly correlated with the shape of the inner electrode, radio-frequency (RF) circuit elements, chlorine concentration in the Cl2/Ar gas mixture, residence time of reactive species, and temperature of the cavity. Using cylindrical electrodes with variable radius, large-surface ring-shaped samples, and d.c. bias implementation in the external circuit, we have demonstrated substantial average etching rates and outlined the possibility of optimizing plasma properties with respect to maximum surface processing effect.
Hsu, Ya-Chu; Hung, Yu-Chen; Wang, Chiu-Yen
2017-09-15
High-uniformity Au-catalyzed indium selenide (In2Se3) nanowires are grown with a rapid thermal annealing (RTA) treatment via the vapor-liquid-solid (VLS) mechanism. The diameters of the Au-catalyzed In2Se3 nanowires can be controlled by varying the thickness of the Au film, and the uniformity of the nanowires is improved via a fast pre-annealing rate of 100 °C/s. Compared with the slower heating rate of 0.1 °C/s, the average diameters and distributions (standard deviation, SD) of In2Se3 nanowires with and without the RTA process are 97.14 ± 22.95 nm (23.63%) and 119.06 ± 48.75 nm (40.95%), respectively. In situ annealing TEM is used to study the effect of heating rate on the formation of Au nanoparticles from the as-deposited Au film. The results demonstrate that the average diameters and distributions of Au nanoparticles with and without the RTA process are 19.84 ± 5.96 nm (30.00%) and about 22.06 ± 9.00 nm (40.80%), respectively. This proves that the diameter, size distribution, and uniformity of the Au-catalyzed In2Se3 nanowires are improved by the RTA pre-treatment. This systematic study could help control the size distribution of other nanomaterials through tuning of the annealing rate, precursor temperature, and growth substrate. Graphical Abstract: The rapid thermal annealing (RTA) process is shown to make the size distribution of Au nanoparticles uniform, so that it can be used to grow high-uniformity Au-catalyzed In2Se3 nanowires via the vapor-liquid-solid (VLS) mechanism. Under the general growth condition, the heating rate is slow (0.1 °C/s) and the growth temperature is relatively high (>650 °C). An RTA-pre-treated growth substrate forms smaller, more uniform Au nanoparticles that react with the In2Se3 vapor to produce high-uniformity In2Se3 nanowires.
In situ annealing TEM is used to examine the effect of heating rate on Au nanoparticle formation from the as-deposited Au film. The byproduct of self-catalyzed In2Se3 nanoplates can be inhibited by lowering the precursor and growth temperatures.
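The mean ± SD (relative SD in percent) figures quoted above can be reproduced from raw diameter measurements as follows; the sample values are illustrative:

```python
import statistics

# Mean, sample standard deviation, and relative spread (SD as a percentage
# of the mean) of a set of measured diameters: the uniformity metric used
# to compare RTA-treated and untreated nanowires.
def size_stats(diameters_nm):
    mean = statistics.mean(diameters_nm)
    sd = statistics.stdev(diameters_nm)    # sample (n-1) standard deviation
    return mean, sd, 100.0 * sd / mean

mean, sd, rel = size_stats([90.0, 100.0, 110.0, 95.0, 105.0])
```

A smaller relative SD (e.g. 23.63% vs 40.95% above) indicates a tighter, more uniform diameter distribution.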
NASA Technical Reports Server (NTRS)
Helmreich, R. L.
1991-01-01
Formal cockpit resource management training in crew coordination concepts increases the percentage of crews rated as above average in performance and decreases the percentage of crews rated as below average.
NASA Astrophysics Data System (ADS)
Li, Weifeng; Zhan, Jing; Fan, Yanqing; Wei, Chang; Zhang, Chuanfu; Hwang, Jiann-Yang
2017-04-01
A pyrometallurgical process for the direct reduction of molten high-lead smelting slag obtained by the Shuikoushan (SKS) method is reported in this article, using solid anthracite as the fuel and reductant. The chemical composition, the lead phase composition, and the physical properties of the molten high-lead slag were examined. The effects of the process parameters on the recovery rate of valued metals were investigated in the laboratory. Based on the experimental results, a new efficient bottom-blown reduction furnace was employed in the pilot-scale test for high-lead slag reduction. The results showed that the average recovery rate of lead was more than 96.0%, with low Pb and high Zn contents in the reduced slag, under the conditions of reduction temperature 1100-1200°C, coal ratio 5.5-7.5%, reduction time 90-150 min, CaO/SiO2 ratio 0.35-0.45, and FeO/SiO2 ratio 1.4-1.55. Moreover, coal consumption was reduced by nearly 250 kg of standard coal per ton of crude Pb output compared with the blast furnace reduction process.
Advantages offered by high average power picosecond lasers
NASA Astrophysics Data System (ADS)
Moorhouse, C.
2011-03-01
As electronic devices shrink to reduce material costs, size, and weight, thinner materials are utilized. Feature sizes are also decreasing, pushing manufacturers towards single-step laser direct-write processes as an attractive alternative to conventional multiple-step photolithography, eliminating process steps and the cost of chemicals. The fragile nature of these thin materials makes them difficult to machine either mechanically or with conventional nanosecond-pulsewidth, diode-pumped solid-state (DPSS) lasers. Picosecond laser pulses can cut materials with reduced damage regions and selectively remove thin films, owing to the reduced thermal effects of the shorter pulsewidth. Also, the high repetition rate allows high-speed processing for industrial applications. Selective removal of thin films for OLED patterning, silicon solar cells, and flat-panel displays is discussed, as well as laser cutting of transparent materials with low melting points, such as polyethylene terephthalate (PET). For many of these thin-film applications, where low pulse energy and high repetition rate are required, a novel technique for increasing throughput by using multiple beams from a single laser source is outlined.
Energy diffusion controlled reaction rate of reacting particle driven by broad-band noise
NASA Astrophysics Data System (ADS)
Deng, M. L.; Zhu, W. Q.
2007-10-01
The energy diffusion controlled reaction rate of a reacting particle with linear weak damping and broad-band noise excitation is studied by using the stochastic averaging method. First, the stochastic averaging method for strongly nonlinear oscillators under broad-band noise excitation using generalized harmonic functions is briefly introduced. Then, the reaction rate of the classical Kramers reaction model with linear weak damping and broad-band noise excitation is investigated using the stochastic averaging method. The averaged Itô stochastic differential equation describing the energy diffusion and the Pontryagin equation governing the mean first-passage time (MFPT) are established. The energy diffusion controlled reaction rate is obtained as the inverse of the MFPT by solving the Pontryagin equation. The results for two special cases of broad-band noise, i.e. harmonic noise and exponentially correlated noise, are discussed in detail. It is demonstrated that the general expression of the reaction rate derived by the authors can be reduced to the classical ones via linear approximation and high potential barrier approximation. The good agreement with the results of Monte Carlo simulation verifies that the reaction rate can be well predicted using the stochastic averaging method.
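The rate-as-inverse-MFPT construction can be illustrated numerically. The sketch below uses the classical 1D overdamped double-integral MFPT formula with a stand-in harmonic potential, purely to show the construction; it is not the paper's energy-diffusion version for broad-band noise:

```python
import math

# MFPT of an overdamped Brownian particle from x = a to an absorbing
# boundary at x = b, with a reflecting boundary at x = `left`:
#   T(a) = (1/D) * int_a^b dx exp(U(x)/kT) * int_left^x dy exp(-U(y)/kT)
# The reaction rate is then taken as 1 / T.
def mfpt_overdamped(U, a, b, left, D=1.0, kT=1.0, n=400):
    hx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * hx
        m = max(2, int((x - left) / hx))      # inner-integral resolution
        hy = (x - left) / m
        inner = sum(math.exp(-U(left + (j + 0.5) * hy) / kT) * hy
                    for j in range(m))
        total += math.exp(U(x) / kT) * inner * hx
    return total / D

U = lambda x: x ** 2 / 2.0      # stand-in potential, not the paper's model
rate = 1.0 / mfpt_overdamped(U, 0.0, 3.0, -6.0)
```

Raising the threshold (here the absorbing boundary) lengthens the MFPT and lowers the rate, mirroring the high-potential-barrier behaviour discussed above.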
Empirical relations for cavitation and liquid impingement erosion processes
NASA Technical Reports Server (NTRS)
Rao, P. V.; Buckley, D. H.
1984-01-01
A unified power-law relationship between average erosion rate and cumulative erosion is presented. Extensive data analyses from venturi, magnetostriction (stationary and oscillating specimens), liquid drop, and jet impact devices appear to conform to this relation. A normalization technique using cavitation and liquid impingement erosion data is also presented to facilitate prediction. Attempts are made to understand the relationship between the coefficients in the power-law relationships and the material properties.
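A unified power-law relation between average erosion rate and cumulative erosion can be recovered by linear regression in log-log space; here is a sketch on synthetic data (the coefficients 0.8 and -0.4 are illustrative values, not results from the paper):

```python
import numpy as np

# Hypothetical erosion-test data: cumulative erosion E and the corresponding
# average erosion rate R, assumed to follow the power law R = k * E**n.
E = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # cumulative erosion (arbitrary units)
R = 0.8 * E**-0.4                               # average erosion rate (noise-free here)

# Power-law fit by linear least squares on the log-log transformed data
n, log_k = np.polyfit(np.log(E), np.log(R), 1)
k = np.exp(log_k)
```

Plotting measured pairs (E, R) on log-log axes and checking for a straight line is the standard test of whether a device's data conform to this relation.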
Bhansali, Archita H; Sangani, Darshan S; Mhatre, Shivani K; Sansgiry, Sujit S
2018-01-01
To compare three over-the-counter (OTC) Drug Facts panel versions for information processing optimization among college students. University of Houston students (N = 210) participated in a cross-sectional survey from January to May 2010. A current FDA label was compared to two experimental labels developed using the theory of CHREST to test information processing by re-positioning the warning information within the Drug Facts panel. Congruency was defined as placing like information together. Information processing was evaluated using the OTC medication Label Evaluation Process Model (LEPM): label comprehension, ease-of-use, attitude toward the product, product evaluation, and purchase intention. The experimental label with chunked, congruent information (uses-directions-other information-warnings) was rated significantly higher than the current FDA label and had the best average scores among the LEPM information processing variables. If replications uphold these findings, the FDA label design might be revised to improve information processing.
NASA Astrophysics Data System (ADS)
Mackey, B. H.; Lamb, M. P.; Scheingross, J. S.; Farley, K. A.
2011-12-01
Channel incision and knickpoint retreat are the drivers of landscape evolution, yet we are still challenged to quantify the rate and processes by which rivers cut into rock. The Napali Coast on the northwestern side of Kauai, Hawaii, has multiple linear channels incising >200 m into the shield volcano surface. The channels have well-constrained initial conditions, including original topography, and relatively uniform layered basalt of known age (~4.5 Ma), which have attracted previous studies of channel evolution (e.g., Seidl et al., 1994, 1997). Many channels feature prominent waterfalls, although the mechanism of knickpoint initiation (submarine landslide vs cliff erosion) and subsequent retreat remain ambiguous. Motivated by these knowledge gaps and recent advances in cosmogenic helium geochronology, we revisited the Kaulaula Valley, a 9 km long narrow valley, beheaded on its upslope extent by the Waimea Canyon, and ending near the coast at the northern Mana Plain. Four kilometers up the canyon is a prominent 40 m high vertical knickpoint, dividing the valley into strongly contrasting geomorphic domains. The boulder-lined channel below the knickpoint is linear, steep (15%), and confined to a narrow valley with steep rocky cliffs (average slope 31°). Large, >2 m diameter angular boulders in the lower section of channel show evidence of mobility from debris flows. Above the knickpoint, average channel gradient is reduced (9%), bed load is much finer, and convex, soil-mantled hillslopes have a consistently lower mean slope of 18°. We constrained the exposure age of 18 features (in-channel boulders, stable boulders on terraces, and in-channel bedrock) along the length of the channel, by analysis of cosmogenic 3He in olivine phenocrysts. Cosmogenic exposure ages are oldest near the coast (80 ka) and systematically decrease with upstream distance towards the waterfall (< 5ka). In the upper section of channel, cosmogenic ages are approximately constant (10-20 ka). 
These data support the conceptual model of knickpoint retreat and downstream terrace abandonment advocated by Seidl et al. (1997), and we can constrain a maximum retreat rate of 40 mm/yr. Steady-state erosion appears to dominate upstream of the waterfall. Catchment-averaged erosion rates from detrital olivine (assuming a catchment-averaged production rate) give an erosion rate of 0.04 mm/yr.
Evaluation of Millstone-2 steam generator chemical decontamination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, D.T.; Blok, J.
The steam generator channel heads at Millstone-2 were decontaminated prior to carrying out extensive maintenance work in 1983. Isotopic gamma ray measurements were made of the inner channel head surfaces before and after the decontamination to evaluate the effectiveness of the process. The Combustion Engineering/Kraftwerk Union chemical decontamination, by itself, provided a decontamination factor ranging from 2.7 to 6.6 for the various steam generator surfaces. The corresponding average dose rate reduction factor, based on gross-gamma radiation surveys, was approximately 1.5 to 2.5. Following the chemical treatment, high pressure water flushing reduced the radiation levels still further, to an average overall dose reduction factor of 5.3 to 7.2.
Distribution of tunnelling times for quantum electron transport.
Rudge, Samuel L; Kosov, Daniel S
2016-03-28
In electron transport, the tunnelling time is the time taken for an electron to tunnel out of a system after it has tunnelled in. We define the tunnelling time distribution for quantum processes in a dissipative environment and develop a practical approach for calculating it, where the environment is described by the general Markovian master equation. We illustrate the theory by using the rate equation to compute the tunnelling time distribution for electron transport through a molecular junction. The tunnelling time distribution is exponential, which indicates that Markovian quantum tunnelling is a Poissonian statistical process. The tunnelling time distribution is used not only to study the quantum statistics of tunnelling along the average electric current but also to analyse extreme quantum events where an electron jumps against the applied voltage bias. The average tunnelling time shows distinctly different temperature dependence for p- and n-type molecular junctions and therefore provides a sensitive tool to probe the alignment of molecular orbitals relative to the electrode Fermi energy.
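The claimed exponential (Poissonian) form of the tunnelling time distribution is easy to verify numerically for the simplest case of a single constant tunnelling-out rate; the value of `Gamma` below is arbitrary and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# For a Markovian master equation with a single tunnelling-out rate Gamma,
# the tunnelling (waiting) time distribution is w(t) = Gamma * exp(-Gamma * t).
Gamma = 2.0                        # tunnelling rate, arbitrary units
t = rng.exponential(1.0 / Gamma, size=100_000)

mean_t = t.mean()                  # should approach 1 / Gamma
fano = t.var() / mean_t**2         # equals 1 for an exponential (Poissonian) process
```

A relative-variance (squared coefficient of variation) of 1 is the signature of Poissonian statistics; sub- or super-Poissonian tunnelling would push it below or above 1.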
Improvements in sub-grid, microphysics averages using quadrature based approaches
NASA Astrophysics Data System (ADS)
Chowdhary, K.; Debusschere, B.; Larson, V. E.
2013-12-01
Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. the autoconversion rate, essentially interpreting the cloud microphysics quantities as random variables in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, of the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternative approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size of almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
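The contrast between random sampling and deterministic quadrature can be sketched for a Kessler-type autoconversion rate averaged over an assumed Gaussian sub-grid distribution of cloud water; the threshold form of the rate and every parameter value here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Kessler-type autoconversion: rate k*(q - q_c) above a threshold q_c, else 0.
k, q_c = 1e-3, 0.5
auto = lambda q: k * np.maximum(q - q_c, 0.0)

# Assume cloud water q in a grid box is Gaussian with mean mu and std sigma.
mu, sigma = 0.6, 0.2

# Deterministic Gauss-Hermite quadrature (nodes/weights for weight e^{-x^2});
# for X ~ N(mu, sigma^2): E[f(X)] = (1/sqrt(pi)) * sum w_i f(mu + sqrt(2)*sigma*x_i)
x, w = np.polynomial.hermite.hermgauss(16)
gh_avg = np.sum(w * auto(mu + np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

# Monte Carlo reference average with many random samples
rng = np.random.default_rng(2)
mc_avg = auto(rng.normal(mu, sigma, 1_000_000)).mean()
```

The quadrature estimate uses 16 deterministic evaluations versus a million random ones, which is the kind of sample-size reduction the abstract reports (their comparison is against Latin Hypercube sampling).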
NASA Astrophysics Data System (ADS)
Eshach, Haim
2014-06-01
This article describes the development and field test of the Sound Concept Inventory Instrument (SCII), designed to measure middle school students' concepts of sound. The instrument was designed based on known students' difficulties in understanding sound and the history of science related to sound and focuses on two main aspects of sound: sound has material properties, and sound has process properties. The final SCII consists of 71 statements that respondents rate as either true or false and also indicate their confidence on a five-point scale. Administration to 355 middle school students resulted in a Cronbach alpha of 0.906, suggesting a high reliability. In addition, the average percentage of students' answers to statements that associate sound with material properties is significantly higher than the average percentage of statements associating sound with process properties (p <0.001). The SCII is a valid and reliable tool that can be used to determine students' conceptions of sound.
Average expansion rate and light propagation in a cosmological Tardis spacetime
NASA Astrophysics Data System (ADS)
Lavinto, Mikko; Räsänen, Syksy; Szybka, Sebastian J.
2013-12-01
We construct the first exact statistically homogeneous and isotropic cosmological solution in which inhomogeneity has a significant effect on the expansion rate. The universe is modelled as a Swiss Cheese, with a dust FRW background and inhomogeneous holes. We show that if the holes are described by the quasispherical Szekeres solution, their average expansion rate is close to the background under certain rather general conditions. We specialise to spherically symmetric holes and violate one of these conditions. As a result, the average expansion rate at late times grows relative to the background, i.e. backreaction is significant. The holes fit smoothly into the background, but are larger on the inside than a corresponding background domain: we call them Tardis regions. We study light propagation, find the effective equations of state and consider the relation of the spatially averaged expansion rate to the redshift and the angular diameter distance.
29 CFR 548.302 - Average earnings for period other than a workweek.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 3 2011-07-01 2011-07-01 false Average earnings for period other than a workweek. 548.302... LABOR REGULATIONS AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Authorized Basic Rates § 548.302 Average earnings for period other than a workweek. (a) Section 548.3(b...
40 CFR 62.14455 - What if my HMIWI goes outside of a parameter limit?
Code of Federal Regulations, 2010 CFR
2010-07-01
... temperature (3-hour rolling average) simultaneously The PM, CO, and dioxin/furan emission limits. (c) Except..., daily average for batch HMIWI), and below the minimum dioxin/furan sorbent flow rate (3-hour rolling average) simultaneously The dioxin/furan emission limit. (3) Operates above the maximum charge rate (3...
Process, cost, and clinical quality: the initial oral contraceptive visit.
McMullen, Michael J; Woolford, Samuel W; Moore, Charles L; Berger, Barry M
2013-01-01
To demonstrate how the analysis of clinical process, cost, and outcomes can identify healthcare improvements that reduce cost without sacrificing quality, using the example of the initial visit associated with oral contraceptive pill use. Cross-sectional study using data collected by HealthMETRICS between 1996 and 2009. Using data collected from 106 sites in 24 states, the unintended pregnancy (UIP) rate, effectiveness of patient education, and unit visit cost were calculated. Staff type providing education and placement of education were recorded. Two-way analysis of variance models were created and tested for significance to identify differences between groups. Sites using nonclinical staff to provide education outside the exam were associated with lower cost, higher education scores, and a UIP rate no different from that of sites using clinical staff. Sites also providing patient education during the physical examination were associated with higher cost, lower education scores, and a UIP rate no lower than that of sites providing education outside of the exam. Through analyzing process, cost, and quality, lower-cost processes that did not reduce clinical quality were identified. This methodology is applicable to other clinical services for identifying low-cost processes that do not result in lower clinical quality. By using nonclinical staff educators to provide education outside of the physical examination, sites could save an average of 32% of the total cost of the visit.
Ring rolling process simulation for microstructure optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties that most influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on the microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR has been developed in SFTC DEFORM V11, taking into account also the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed in order to find the combination of process parameters that minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, by using the exact values of the output parameters in the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. Finally, the minimum value of average grain size with respect to the input parameters has been found.
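The RSM-plus-optimization loop can be sketched with a synthetic stand-in for the FE model: the quadratic `fem_grain_size` below is invented purely for illustration (it is not the Waspaloy model), and a dense random search replaces the paper's genetic algorithm step.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for the FE results: average grain size (um) as a
# function of mandrel feed rate f and driver-roll angular velocity w.
def fem_grain_size(f, w):
    return 40.0 + 8.0*(f - 1.2)**2 + 5.0*(w - 3.0)**2 - 2.0*(f - 1.2)*(w - 3.0)

# Control points exploring the design space (as in an RSM plan)
F, W = np.meshgrid(np.linspace(0.5, 2.0, 5), np.linspace(1.0, 5.0, 5))
f, w, g = F.ravel(), W.ravel(), fem_grain_size(F, W).ravel()

# Fit a quadratic response surface: g ~ c0 + c1 f + c2 w + c3 f^2 + c4 w^2 + c5 f w
A = np.column_stack([np.ones_like(f), f, w, f**2, w**2, f*w])
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

# Minimise the fitted surface by dense random search (stand-in for the GA)
fs = rng.uniform(0.5, 2.0, 100_000)
ws = rng.uniform(1.0, 5.0, 100_000)
preds = np.column_stack([np.ones_like(fs), fs, ws, fs**2, ws**2, fs*ws]) @ coef
best = np.argmin(preds)
f_opt, w_opt = fs[best], ws[best]
```

Once the surface is fitted, evaluating candidate parameter combinations costs almost nothing compared with rerunning the FE simulation, which is the point of the RSM step.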
34 CFR 668.215 - Average rates appeals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.215 Section 668.215 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.215...
Gjini, Erida; Haydon, Daniel T.; Barry, J. David; Cobbold, Christina A.
2012-01-01
Patterns of genetic diversity in parasite antigen gene families hold important information about their potential to generate antigenic variation within and between hosts. The evolution of such gene families is typically driven by gene duplication, followed by point mutation and gene conversion. There is great interest in estimating the rates of these processes from molecular sequences for understanding the evolution of the pathogen and its significance for infection processes. In this study, a series of models are constructed to investigate hypotheses about the nucleotide diversity patterns between closely related gene sequences from the antigen gene archive of the African trypanosome, the protozoan parasite that causes human sleeping sickness in Equatorial Africa. We use a hidden Markov model approach to identify two scales of diversification: clustering of sequence mismatches, a putative indicator of gene conversion events with other lower-identity donor genes in the archive, and, at a sparser scale, isolated mismatches, likely arising from independent point mutations. In addition to quantifying the respective probabilities of occurrence of these two processes, our approach yields estimates for the gene conversion tract length distribution and the average diversity contributed locally by conversion events. Model fitting is conducted using a Bayesian framework. We find that diversifying gene conversion events with lower-identity partners occur at least five times less frequently than point mutations on variant surface glycoprotein (VSG) pairs, and the average imported conversion tract is between 14 and 25 nucleotides long. However, because of the high diversity introduced by gene conversion, the two processes have almost equal impact on the per-nucleotide rate of sequence diversification between VSG subfamily members. We are able to disentangle the most likely locations of point mutations and conversions on each aligned gene pair. PMID:22735079
NASA Astrophysics Data System (ADS)
Jiao, Lei; Yang, Yonggang; Li, Hui; Zhao, Yutao; Wang, Xiaolu
2018-05-01
In this study, in situ Al3Ti/2024Al composites were successfully fabricated by the direct melt reaction method and subjected to forging and friction stir processing (FSP) to achieve superplasticity. The microstructure and superplastic tensile behavior of the composites were then investigated. The results show that the reinforcement particles are broken and the grains are refined after plastic processing. In particular, the size of the reinforcement particles ranges from 0.2 μm to 5 μm and the average size of the fine equiaxed grains is 5 μm after FSP, and the superplasticity of the composites was markedly improved. The maximum elongation of 642% was obtained at 0.15 s‑1 and 510 °C for the FSP specimen, with a strain rate sensitivity parameter (m) of 0.58, indicating that the FSP specimen exhibits excellent high strain rate superplasticity. The strain rate sensitivity parameter m ranges from 0.23 to 0.58, and the activation energy was calculated to be 135.24 kJ mol‑1. All results indicated that the main superplastic deformation mechanism of the FSP Al3Ti/2024Al composites was grain boundary sliding (GBS).
The tangled bank of amino acids.
Goldstein, Richard A; Pollock, David D
2016-07-01
The use of amino acid substitution matrices to model protein evolution has yielded important insights into both the evolutionary process and the properties of specific protein families. In order to make these models tractable, standard substitution matrices represent the average results of the evolutionary process rather than the underlying molecular biophysics and population genetics, treating proteins as a set of independently evolving sites rather than as an integrated biomolecular entity. With advances in computing and the increasing availability of sequence data, we now have an opportunity to move beyond current substitution matrices to more interpretable mechanistic models with greater fidelity to the evolutionary process of mutation and selection and the holistic nature of the selective constraints. As part of this endeavour, we consider how epistatic interactions induce spatial and temporal rate heterogeneity, and demonstrate how these generally ignored factors can reconcile standard substitution rate matrices and the underlying biology, allowing us to better understand the meaning of these substitution rates. Using computational simulations of protein evolution, we can demonstrate the importance of both spatial and temporal heterogeneity in modelling protein evolution. © 2016 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
Comparing methods for modelling spreading cell fronts.
Markham, Deborah C; Simpson, Matthew J; Maini, Philip K; Gaffney, Eamonn A; Baker, Ruth E
2014-07-21
Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large-time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and the asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, across a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, so a suitable approximation must be chosen for a given system; otherwise, inaccurate predictions could be made.
Copyright © 2014 Elsevier Ltd. All rights reserved.
Soil Production and Erosion Rates and Processes in Mountainous Landscapes
NASA Astrophysics Data System (ADS)
Heimsath, A. M.; DiBiase, R. A.; Whipple, K. X.
2012-12-01
We focus here on high-relief, steeply sloped landscapes from the Nepal Himalaya to the San Gabriel Mountains of California that are typically thought to be at a critical threshold of soil cover. Observations reveal that, instead, there are significant areas mantled with soil that fit the conceptual framework of a physically mobile layer derived from the underlying parent material with some locally derived organic content. The extent and persistence of such soils depend on the long-term balance between soil production and erosion despite the perceived discrepancy between high erosion and low soil production rates. We present cosmogenic Be-10-derived soil production and erosion rates showing that soil production increases with catchment-averaged erosion, suggesting a feedback that enhances soil-cover persistence, even in threshold landscapes. Soil production rates do decline systematically with increasing soil thickness, but hint at the potential for separate soil production functions for different erosional regimes. We also show that a process transition to landslide-dominated erosion results in thinner, patchier soils and rockier topography, but find that there is no sudden transition to bedrock landscapes. Our landslide modeling is combined with a detailed quantification of bedrock exposure for these steep, mountainous landscapes. We also draw an important conclusion connecting the physical processes producing and transporting soil with the chemical processes weathering the parent material by measuring parent material strength across three different field settings. We observe that parent material strength increases with overlying soil thickness and, therefore, with the weathered extent of the saprolite. Soil production rates thus decrease with increasing parent material competence. These observations highlight the importance of quantifying hillslope hydrologic processes where such multi-faceted measurements are made.
NASA Astrophysics Data System (ADS)
Gray, Christopher M.; Monson, Russell K.; Fierer, Noah
2014-04-01
Nonmethane biogenic volatile organic compounds (BVOCs) play key roles in the atmosphere, where they can influence a wide range of chemical processes, and in soils, where they can alter the rates of biogeochemical cycles and impact the growth of plants and soil organisms. However, the diversity and quantities of BVOCs released from or taken up by soils remain poorly characterized as do the biotic and abiotic controls on these fluxes. Here we used proton transfer reaction mass spectrometry to quantify BVOC flux rates from soils with and without active root systems in a subalpine coniferous forest. The total measured BVOC flux averaged 102 nmol m-2 h-1 (an estimated 2.0 µg-C m-2 h-1). The individual BVOCs with the highest net emissions from soil included monoterpenes and methanol (averaging 646 and 641 ng-C m-2 h-1, respectively) while soil represented a net sink of isoprene (-98 ng-C m-2 h-1) and formaldehyde (-37 ng-C m-2 h-1). Tree roots, directly or indirectly, contributed an average of 53% of the total carbon emitted from the soil as BVOCs, with methanol and acetaldehyde among those BVOCs most strongly associated with active root presence. The fluxes of most of the dominant BVOCs emitted from soil, including methanol, increased linearly with increasing temperature. Together the fluxes of certain BVOCs into or out of the forest floor (particularly methanol, isoprene, and monoterpenes) are likely relevant to ecosystem-level processes and belowground ecology, but these fluxes are highly variable and are strongly controlled by both root presence and soil abiotic conditions.
Chen, Ruifeng; Zhu, Lijun; Lv, Lihuo; Yao, Su; Li, Bin; Qian, Junqing
2017-06-01
Optimization of the extraction and purification of a compatible solute (ectoine) from Halomonas elongata cell fermentation was investigated in laboratory tests for a large-scale commercial production project. After culturing H. elongata cells in the developed medium at 28 °C for 23-30 h, we obtained an average ectoine yield and biomass of 15.9 g/L and 92.9 (OD 600), respectively. Cell lysis was performed with acid treatment at moderately high temperature (60-70 °C). The downstream processing operations were designed as follows: filtration, desalination, cation exchange, extraction of crude product, and three rounds of refining. Of these, the cation exchange and crude-product extraction achieved high average recovery rates of 95% and 96%, whereas substantial losses of 19% and 15% were observed during filtration and desalination, respectively. Combined with the recovery of ectoine from the mother liquor of the three refining rounds, the average overall yield (relative to the amount of ectoine synthesized in cells) and the purity of the final product were 43% and over 98%, respectively. However, the key factor limiting production efficiency was not yield but the time required for the extraction of crude product, in particular the crystallization step from water, which took 24-72 h depending on the production scale. Although the method described here cannot compete with other investigations in productivity and simplicity on the laboratory scale, in this study we obtained ectoine of higher purity and provide downstream processes that are capable of operating on an industrial scale.
Factors influencing suspended solids concentrations in activated sludge settling tanks.
Kim, Y; Pipes, W O
1999-05-31
A significant fraction of the total mass of sludge in an activated sludge process may be in the settling tanks if the sludge has a high sludge volume index (SVI) or when a hydraulic overload occurs during a rainstorm. Under those conditions, an accurate estimate of the amount of sludge in the settling tanks is needed in order to calculate the mean cell residence time or to determine the capacity of the settling tanks to store sludge. Determination of the amount of sludge in the settling tanks requires estimation of the average concentration of suspended solids in the layer of sludge (XSB) in the bottom of the settling tanks. A widely used reference recommends averaging the concentrations of suspended solids in the mixed liquor (X) and in the underflow (Xu) from the settling tanks (XSB = 0.5(X + Xu)). This method does not take into consideration other pertinent information available to an operator. This is a report of a field study which had the objective of developing a more accurate method for estimating XSB in the bottom of the settling tanks. By correlation analysis, it was found that only 44% of the variation in the measured XSB is related to the sum of X and Xu. XSB is also influenced by the SVI, the zone settling velocity at X, and the overflow and underflow rates of the settling tanks. The method of averaging X and Xu tends to overestimate XSB. A new empirical estimation technique for XSB was developed. The estimation technique uses dimensionless ratios; i.e., the ratio of XSB to Xu, the ratio of the overflow rate to the sum of the underflow rate and the initial settling velocity of the mixed liquor, and sludge compaction expressed as a ratio (dimensionless SVI). The empirical model is compared with the method of averaging X and Xu for the entire range of sludge depths in the settling tanks and for SVI values between 100 and 300 ml/g.
Since the empirical model uses dimensionless ratios, the regression parameters are also dimensionless and the model can be readily adopted for other activated sludge processes. A simplified version of the empirical model provides an estimate of XSB as a function of X, Xu and SVI and can be used by an operator when flow conditions are normal. Copyright 1999 Elsevier Science B.V.
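A sketch of the two estimation approaches as described: the textbook average is fully specified in the abstract, whereas the empirical model's fitted regression coefficients are not given, so only the dimensionless ratios it correlates are formed here; all variable names and the example numbers are assumptions for illustration.

```python
def xsb_textbook(X, Xu):
    """Widely cited estimate: mean of mixed-liquor (X) and underflow (Xu)
    suspended-solids concentrations, XSB = 0.5*(X + Xu), in mg/L."""
    return 0.5 * (X + Xu)

def dimensionless_ratios(Qo, Qu, Vs, XSB, Xu):
    """Ratios the field study correlates instead of raw concentrations:
    overflow rate Qo, underflow rate Qu, initial settling velocity Vs.
    The fitted coefficients of the empirical model are not in the abstract."""
    return {
        "solids_ratio": XSB / Xu,       # target ratio of the empirical model
        "flow_ratio": Qo / (Qu + Vs),   # overflow vs (underflow + settling velocity)
    }

est = xsb_textbook(X=3000.0, Xu=9000.0)   # mg/L
```

Per the study, the textbook average tends to overestimate XSB, explaining only 44% of its measured variation, which motivates the ratio-based empirical model.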
Asl, Leila Kheibarshekan; Dhondt, Stijn; Boudolf, Véronique; Beemster, Gerrit T S; Beeckman, Tom; Inzé, Dirk; Govaerts, Willy; De Veylder, Lieven
2011-08-01
To efficiently capture sunlight for photosynthesis, leaves typically develop into a flat and thin structure. This development is driven by cell division and expansion, but the individual contributions of these processes are currently unknown, mainly because of the experimental difficulty of disentangling these tightly interconnected processes in a developing organ. To circumvent this problem, we built a mathematical model that describes the possible division patterns and expansion rates for individual epidermal cells. This model was used to fit experimental data on cell numbers and sizes obtained over time intervals of 1 d throughout the development of the first leaf pair of Arabidopsis (Arabidopsis thaliana). The parameters were obtained by a derivative-free optimization method that minimizes the differences between the predicted and experimentally observed cell size distributions. The model allowed us to calculate the probabilities for a cell to divide into guard or pavement cells, the maximum size at which it can divide, and its average cell division and expansion rates at each point during the leaf developmental process. Surprisingly, average cell cycle duration remained constant throughout leaf development, whereas no evidence for a maximum cell size threshold for cell division of pavement cells was found. Furthermore, the model predicted that neighboring cells of different sizes within the epidermis expand at distinctly different relative rates, which could be verified by direct observations. We conclude that cell division seems to occur independently of the status of cell expansion, whereas the cell cycle might act as a timer rather than as a size-regulated machinery.
Development of microwave rainfall retrieval algorithm for climate applications
NASA Astrophysics Data System (ADS)
KIM, J. H.; Shin, D. B.
2014-12-01
With satellite datasets accumulated over decades, satellite-based data could contribute to sustained climate applications. Level-3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level-3 rainfalls directly, whereas the Goddard profiling (GPROF) algorithm first generates instantaneous rainfalls and then a temporal and spatial averaging process leads to level-3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfalls. However, the algorithm is designed to produce instantaneous rainfalls at an optimal resolution showing reduced non-linearity in brightness temperature (TB)-rain rate (R) relations. It is found that this resolution tends to effectively utilize emission channels, whose footprints are relatively larger than those of scattering channels. The algorithm is mainly composed of a-priori databases (DBs) and a Bayesian inversion module. The DB contains massive pairs of simulated microwave TBs and rain rates, obtained from WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data mining technique is additionally considered. The entire DB is classified into eight types based on Köppen climate classification criteria using reanalysis data. Among these sub-DBs, the single sub-DB presenting the most similar physical characteristics is selected by considering the thermodynamics of the input data. When Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP, respectively.
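The Bayesian inversion step can be sketched as a likelihood-weighted average over an a-priori database. The single-channel toy database below is synthetic (a real DB would come from the WRF/RTTOV simulations and span many channels), and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy a-priori database: pairs of simulated brightness temperature TB (K)
# and rain rate R (mm/h), with an assumed inverse TB-R relation plus noise.
rain_db = rng.gamma(2.0, 2.0, 50_000)                              # synthetic rain rates
tb_db = 280.0 - 3.0 * rain_db + rng.normal(0.0, 1.0, rain_db.size)  # synthetic TBs

def bayesian_retrieve(tb_obs, sigma_obs=1.5):
    """Posterior-mean rain rate: weight every DB entry by a Gaussian
    likelihood of the observed TB given the entry's simulated TB."""
    w = np.exp(-0.5 * ((tb_obs - tb_db) / sigma_obs) ** 2)
    return np.sum(w * rain_db) / np.sum(w)

r_hat = bayesian_retrieve(tb_obs=268.0)
```

Restricting `tb_db`/`rain_db` to the climatologically matched sub-DB before weighting corresponds to the Köppen-based data-mining step described above.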
Xu, Peng-cheng; Hao, Rui-xia; Zhang, Ya; Wang, Dong-yue; Zhong, Li-yan; Xu, Hao-dan
2016-02-15
To investigate the feasibility of deep denitrification and simultaneous removal of phthalate esters (PAEs) in reclaimed water treatment using a three-dimensional biofilm-electrode reactor coupled with sulfur autotrophic deep denitrification technology (3BER-S), the technological characteristics and mechanisms were analyzed. The analysis was based on the static adsorption capacity of the biofilm-cultured activated carbon fillers in the 3BER-S reactor, together with the operational results of dynamic denitrification and simultaneous PAEs removal. The results showed that the average adsorption rates of DBP and DEHP on the biofilm-cultured activated carbon fillers were 85.84% and 97.12%, the equilibrium adsorption capacities were 0.1426 mg x g(-1) and 0.162 mg x g(-1), and the times to reach adsorption saturation were 120 min and 60 min, respectively. The presence of PAEs had no obvious effect on denitrification: the reactor effluent TN concentration was in the range of 1-2 mg x L(-1) before and after the addition of PAEs, and the average TN removal rate remained above 94%. The 3BER-S denitrification system showed a significant ability to remove PAEs, with effluent concentrations of DBP and DEHP of no more than 6 microg x L(-1) and removal rates above 96%, owing to the synergistic effects of adsorption, biodegradation, and electrochemistry. After treatment with the 3BER-S technology, DBP and DEHP in simulated municipal secondary effluent met the regulated limits of The Reuse of Urban Recycling Water Quality Standard for Groundwater Recharge (GB/T 19772-2005).
Fuel decomposition and boundary-layer combustion processes of hybrid rocket motors
NASA Technical Reports Server (NTRS)
Chiaverini, Martin J.; Harting, George C.; Lu, Yeu-Cherng; Kuo, Kenneth K.; Serin, Nadir; Johnson, David K.
1995-01-01
Using a high-pressure, two-dimensional hybrid motor, an experimental investigation was conducted on the fundamental processes involved in hybrid rocket combustion. HTPB (hydroxyl-terminated polybutadiene) fuel cross-linked with diisocyanate was burned with GOX under various operating conditions. Large-amplitude pressure oscillations were encountered in earlier test runs. After identifying the source of the instability and decoupling the GOX feed-line system from the combustion chamber, the pressure oscillations were drastically reduced, from +/-20% of the localized mean pressure to an acceptable range of +/-1.5%. Embedded fine-wire thermocouples indicated that the surface temperature of the burning fuel was around 1000 K, depending upon axial location and operating conditions. Also, except near the leading-edge region, the subsurface thermal wave profiles at upstream locations were thicker than those at downstream locations, since the solid-fuel regression rate, in general, increases with distance along the fuel slab. The recovered solid-fuel slabs exhibited smooth surfaces in the laminar portion of the boundary layer, indicating the existence of a liquid melt layer on the burning fuel surface in the upstream region. After the transition section, which displayed distinct transverse striations, the surface roughness pattern became quite random and very pronounced in the downstream turbulent boundary-layer region. Both real-time X-ray radiography and ultrasonic pulse-echo techniques were used to determine the instantaneous web thickness burned and the instantaneous solid-fuel regression rates over certain portions of the fuel slabs. Globally averaged, and axially dependent but time-averaged, regression rates were also obtained and presented.
Asl, Leila Kheibarshekan; Dhondt, Stijn; Boudolf, Véronique; Beemster, Gerrit T.S.; Beeckman, Tom; Inzé, Dirk; Govaerts, Willy; De Veylder, Lieven
2011-01-01
To efficiently capture sunlight for photosynthesis, leaves typically develop into flat, thin structures. This development is driven by cell division and expansion, but the individual contribution of these processes is currently unknown, mainly because of the experimental difficulty of disentangling these tightly interconnected processes in a developing organ. To circumvent this problem, we built a mathematical model that describes the possible division patterns and expansion rates of individual epidermal cells. This model was used to fit experimental data on cell numbers and sizes obtained at 1-d intervals throughout the development of the first leaf pair of Arabidopsis (Arabidopsis thaliana). The parameters were obtained by a derivative-free optimization method that minimizes the differences between the predicted and experimentally observed cell size distributions. The model allowed us to calculate the probability of a cell dividing into guard or pavement cells, the maximum size at which it can divide, and its average cell division and expansion rates at each point during leaf development. Surprisingly, the average cell cycle duration remained constant throughout leaf development, whereas no evidence was found for a maximum cell-size threshold for division of pavement cells. Furthermore, the model predicted that neighboring cells of different sizes within the epidermis expand at distinctly different relative rates, which could be verified by direct observation. We conclude that cell division seems to occur independently of the status of cell expansion, and that the cell cycle might act as a timer rather than as a size-regulated machinery. PMID:21693673
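The fitting strategy above (derivative-free minimization of the mismatch between predicted and observed cell-size distributions) can be sketched in miniature. This sketch assumes a lognormal cell-size model and a KS-style distance as stand-ins; the paper's actual model tracks division and expansion of individual cells:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# synthetic "observed" epidermal cell areas (arbitrary units)
observed = rng.lognormal(mean=3.0, sigma=0.4, size=2000)

def objective(params):
    """KS-style distance between the model CDF and the empirical CDF."""
    mu, sig = params
    if sig <= 0:
        return 1e9  # penalize invalid parameters for the simplex search
    x = np.sort(observed)
    emp = np.arange(1, x.size + 1) / x.size
    mod = stats.lognorm.cdf(x, s=sig, scale=np.exp(mu))
    return np.max(np.abs(emp - mod))

# Nelder-Mead: a derivative-free simplex method, as in the paper's approach
res = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
```

The recovered parameters land close to the generating values (3.0, 0.4), illustrating how distribution-level data can constrain model parameters without gradients.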
Results for the Brine Evaporation Bag (BEB) Brine Processing Test
NASA Technical Reports Server (NTRS)
Delzeit, Lance; Flynn, Michael; Fisher, John; Shaw, Hali; Kawashima, Brian; Beeler, David; Howard, Kevin
2015-01-01
The recent Brine Processing Test compared the NASA Forward Osmosis Brine Dewatering (FOBD) system, the Paragon Ionomer Water Processor (IWP), the UMPQUA Ultrasonic Brine Dewatering System (UBDS), and the NASA Brine Evaporation Bag (BEB). This paper reports the results for the BEB. The BEB was operated at 70 deg C and a base pressure of 12 torr, in a batch mode processing 0.4 L of brine per batch. Two different brine feeds were tested: a chromic acid-urine brine and a chromic acid-urine-hygiene-mix brine. The chromic acid-urine brine, known as the ISS Alternate Pretreatment Brine, had an average processing rate of 95 mL/hr with a specific power of 5 kW-hr/L. The complete results of these tests are reported within this paper.
Atomic Processes and Diagnostics of Low Pressure Krypton Plasma
NASA Astrophysics Data System (ADS)
Srivastava, Rajesh; Goyal, Dipti; Gangwar, Reetesh; Stafford, Luc
2015-03-01
Optical emission spectroscopy, combined with a suitable collisional-radiative (CR) model, is widely used in plasma diagnostics, and the importance of reliable cross sections for the various atomic processes has been demonstrated for low-pressure argon plasma. In the present work, radially averaged Kr emission lines from the 2p(i) --> 1s(j) transitions were recorded as a function of pressure from 1 to 50 mTorr. We have developed a CR model using our fine-structure relativistic distorted-wave cross sections. The processes considered are electron-impact excitation, ionization, and their reverse processes. The required rate coefficients have been calculated from these cross sections assuming a Maxwellian electron energy distribution. The electron temperature obtained from the CR model is found to be in good agreement with probe measurements. Work is supported by IAEA Vienna, DAE-BRNS Mumbai and CSIR, New Delhi.
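The step of converting cross sections to rate coefficients under a Maxwellian electron energy distribution can be sketched numerically. The energy-grid limits and the constant test cross section are illustrative assumptions, not the Kr cross-section data used in the paper:

```python
import numpy as np

def rate_coefficient(sigma, T, m=9.109e-31, kB=1.381e-23, n=4000):
    """Maxwellian-averaged rate coefficient (m^3/s) from a cross section
    sigma(E) [m^2] at electron temperature T [K]:
        k(T) = sqrt(8 / (pi * m * (kB*T)^3)) * integral sigma(E) E exp(-E/kB T) dE
    evaluated by the trapezoidal rule on a truncated energy grid."""
    kT = kB * T
    E = np.linspace(1e-4 * kT, 40.0 * kT, n)                 # energy grid (J)
    y = sigma(E) * E * np.exp(-E / kT)                       # integrand
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))   # trapezoidal rule
    return np.sqrt(8.0 / (np.pi * m * kT**3)) * integral
```

A useful sanity check: for a constant cross section sigma0 the expression reduces analytically to sigma0 times the Maxwellian mean speed sqrt(8 kB T / (pi m)).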
34 CFR 668.196 - Average rates appeals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.196 Section 668.196 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668...
Scaling analysis on Indian foreign exchange market
NASA Astrophysics Data System (ADS)
Sarkar, A.; Barat, P.
2006-05-01
In this paper, we investigate the scaling behavior of the average daily exchange-rate returns of the Indian Rupee against four foreign currencies: the US Dollar, the Euro, the Great Britain Pound, and the Japanese Yen. The average daily exchange-rate return of the Indian Rupee against the US Dollar is found to exhibit persistent scaling behavior and to follow a Levy stable distribution. On the contrary, the average daily exchange-rate returns of the other three currencies show neither persistence nor antipersistence and follow a Gaussian distribution.
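Persistence of the kind described above is commonly quantified by how the spread of aggregated returns grows with the aggregation window. This sketch estimates such a scaling exponent (an exponent near 0.5 indicates no memory, above 0.5 persistence); the specific estimator is an assumption for illustration, not necessarily the one used in the paper:

```python
import numpy as np

def scaling_exponent(returns, windows=(1, 2, 4, 8, 16, 32)):
    """Estimate a Hurst-like scaling exponent H from the growth of the
    standard deviation of aggregated returns: std(n-step sum) ~ n**H.
    H = 0.5 is diffusive (uncorrelated); H > 0.5 indicates persistence."""
    returns = np.asarray(returns, dtype=float)
    stds = []
    for n in windows:
        m = (len(returns) // n) * n
        agg = returns[:m].reshape(-1, n).sum(axis=1)  # non-overlapping n-step sums
        stds.append(agg.std())
    slope, _ = np.polyfit(np.log(windows), np.log(stds), 1)
    return slope
```

For uncorrelated Gaussian daily returns this returns approximately 0.5, providing the baseline against which persistent series (like the Rupee/Dollar returns reported above) would stand out.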
NASA Astrophysics Data System (ADS)
Bieniek, Ronald
2008-05-01
Rates for collisionally induced transitions between molecular vibrational levels are important in modeling a variety of non-LTE processes in astrophysical environments. Two examples are SiO masering in circumstellar envelopes in certain late-type stars [1] and the vibrational populations of molecular hydrogen in shocked interstellar medium [cf 2]. A simple exponential-potential model of molecular collisions leads to a two-parameter analytic expression for state-to-state and thermally averaged rates for collisionally induced vibrational-translational (VT) transitions in diatomic molecules [3,4]. The thermally averaged rates predicted by this formula have been shown to be in excellent numerical agreement with absolute experimental and quantum mechanical rates over large temperature ranges and initial vibrational excitation levels in a variety of species, e.g., OH, O2, N2 [3] and even for the rate of H2(v=1)+H2, which changes by five orders of magnitude in the temperature range 50-2000 K [4]. Analogous analytic rates will be reported for vibrational transitions in SiO due to collisions with H2 and compared to the numerical fit of quantum-mechanical rates calculated by Bieniek and Green [5]. [1] Palov, A.P., Gray, M.D., Field, D., & Balint-Kurti, G.G. 2006, ApJ, 639, 204. [2] Flower, D. 2007, Molecular Collisions in the Interstellar Medium (Cambridge: Cambridge Univ. Press) [3] Bieniek, R.J. & Lipson, S.J. 1996, Chem. Phys. Lett. 263, 276. [4] Bieniek, R.J. 2006, Proc. NASA LAW (Lab. Astrophys. Workshop) 2006, 299; http://www.physics.unlv.edu/labastro/nasalaw2006proceedings.pdf. [5] Bieniek, R.J., & Green, S. 1983, ApJ, 265, L29 and 1983, ApJ, 270, L101.
The effects of light, primary production, and temperature on bacterial production at Station ALOHA
NASA Astrophysics Data System (ADS)
Viviani, D. A.; Church, M. J.
2016-02-01
In the open ocean, bacterial metabolism is responsible for a large fraction of the movement of reduced carbon through the ecosystem. While broad meta-analyses suggest that factors such as temperature or primary production control rates of bacterial production over large geographic scales, to date little is known about how these factors influence variability in bacterial production in the open sea. Here we present two years of measurements of 3H-leucine incorporation, a proxy for bacterial production, at the open-ocean field site of the Hawaii Ocean Time-series, Station ALOHA (22° 45'N, 158° 00'W). By examining 3H-leucine incorporation on monthly, daily, and hourly scales, this work provides insight into the processes controlling bacterial growth in this persistently oligotrophic habitat. Rates of 3H-leucine incorporation were consistently 60% greater when measured in the light than in the dark, highlighting the importance of sunlight in fueling bacterial metabolism in this ecosystem. Over diel time scales, rates of 3H-leucine incorporation were quasi-sinusoidal, with rates in the light highest near midday, while rates in the dark were greatest after sunset. Depth-integrated (0-125 m) rates of 3H-leucine incorporation in both light and dark were more variable (~5- and ~4-fold, respectively) than coincident measurements of primary production (~2-fold). On average, rates of bacterial production were 2% and 4% of primary production (in the dark and light, respectively). At near-monthly time scales, rates of 3H-leucine incorporation in both light and dark were significantly related to temperature. Our results suggest that in the subtropical oligotrophic Pacific, bacterial production appears decoupled from primary production as a result of seasonal-scale variations in temperature and light.
Formation of hydrogen peroxide from illuminated polar snows and frozen solutions of model compounds
NASA Astrophysics Data System (ADS)
Hullar, Ted; Patten, Kelley; Anastasio, Cort
2012-08-01
Hydrogen peroxide (HOOH) is an important trace constituent in snow and ice, including in Arctic and Antarctic ice cores. To better understand the budget of snowpack HOOH, here we examine its production in illuminated snow and ice. To evaluate what types of compounds might be important photochemical sources of HOOH, we first illuminated laboratory ice samples containing 10 different model organic compounds: guaiacol, phenol, syringol, benzoate, formate, octanal, octanoic acid, octanedioic acid, phenylalanine, and mixtures of oxalate with iron(III). Half of these compounds produced little or no HOOH during illumination, but two classes of compounds were very reactive: phenolic compounds (with HOOH production rates of 6-62 nM-HOOH h-1 μM-1-phenolic) and mixtures of Fe(III) with a stoichiometric excess of oxalate (with HOOH production rates as high as 2,000,000 nM h-1 per μM iron). To quantify rates of HOOH production in the environment, we also illuminated snow samples collected from the Arctic and Antarctic. The average (±1σ) HOOH production rate in these samples was low, 5.3 ± 5.0 nM h-1, and replicate measurements showed high variability. In some natural samples there was an initial burst of HOOH production (at a rate approximately 10 times higher than the average production rate), followed by reduced rates at subsequent time points. Although our laboratory ice samples reveal that illuminated organics and metal-organic complexes can form HOOH, the low rates of HOOH formation in the Arctic and Antarctic snow samples suggest this process has only a modest impact on the HOOH budget in the snowpack.
Study of Cleanliness of High Nitrogen Steel in ESR
NASA Astrophysics Data System (ADS)
Xuwei, Tang; Rong, Zhu
This paper compares inclusions in high-nitrogen steel before and after the ESR process and analyzes the influence of slag systems and of the total oxygen content of the consumable ingots. The total oxygen content is reduced appreciably during the ESR process, indicating a good effect on inclusion removal. The experiments show that different slag systems affect inclusion removal significantly; a proper w(CaO)/w(Al2O3) ratio reduces the level of inclusions and the total oxygen content in ESR ingots. In the ESR process, the type and chemical composition of the inclusions do not differ when the oxygen content of the consumable ingots differs, which means the O content of the consumable ingots has no direct relationship with the cleanliness of the ESR ingots. In typical inclusions, w(MnO)/w(MnO+Al2O3) ≈ 0.23-0.32. The total oxygen content of the ESR ingots stays between 20-30 ppm even when the oxygen content of the consumable ingots varies from 40 to 100 ppm. This paper also studies the desulfurization process of high-nitrogen steel in ESR and analyzes the influence of slag systems and remelting rates on desulfurization efficiency. The results indicate that the average size and quantity of sulfide inclusions decrease after the ESR process; the typical inclusion after ESR is MnS+Al2O3. A slag system with proper CaO content has a higher sulfur partition ratio, which leads to a better desulfurization effect. The desulfurization rate changes greatly with different remelting rates, which indicates that kinetic parameters dominate desulfurization; the reason is that desulfurization here proceeds as a non-equilibrium reaction, differing from thermodynamic equilibrium. In the kinetic study, it is found that the desulfurization efficiency increases with higher remelting area and sulfur partition and with lower remelting rate, which differs from the experiment: the desulfurization efficiency first decreases and then recovers as the remelting rate drops. The enrichment of sulfide in the slag results in resulfurization of the steel, which leads to lower desulfurization efficiency.
McCarty, L Kelsey; Saddawi-Konefka, Daniel; Gargan, Lauren M; Driscoll, William D; Walsh, John L; Peterfreund, Robert A
2014-12-01
Process improvement in healthcare delivery settings can be difficult, even when there is consensus among clinicians about a clinical practice or desired outcome. Airway management is a medical intervention fundamental to the delivery of anesthesia care. Like other medical interventions, a detailed description of the management methods should be documented. Despite this expectation, airway documentation is often insufficient. The authors hypothesized that formal adoption of process improvement methods could be used to increase the rate of "complete" airway management documentation. The authors defined a set of criteria as a local practice standard of "complete" airway management documentation. The authors then employed selected process improvement methodologies over 13 months, in three iterative and escalating phases, to increase the percentage of records with complete documentation. The criteria were applied retrospectively to determine the baseline frequency of complete records, and prospectively to measure the impact of process improvement efforts over the three phases of implementation. Immediately before the initial intervention, a retrospective review of 23,011 general anesthesia cases over 6 months showed that 13.2% of patient records included complete documentation. At the conclusion of the 13-month improvement effort, documentation improved to a completion rate of 91.6% (P<0.0001). During the subsequent 21 months, the completion rate was sustained at an average of 90.7% (SD, 0.9%) across 82,571 general anesthetic records. Systematic application of process improvement methodologies can improve airway documentation and may be similarly effective in improving other areas of anesthesia clinical practice.
Zhang, Zhongqi; Zhang, Aming; Xiao, Gang
2012-06-05
Protein hydrogen/deuterium exchange (HDX) followed by protease digestion and mass spectrometric (MS) analysis is accepted as a standard method for studying protein conformation and conformational dynamics. In this article, an improved HDX MS platform with fully automated data processing is described. The platform significantly reduces systematic and random errors in the measurement by introducing two types of corrections in HDX data analysis. First, a mixture of short peptides with fast HDX rates is introduced as internal standards to adjust for run-to-run variations in the extent of back exchange. Second, a designed unique peptide (PPPI) with a slow intrinsic HDX rate is employed as another internal standard to reflect possible differences in intrinsic HDX rates when protein conformations at different solution conditions are compared. HDX data processing is achieved with a comprehensive HDX model that simulates the deuterium labeling and back-exchange processes. The HDX model is implemented in the in-house developed software MassAnalyzer and enables fully unattended analysis of the entire protein HDX MS data set, from ion detection and peptide identification to the final processed HDX output, typically within 1 day. The final output of the automated data processing is a set (or the average) of the most probable protection factors for each backbone amide hydrogen. The utility of the HDX MS platform is demonstrated by exploring the conformational transition of a monoclonal antibody induced by increasing concentrations of guanidine.
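The internal-standard idea for back-exchange correction can be sketched as a simple rescaling: normalize each run by how much deuterium the fast-exchanging standard retained relative to a reference run. The linear form and the function below are an illustrative assumption, not the full kinetic model implemented in MassAnalyzer:

```python
def back_exchange_corrected(d_measured, d_std_measured, d_std_reference):
    """Correct a peptide's measured deuterium uptake for run-to-run
    back-exchange variation using a fast-exchanging internal-standard
    peptide: scale by the ratio of the standard's uptake in the reference
    run to its uptake in the current run (illustrative linear correction)."""
    return d_measured * (d_std_reference / d_std_measured)
```

For example, if the standard retained only 80% of its reference deuterium in a given run, every peptide uptake in that run is scaled up by 1/0.8 before protection factors are fitted.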
29 CFR 548.303 - Average earnings for each type of work.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 3 2011-07-01 2011-07-01 false Average earnings for each type of work. 548.303 Section 548... REGULATIONS AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Authorized Basic Rates § 548.303 Average earnings for each type of work. (a) Section 548.3(c) authorizes as an...
Decadal Trends of Atlantic Basin Tropical Cyclones (1950-1999)
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Ten-year moving averages of the seasonal rates for 'named storms,' tropical storms, hurricanes, and major (or intense) hurricanes in the Atlantic basin suggest that the present epoch is one of enhanced activity, marked by seasonal rates typically equal to or above respective long-term median rates. As an example, the 10-year moving average of the seasonal rates for named storms is now higher than for any previous year over the past 50 years, measuring 10.65 in 1994, or 2.65 units higher than its median rate of 8. Also, the 10-year moving average for tropical storms has more than doubled, from 2.15 in 1955 to 4.60 in 1992, with 16 of the past 20 years having a seasonal rate of three or more (the median rate). For hurricanes and major hurricanes, their respective 10-year moving averages turned upward, rising above long-term median rates (5.5 and 2, respectively) in 1992, a response to the abrupt increase in seasonal rates that occurred in 1995. Taken together, the outlook for future hurricane seasons is for all categories of Atlantic basin tropical cyclones to have seasonal rates at levels equal to or above long-term median rates, especially during non-El Nino-related seasons. Only during El Nino-related seasons does it appear likely that seasonal rates might be slightly diminished.
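The 10-year moving averages of seasonal storm counts used above can be computed directly; the trailing-window convention (the value for a given year averages the 10 seasons ending that year) is an assumption about the paper's bookkeeping:

```python
import numpy as np

def trailing_moving_average(rates, window=10):
    """Trailing moving average of seasonal storm counts: the output for
    year i is the mean of the `window` seasonal rates ending at year i,
    so the series starts once `window` years of history exist."""
    rates = np.asarray(rates, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(rates, kernel, mode="valid")
```

Applied to 50 seasons of counts this yields 41 smoothed values, which is the kind of series whose upward drift past the long-term median motivates the outlook stated above.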
Code of Federal Regulations, 2010 CFR
2010-04-01
... REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Gain and Loss... equal to the product of— (i) The life insurance company's average equity base for the taxable year...— (i) The imputed earnings rate for the taxable year; over (ii) The average mutual earning rate for the...
Electrosprayed Cerium Oxide Nanoparticles
NASA Astrophysics Data System (ADS)
Azar, Pedram Bagherzadeh; Tavanai, Hossein; Allafchian, Ali Reza
2018-04-01
Cerium oxide nanoparticles were fabricated via the calcination of electrosprayed polyvinyl alcohol (PVA)/cerium nitrate nanoparticles. The effects of the material variables of the PVA/cerium nitrate electrospraying solution, i.e. viscosity, surface tension, and electrical conductivity, as well as of important process variables such as voltage, nozzle-collector distance, and feed rate, on cerium oxide nanoparticle size are investigated. Scanning electron microscopy and Fourier-transform infrared (FTIR) spectroscopy analyses were also carried out. The results showed that electrospraying of PVA/cerium nitrate (25% w/v) was only possible with PVA concentrations in the range of 5-8% w/v. With other conditions constant, decreasing the PVA concentration, decreasing the feed rate, increasing the nozzle-collector distance, and increasing the voltage all decreased the size of the final cerium oxide nanoparticles. The overall average size of the cerium oxide nanoparticles obtained in this work was about 80 nm. FTIR analysis confirmed the formation of cerium oxide after the calcination process.
FPGA Online Tracking Algorithm for the PANDA Straw Tube Tracker
NASA Astrophysics Data System (ADS)
Liang, Yutie; Ye, Hua; Galuska, Martin J.; Gessler, Thomas; Kuhn, Wolfgang; Lange, Jens Soren; Wagner, Milan N.; Liu, Zhen'an; Zhao, Jingzhou
2017-06-01
A novel FPGA-based online tracking algorithm for helix track reconstruction in a solenoidal field, developed for the PANDA spectrometer, is described. Employing the straw tube tracker detector with 4636 straw tubes, the algorithm includes a complex track finder and a track fitter. Implemented in VHDL, the algorithm is tested on a Xilinx Virtex-4 FX60 FPGA chip with different types of events at different event rates. A processing time of 7 μs per event is obtained for an average of 6 charged tracks. The momentum resolution is about 3% (4%) for p_t (p_z) at 1 GeV/c. Compared to the algorithm running on a CPU (single-core Intel Xeon E5520 at 2.26 GHz), an improvement of 3 orders of magnitude in processing time is obtained. The algorithm can handle the severe overlapping of events that is typical for interaction rates above 10 MHz.
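Helix track fitting in a solenoidal field ultimately converts a fitted bending radius into transverse momentum via the standard relation p_t [GeV/c] = 0.3 · B [T] · R [m] for a unit-charge track. The field value below is an illustrative assumption, not the actual PANDA solenoid setting:

```python
def pt_from_curvature(radius_m, b_tesla=2.0):
    """Transverse momentum (GeV/c) of a unit-charge track with bending
    radius R (m) in an axial magnetic field B (T): p_t = 0.3 * B * R."""
    return 0.3 * b_tesla * radius_m
```

In an FPGA implementation this final step is a fixed-point multiply; the hard part handled by the track finder is grouping straw hits into candidate helices before the radius can be fitted.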
Optimal growth trajectories with finite carrying capacity.
Caravelli, F; Sindoni, L; Caccioli, F; Ududec, C
2016-08-01
We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of the carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
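The Kelly solution for geometric Brownian motion mentioned above has a simple closed form: investing a constant fraction f of resources in an asset with drift mu and volatility sigma (risk-free rate taken as zero) gives long-run exponential growth rate g(f) = f·mu − (f·sigma)²/2, maximized at f* = mu/sigma². A minimal sketch:

```python
def kelly_growth_rate(f, mu, sigma):
    """Long-run exponential growth rate of wealth when a constant fraction f
    is invested in a geometric Brownian motion with drift mu and volatility
    sigma (zero risk-free rate): g(f) = f*mu - 0.5*(f*sigma)**2."""
    return f * mu - 0.5 * (f * sigma) ** 2

def kelly_fraction(mu, sigma):
    """Growth-optimal (Kelly) fraction: f* = mu / sigma**2."""
    return mu / sigma**2
```

For mu = 0.1 and sigma = 0.2 this gives f* = 2.5 (a leveraged bet) with optimal growth rate 0.125; the finite-carrying-capacity case studied in the paper modifies the drift term and hence the optimal trajectory of f.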
Digital codec for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1989-01-01
The authors present the hardware implementation of a digital television bandwidth compression algorithm which processes standard NTSC (National Television Systems Committee) composite color television signals and produces broadcast-quality video in real time at an average of 1.8 b/pixel. The sampling rate used with this algorithm results in 768 samples over the active portion of each video line by 512 active video lines per video frame. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a nonadaptive predictor, nonuniform quantizer, and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The nonadaptive predictor and multilevel Huffman coder combine to set this technique apart from prior-art DPCM encoding algorithms. The authors describe the data compression algorithm and the hardware implementation of the codec and provide performance results.
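The DPCM core described above (predict each sample, quantize the prediction error, reconstruct) can be sketched with the simplest choices: a previous-sample predictor and a uniform quantizer stand in for the codec's nonadaptive predictor and nonuniform quantizer, and the multilevel Huffman coding of the quantizer codes is omitted:

```python
def dpcm_encode(samples, step=4):
    """DPCM encoder: quantize the difference between each sample and the
    prediction (the previous *reconstructed* sample, so encoder and decoder
    stay in lockstep). Returns quantizer codes for entropy coding."""
    pred, codes = 0, []
    for s in samples:
        q = int(round((s - pred) / step))  # quantized prediction error
        codes.append(q)
        pred = pred + q * step             # decoder-matched reconstruction
    return codes

def dpcm_decode(codes, step=4):
    """DPCM decoder: accumulate dequantized prediction errors."""
    pred, out = 0, []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out
```

Small codes cluster around zero for smooth video lines, which is what makes the subsequent Huffman stage effective at pushing the average rate down to 1.8 b/pixel.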
Mean first passage time of active Brownian particle in one dimension
NASA Astrophysics Data System (ADS)
Scacchi, A.; Sharma, A.
2018-02-01
We investigate the mean first passage time of an active Brownian particle in one dimension using numerical simulations. The activity in one dimension is modelled as a two-state model: the particle moves with a constant propulsion strength, but its orientation switches from one state to the other as in a random telegraphic process. We study the influence of a finite resetting rate r on the mean first passage time to a fixed target of a single free active Brownian particle and map this result onto an effective diffusion process. As in the case of a passive Brownian particle, we can find an optimal resetting rate r* for an active Brownian particle for which the target is found in the minimum average time. In the presence of an external potential, we find good agreement between theory and numerical simulations using an effective-potential approach.
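In the passive limit to which the effective-diffusion mapping reduces, the mean first passage time under resetting has a known closed form (Evans-Majumdar): for a particle with diffusivity D reset at rate r to the origin, a distance L from the target, T(r) = (e^{L√(r/D)} − 1)/r, which diverges at both small and large r and so has a finite optimum r*. A sketch locating that optimum numerically (the active two-state corrections are omitted):

```python
import numpy as np

def mfpt_with_resetting(r, L=1.0, D=1.0):
    """Mean first passage time of a diffusing particle (diffusivity D) to a
    target at distance L when its position is reset to the origin at rate r:
    T(r) = (exp(L*sqrt(r/D)) - 1) / r."""
    a = L * np.sqrt(np.asarray(r, dtype=float) / D)
    return np.expm1(a) / r

# locate the optimal resetting rate by grid search (L = D = 1)
rates = np.linspace(0.5, 5.0, 2000)
r_star = rates[np.argmin(mfpt_with_resetting(rates))]
```

For L = D = 1 the minimum sits near r* ≈ 2.54 (the root of z/2 = 1 − e^{-z} with z = L√(r/D)), illustrating why an intermediate resetting rate is optimal: frequent resets erase excursions toward the target, rare resets allow long unproductive wanderings.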
27 CFR 19.249 - Average effective tax rate.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Average effective tax rate. 19.249 Section 19.249 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Distilled Spirits Taxes Effective Tax Rates § 19...
Social Inequality and Labor Force Participation.
ERIC Educational Resources Information Center
King, Jonathan
The labor force participation rates of whites, blacks, and Spanish-Americans, grouped by sex, are explained in a linear regression model fitted with 1970 U. S. Census data on Standard Metropolitan Statistical Area (SMSA). The explanatory variables are: average age, average years of education, vocational training rate, disabled rate, unemployment…
20 CFR 654.5 - Classification of labor surplus areas.
Code of Federal Regulations, 2010 CFR
2010-04-01
... unemployment rate for all civilian workers in the civil jurisdiction for the reference period is (1) 120 percent of the national average unemployment rate for civilian workers or higher for the reference period... shall be classified as a labor surplus area if the average unemployment rate for all civilian workers...
42 CFR 486.318 - Condition: Outcome measures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...
42 CFR 486.318 - Condition: Outcome measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...
42 CFR 486.318 - Condition: Outcome measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...
42 CFR 486.318 - Condition: Outcome measures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...
Property-process relations in simulated clinical abrasive adjusting of dental ceramics.
Yin, Ling
2012-12-01
This paper reports on property-process correlations in the simulated clinical abrasive adjusting of a wide range of dental restorative ceramics using a dental handpiece and diamond burs. The seven materials studied included four mica-containing glass ceramics, a feldspathic porcelain, a glass-infiltrated alumina, and a yttria-stabilized tetragonal zirconia. The abrasive adjusting process was conducted under simulated clinical conditions using diamond burs and a clinical dental handpiece. An attempt was made to establish correlations between the process characteristics, in terms of removal rate, chipping damage, and surface finish, and the material mechanical properties of hardness, fracture toughness, and Young's modulus. The results show that the removal rate is mainly a function of hardness, decreasing nonlinearly as hardness increases. No correlations were noted between the removal rates and more complex combinations of hardness, Young's modulus, and fracture toughness. Surface roughness was primarily a linear function of diamond grit size and was relatively independent of material. Chipping damage, in terms of the average chipping width, decreased with fracture toughness except for the glass-infiltrated alumina. It also showed higher linear correlations with critical strain energy release rates (R²=0.66) and brittleness (R²=0.62), and a lower linear correlation with indices of brittleness (R²=0.32). These results can guide the microstructural design of dental ceramics, optimize performance, and inform the proper selection of technical parameters in clinical abrasive adjusting by dental practitioners. Copyright © 2012 Elsevier Ltd. All rights reserved.
Decreasing patient identification band errors by standardizing processes.
Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie
2013-04-01
Patient identification (ID) bands are an essential component of patient identification. Quality improvement methodology has been applied to reduce ID band errors, although previous studies have not addressed standardization of the bands themselves. Our specific aim was to decrease ID band errors by 50% within 12 months. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012, with continued audits to June 2012 to confirm that the new process was in control. After analysis, the major improvement strategy implemented was standardization of the styles of ID bands and labels. Additional interventions included educational initiatives on the new ID band processes and dissemination of institutional and nursing-unit data. A total of 4556 ID bands were audited, with a preimprovement average ID band error rate of 9.2%. Significant variation in the ID band process was observed, including in the styles of ID bands. Interventions therefore focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% within 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels, in conjunction with the other interventions, resulted in a statistically significant decrease in ID band error rates that was maintained over the subsequent 8 months.
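The significance of a drop from 9.2% to 5.2% can be checked with a standard pooled two-proportion z-test; a sketch with hypothetical audit counts (the abstract gives only the 4556-band total, so the per-period split below is assumed):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed split of the 4556 audited bands into pre- and post-intervention halves:
# 210/2278 errors (9.2%) before vs. 118/2278 errors (5.2%) after.
z = two_proportion_z(210, 2278, 118, 2278)
print(z > 1.96)  # True: significant at the 5% level
```

With these counts the z statistic is well above the 1.96 threshold, consistent with the reported P < .001.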
Scaling theory in a model of corrosion and passivation.
Aarão Reis, F D A; Stafiej, Janusz; Badiali, J-P
2006-09-07
We study a model for corrosion and passivation of a metallic surface after small damage to its protective layer, using scaling arguments and simulation. We focus on the transition from an initial regime of slow corrosion (pit nucleation) to a regime of rapid corrosion (pit propagation), which takes place at the so-called incubation time. The model is defined on a lattice in which the states of the sites represent the possible states of the metal (bulk, reactive, and passive) and of the solution (neutral, acidic, or basic). Simple probabilistic rules describe passivation of the metal surface, dissolution of the passive layer, which is enhanced in acidic media, and spatially separated electrochemical reactions, which may create pH inhomogeneities in the solution. By matching the characteristic times of creation and annihilation of pH inhomogeneities in the solution, our scaling theory estimates the average radius of the dissolved region at the incubation time as a function of the model parameters. Among the main consequences, this radius decreases with the rate of spatially separated reactions and with the rate of dissolution in acidic media, and it increases with the diffusion coefficient of H⁺ and OH⁻ ions in solution. The average incubation time can be written as a sum of characteristic times for the slow dissolution in neutral media, up to the point where significant pH inhomogeneities appear in the dissolved cavity. Despite its more complex dependence on the model parameters, the average incubation time is shown to increase linearly with the characteristic time (inverse rate) of dissolution in neutral media, under the reasonable assumption that this is the slowest process. Our theoretical predictions are expected to hold over realistic ranges of the model parameters. They are confirmed by numerical simulations on two-dimensional lattices, and the expected extension of the theory to three dimensions is discussed.
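The idea that the incubation time is a sum of characteristic dissolution times can be illustrated with a toy Monte Carlo (not the paper's lattice model): if at each step one surface site dissolves with probability p, the mean number of steps needed to remove N sites scales as N/p, i.e., linearly with the characteristic time 1/p.

```python
import random

def incubation_steps(n_sites, p, rng):
    """Steps until n_sites sites have dissolved, each step succeeding w.p. p."""
    steps = dissolved = 0
    while dissolved < n_sites:
        steps += 1
        if rng.random() < p:
            dissolved += 1
    return steps

def mean_incubation(n_sites, p, trials=2000, seed=1):
    """Monte Carlo estimate of the mean incubation time (in steps)."""
    rng = random.Random(seed)
    return sum(incubation_steps(n_sites, p, rng) for _ in range(trials)) / trials

# Halving the dissolution probability p roughly doubles the mean incubation time.
print(mean_incubation(20, 0.5), mean_incubation(20, 0.25))  # ~40 vs. ~80
```

This captures only the "slowest rate dominates" argument; the radius estimate in the abstract additionally involves the pH-inhomogeneity and diffusion time scales.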
Documenting pharmacist interventions on an intranet.
Simonian, Armen I
2003-01-15
The process of developing and implementing an intranet Web site for clinical intervention documentation is described. An inpatient pharmacy department initiated an organization-wide effort to improve documentation of interventions by pharmacists at its seven hospitals, with the goal of capturing meaningful benchmarking data in real time. Standardization of intervention types would allow the health system to compare medication use, process improvement, and patient care initiatives among its hospitals. After completing a needs assessment and reviewing current methodologies, a computerized tracking tool was developed in-house and integrated with the organization's intranet. Representatives from all hospitals agreed on content and functionality requirements for the Web site. The site was completed and activated in February 2002. Before the Web site was established, the most documented intervention types were Renal Adjustment and Clarify Dose, with daily averages of four and three, respectively. After site activation, the daily average for Renal Adjustment remained unchanged, but Clarify Dose is now documented nine times per day. The Drug Information and i.v.-to-p.o. intervention types, which previously averaged less than one intervention per day, are now documented an average of four times daily. Approximately 91% of staff pharmacists are using the site. Future plans include wireless access to the site from personal digital assistants. The design and implementation of an intranet Web site to document pharmacists' interventions doubled the rate of intervention documentation and standardized the intervention types among the health system's hospitals.
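The benchmarking described above reduces to counting documented interventions per type and dividing by the number of days in the audit window; a minimal sketch with a made-up log (the type names follow the abstract, the counts are illustrative):

```python
from collections import Counter

def daily_averages(entries, n_days):
    """Average documented interventions per day, broken down by type."""
    counts = Counter(entries)
    return {kind: c / n_days for kind, c in counts.items()}

# Hypothetical two-day log of documented intervention types.
log = (["Clarify Dose"] * 18
       + ["Renal Adjustment"] * 8
       + ["Drug Information"] * 8)
print(daily_averages(log, n_days=2))
```

Aggregating the same way per hospital is what allows the cross-site comparisons the health system was after.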
12 CFR 3.41 - Operational requirements for securitization exposures.
Code of Federal Regulations, 2014 CFR
2014-01-01
... loans 30, 60, and 90 days past due; default rates; prepayment rates; loans in foreclosure; property types; occupancy; average credit score or other measures of creditworthiness; average LTV ratio; and...
12 CFR 324.41 - Operational requirements for securitization exposures.
Code of Federal Regulations, 2014 CFR
2014-01-01
... loans 30, 60, and 90 days past due; default rates; prepayment rates; loans in foreclosure; property types; occupancy; average credit score or other measures of creditworthiness; average LTV ratio; and...
The Strain-Hardening Behavior of TZAV-30 Alloy After Various Heat Treatments
NASA Astrophysics Data System (ADS)
Liang, S. X.; Yin, L. X.; Zheng, L. Y.; Ma, M. Z.; Liu, R. P.
2016-02-01
The Ti-Zr-Al-V series titanium alloys, with their excellent mechanical properties and low density, exhibit tremendous application potential as structural materials in the aviation, automotive, and navigation industries. The strain-hardening behavior of Ti-30Zr-5Al-3V (wt.%, TZAV-30) alloy after various heat treatments is investigated in this study. Experimental results show that the strain-hardening behavior of the examined alloy depends on the heat treatment process. The average strain-hardening exponent, n, is approximately 0.061 for the WA specimen (825 °C/0.5 h/water quenching + 600 °C/4 h/air cooling), 0.068 for FC (850 °C/0.5 h/furnace cooling), 0.121 for AC (850 °C/0.5 h/air cooling), and 0.412 for WQ (850 °C/0.5 h/water quenching). Analysis of the strain-hardening rate versus true strain curves indicates that the higher n of the AC specimen results from the slower decay of the strain-hardening rate with strain, while the ultrahigh n of the WQ specimen is attributed to a marked increase in strain-hardening rate at true strains from 0.04 to 0.06. Phase-constitution and microstructural analyses reveal that n for the alloy with α + β phases increases with the relative content of retained β phase but is independent of the average thickness of the α plates. The increase in strain-hardening rate in the WQ specimen is attributed to metastable α″ martensite and the martensitic transition induced by tensile stress.
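Strain-hardening exponents like those quoted above are conventionally extracted from the Hollomon relation σ = K·εⁿ by a linear fit of ln σ against ln ε; a sketch with synthetic data (the K and n values below are assumed for illustration, not the study's measurements):

```python
import math

def hollomon_n(strain, stress):
    """Strain-hardening exponent n: slope of ln(stress) vs. ln(strain)."""
    xs = [math.log(e) for e in strain]
    ys = [math.log(s) for s in stress]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic flow curve generated from sigma = K * eps^n with K = 900, n = 0.121.
strain = [0.02, 0.03, 0.04, 0.05, 0.06]
stress = [900.0 * e ** 0.121 for e in strain]
print(round(hollomon_n(strain, stress), 3))  # -> 0.121
```

On real tensile data the fit is restricted to the uniform-elongation range, and a changing slope (as in the WQ specimen between true strains of 0.04 and 0.06) signals that a single n no longer describes the curve.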