Sample records for average time constant

  1. How the growth and freeboard of continents may relate to geometric and kinematic parameters of mid-ocean spreading ridges

    USGS Publications Warehouse

    Howell, D.G.

    1989-01-01

    If the volume of continents has been growing since 4 Ga then the area of the ocean basins must have been shrinking. Therefore, by inferring a constant continental freeboard, in addition to constant continental crustal thicknesses and seawater volume, it is possible to calculate the necessary combinations of increased ridge lengths and spreading rates required to displace the seawater in the larger oceans of the past in order to maintain the constant freeboard. A reasonable choice from the various possibilities is that at 4 Ga ago, the ridge length and spreading rates were ca. 2.5 times greater than the averages of these parameters during the past 200 Ma. By 2.5 Ga ago the ridge length and spreading rate decreased to about 1.8 times the recent average and by 1 Ga ago these features became reduced to approximately 1.4 times recent averages. © 1989.

  2. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    NASA Astrophysics Data System (ADS)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacement are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by ⟨δ²‾⟩ ∼ 2 D_ν t^β Δ^(ν−β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements ⟨x²⟩ ∼ t^ν, while β ≥ −1 marks the growth or decline of the kinetic energy ⟨v²⟩ ∼ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of ⟨δ²‾⟩ and ⟨x²⟩ are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
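    A minimal numerical sketch of the quantity discussed above: it computes the time-averaged mean-squared displacement δ²‾(Δ) of a single trajectory and reads off its scaling exponent in the lag time Δ. The Brownian trajectory (ν = 1, β = 0) and all parameter values are illustrative assumptions, not the authors' models or data.

```python
import numpy as np

def time_averaged_msd(x, lags):
    """delta^2_bar(lag) = time average of [x(t+lag) - x(t)]^2 over the trajectory."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(0)
dt, n, D = 1e-3, 200_000, 1.0
x = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D * dt), n))  # Brownian path, nu = 1, beta = 0

lags = np.unique(np.logspace(0, 3, 20).astype(int))
msd = time_averaged_msd(x, lags)
exponent = np.polyfit(np.log(lags * dt), np.log(msd), 1)[0]
print(f"scaling exponent in the lag time: {exponent:.2f} (nu - beta = 1 expected here)")
```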

  3. Finite-Temperature Behavior of PdHx Elastic Constants Computed by Direct Molecular Dynamics

    DOE PAGES

    Zhou, X. W.; Heo, T. W.; Wood, B. C.; ...

    2017-05-30

    In this paper, robust time-averaged molecular dynamics has been developed to calculate finite-temperature elastic constants of a single crystal. We find that when the averaging time exceeds a certain threshold, the statistical errors in the calculated elastic constants become very small. We applied this method to compare the elastic constants of Pd and PdH0.6 at representative low (10 K) and high (500 K) temperatures. The values predicted for Pd match reasonably well with ultrasonic experimental data at both temperatures. In contrast, the predicted elastic constants for PdH0.6 only match well with ultrasonic data at 10 K; whereas, at 500 K, the predicted values are significantly lower. We hypothesize that at 500 K, the facile hydrogen diffusion in PdH0.6 alters the speed of sound, resulting in significantly reduced values of predicted elastic constants as compared to the ultrasonic experimental data. Finally, literature mechanical testing experiments seem to support this hypothesis.

  4. Spike-frequency adaptation in the inferior colliculus.

    PubMed

    Ingham, Neil J; McAlpine, David

    2004-02-01

    We investigated spike-frequency adaptation of neurons sensitive to interaural phase disparities (IPDs) in the inferior colliculus (IC) of urethane-anesthetized guinea pigs using a stimulus paradigm designed to exclude the influence of adaptation below the level of binaural integration. The IPD-step stimulus consists of a binaural 3,000-ms tone, in which the first 1,000 ms is held at a neuron's least favorable ("worst") IPD, adapting out monaural components, before being stepped rapidly to a neuron's most favorable ("best") IPD for 300 ms. After some variable interval (1-1,000 ms), IPD is again stepped to the best IPD for 300 ms, before being returned to a neuron's worst IPD for the remainder of the stimulus. Exponential decay functions fitted to the response to best-IPD steps revealed an average adaptation time constant of 52.9 +/- 26.4 ms. Recovery from adaptation to best IPD steps showed an average time constant of 225.5 +/- 210.2 ms. Recovery time constants were not correlated with adaptation time constants. During the recovery period, adaptation to a 2nd best-IPD step followed similar kinetics to adaptation during the 1st best-IPD step. The mean adaptation time constant at stimulus onset (at worst IPD) was 34.8 +/- 19.7 ms, similar to the 38.4 +/- 22.1 ms recorded to contralateral stimulation alone. Individual time constants after stimulus onset were correlated with each other but not with time constants during the best-IPD step. We conclude that such binaurally derived measures of adaptation reflect processes that occur above the level of exclusively monaural pathways, and subsequent to the site of primary binaural interaction.
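    The ~53 ms adaptation time constant reported above comes from fitting exponential decay functions to firing-rate responses. The sketch below shows one way such a fit can be done with scipy; the synthetic rate trace, its parameters, and the helper exp_decay are illustrative assumptions, not the authors' data or analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, r_ss, r0, tau):
    """Firing rate decaying from r_ss + r0 toward the steady-state rate r_ss."""
    return r_ss + r0 * np.exp(-t / tau)

t = np.arange(0.0, 300.0, 5.0)                              # ms after the best-IPD step
rate = exp_decay(t, r_ss=80.0, r0=120.0, tau=52.9)          # assumed "true" response
rate += np.random.default_rng(1).normal(0.0, 5.0, t.size)   # measurement noise

popt, _ = curve_fit(exp_decay, t, rate, p0=(50.0, 100.0, 30.0))
print(f"fitted adaptation time constant: {popt[2]:.1f} ms")
```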

  5. Point-ahead limitation on reciprocity tracking. [in earth-space optical link]

    NASA Technical Reports Server (NTRS)

    Shapiro, J. H.

    1975-01-01

    The average power received at a spacecraft from a reciprocity-tracking transmitter is shown to be the free-space diffraction-limited result times a gain-reduction factor that is due to the point-ahead requirement. For a constant-power transmitter, the gain-reduction factor is approximately equal to the appropriate spherical-wave mutual-coherence function. For a constant-average-power transmitter, an exact expression is obtained for the gain-reduction factor.

  6. Dielectric properties of single wall carbon nanotubes-based gelatin phantoms

    NASA Astrophysics Data System (ADS)

    Altarawneh, M. M.; Alharazneh, G. A.; Al-Madanat, O. Y.

    In this work, we report the dielectric properties of single-wall carbon nanotube (SWCNT)-based phantoms that are mainly composed of gelatin and water. The gelatin-based phantom with the desired dielectric properties was fabricated and doped with different concentrations of SWCNTs (e.g., 0%, 0.05%, 0.10%, 0.15%, 0.2%, 0.4% and 0.6%). The dielectric constants (real ε′ and imaginary ε″) were measured at different positions for each sample as a function of frequency (0.5-20 GHz) and SWCNT concentration, and their averages were found. The Cole-Cole plot (ε′ versus ε″) was obtained for each concentration of SWCNTs and was used to obtain the static dielectric constant εs, the dielectric constant at the high-frequency limit ε∞, and the average relaxation time τ. The measurements showed that the fabricated samples are highly homogeneous and the SWCNTs are well dispersed in the samples, as an acceptable standard deviation is achieved. The study showed a linear increase in the static dielectric constant εs and invariance of the average relaxation time τ and the value of ε∞ at room temperature for the investigated concentrations of SWCNTs.

  7. Effect of positive pulse charge waveforms on the energy efficiency of lead-acid traction cells

    NASA Technical Reports Server (NTRS)

    Smithrick, J. J.

    1981-01-01

    The effects of four different charge methods on the energy conversion efficiency of 300 ampere hour lead acid traction cells were investigated. Three of the methods were positive pulse charge waveforms; the fourth, a constant current method, was used as a baseline of comparison. The positive pulse charge waveforms were: 120 Hz full wave rectified sinusoidal; 120 Hz silicon controlled rectified; and 1 kHz square wave. The constant current charger was set at the time average pulse current of each pulse waveform, which was 150 amps. The energy efficiency does not include charger losses. The lead acid traction cells were charged to 70 percent of rated ampere hour capacity in each case. The results of charging the cells using the three different pulse charge waveforms indicate there was no significant difference in energy conversion efficiency when compared to constant current charging at the time average pulse current value.

  8. LINEAR COUNT-RATE METER

    DOEpatents

    Henry, J.J.

    1961-09-01

    A linear count-rate meter is designed to provide a highly linear output while receiving counting rates from one cycle per second to 100,000 cycles per second. Input pulses enter a linear discriminator and then are fed to a trigger circuit which produces positive pulses of uniform width and amplitude. The trigger circuit is connected to a one-shot multivibrator. The multivibrator output pulses have a selected width. Feedback means are provided for preventing transistor saturation in the multivibrator which improves the rise and decay times of the output pulses. The multivibrator is connected to a diode-switched, constant current metering circuit. A selected constant current is switched to an averaging circuit for each pulse received, and for a time determined by the received pulse width. The average output meter current is proportional to the product of the counting rate, the constant current, and the multivibrator output pulse width.
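    The final sentence describes a simple proportionality, average meter current = counting rate × switched constant current × pulse width. A back-of-envelope check with assumed values (not taken from the patent):

```python
counting_rate = 1.0e4   # pulses per second (within the stated 1 to 100,000 cps range)
i_constant = 2.0e-3     # amperes of switched constant current (assumed)
pulse_width = 5.0e-6    # seconds, one-shot multivibrator output width (assumed)

i_avg = counting_rate * i_constant * pulse_width
print(f"average meter current: {i_avg * 1e6:.0f} microamperes")  # 100 uA
```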

  9. [Evaluation of the influence of humidity and temperature on the drug stability by initial average rate experiment].

    PubMed

    He, Ning; Sun, Hechun; Dai, Miaomiao

    2014-05-01

    To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, extent of drug degradation, number of humidity and temperature levels, humidity and temperature range, and average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates were made more accurate and precise by controlling the extent of drug degradation, changing the humidity and temperature range, or setting the average temperature closer to room temperature. Compared with isothermal experiments at constant humidity, the proposed method saves time, labor, and materials.

  10. Solid-state selective (13)C excitation and spin diffusion NMR to resolve spatial dimensions in plant cell walls.

    PubMed

    Foston, Marcus; Katahira, Rui; Gjersing, Erica; Davis, Mark F; Ragauskas, Arthur J

    2012-02-15

    The average spatial dimensions between major biopolymers within the plant cell wall can be resolved using a solid-state NMR technique referred to as a (13)C cross-polarization (CP) SELDOM (selectively by destruction of magnetization) with a mixing time delay for spin diffusion. Selective excitation of specific aromatic lignin carbons indicates that lignin is in close proximity to hemicellulose followed by amorphous and finally crystalline cellulose. (13)C spin diffusion time constants (T(SD)) were extracted using a two-site spin diffusion theory developed for (13)C nuclei under magic angle spinning (MAS) conditions. These time constants were then used to calculate an average lower-limit spin diffusion length between chemical groups within the plant cell wall. The results on untreated (13)C enriched corn stover stem reveal that the lignin carbons are, on average, located at distances ∼0.7-2.0 nm from the carbons in hemicellulose and cellulose, whereas the pretreated material had larger separations.

  11. Assessing Chemical Retention Process Controls in Ponds

    NASA Astrophysics Data System (ADS)

    Torgersen, T.; Branco, B.; John, B.

    2002-05-01

    Small ponds are a ubiquitous component of the landscape and have earned a reputation as effective chemical retention devices. The most common characterization of pond chemical retention is the retention coefficient, Ri = ([Ci]inflow − [Ci]outflow)/[Ci]inflow. However, this parameter varies widely in one pond with time and among ponds. We have re-evaluated literature-reported (Borden et al., 1998) monthly average retention coefficients for two ponds in North Carolina. Employing a simple first-order model that includes water residence time, the first-order process responsible for species removal has been separated from the water residence time over which it acts. Assuming the rate constant for species removal is constant within the pond (arguable at least), the annual average rate constant for species removal is generated. Using the annual mean rate constant for species removal and monthly water residence times results in a significantly enhanced predictive capability for Davis Pond during most months of the year. Predictive ability remains poor in Davis Pond during winter/unstratified periods when internal loading of P and N results in low to negative chemical retention. Predictive ability for Piedmont Pond (which has numerous negative chemical retention periods) is improved but not to the same extent as Davis Pond. In Davis Pond, the rate constant for sediment removal (each month) is faster than the rate constant for water and explains the good predictability for sediment retention. However, the removal rate constant for P and N is slower than the removal rate constant for sediment (longer water-column residence time for P and N than for sediment). Thus sedimentation is not an overall control on nutrient retention. Additionally, the removal rate constant for P is slower than for TOC (TOC is not the dominant removal process for P) and N is removed more slowly than P (different in-pond controls). For Piedmont Pond, sediment removal rate constants are slower than the removal rate constant for water, indicating significant sediment resuspension episodes. It appears that these sediment resuspension events are aperiodic and control the loading and the chemical retention capability of Piedmont Pond for N, P, and TOC. These calculated rate constants reflect the differing internal loading processes for each component and suggest means and mechanisms for the use of ponds in water quality management.
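    A sketch of the kind of first-order bookkeeping described above, under the assumption (mine, not necessarily the authors' exact formulation) that retention, rate constant, and residence time are linked by R = 1 − exp(−kτ); the monthly numbers are invented placeholders, not the Borden et al. (1998) data.

```python
import numpy as np

tau = np.array([12.0, 9.0, 6.0, 4.0, 8.0, 15.0])        # monthly water residence times, days
R_obs = np.array([0.55, 0.45, 0.30, 0.22, 0.40, 0.60])  # observed monthly retention coefficients

k_monthly = -np.log(1.0 - R_obs) / tau                  # implied first-order rate constants, 1/day
k_annual = k_monthly.mean()                             # annual-mean rate constant
R_pred = 1.0 - np.exp(-k_annual * tau)                  # retention predicted from k_annual and tau
print(f"annual-mean removal rate constant: {k_annual:.3f} 1/day")
print("predicted retention coefficients :", np.round(R_pred, 2))
```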

  12. Comparison of TID Effects in Space-Like Variable Dose Rates and Constant Dose Rates

    NASA Technical Reports Server (NTRS)

    Harris, Richard D.; McClure, Steven S.; Rax, Bernard G.; Evans, Robin W.; Jun, Insoo

    2008-01-01

    The degradation of the LM193 dual voltage comparator has been studied at different TID dose rate profiles, including several different constant dose rates and a variable dose rate that simulates the behavior of a solar flare. A comparison of results following constant dose rate vs. variable dose rates is made to explore how well the constant dose rates used for typical part testing predict the performance during a simulated space-like mission. Testing at a constant dose rate equal to the lowest dose rate seen during the simulated flare provides an extremely conservative estimate of the overall amount of degradation. A constant dose rate equal to the average dose rate is also more conservative than the variable rate. It appears that, for this part, weighting the dose rates by the amount of total dose received at each rate (rather than the amount of time at each dose rate) results in an average rate that produces an amount of degradation that is a reasonable approximation to that received by the variable rate.

  13. Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay

    NASA Astrophysics Data System (ADS)

    Chunodkar, Apurva A.; Akella, Maruthi R.

    2013-12-01

    This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is 1) piecewise constant 2) continuous with a bounded rate. We also consider application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in feedback is modeled specifically as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficiency condition for average dwell time result is presented using a complete type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying time-delay. This extension ensures stability robustness to time-delay in the control design for all values of time-delay less than the known upper bound. Model-transformation is used in order to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region of attraction estimate. A constructive method to evaluate the sufficient conditions is presented together with comparison with the corresponding constant and piecewise constant delay. Numerical simulations are performed to illustrate the theoretical results of this paper.

  14. Temporal Context in Concurrent Chains: I. Terminal-Link Duration

    ERIC Educational Resources Information Center

    Grace, Randolph C.

    2004-01-01

    Two experiments are reported in which the ratio of the average times spent in the terminal and initial links ("Tt/Ti") in concurrent chains was varied. In Experiment 1, pigeons responded in a three-component procedure in which terminal-link variable-interval schedules were in constant ratio, but their average duration increased across components…

  15. Time constant determination for electrical equivalent of biological cells

    NASA Astrophysics Data System (ADS)

    Dubey, Ashutosh Kumar; Dutta-Gupta, Shourya; Kumar, Ravi; Tewari, Abhishek; Basu, Bikramjit

    2009-04-01

    The electric field interactions with biological cells are of significant interest in various biophysical and biomedical applications. In order to study such an important aspect, it is necessary to evaluate the time constant so as to estimate the response time of living cells in an electric field (E-field). In the present study, the time constant is evaluated by considering the hypothesis of an electrical analog of spherical-shaped cells and assuming realistic values for the capacitance and resistivity of the cell/nuclear membrane, cytoplasm, and nucleus. In addition, the resistance of the cytoplasm and nucleoplasm was computed based on simple geometrical considerations. Importantly, the analysis on the basis of first principles shows that the average value of the time constant would be around 2-3 μs, assuming the theoretical capacitance values and the analytically computed resistance values. The implication of our analytical solution has been discussed in reference to cellular adaptation processes such as atrophy/hypertrophy as well as the variation in the electrical transport properties of the cellular membrane/cytoplasm/nuclear membrane/nucleoplasm.
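    The headline result is simply τ = R_eq·C_eq for the lumped electrical analog. A trivial sketch with placeholder values (assumed here, not the authors' membrane or cytoplasm parameters) that land in the quoted 2-3 μs range:

```python
r_eq = 2.0e5    # ohms, assumed lumped series resistance (cytoplasm + nucleoplasm)
c_eq = 1.2e-11  # farads, assumed lumped membrane capacitance

tau = r_eq * c_eq
print(f"time constant of the RC analog: {tau * 1e6:.1f} microseconds")  # ~2.4 us
```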

  16. A double medium model for diffusion in fluid-bearing rock

    NASA Astrophysics Data System (ADS)

    Wang, H. F.

    1993-09-01

    The concept of a double porosity medium to model fluid flow in fractured rock has been applied to model diffusion in rock containing a small amount of a continuous fluid phase that surrounds small volume elements of the solid matrix. The model quantifies the relative role of diffusion in the fluid and solid phases of the rock. The fluid is the fast diffusion path, but the solid contains the volumetrically significant amount of the diffusing species. The double medium model consists of two coupled differential equations. One equation is the diffusion equation for the fluid concentration; it contains a source term for change in the average concentration of the diffusing species in the solid matrix. The second equation represents the assumption that the change in average concentration in a solid element is proportional to the difference between the average concentration in the solid and the concentration in the fluid times the solid-fluid partition coefficient. The double medium model is shown to apply to laboratory data on iron diffusion in fluid-bearing dunite and to measured oxygen isotope ratios at marble-metagranite contacts. In both examples, concentration profiles are calculated for diffusion taking place at constant temperature, where a boundary value changes suddenly and is subsequently held constant. Knowledge of solid diffusivities can set a lower bound to the length of time over which diffusion occurs, but only the product of effective fluid diffusivity and time is constrained for times longer than the characteristic solid diffusion time. The double medium results approach a local, grain-scale equilibrium model for times that are large relative to the time constant for solid diffusion.

  17. First Measurements of the HCFC-142b Trend from Atmospheric Chemistry Experiment (ACE) Solar Occultation Spectra

    NASA Technical Reports Server (NTRS)

    Rinsland, Curtis P.; Chiou, Linda; Boone, Chris; Bernath, Peter; Mahieu, Emmanuel

    2009-01-01

    The first measurement of the HCFC-142b (CH3CClF2) trend near the tropopause has been derived from volume mixing ratio (VMR) measurements at northern and southern hemisphere mid-latitudes for the 2004-2008 time period from spaceborne solar occultation observations recorded at 0.02 cm⁻¹ resolution with the ACE (Atmospheric Chemistry Experiment) Fourier transform spectrometer. The HCFC-142b molecule is currently the third most abundant HCFC (hydrochlorofluorocarbon) in the atmosphere and ACE measurements over this time span show a continuous rise in its volume mixing ratio. Monthly average measurements at northern and southern hemisphere midlatitudes have similar increase rates that are consistent with surface trend measurements for a similar time span. A mean northern hemisphere profile for the time span shows a near-constant VMR over the 8-20 km altitude range, consistent on average for the same time span with in situ results. The nearly constant vertical VMR profile also agrees with model predictions of a long lifetime in the lower atmosphere.

  18. Stiffness and relaxation components of the exponential and logistic time constants may be used to derive a load-independent index of isovolumic pressure decay.

    PubMed

    Shmuylovich, Leonid; Kovács, Sándor J

    2008-12-01

    In current practice, empirical parameters such as the monoexponential time constant τ or the logistic model time constant τ_L are used to quantitate isovolumic relaxation. Previous work indicates that τ and τ_L are load dependent. A load-independent index of isovolumic pressure decline (LIIIVPD) does not exist. In this study, we derive and validate a LIIIVPD. Recently, we have derived and validated a kinematic model of isovolumic pressure decay (IVPD), where IVPD is accurately predicted by the solution to an equation of motion parameterized by stiffness (E_k), relaxation (τ_c), and pressure asymptote (P_∞) parameters. In this study, we use this kinematic model to predict, derive, and validate the load-independent index M_LIIIVPD. We predict that the plot of lumped recoil effects [E_k·(P*_max − P_∞)] versus resistance effects [τ_c·(dP/dt_min)], defined by a set of load-varying IVPD contours, where P*_max is the maximum pressure and dP/dt_min is the minimum first derivative of pressure, yields a linear relation with a constant (i.e., load-independent) slope M_LIIIVPD. To validate the load independence, we analyzed an average of 107 IVPD contours in 25 subjects (2,669 beats total) undergoing diagnostic catheterization. For the group as a whole, we found the E_k·(P*_max − P_∞) versus τ_c·(dP/dt_min) relation to be highly linear, with the average slope M_LIIIVPD = 1.107 ± 0.044 and the average r² = 0.993 ± 0.006. For all subjects, M_LIIIVPD was found to be linearly correlated with the subject-averaged τ (r² = 0.65), τ_L (r² = 0.50), and dP/dt_min (r² = 0.63), as well as with ejection fraction (r² = 0.52). We conclude that M_LIIIVPD is a LIIIVPD because it is load independent and correlates with conventional IVPD parameters. Further validation of M_LIIIVPD in selected pathophysiological settings is warranted.
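    As described, M_LIIIVPD is the slope of the recoil term E_k·(P*_max − P_∞) plotted against the resistance term τ_c·(dP/dt_min) across load-varying beats. The sketch below estimates such a slope by linear regression on synthetic beat-level numbers (with dP/dt_min taken in magnitude); none of the values are from the catheterization data.

```python
import numpy as np

rng = np.random.default_rng(2)
tau_c = rng.uniform(0.02, 0.05, 107)             # s, per-beat relaxation parameter (synthetic)
dpdt_min_mag = rng.uniform(1200.0, 2200.0, 107)  # mmHg/s, |dP/dt_min| per beat (synthetic)

resistance_term = tau_c * dpdt_min_mag
recoil_term = 1.107 * resistance_term + rng.normal(0.0, 1.0, 107)  # assumed linear relation

slope, intercept = np.polyfit(resistance_term, recoil_term, 1)
print(f"estimated load-independent slope M: {slope:.3f}")
```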

  19. Fast gradient separation by very high pressure liquid chromatography: reproducibility of analytical data and influence of delay between successive runs.

    PubMed

    Stankovich, Joseph J; Gritti, Fabrice; Beaver, Lois Ann; Stevenson, Paul G; Guiochon, Georges

    2013-11-29

    Five methods were used to implement fast gradient separations: constant flow rate, constant column-wall temperature, constant inlet pressure at moderate and high pressures (controlled by a pressure controller), and programmed-flow constant pressure. For programmed-flow constant pressure, the flow rates and gradient compositions are controlled using input into the method instead of the pressure controller. Minor fluctuations in the inlet pressure do not affect the mobile phase flow rate in programmed flow. The reproducibilities of the retention times, the response factors, and the eluted band widths of six successive separations of the same sample (9 components) were measured with different equilibration times between 0 and 15 min. The influence of the length of the equilibration time on these reproducibilities is discussed. The results show that the average column temperature may increase from one separation to the next and that this contributes to fluctuation of the results.

  20. An improved procedure for determining grain boundary diffusion coefficients from averaged concentration profiles

    NASA Astrophysics Data System (ADS)

    Gryaznov, D.; Fleig, J.; Maier, J.

    2008-03-01

    Whipple's solution of the problem of grain boundary diffusion and Le Claire's relation, which is often used to determine grain boundary diffusion coefficients, are examined for a broad range of ratios of grain boundary to bulk diffusivities Δ and diffusion times t. Different reasons leading to errors in determining the grain boundary diffusivity (D_GB) when using Le Claire's relation are discussed. It is shown that nonlinearities of the diffusion profiles in ln(C_av) versus y^(6/5) plots and deviations from "Le Claire's constant" (−0.78) are the major error sources (C_av = averaged concentration, y = coordinate in the diffusion direction). An improved relation (replacing Le Claire's constant) is suggested for analyzing diffusion profiles, particularly suited for small diffusion lengths (short times) as often required in diffusion experiments on nanocrystalline materials.
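    For context, a sketch of the conventional analysis that the paper refines, using one commonly quoted form of Le Claire's relation, D_GB·δ = 0.66·(−∂ln C_av/∂y^(6/5))^(−5/3)·(4·D_bulk/t)^(1/2). The profile, bulk diffusivity, and anneal time below are illustrative assumptions, not data from the paper.

```python
import numpy as np

y = np.linspace(0.5e-6, 5.0e-6, 30)   # depth below the surface, m (synthetic)
lnC_av = 2.0 - 4.0e7 * y ** 1.2       # synthetic tail of the averaged concentration profile
D_bulk, t = 1.0e-19, 86400.0          # bulk diffusivity (m^2/s) and anneal time (s), assumed

slope = np.polyfit(y ** 1.2, lnC_av, 1)[0]   # d lnC_av / d y^(6/5)
D_gb_delta = 0.66 * (-slope) ** (-5.0 / 3.0) * np.sqrt(4.0 * D_bulk / t)
print(f"D_GB * delta ~ {D_gb_delta:.2e} m^3/s")
```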

  1. RESIDENCE TIME DISTRIBUTION OF FLUIDS IN STIRRED ANNULAR PHOTOREACTORS

    EPA Science Inventory

    When gases flow through an annular photoreactor at constant rate, some of the gas spends more or less than the average residence time in the reactor. This spread of residence time can have an important effect on the performance of the reactor. This study tested how the residence...

  2. Pulse charging of lead-acid traction cells

    NASA Technical Reports Server (NTRS)

    Smithrick, J. J.

    1980-01-01

    Pulse charging, as a method of rapidly and efficiently charging 300 amp-hour lead-acid traction cells for an electric vehicle application was investigated. A wide range of charge pulse current square waveforms were investigated and the results were compared to constant current charging at the time averaged pulse current values. Representative pulse current waveforms were: (1) positive waveform-peak charge pulse current of 300 amperes (amps), discharge pulse-current of zero amps, and a duty cycle of about 50%; (2) Romanov waveform-peak charge pulse current of 300 amps, peak discharge pulse current of 15 amps, and a duty of 50%; and (3) McCulloch waveform peak charge pulse current of 193 amps, peak discharge pulse current of about 575 amps, and a duty cycle of 94%. Experimental results indicate that on the basis of amp-hour efficiency, pulse charging offered no significant advantage as a method of rapidly charging 300 amp-hour lead-acid traction cells when compared to constant current charging at the time average pulse current value. There were, however, some disadvantages of pulse charging in particular a decrease in charge amp-hour and energy efficiencies and an increase in cell electrolyte temperature. The constant current charge method resulted in the best energy efficiency with no significant sacrifice of charge time or amp-hour output. Whether or not pulse charging offers an advantage over constant current charging with regard to the cell charge/discharge cycle life is unknown at this time.

  3. COMPARISON OF 24H AVERAGE VOC MONITORING RESULTS FOR RESIDENTIAL INDOOR AND OUTDOOR AIR USING CARBOPACK X-FILLED DIFFUSIVE SAMPLERS AND ACTIVE SAMPLING - A PILOT STUDY

    EPA Science Inventory

    Analytical results obtained by thermal desorption GC/MS for 24h diffusive sampling of 11 volatile organic compounds (VOCs) are compared with results of time-averaged active sampling at a known constant flow rate. Air samples were collected with co-located duplicate diffusive samp...

  4. Soil Moisture Content Estimation using GPR Reflection Travel Time

    NASA Astrophysics Data System (ADS)

    Lunt, I. A.; Hubbard, S. S.; Rubin, Y.

    2003-12-01

    Ground-penetrating radar (GPR) reflection travel time data were used to estimate changes in soil water content under a range of soil saturation conditions throughout the growing season at a California winery. Data were collected during four data acquisition campaigns over an 80 by 180 m area using 100 MHz surface GPR antennae. GPR reflections were associated with a thin, low-permeability clay layer located between 0.8 and 1.3 m below the ground surface that was calibrated with borehole information and mapped across the study area. Field infiltration tests and neutron probe logs suggest that the thin clay layer inhibited vertical water flow, and was coincident with high volumetric water content (VWC) values. The GPR reflection two-way travel time and the depth of the reflector at borehole locations were used to calculate an average dielectric constant for soils above the reflector. A site-specific relationship between the dielectric constant and VWC was then used to estimate the depth-averaged VWC of the soils above the reflector. Compared to average VWC measurements from calibrated neutron probe logs over the same depth interval, the average VWC estimates obtained from GPR reflections had an RMS error of 2 percent. We also investigated the estimation of VWC using reflections associated with an advancing water front, and found that estimates of average VWC to the water front could be obtained with similar accuracy. These results suggested that the two-way travel time to a GPR reflection associated with a geological surface or wetting front can be used under natural conditions to obtain estimates of average water content when borehole control is available. The GPR reflection method therefore has potential for monitoring soil water content over large areas and under variable hydrological conditions.
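    A sketch of the travel-time workflow described above: the two-way travel time and the borehole-controlled reflector depth give an average dielectric constant, which a petrophysical relation converts to VWC. Topp's (1980) equation stands in here for the site-specific relationship actually used by the authors; all numbers are illustrative.

```python
c = 0.2998   # speed of light in vacuum, m/ns
twt = 28.0   # two-way travel time to the clay reflector, ns (illustrative)
depth = 1.1  # reflector depth from borehole control, m (illustrative)

velocity = 2.0 * depth / twt   # average radar velocity above the reflector, m/ns
kappa = (c / velocity) ** 2    # average dielectric constant
# Topp et al. (1980) empirical relation, used here in place of the site-specific one
vwc = -5.3e-2 + 2.92e-2 * kappa - 5.5e-4 * kappa ** 2 + 4.3e-6 * kappa ** 3
print(f"dielectric constant ~ {kappa:.1f}, depth-averaged VWC ~ {vwc:.3f} m^3/m^3")
```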

  5. Effects of correlations and fees in random multiplicative environments: Implications for portfolio management.

    PubMed

    Alper, Ofer; Somekh-Baruch, Anelia; Pirvandy, Oz; Schaps, Malka; Yaari, Gur

    2017-08-01

    Geometric Brownian motion (GBM) is frequently used to model price dynamics of financial assets, and a weighted average of multiple GBMs is commonly used to model a financial portfolio. Diversified portfolios can lead to increased exponential growth compared to a single asset by reducing the effective noise. The sum of GBM processes is no longer a log-normal process and has complex statistical properties. The nonergodicity of the weighted-average process results in constant degradation of the exponential growth from the ensemble average toward the time average. One way to stay closer to the ensemble average is to maintain a balanced portfolio: keep the relative weights of the different assets constant over time. To keep these proportions constant, whenever asset values change, it is necessary to rebalance their relative weights, exposing this strategy to fees (transaction costs). Two strategies that were suggested in the past for cases that involve fees are to rebalance the portfolio periodically and to rebalance it in a partial way. In this paper, we study these two strategies in the presence of correlations and fees. We show that using periodic and partial rebalance strategies, it is possible to maintain a steady exponential growth while minimizing the losses due to fees. We also demonstrate how well these redistribution strategies perform on real-world market data, despite the fact that not all assumptions of the model hold in these real-world systems. Our results have important implications for stochastic dynamics in general and for portfolio management in particular, as we show that there is a superior alternative to the common buy-and-hold strategy, even in the presence of correlations and fees.
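    A minimal simulation sketch of the comparison discussed above: two correlated GBM assets, buy-and-hold versus periodic rebalancing to equal weights with a proportional fee on the traded amount. The parameters, the monthly rebalancing interval, and the fee model are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, rho = 0.05, 0.4, 0.2             # drift, volatility, correlation (assumed)
dt, steps, fee = 1.0 / 252, 252 * 30, 1e-3  # daily steps, 30 years, 0.1% proportional fee
cov = sigma ** 2 * np.array([[1.0, rho], [rho, 1.0]]) * dt
log_ret = rng.multivariate_normal([(mu - 0.5 * sigma ** 2) * dt] * 2, cov, steps)
gross = np.exp(log_ret)                     # per-step gross returns of the two GBM assets

hold = np.array([0.5, 0.5])                 # buy-and-hold dollar positions
reb = np.array([0.5, 0.5])                  # periodically rebalanced positions
for k, r in enumerate(gross):
    hold *= r
    reb *= r
    if (k + 1) % 21 == 0:                   # rebalance back to 50/50 roughly once a month
        target = reb.sum() / 2.0
        traded = np.abs(reb - target).sum()
        reb = np.array([target, target]) - 0.5 * fee * traded  # fee charged on traded amount
print(f"buy-and-hold terminal wealth        : {hold.sum():.2f}")
print(f"rebalanced-with-fees terminal wealth: {reb.sum():.2f}")
```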

  6. Effects of correlations and fees in random multiplicative environments: Implications for portfolio management

    NASA Astrophysics Data System (ADS)

    Alper, Ofer; Somekh-Baruch, Anelia; Pirvandy, Oz; Schaps, Malka; Yaari, Gur

    2017-08-01

    Geometric Brownian motion (GBM) is frequently used to model price dynamics of financial assets, and a weighted average of multiple GBMs is commonly used to model a financial portfolio. Diversified portfolios can lead to increased exponential growth compared to a single asset by reducing the effective noise. The sum of GBM processes is no longer a log-normal process and has complex statistical properties. The nonergodicity of the weighted-average process results in constant degradation of the exponential growth from the ensemble average toward the time average. One way to stay closer to the ensemble average is to maintain a balanced portfolio: keep the relative weights of the different assets constant over time. To keep these proportions constant, whenever asset values change, it is necessary to rebalance their relative weights, exposing this strategy to fees (transaction costs). Two strategies that were suggested in the past for cases that involve fees are to rebalance the portfolio periodically and to rebalance it in a partial way. In this paper, we study these two strategies in the presence of correlations and fees. We show that using periodic and partial rebalance strategies, it is possible to maintain a steady exponential growth while minimizing the losses due to fees. We also demonstrate how well these redistribution strategies perform on real-world market data, despite the fact that not all assumptions of the model hold in these real-world systems. Our results have important implications for stochastic dynamics in general and for portfolio management in particular, as we show that there is a superior alternative to the common buy-and-hold strategy, even in the presence of correlations and fees.

  7. Annealed importance sampling with constant cooling rate

    NASA Astrophysics Data System (ADS)

    Giovannelli, Edoardo; Cardini, Gianni; Gellini, Cristina; Pietraperzia, Giangaetano; Chelli, Riccardo

    2015-02-01

    Annealed importance sampling is a simulation method devised by Neal [Stat. Comput. 11, 125 (2001)] to assign weights to configurations generated by simulated annealing trajectories. In particular, the equilibrium average of a generic physical quantity can be computed by a weighted average exploiting weights and estimates of this quantity associated to the final configurations of the annealed trajectories. Here, we review annealed importance sampling from the perspective of nonequilibrium path-ensemble averages [G. E. Crooks, Phys. Rev. E 61, 2361 (2000)]. The equivalence of Neal's and Crooks' treatments highlights the generality of the method, which goes beyond the mere thermal-based protocols. Furthermore, we show that a temperature schedule based on a constant cooling rate outperforms stepwise cooling schedules and that, for a given elapsed computer time, performances of annealed importance sampling are, in general, improved by increasing the number of intermediate temperatures.

  8. Light-Curing Volumetric Shrinkage in Dimethacrylate-Based Dental Composites by Nanoindentation and PAL Study.

    PubMed

    Shpotyuk, Olha; Adamiak, Stanislaw; Bezvushko, Elvira; Cebulski, Jozef; Iskiv, Maryana; Shpotyuk, Oleh; Balitska, Valentina

    2017-12-01

    Light-curing volumetric shrinkage in dimethacrylate-based dental resin composites Dipol® is examined through comprehensive kinetics research employing nanoindentation measurements and nanoscale atomic-deficiency study with lifetime spectroscopy of annihilating positrons. The photopolymerization kinetics determined through nanoindentation testing is shown to be described by a single-exponential relaxation function with characteristic time constants reaching 15.0 and 18.7 s for nanohardness and elastic modulus, respectively. Atomic-deficient characteristics of the composites are extracted from positron lifetime spectra parameterized employing unconstrained three-term fitting. The tested photopolymerization kinetics can be adequately reflected in the time-dependent changes observed in the average positron lifetime (with a 17.9 s time constant) and the fractional free volume of positronium traps (with an 18.6 s time constant). This correlation proves that fragmentation of free-volume positronium-trapping sites accompanied by partial positronium-to-positron trap conversion determines the light-curing volumetric shrinkage in the studied composites.

  9. Estimating average annual per cent change in trend analysis

    PubMed Central

    Clegg, Limin X; Hankey, Benjamin F; Tiwari, Ram; Feuer, Eric J; Edwards, Brenda K

    2009-01-01

    Trends in incidence or mortality rates over a specified time interval are usually described by the conventional annual per cent change (cAPC), under the assumption of a constant rate of change. When this assumption does not hold over the entire time interval, the trend may be characterized using the annual per cent changes from segmented analysis (sAPCs). This approach assumes that the change in rates is constant over each time partition defined by the transition points, but varies among different time partitions. Different groups (e.g. racial subgroups), however, may have different transition points and thus different time partitions over which they have constant rates of change, making comparison of sAPCs problematic across groups over a common time interval of interest (e.g. the past 10 years). We propose a new measure, the average annual per cent change (AAPC), which uses sAPCs to summarize and compare trends for a specific time period. The advantage of the proposed AAPC is that it takes into account the trend transitions, whereas cAPC does not and can lead to erroneous conclusions. In addition, when the trend is constant over the entire time interval of interest, the AAPC has the advantage of reducing to both cAPC and sAPC. Moreover, because the estimated AAPC is based on the segmented analysis over the entire data series, any selected subinterval within a single time partition will yield the same AAPC estimate—that is it will be equal to the estimated sAPC for that time partition. The cAPC, however, is re-estimated using data only from that selected subinterval; thus, its estimate may be sensitive to the subinterval selected. The AAPC estimation has been incorporated into the segmented regression (free) software Joinpoint, which is used by many registries throughout the world for characterizing trends in cancer rates. Copyright © 2009 John Wiley & Sons, Ltd. PMID:19856324
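    A small worked example of the AAPC construction described above: segment APCs are converted back to log-scale slopes, averaged with segment lengths as weights, and re-expressed as a percent change. The segment lengths and sAPC values are made up, not registry output.

```python
import math

segments = [(4.0, 1.8), (3.0, -0.5), (3.0, 2.5)]   # (segment length in years, sAPC in %)

weighted_log_slope = sum(L * math.log1p(sapc / 100.0) for L, sapc in segments)
total_years = sum(L for L, _ in segments)
aapc = 100.0 * (math.exp(weighted_log_slope / total_years) - 1.0)
print(f"AAPC over the {total_years:.0f}-year interval: {aapc:.2f}% per year")
```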

  10. Determination of the attractive force, adhesive force, adhesion energy and Hamaker constant of soot particles generated from a premixed methane/oxygen flame by AFM

    NASA Astrophysics Data System (ADS)

    Liu, Ye; Song, Chonglin; Lv, Gang; Chen, Nan; Zhou, Hua; Jing, Xiaojun

    2018-03-01

    Atomic force microscopy (AFM) was used to characterize the attractive force, adhesive force and adhesion energy between an AFM probe tip and nanometric soot particles generated by a premixed methane/oxygen flame. Different attractive force distributions were found when increasing the height above burner (HAB), with forces ranging from 1.1 to 3.5 nN. As the HAB was increased, the average attractive force initially increased, briefly decreased, and then underwent a gradual increase, with a maximum of 2.54 nN observed at HAB = 25 mm. The mean adhesive force was 6.5-7.5 times greater than the mean attractive force at the same HAB, and values were in the range of 13.5-24.5 nN. The adhesion energy was in the range of 2.0-5.6 × 10⁻¹⁷ J. The variations observed in the average adhesion energy with increasing HAB were different from those of the average adhesion force, implying that the stretched length of soot particles is an important factor affecting the average adhesion energy. The Hamaker constants of the soot particles generated at different HABs were determined from AFM force-separation curves. The average Hamaker constant exhibited a clear correlation with the graphitization degree of the soot particles as obtained from Raman spectroscopy.

  11. Calculating Time-Integral Quantities in Depletion Calculations

    DOE PAGES

    Isotalo, Aarno

    2016-06-02

    A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
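    A sketch of the tally-nuclide idea as I read it: append a pseudo-nuclide whose production rate is the weighted sum of atomic densities, so that its end-of-step value is the time integral of that sum (divide by the step length for the step average). The two-nuclide chain, weights, and step length are illustrative, and a matrix exponential stands in for the Chebyshev rational approximation solver.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.3, 0.0],    # toy chain: nuclide 0 decays into nuclide 1 (units: 1/time)
              [0.3, -0.1]])
w = np.array([2.0, 5.0])      # constant weights defining the tallied quantity w.N
n0 = np.array([1.0, 0.0])     # beginning-of-step atomic densities
dt = 10.0                     # step length

A_aug = np.zeros((3, 3))      # augmented system: d(tally)/dt = w.N, tally(0) = 0
A_aug[:2, :2] = A
A_aug[2, :2] = w
n_aug = expm(A_aug * dt) @ np.append(n0, 0.0)

print("end-of-step densities:", np.round(n_aug[:2], 4))
print("step average of w.N  :", round(n_aug[2] / dt, 4))
```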

  12. Diffusion in biased turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlad, M.; Spineanu, F.; Misguich, J. H.

    2001-06-01

    Particle transport in two-dimensional divergence-free stochastic velocity fields with constant average is studied. Analytical expressions for the Lagrangian velocity correlation and for the time-dependent diffusion coefficients are obtained. They apply to stationary and homogeneous Gaussian velocity fields.

  13. Modeling the influence of preferential flow on the spatial variability and time-dependence of mineral weathering rates

    DOE PAGES

    Pandey, Sachin; Rajaram, Harihar

    2016-12-05

    Inferences of weathering rates from laboratory and field observations suggest significant scale and time dependence. Preferential flow induced by heterogeneity (manifest as permeability variations or discrete fractures) has been suggested as one potential mechanism causing scale/time dependence. In this paper, we present a quantitative evaluation of the influence of preferential flow on weathering rates using reactive transport modeling. Simulations were performed in discrete fracture networks (DFNs) and correlated random permeability fields (CRPFs), and compared to simulations in homogeneous permeability fields. The simulations reveal spatial variability in the weathering rate, multidimensional distribution of reaction zones, and the formation of rough weathering interfaces and corestones due to preferential flow. In the homogeneous fields and CRPFs, the domain-averaged weathering rate is initially constant as long as the weathering front is contained within the domain, reflecting equilibrium-controlled behavior. The behavior in the CRPFs was influenced by macrodispersion, with more spread-out weathering profiles, an earlier departure from the initial constant rate and longer persistence of weathering. DFN simulations exhibited a sustained time-dependence resulting from the formation of diffusion-controlled weathering fronts in matrix blocks, which is consistent with the shrinking core mechanism. A significant decrease in the domain-averaged weathering rate is evident despite high remaining mineral volume fractions, but the decline does not follow the time dependence characteristic of diffusion, due to network-scale effects and advection-controlled behavior near the inflow boundary. Finally, the DFN simulations also reveal relatively constant horizontally averaged weathering rates over a significant depth range, challenging the very notion of a weathering front.

  14. Modeling the influence of preferential flow on the spatial variability and time-dependence of mineral weathering rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandey, Sachin; Rajaram, Harihar

    Inferences of weathering rates from laboratory and field observations suggest significant scale and time dependence. Preferential flow induced by heterogeneity (manifest as permeability variations or discrete fractures) has been suggested as one potential mechanism causing scale/time dependence. In this paper, we present a quantitative evaluation of the influence of preferential flow on weathering rates using reactive transport modeling. Simulations were performed in discrete fracture networks (DFNs) and correlated random permeability fields (CRPFs), and compared to simulations in homogeneous permeability fields. The simulations reveal spatial variability in the weathering rate, multidimensional distribution of reaction zones, and the formation of rough weathering interfaces and corestones due to preferential flow. In the homogeneous fields and CRPFs, the domain-averaged weathering rate is initially constant as long as the weathering front is contained within the domain, reflecting equilibrium-controlled behavior. The behavior in the CRPFs was influenced by macrodispersion, with more spread-out weathering profiles, an earlier departure from the initial constant rate and longer persistence of weathering. DFN simulations exhibited a sustained time-dependence resulting from the formation of diffusion-controlled weathering fronts in matrix blocks, which is consistent with the shrinking core mechanism. A significant decrease in the domain-averaged weathering rate is evident despite high remaining mineral volume fractions, but the decline does not follow the time dependence characteristic of diffusion, due to network-scale effects and advection-controlled behavior near the inflow boundary. Finally, the DFN simulations also reveal relatively constant horizontally averaged weathering rates over a significant depth range, challenging the very notion of a weathering front.

  15. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

    2018-05-01

    A simple laser wavelength calibration technique, based on the second harmonic signal, is demonstrated in this paper to improve the performance of a quartz-enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g., its signal-to-noise ratio (SNR), detection limit and long-term stability. A constant current corresponding to the gas absorption line, combined with an f/2-frequency sinusoidal signal, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift due to ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output is modulated steadily at the gas absorption line. Water vapor is chosen as the target gas to evaluate its performance compared to the constant driving mode and a conventional WMS system. The water vapor sensor was designed to be insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR increases 12.87-fold in the wavelength-calibration-based system compared to the conventional WMS system. The new system achieved a better linear response (R² = 0.9995) in the concentration range from 300 to 2000 ppmv, and achieved a minimum detection limit (MDL) of 630 ppbv.

  16. An optimal policy for deteriorating items with time-proportional deterioration rate and constant and time-dependent linear demand rate

    NASA Astrophysics Data System (ADS)

    Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu

    2017-12-01

    In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) The demand rate is deterministic and two-staged, i.e., it is constant in the first part of the cycle and a linear function of time in the second part. (ii) The deterioration rate is time-proportional. (iii) Shortages are not allowed to occur. The optimal cycle time and the optimal order quantity have been derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and a sensitivity analysis of various parameters as illustrations of the theoretical results.

  17. Problems encountered in fluctuating flame temperature measurements by thermocouple.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaldson, A. Burl; Lucero, Ralph E.; Gill, Walter

    2008-11-01

    Some thermocouple experiments were carried out in order to obtain sensitivity of thermocouple readings to fluctuations in flames and to determine if the average thermocouple reading was representative of the local volume temperature for fluctuating flames. The thermocouples considered were an exposed junction thermocouple and a fully sheathed thermocouple with comparable time constants. Either the voltage signal or indicated temperature for each test was recorded at sampling rates between 300-4,096 Hz. The trace was then plotted with respect to time or sample number so that time variation in voltage or temperature could be visualized and the average indicated temperature could be determined. For experiments where high sampling rates were used, the signal was analyzed using Fast Fourier Transforms (FFT) to determine the frequencies present in the thermocouple signal. This provided a basic observable as to whether or not the probe was able to follow flame oscillations. To enhance oscillations, for some experiments, the flame was forced. An analysis based on thermocouple time constant, coupled with the transfer function for a sinusoidal input was tested against the experimental results.
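    The closing sentence refers to the standard first-order response of a sensor with time constant τ: a sinusoidal input at frequency f is attenuated by 1/sqrt(1 + (2πfτ)²) and lags by arctan(2πfτ). A short sketch with assumed (not measured) τ and frequencies:

```python
import math

tau = 0.05                     # s, assumed thermocouple time constant
for f in (1.0, 10.0, 100.0):   # flame fluctuation frequencies, Hz (assumed)
    w_tau = 2.0 * math.pi * f * tau
    gain = 1.0 / math.sqrt(1.0 + w_tau ** 2)   # amplitude attenuation of the fluctuation
    lag_deg = math.degrees(math.atan(w_tau))   # phase lag of the indicated temperature
    print(f"f = {f:5.1f} Hz: amplitude ratio {gain:.3f}, phase lag {lag_deg:5.1f} deg")
```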

  18. Problems Encountered in Fluctuating Flame Temperature Measurements by Thermocouple

    PubMed Central

    Yilmaz, Nadir; Gill, Walt; Donaldson, A. Burl; Lucero, Ralph E.

    2008-01-01

    Some thermocouple experiments were carried out in order to obtain sensitivity of thermocouple readings to fluctuations in flames and to determine if the average thermocouple reading was representative of the local volume temperature for fluctuating flames. The thermocouples considered were an exposed junction thermocouple and a fully sheathed thermocouple with comparable time constants. Either the voltage signal or indicated temperature for each test was recorded at sampling rates between 300-4,096 Hz. The trace was then plotted with respect to time or sample number so that time variation in voltage or temperature could be visualized and the average indicated temperature could be determined. For experiments where high sampling rates were used, the signal was analyzed using Fast Fourier Transforms (FFT) to determine the frequencies present in the thermocouple signal. This provided a basic observable as to whether or not the probe was able to follow flame oscillations. To enhance oscillations, for some experiments, the flame was forced. An analysis based on thermocouple time constant, coupled with the transfer function for a sinusoidal input was tested against the experimental results. PMID:27873964

  19. Problems Encountered in Fluctuating Flame Temperature Measurements by Thermocouple.

    PubMed

    Yilmaz, Nadir; Gill, Walt; Donaldson, A Burl; Lucero, Ralph E

    2008-12-04

    Some thermocouple experiments were carried out in order to obtain sensitivity of thermocouple readings to fluctuations in flames and to determine if the average thermocouple reading was representative of the local volume temperature for fluctuating flames. The thermocouples considered were an exposed junction thermocouple and a fully sheathed thermocouple with comparable time constants. Either the voltage signal or indicated temperature for each test was recorded at sampling rates between 300-4,096 Hz. The trace was then plotted with respect to time or sample number so that time variation in voltage or temperature could be visualized and the average indicated temperature could be determined. For experiments where high sampling rates were used, the signal was analyzed using Fast Fourier Transforms (FFT) to determine the frequencies present in the thermocouple signal. This provided a basic observable as to whether or not the probe was able to follow flame oscillations. To enhance oscillations, for some experiments, the flame was forced. An analysis based on thermocouple time constant, coupled with the transfer function for a sinusoidal input was tested against the experimental results.

  20. Estimating Energy Conversion Efficiency of Thermoelectric Materials: Constant Property Versus Average Property Models

    NASA Astrophysics Data System (ADS)

    Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt

    2017-01-01

    Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
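    The constant property model referenced above uses the textbook expression η_max = (1 − T_c/T_h)·(√(1+ZT̄) − 1)/(√(1+ZT̄) + T_c/T_h), with ZT̄ evaluated at the mean operating temperature. A quick sketch with illustrative ZT values and temperatures (not the 18 materials studied):

```python
import math

def eta_max(zt_avg, t_hot, t_cold):
    """Constant-property maximum conversion efficiency for a thermoelectric leg."""
    s = math.sqrt(1.0 + zt_avg)
    return (1.0 - t_cold / t_hot) * (s - 1.0) / (s + t_cold / t_hot)

print(f"ZT = 1, 300 K -> 500 K: {100.0 * eta_max(1.0, 500.0, 300.0):.1f}% efficiency")
print(f"ZT = 2, 300 K -> 800 K: {100.0 * eta_max(2.0, 800.0, 300.0):.1f}% efficiency")
```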

  1. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses basic survival model estimates used to obtain the average predicted failure time of lamps. The estimate is for a parametric model, the General Composite Hazard Rate Model. The random time variable is modeled with the exponential distribution, as the base distribution, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative function. The fitted model is then used to predict the average failure time for the type of lamp. The data are grouped into several intervals, the average failure value is computed for each interval, and the average failure time of the model is then calculated per interval; the p value obtained from the test is 0.3296.
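
    The exponential base model mentioned in the abstract has a constant hazard lambda, survival function S(t) = exp(-lambda*t), and mean failure time 1/lambda. The following minimal sketch shows that baseline calculation only, with hypothetical lamp lifetimes; it does not implement the paper's composite hazard construction.

      import numpy as np

      # Hypothetical lamp failure times in hours (complete, uncensored data).
      failure_times = np.array([820.0, 950.0, 1100.0, 1300.0, 1520.0])

      lam_hat = 1.0 / failure_times.mean()       # MLE of the exponential hazard rate
      mean_ttf = 1.0 / lam_hat                   # predicted average failure time
      survival = lambda t: np.exp(-lam_hat * t)  # fitted survival function S(t)
      print(mean_ttf, survival(1000.0))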

  2. Fast shuttling of a particle under weak spring-constant noise of the moving trap

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Jing; Ruschhaupt, A.; Muga, J. G.

    2018-05-01

    We investigate the excitation of a quantum particle shuttled in a harmonic trap with weak spring-constant colored noise. The Ornstein-Uhlenbeck model for the noise correlation function describes a wide range of possible noises, in particular, for short correlation times, the white-noise limit examined by Lu et al. [Phys. Rev. A 89, 063414 (2014)], 10.1103/PhysRevA.89.063414 and, by averaging over correlation times, "1/f flicker noise." We find expressions for the excitation energy in terms of static (independent of trap motion) and dynamical sensitivities, with opposite behavior with respect to shuttling time, and demonstrate that the excitation can be reduced by proper process timing and design of the trap trajectory.

  3. LR: Compact connectivity representation for triangle meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurung, T; Luffel, M; Lindstrom, P

    2011-01-28

    We propose LR (Laced Ring) - a simple data structure for representing the connectivity of manifold triangle meshes. LR provides the option to store on average either 1.08 references per triangle or 26.2 bits per triangle. Its construction, from an input mesh that supports constant-time adjacency queries, has linear space and time complexity, and involves ordering most vertices along a nearly-Hamiltonian cycle. LR is best suited for applications that process meshes with fixed connectivity, as any changes to the connectivity require the data structure to be rebuilt. We provide an implementation of the set of standard random-access, constant-time operators for traversing a mesh, and show that LR often saves both space and traversal time over competing representations.

  4. Soil moisture content estimation using ground-penetrating radar reflection data

    NASA Astrophysics Data System (ADS)

    Lunt, I. A.; Hubbard, S. S.; Rubin, Y.

    2005-06-01

    Ground-penetrating radar (GPR) reflection travel time data were used to estimate changes in soil water content under a range of soil saturation conditions throughout the growing season at a California winery. Data were collected during three data acquisition campaigns over an 80 by 180 m area using 100 MHz surface GPR antennas. GPR reflections were associated with a thin, low permeability clay layer located 0.8-1.3 m below the ground surface that was identified from borehole information and mapped across the study area. Field infiltration tests and neutron probe logs suggest that the thin clay layer inhibited vertical water flow, and was coincident with high volumetric water content (VWC) values. The GPR reflection two-way travel time and the depth of the reflector at the borehole locations were used to calculate an average dielectric constant for soils above the reflector. A site-specific relationship between the dielectric constant and VWC was then used to estimate the depth-averaged VWC of the soils above the reflector. Compared to average VWC measurements from calibrated neutron probe logs over the same depth interval, the average VWC estimates obtained from GPR reflections had an RMS error of 0.018 m^3 m^-3. These results suggested that the two-way travel time to a GPR reflection associated with a geological surface could be used under natural conditions to obtain estimates of average water content when borehole control is available and the reflection strength is sufficient. The GPR reflection method, therefore, has potential for monitoring soil water content over large areas and under variable hydrological conditions.
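
    The travel-time-to-water-content chain described above can be sketched in a few lines: the reflector depth and two-way travel time give an average velocity, the velocity gives an average dielectric constant, and a petrophysical relation maps that to VWC. The Topp et al. (1980) polynomial below is only an illustrative stand-in for the site-specific relationship used in the study, and the numbers are hypothetical.

      C_LIGHT = 0.2998  # speed of light in vacuum, m/ns

      def dielectric_from_twt(two_way_time_ns, depth_m):
          """Average dielectric constant of the soil above a reflector from the
          two-way travel time and reflector depth (low-loss assumption)."""
          velocity = 2.0 * depth_m / two_way_time_ns     # m/ns
          return (C_LIGHT / velocity) ** 2

      def vwc_topp(kappa):
          """Topp et al. (1980) polynomial, standing in for the site-specific
          dielectric-constant-to-VWC relationship of the paper."""
          return -5.3e-2 + 2.92e-2 * kappa - 5.5e-4 * kappa**2 + 4.3e-6 * kappa**3

      kappa = dielectric_from_twt(two_way_time_ns=20.0, depth_m=1.0)  # hypothetical pick
      print(kappa, vwc_topp(kappa))   # ~9.0 and ~0.17 m^3 m^-3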

  5. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    PubMed

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between the model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.

  6. Application of growth-phase based light-feeding strategies to simultaneously enhance Chlorella vulgaris growth and lipid accumulation.

    PubMed

    Sun, Yahui; Liao, Qiang; Huang, Yun; Xia, Ao; Fu, Qian; Zhu, Xun; Fu, Jingwei; Li, Jun

    2018-05-01

    Considering the variations of optimal light intensity required by microalgae cells along with growth phases, growth-phase based light-feeding strategies were proposed and verified in this paper, aiming at boosting microalgae lipid productivity from the perspective of light-condition optimization. Experimental results demonstrate that under an identical time-averaged light intensity, the light-feeding strategies characterized by stepwise incremental light intensities showed a positive effect on biomass and lipid accumulation. The lipid productivity (235.49 mg L^-1 d^-1) attained under light-feeding strategy V (time-averaged light intensity: 225 μmol m^-2 s^-1) was 52.38% higher than that obtained under a constant light intensity of 225 μmol m^-2 s^-1. Subsequently, based on light-feeding strategy V, microalgae lipid productivity was further elevated to 312.92 mg L^-1 d^-1 employing a two-stage based light-feeding strategy V 560 (time-averaged light intensity: 360 μmol m^-2 s^-1), which was 79.63% higher relative to that achieved under a constant light intensity of 360 μmol m^-2 s^-1. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. 14 CFR Appendix N to Part 25 - Fuel Tank Flammability Exposure and Reliability Analysis

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the performance of a flammability reduction means (FRM) if installed. (c) The following definitions... average fuel temperature within the fuel tank or different sections of the tank if the tank is subdivided... the flight time, and the post-flight time is a constant 30 minutes. (c) Flammable. With respect to a...

  8. 14 CFR Appendix N to Part 25 - Fuel Tank Flammability Exposure and Reliability Analysis

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the performance of a flammability reduction means (FRM) if installed. (c) The following definitions... average fuel temperature within the fuel tank or different sections of the tank if the tank is subdivided... the flight time, and the post-flight time is a constant 30 minutes. (c) Flammable. With respect to a...

  9. An approximate method for solution to variable moment of inertia problems

    NASA Technical Reports Server (NTRS)

    Beans, E. W.

    1981-01-01

    An approximation method is presented for reducing a nonlinear differential equation (for the 'weather vaning' motion of a wind turbine) to an equivalent constant moment of inertia problem. The integrated average of the moment of inertia is determined. The cycle time was found to equal that of the equivalent constant-inertia problem if the rotating speed is 4 times greater than the system's minimum natural frequency.

  10. Application handbook for a Standardized Control Module (SCM) for DC-DC converters, volume 1

    NASA Astrophysics Data System (ADS)

    Lee, F. C.; Mahmoud, M. F.; Yu, Y.

    1980-04-01

    The standardized control module (SCM) was developed for application in the buck, boost and buck/boost DC-DC converters. The SCM used multiple feedback loops to provide improved input line and output load regulation, a stable feedback control system, good dynamic transient response and adaptive compensation of the control loop for changes in open loop gain and output filter time constants. The necessary modeling and analysis tools to aid the design engineer in the application of the SCM to DC-DC converters were developed. The SCM functional block diagram and the different analysis techniques were examined. The average time domain analysis technique was chosen as the basic analytical tool. The power stage transfer functions were developed for the buck, boost and buck/boost converters. The analog signal and digital signal processor transfer functions were developed for the three DC-DC converter types using the constant on-time, constant off-time and constant frequency control laws.
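
    The "average time domain analysis" named above replaces the switching waveforms with duty-cycle-weighted averages. As a minimal illustration of that general idea (not the SCM handbook's own equations), the sketch below integrates the standard state-space-averaged buck power stage in continuous conduction; the component values are arbitrary.

      def averaged_buck_step(i_L, v_C, duty, V_in, L, C, R, dt):
          """One explicit-Euler step of the duty-cycle-averaged buck power stage:
          L di/dt = d*Vin - v,  C dv/dt = i - v/R  (continuous conduction)."""
          di = (duty * V_in - v_C) / L
          dv = (i_L - v_C / R) / C
          return i_L + dt * di, v_C + dt * dv

      # Illustrative run: 12 V input, 50% duty cycle, 100 uH, 100 uF, 5 ohm load.
      i, v = 0.0, 0.0
      for _ in range(200000):
          i, v = averaged_buck_step(i, v, 0.5, 12.0, 100e-6, 100e-6, 5.0, 1e-7)
      print(v)  # settles near duty * V_in = 6 V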

  11. Application handbook for a Standardized Control Module (SCM) for DC-DC converters, volume 1

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Mahmoud, M. F.; Yu, Y.

    1980-01-01

    The standardized control module (SCM) was developed for application in the buck, boost and buck/boost DC-DC converters. The SCM used multiple feedback loops to provide improved input line and output load regulation, a stable feedback control system, good dynamic transient response and adaptive compensation of the control loop for changes in open loop gain and output filter time constants. The necessary modeling and analysis tools to aid the design engineer in the application of the SCM to DC-DC converters were developed. The SCM functional block diagram and the different analysis techniques were examined. The average time domain analysis technique was chosen as the basic analytical tool. The power stage transfer functions were developed for the buck, boost and buck/boost converters. The analog signal and digital signal processor transfer functions were developed for the three DC-DC converter types using the constant on-time, constant off-time and constant frequency control laws.

  12. Practical Algorithms for the Longest Common Extension Problem

    NASA Astrophysics Data System (ADS)

    Ilie, Lucian; Tinta, Liviu

    The Longest Common Extension problem considers a string s and computes, for each of a number of pairs (i,j), the longest substring of s that starts at both i and j. It appears as a subproblem in many fundamental string problems and can be solved by linear-time preprocessing of the string that allows (worst-case) constant-time computation for each pair. The two known approaches use powerful algorithms: either constant-time computation of the Lowest Common Ancestor in trees or constant-time computation of Range Minimum Queries (RMQ) in arrays. We show here that, from a practical point of view, such complicated approaches are not needed. We give two very simple algorithms for this problem that require no preprocessing. The first needs only the string and is significantly faster than all previous algorithms on average. The second combines the first with a direct RMQ computation on the Longest Common Prefix array. It takes advantage of the superior speed of the cache memory and is the fastest on virtually all inputs.
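
    The first of the two algorithms is essentially a direct scan with no preprocessing; a minimal sketch of that idea is shown below (the RMQ-based second variant is not reproduced).

      def lce_direct(s, i, j):
          """Longest common extension of s at positions i and j by direct
          character comparison; no preprocessing, O(answer) time per query."""
          k = 0
          n = len(s)
          while i + k < n and j + k < n and s[i + k] == s[j + k]:
              k += 1
          return k

      print(lce_direct("abracadabra", 0, 7))  # 4, the common prefix "abra"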

  13. Hemodynamic response to exercise and head-up tilt of patients implanted with a rotary blood pump: a computational modeling study.

    PubMed

    Lim, Einly; Salamonsen, Robert Francis; Mansouri, Mahdi; Gaddum, Nicholas; Mason, David Glen; Timms, Daniel L; Stevens, Michael Charles; Fraser, John; Akmeliawati, Rini; Lovell, Nigel Hamilton

    2015-02-01

    The present study investigates the response of implantable rotary blood pump (IRBP)-assisted patients to exercise and head-up tilt (HUT), as well as the effect of alterations in the model parameter values on this response, using validated numerical models. Furthermore, we comparatively evaluate the performance of a number of previously proposed physiologically responsive controllers, including constant speed, constant flow pulsatility index (PI), constant average pressure difference between the aorta and the left atrium, constant average differential pump pressure, constant ratio between mean pump flow and pump flow pulsatility (ratioPI or linear Starling-like control), as well as constant mean left atrial pressure (P̄la) control, with regard to their ability to increase cardiac output during exercise while maintaining circulatory stability upon HUT. Although native cardiac output increases automatically during exercise, increasing pump speed was able to further improve total cardiac output and reduce elevated filling pressures. At the same time, reduced venous return associated with upright posture was not shown to induce left ventricular (LV) suction. Although P̄la control outperformed other control modes in its ability to increase cardiac output during exercise, it caused a fall in the mean arterial pressure upon HUT, which may cause postural hypotension or patient discomfort. To the contrary, maintaining a constant average pressure difference between the aorta and the left atrium demonstrated superior performance in both exercise and HUT scenarios. Due to their strong dependence on the pump operating point, PI and ratioPI control performed poorly during exercise and HUT. Our simulation results also highlighted the importance of the baroreflex mechanism in determining the response of the IRBP-assisted patients to exercise and postural changes, where a desensitized reflex response attenuated the percentage increase in cardiac output during exercise and substantially reduced the arterial pressure upon HUT. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  14. Ultrasonic measurements of breast viscoelasticity.

    PubMed

    Sridhar, Mallika; Insana, Michael F

    2007-12-01

    In vivo measurements of the viscoelastic properties of breast tissue are described. Ultrasonic echo frames were recorded from volunteers at 5 fps while applying a uniaxial compressive force (1-20 N) within a 1 s ramp time and holding the force constant for up to 200 s. A time series of strain images was formed from the echo data, spatially averaged viscous creep curves were computed, and viscoelastic strain parameters were estimated by fitting creep curves to a second-order Voigt model. The useful strain bandwidth from this quasi-static ramp stimulus was 10^-2 ≤ ω ≤ 10^0 rad/s (0.0016-0.16 Hz). The stress-strain curves for normal glandular tissues are linear when the surface force applied is between 2 and 5 N. In this range, the creep response was characteristic of biphasic viscoelastic polymers, settling to a constant strain (arrheodictic) after 100 s. The average model-based retardance time constants for the viscoelastic response were 3.2 ± 0.8 and 42.0 ± 28 s. Also, the viscoelastic strain amplitude was approximately equal to that of the elastic strain. Above 5 N of applied force, however, the response of glandular tissue became increasingly nonlinear and rheodictic, i.e., tissue creep never reached a plateau. Contrasting in vivo breast measurements with those in gelatin hydrogels, preliminary ideas regarding the mechanisms for viscoelastic contrast are emerging.
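
    The creep fitting step can be sketched with a second-order Voigt (Kelvin) response: an elastic term plus two retarded exponentials. The data below are synthetic, generated with retardance times close to the averages reported in the abstract, purely to show the fitting procedure.

      import numpy as np
      from scipy.optimize import curve_fit

      def voigt2_creep(t, eps0, eps1, tau1, eps2, tau2):
          """Second-order Voigt creep under a step load: instantaneous strain
          plus two retarded exponential terms with time constants tau1, tau2."""
          return eps0 + eps1 * (1 - np.exp(-t / tau1)) + eps2 * (1 - np.exp(-t / tau2))

      t = np.linspace(0.0, 200.0, 400)                     # hold time, s
      rng = np.random.default_rng(0)
      data = voigt2_creep(t, 0.02, 0.01, 3.2, 0.012, 42.0) + rng.normal(0, 2e-4, t.size)

      popt, _ = curve_fit(voigt2_creep, t, data, p0=[0.02, 0.01, 2.0, 0.01, 30.0])
      print(popt[2], popt[4])   # recovered retardance time constants, ~3.2 s and ~42 s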

  15. Ultrasonic measurements of breast viscoelasticity

    PubMed Central

    Sridhar, Mallika; Insana, Michael F.

    2009-01-01

    In vivo measurements of the viscoelastic properties of breast tissue are described. Ultrasonic echo frames were recorded from volunteers at 5 fps while applying a uniaxial compressive force (1–20 N) within a 1 s ramp time and holding the force constant for up to 200 s. A time series of strain images was formed from the echo data, spatially averaged viscous creep curves were computed, and viscoelastic strain parameters were estimated by fitting creep curves to a second-order Voigt model. The useful strain bandwidth from this quasi-static ramp stimulus was 10^−2 ≤ ω ≤ 10^0 rad/s (0.0016–0.16 Hz). The stress-strain curves for normal glandular tissues are linear when the surface force applied is between 2 and 5 N. In this range, the creep response was characteristic of biphasic viscoelastic polymers, settling to a constant strain (arrheodictic) after 100 s. The average model-based retardance time constants for the viscoelastic response were 3.2±0.8 and 42.0±28 s. Also, the viscoelastic strain amplitude was approximately equal to that of the elastic strain. Above 5 N of applied force, however, the response of glandular tissue became increasingly nonlinear and rheodictic, i.e., tissue creep never reached a plateau. Contrasting in vivo breast measurements with those in gelatin hydrogels, preliminary ideas regarding the mechanisms for viscoelastic contrast are emerging. PMID:18196803

  16. Nonlinear conductivity of a holographic superconductor under constant electric field

    NASA Astrophysics Data System (ADS)

    Zeng, Hua Bi; Tian, Yu; Fan, Zheyong; Chen, Chiang-Mei

    2017-02-01

    The dynamics of a two-dimensional superconductor under a constant electric field E is studied by using the gauge-gravity correspondence. The pair breaking current induced by E first increases to a peak value and then decreases to a constant value at late times, where the superconducting gap goes to zero, corresponding to a normal conducting phase. The peak value of the current is found to increase linearly with respect to the electric field. Moreover, the nonlinear conductivity, defined as an average of the conductivity in the superconducting phase, scales as ~E^(-2/3) when the system is close to the critical temperature Tc, which agrees with predictions from solving the time-dependent Ginzburg-Landau equation. Away from Tc, the E^(-2/3) scaling of the conductivity still holds when E is large.

  17. Effects of air temperature and velocity on the drying kinetics and product particle size of starch from arrowroot (Maranta arundinacae)

    NASA Astrophysics Data System (ADS)

    Caparanga, Alvin R.; Reyes, Rachael Anne L.; Rivas, Reiner L.; De Vera, Flordeliza C.; Retnasamy, Vithyacharan; Aris, Hasnizah

    2017-11-01

    This study utilized a 3^k factorial design with k = 2 varying factors, namely temperature and air velocity. The effects of temperature and air velocity on the drying rate curves and on the average particle diameter of the arrowroot starch were investigated. Extracted arrowroot starch samples were dried based on the designed parameters until constant weight was obtained. The resulting initial moisture content of the arrowroot starch was 49.4%. Higher temperatures corresponded to higher drying rates and shorter drying times, while the effect of air velocity was approximately negligible. Drying rate is a function of temperature and time. The constant-rate period was not observed in the drying of arrowroot starch. The drying curves were fitted against five mathematical models: Lewis, Page, Henderson and Pabis, Logarithmic and Midilli. The Midilli model was the best fit for the experimental data since it yielded the highest R^2 and the lowest RMSE values for all runs. Scanning electron microscopy (SEM) was used for qualitative analysis and for determination of the average particle diameter of the starch granules. The starch granules had average particle diameters in the range 12.06-24.60 μm. ANOVA showed that the particle diameters differed significantly between runs, and the Taguchi design showed that high temperatures yield a lower average particle diameter, while high air velocities yield a higher average particle diameter.
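
    The thin-layer model that fit best is commonly written as MR(t) = a*exp(-k*t^n) + b*t. The sketch below fits that form to hypothetical moisture-ratio data with scipy; it only illustrates the curve-fitting step, not the study's actual measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def midilli(t, a, k, n, b):
          """Midilli-type thin-layer drying model: MR(t) = a*exp(-k*t**n) + b*t."""
          return a * np.exp(-k * t**n) + b * t

      # Hypothetical drying run: time in minutes, moisture ratio MR = (M-Me)/(M0-Me).
      t = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0, 180.0])
      mr = np.array([1.00, 0.78, 0.62, 0.41, 0.28, 0.17, 0.11, 0.05])

      popt, _ = curve_fit(midilli, t, mr, p0=[1.0, 0.05, 1.0, 0.0],
                          bounds=([0.5, 1e-4, 0.1, -0.01], [1.5, 1.0, 3.0, 0.01]))
      print(dict(zip(["a", "k", "n", "b"], popt)))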

  18. Macroscopic neural mass model constructed from a current-based network model of spiking neurons.

    PubMed

    Umehara, Hiroaki; Okada, Masato; Teramae, Jun-Nosuke; Naruse, Yasushi

    2017-02-01

    Neural mass models (NMMs) are efficient frameworks for describing macroscopic cortical dynamics including electroencephalogram and magnetoencephalogram signals. Originally, these models were formulated on an empirical basis of synaptic dynamics with relatively long time constants. By clarifying the relations between NMMs and the dynamics of microscopic structures such as neurons and synapses, we can better understand cortical and neural mechanisms from a multi-scale perspective. In a previous study, the NMMs were analytically derived by averaging the equations of synaptic dynamics over the neurons in the population and further averaging the equations of the membrane-potential dynamics. However, the averaging of synaptic current assumes that the neuron membrane potentials are nearly time invariant and that they remain at sub-threshold levels to retain the conductance-based model. This approximation limits the NMM to the non-firing state. In the present study, we newly propose a derivation of a NMM by alternatively approximating the synaptic current which is assumed to be independent of the membrane potential, thus adopting a current-based model. Our proposed model releases the constraint of the nearly constant membrane potential. We confirm that the obtained model is reducible to the previous model in the non-firing situation and that it reproduces the temporal mean values and relative power spectrum densities of the average membrane potentials for the spiking neurons. It is further ensured that the existing NMM properly models the averaged dynamics over individual neurons even if they are spiking in the populations.

  19. Early arthroscopic release in stiff shoulder

    PubMed Central

    Sabat, Dhananjaya; Kumar, Vinod

    2008-01-01

    Purpose: To evaluate the results of early arthroscopic release in patients with stiff shoulder. Methods: Twenty patients with stiff shoulder, who had symptoms for at least three months and failed to improve with steroid injections and physical therapy of 6 weeks duration, underwent arthroscopic release. The average time between onset of symptoms and the time of surgery was 4 months and 2 weeks. The functional outcome was evaluated using the ASES and the Constant and Murley scoring systems. Results: All the patients showed significant improvement in the range of motion and relief of pain by the end of three months following the procedure. At 12 months, the mean improvement in the ASES score was 38 points and in the Constant and Murley score was 40.5 points. All patients returned to work by 3-5 months (average 4.5 months). Conclusion: Early arthroscopic release showed promising results, with a reliable increase in range of motion, early relief of symptoms and consequent early return to work, so it is highly recommended in properly selected patients. Level of evidence: Level IV PMID:20300309

  20. Super-Arrhenius diffusion in an undercooled binary Lennard-Jones liquid results from a quantifiable correlation effect.

    PubMed

    de Souza, Vanessa K; Wales, David J

    2006-02-10

    On short time scales an underlying Arrhenius temperature dependence of the diffusion constant can be extracted from the fragile, super-Arrhenius diffusion of a binary Lennard-Jones mixture. This Arrhenius diffusion is related to the true super-Arrhenius behavior by a factor that depends on the average angle between steps in successive time windows. The correction factor accounts for the fact that on average, successive displacements are negatively correlated, and this effect can therefore be linked directly with the higher apparent activation energy for diffusion at low temperature.

  1. Fast optimization algorithms and the cosmological constant

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad

    2017-11-01

    Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.

  2. Statistical errors in molecular dynamics averages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiferl, S.K.; Wallace, D.C.

    1985-11-15

    A molecular dynamics calculation produces a time-dependent fluctuating signal whose average is a thermodynamic quantity of interest. The average of the kinetic energy, for example, is proportional to the temperature. A procedure is described for determining when the molecular dynamics system is in equilibrium with respect to a given variable, according to the condition that the mean and the bandwidth of the signal should be sensibly constant in time. Confidence limits for the mean are obtained from an analysis of a finite length of the equilibrium signal. The role of serial correlation in this analysis is discussed. The occurrence of unstable behavior in molecular dynamics data is noted, and a statistical test for a level shift is described.
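
    One common way to put confidence limits on the mean of a serially correlated signal is block averaging; the sketch below is in that spirit and is not necessarily the exact procedure of the report. The AR(1) series stands in for an equilibrium molecular dynamics trace.

      import numpy as np

      def block_average_error(signal, n_blocks=20):
          """Mean and standard error of a correlated time series estimated by
          block averaging: average within blocks, then use the scatter of the
          block means, which absorbs the serial correlation."""
          usable = len(signal) - len(signal) % n_blocks
          blocks = signal[:usable].reshape(n_blocks, -1).mean(axis=1)
          return signal[:usable].mean(), blocks.std(ddof=1) / np.sqrt(n_blocks)

      rng = np.random.default_rng(1)
      x = np.empty(50000)
      x[0] = 0.0
      for k in range(1, x.size):                 # AR(1) stand-in for a fluctuating signal
          x[k] = 0.95 * x[k - 1] + rng.normal()
      print(block_average_error(x))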

  3. Change in Psychosocial Work Factors Predicts Follow-up Employee Strain: An Examination of Australian Employees.

    PubMed

    Jimmieson, Nerina L; Hobman, Elizabeth V; Tucker, Michelle K; Bordia, Prashant

    2016-10-01

    This research undertook a time-ordered investigation of Australian employees with regard to their experiences of change in psychosocial work factors across time (decreases, increases, or no change) in the prediction of psychological, physical, attitudinal, and behavioral employee strain. Six hundred and ten employees from 17 organizations participated in Time 1 and Time 2 psychosocial risk assessments (average time lag of 16.7 months). Multi-level regressions examined the extent to which change in exposure to six demands and four resources predicted employee strain at follow-up, after controlling for baseline employee strain. Increases in demands and decreases in resources exacerbated employee strain, but even constant moderate demands and resources resulted in poor employee outcomes, not just constant high or low exposure, respectively. These findings can help employers prioritize hazards, and guide tailored psychosocial organizational interventions.

  4. The SPINDLE Disruption-Tolerant Networking System

    DTIC Science & Technology

    2007-11-01

    average availability (AA). The AA metric attempts to measure the average fraction of time in the near future that the link will be available for use... Each link's AA is epidemically disseminated to all nodes. Path costs are computed using the topology learned through this dissemination, with the cost of a... link l set to (1 − AA(l)) + c (a small constant factor that makes routing favor fewer hops when all links have an AA of 1). Additional details
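
    The cost rule quoted in this snippet is simple enough to state directly; the sketch below is an illustrative reading of it (the value of the small constant c is hypothetical).

      def link_cost(aa, c=0.01):
          """Per-link cost (1 - AA(l)) + c, where AA is the predicted average
          availability and c is a small constant that biases routing toward
          fewer hops when all links have AA = 1."""
          return (1.0 - aa) + c

      def path_cost(aa_values, c=0.01):
          """Path cost as the sum of its link costs."""
          return sum(link_cost(aa, c) for aa in aa_values)

      print(path_cost([1.0, 1.0, 1.0]))  # 0.03, three perfect links
      print(path_cost([1.0, 1.0]))       # 0.02, preferred: fewer hops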

  5. Protonated Nitrous Oxide, NNOH(+): Fundamental Vibrational Frequencies and Spectroscopic Constants from Quartic Force Fields

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Fortenberry, Ryan C.; Lee, Timothy J.

    2013-01-01

    The interstellar presence of protonated nitrous oxide has been suspected for some time. Using established high-accuracy quantum chemical techniques, spectroscopic constants and fundamental vibrational frequencies are provided for the lower energy O-protonated isomer of this cation and its deuterated isotopologue. The vibrationally-averaged B0 and C0 rotational constants are within 6 MHz of their experimental values and the D_J quartic distortion constants agree with experiment to within 3%. The known gas phase O-H stretch of NNOH(+) is 3330.91 cm^-1, and the vibrational configuration interaction computed result is 3330.9 cm^-1. Other spectroscopic constants are also provided, as are the rest of the fundamental vibrational frequencies for NNOH(+) and its deuterated isotopologue. This high-accuracy data should serve to better inform future observational or experimental studies of the rovibrational bands of protonated nitrous oxide in the ISM and the laboratory.

  6. Averaging of elastic constants for polycrystals

    DOE PAGES

    Blaschke, Daniel N.

    2017-10-13

    Many materials of interest are polycrystals, i.e., aggregates of single crystals. Randomly distributed orientations of single crystals lead to macroscopically isotropic properties. Here we briefly review strategies for calculating effective isotropic second and third order elastic constants from the single crystal ones. Our main emphasis is on single crystals of cubic symmetry. Specifically, the averaging of third order elastic constants has not been particularly successful in the past, and discrepancies have often been attributed to texturing of polycrystals as well as to uncertainties in the measurement of elastic constants of both poly and single crystals. While this may well be true, we also point out here shortcomings in the theoretical averaging framework.
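
    For the second-order constants of cubic crystals, the classic averaging strategies are the Voigt and Reuss bounds and their Hill average; the sketch below implements only that textbook second-order case and does not address the third-order averaging that is the paper's main concern. The input constants are merely illustrative.

      def vrh_cubic(c11, c12, c44):
          """Voigt, Reuss, and Hill averages of the shear modulus for a random
          polycrystal of cubic grains; the bulk modulus K = (c11 + 2*c12)/3 is
          the same in both bounds. Second-order elastic constants only."""
          k = (c11 + 2.0 * c12) / 3.0
          g_voigt = (c11 - c12 + 3.0 * c44) / 5.0
          g_reuss = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))
          return k, g_voigt, g_reuss, 0.5 * (g_voigt + g_reuss)

      # Roughly copper-like single-crystal constants in GPa, for demonstration only.
      print(vrh_cubic(c11=168.0, c12=121.0, c44=75.0))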

  7. A full set of langatate high-temperature acoustic wave constants: elastic, piezoelectric, dielectric constants up to 900°C.

    PubMed

    Davulis, Peter M; da Cunha, Mauricio Pereira

    2013-04-01

    A full set of langatate (LGT) elastic, dielectric, and piezoelectric constants with their respective temperature coefficients up to 900°C is presented, and the relevance of the dielectric and piezoelectric constants and temperature coefficients is discussed with respect to predicted and measured high-temperature SAW propagation properties. The set of constants allows for high-temperature acoustic wave (AW) propagation studies and device design. The dielectric constants and polarization and conductive losses were extracted by impedance spectroscopy of parallel-plate capacitors. The measured dielectric constants at high temperatures were combined with previously measured LGT expansion coefficients and used to determine the elastic and piezoelectric constants using resonant ultrasound spectroscopy (RUS) measurements at temperatures up to 900°C. The extracted LGT piezoelectric constants and temperature coefficients show that e11 and e14 change by up to 62% and 77%, respectively, over the entire 25°C to 900°C range when compared with room-temperature values. The LGT high-temperature constants and temperature coefficients were verified by comparing measured and predicted phase velocities (vp) and temperature coefficients of delay (TCD) of SAW delay lines fabricated along 6 orientations in the LGT plane (90°, 23°, Ψ) up to 900°C. For the 6 tested orientations, the predicted SAW vp agree within 0.2% of the measured vp on average and the calculated TCD is within 9.6 ppm/°C of the measured value on average over the temperature range of 25°C to 900°C. By including the temperature dependence of both dielectric and piezoelectric constants, the discrepancies between predicted and measured SAW properties were reduced by, on average, 77% for vp, 13% for TCD, and 63% for the turn-over temperatures analyzed.

  8. The dynamics of water in hydrated white bread investigated using quasielastic neutron scattering

    NASA Astrophysics Data System (ADS)

    Sjöström, J.; Kargl, F.; Fernandez-Alonso, F.; Swenson, J.

    2007-10-01

    The dynamics of water in fresh and in rehydrated white bread is studied using quasielastic neutron scattering (QENS). A diffusion constant for water in fresh bread, without temperature gradients and with the use of a non-destructive technique, is presented here for the first time. The self-diffusion constant for fresh bread is estimated to be D_s = 3.8 × 10^-10 m^2 s^-1 and the result agrees well with previous findings for similar systems. It is also suggested that water exhibits faster dynamics than previously reported in the literature using equilibration of a hydration-level gradient monitored by vibrational spectroscopy. The temperature dependence of the dynamics of low-hydration bread is also investigated for T = 280-350 K. The average relaxation time at constant momentum transfer (Q) shows an Arrhenius behavior in the temperature range investigated.

  9. The electromagnetic pendulum in quickly changing magnetic field of constant intensity

    NASA Astrophysics Data System (ADS)

    Rodyukov, F. F.; Shepeljavyi, A. I.

    2018-05-01

    The Lagrange-Maxwell equations for a pendulum in the form of a conductive frame, suspended in a uniform sinusoidal electromagnetic field of constant intensity, are obtained. Simplified mathematical models are obtained by the traditional method of separating fast and slow motions with subsequent averaging over the fast time. It is shown that this traditional approach may lead to inappropriate mathematical models, and ways in which this can be avoided for the case considered are suggested. The main statements are illustrated by numerical experiments.

  10. Program for narrow-band analysis of aircraft flyover noise using ensemble averaging techniques

    NASA Technical Reports Server (NTRS)

    Gridley, D.

    1982-01-01

    A package of computer programs was developed for analyzing acoustic data from an aircraft flyover. The package assumes the aircraft is flying at constant altitude and constant velocity in a fixed attitude over a linear array of ground microphones. Aircraft position is provided by radar and an option exists for including the effects of the aircraft's rigid-body attitude relative to the flight path. Time synchronization between radar and acoustic recording stations permits ensemble averaging techniques to be applied to the acoustic data thereby increasing the statistical accuracy of the acoustic results. Measured layered meteorological data obtained during the flyovers are used to compute propagation effects through the atmosphere. Final results are narrow-band spectra and directivities corrected for the flight environment to an equivalent static condition at a specified radius.

  11. AUTORADIOGRAPHIC STUDY OF DNA SYNTHESIS AND THE CELL CYCLE IN SPERMATOGONIA AND SPERMATOCYTES OF MOUSE TESTIS USING TRITIATED THYMIDINE

    PubMed Central

    Monesi, Valerio

    1962-01-01

    Mice were injected intraperitoneally with 15 µc of H3-thymidine. The time course of the labeling in spermatogonia and spermatocytes was studied by using autoradiography on 5 µ sections stained by the periodic acid-Schiff method and hematoxylin over a period of 57 hours after injection. Four generations of type A (called AI, AII, AIII, and AIV), one of intermediate, and one of type B spermatogonia occur in one cycle of the seminiferous epithelium. The average life span is about the same in all spermatogonia, i.e., about 27 to 30.5 hours. The average pre-DNA synthetic time, including the mitotic stages from metaphase through telophase and the portion of interphase preceding DNA synthesis, is also not very different, ranging between 7.5 and 10.5 hours. A remarkable difference exists, however, in the duration of DNA synthesis and of the post-DNA synthetic period. The average DNA synthetic time is very long and is highly variable in type B (14.5 hours), a little shorter and less variable in intermediate (12.5 hours) and AIV (13 hours) spermatogonia, and much shorter and very constant in AIII (8 hours), AII and AI (7 to 7.5 hours) spermatogonia. Conversely, the average post-DNA synthetic time, corresponding essentially to the duration of the prophase, is short and very constant in type B (4.5 hours), longer and variable in intermediate (6 hours) and AIV (8 hours) spermatogonia, and much longer and much more variable in AIII (11 hours), AII and AI (14 hours) spermatogonia. The premeiotic synthesis of DNA takes place in primary spermatocytes during the resting phase and terminates just before the visible onset of the meiotic prophase. Its average duration is 14 hours. No further synthesis of DNA takes place in later stages of spermatogenesis. PMID:14475361

  12. Improved estimation of anomalous diffusion exponents in single-particle tracking experiments

    NASA Astrophysics Data System (ADS)

    Kepten, Eldad; Bronshtein, Irena; Garini, Yuval

    2013-05-01

    The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
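
    The basic quantity behind this analysis, the time-averaged MSD and the exponent read off its log-log slope, can be sketched as follows; this shows only the uncorrected estimator, not the paper's error corrections, and the trajectory is synthetic Brownian motion rather than telomere data.

      import numpy as np

      def time_averaged_msd(x, max_lag):
          """Time-averaged MSD of a 1D trajectory for lag times 1..max_lag."""
          return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                           for lag in range(1, max_lag + 1)])

      def anomalous_exponent(msd, dt=1.0):
          """Anomalous exponent alpha from a log-log fit of TA-MSD vs lag time."""
          lags = np.arange(1, len(msd) + 1) * dt
          slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
          return slope

      rng = np.random.default_rng(2)
      x = np.cumsum(rng.normal(size=10000))       # Brownian trajectory, alpha ~ 1
      print(anomalous_exponent(time_averaged_msd(x, max_lag=100)))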

  13. Habituation and adaptation of the vestibuloocular reflex: a model of differential control by the vestibulocerebellum

    NASA Technical Reports Server (NTRS)

    Cohen, H.; Cohen, B.; Raphan, T.; Waespe, W.

    1992-01-01

    We habituated the dominant time constant of the horizontal vestibuloocular reflex (VOR) of rhesus and cynomolgus monkeys by repeated testing with steps of velocity about a vertical axis and adapted the gain of the VOR by altering visual input with magnifying and reducing lenses. After baseline values were established, the nodulus and ventral uvula of the vestibulocerebellum were ablated in two monkeys, and the effects of nodulouvulectomy and flocculectomy on VOR gain adaptation and habituation were compared. The VOR time constant decreased with repeated testing, rapidly at first and more slowly thereafter. The gain of the VOR was unaffected. Massed trials were more effective than distributed trials in producing habituation. Regardless of the schedule of testing, the VOR time constant never fell below the time constant of the semicircular canals (approximately 5 s). This finding indicates that only the slow component of the vestibular response, the component produced by velocity storage, was habituated. In agreement with this, the time constant of optokinetic after-nystagmus (OKAN) was habituated concurrently with the VOR. Average values for VOR habituation were obtained on a per session basis for six animals. The VOR gain was adapted by natural head movements in partially habituated monkeys while they wore x 2.2 magnifying or x 0.5 reducing lenses. Adaptation occurred rapidly and reached about +/- 30%, similar to values obtained using forced rotation. VOR gain adaptation did not cause additional habituation of the time constant. When the VOR gain was reduced in animals with a long VOR time constant, there were overshoots in eye velocity that peaked at about 6-8 s after the onset or end of constant-velocity rotation. These overshoots occurred at times when the velocity storage integrator would have been maximally activated by semicircular canal input. Since the activity generated in the canals is not altered by visual adaptation, this finding indicates that the gain element that controls rapid changes in eye velocity in the VOR is separate from that which couples afferent input to velocity storage. Nodulouvulectomy caused a prompt and permanent loss of habituation, returning VOR time constants to initial values. VOR gain adaptation, which is lost after flocculectomy, was unaffected by nodulouvulectomy. Flocculectomy did not alter habituation of the VOR or of OKAN. Using a simplified model of the VOR, the decrease in the duration of vestibular nystagmus due to habituation was related to a decrement in the dominant time constant of the velocity storage integrator (1/h0).(ABSTRACT TRUNCATED AT 400 WORDS).

  14. Principal curvatures and area ratio of propagating surfaces in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Zheng, Tianhang; You, Jiaping; Yang, Yue

    2017-10-01

    We study the statistics of principal curvatures and the surface area ratio of propagating surfaces with a constant or nonconstant propagating velocity in isotropic turbulence using direct numerical simulation. Propagating surface elements initially constitute a plane to model a planar premixed flame front. When the statistics of evolving propagating surfaces reach the stationary stage, the statistical profiles of principal curvatures scaled by the Kolmogorov length scale versus the constant displacement speed scaled by the Kolmogorov velocity scale collapse at different Reynolds numbers. The magnitude of averaged principal curvatures and the number of surviving surface elements without cusp formation decrease with increasing displacement speed. In addition, the effect of surface stretch on the nonconstant displacement speed inhibits the cusp formation on surface elements at negative Markstein numbers. In order to characterize the wrinkling process of the global propagating surface, we develop a model to demonstrate that the increase of the surface area ratio is primarily due to positive Lagrangian time integrations of the area-weighted averaged tangential strain-rate term and propagation-curvature term. The difference between the negative averaged mean curvature and the positive area-weighted averaged mean curvature characterizes the cellular geometry of the global propagating surface.

  15. Causality Analysis: Identifying the Leading Element in a Coupled Dynamical System

    PubMed Central

    BozorgMagham, Amir E.; Motesharrei, Safa; Penny, Stephen G.; Kalnay, Eugenia

    2015-01-01

    Physical systems with time-varying internal couplings are abundant in nature. While the full governing equations of these systems are typically unknown due to insufficient understanding of their internal mechanisms, there is often interest in determining the leading element. Here, the leading element is defined as the sub-system with the largest coupling coefficient averaged over a selected time span. Previously, the Convergent Cross Mapping (CCM) method has been employed to determine causality and dominant component in weakly coupled systems with constant coupling coefficients. In this study, CCM is applied to a pair of coupled Lorenz systems with time-varying coupling coefficients, exhibiting switching between dominant sub-systems in different periods. Four sets of numerical experiments are carried out. The first three cases consist of different coupling coefficient schemes: I) Periodic–constant, II) Normal, and III) Mixed Normal/Non-normal. In case IV, numerical experiment of cases II and III are repeated with imposed temporal uncertainties as well as additive normal noise. Our results show that, through detecting directional interactions, CCM identifies the leading sub-system in all cases except when the average coupling coefficients are approximately equal, i.e., when the dominant sub-system is not well defined. PMID:26125157

  16. Baseline-dependent sampling and windowing for radio interferometry: data compression, field-of-interest shaping, and outer field suppression

    NASA Astrophysics Data System (ADS)

    Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.

    2018-07-01

    Traditional radio interferometric correlators produce regular-gridded samples of the true uv-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging then translates into irregular-gridded samples in uv-space, and results in a baseline-length-dependent loss of amplitude and phase coherence, which depends on the distance from the image phase centre. The effect is often referred to as `decorrelation' in the uv-space, which is equivalent in the source domain to `smearing'. This work discusses and implements a regular-gridded sampling scheme in the uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. The baseline-dependent sampling requires irregular-gridded sampling in the time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all the baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.

  17. Prediction of the Creep-Fatigue Lifetime of Alloy 617: An Application of Non-destructive Evaluation and Information Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Richard Wright; Timothy Roney

    A relatively simple method using the nominal constant average stress information and the creep rupture model is developed to predict the creep-fatigue lifetime of Alloy 617, in terms of time to rupture. The nominal constant average stress is computed using the stress relaxation curve. The predicted time to rupture can be converted to a number of cycles to failure using the strain range, the strain rate during each cycle, and the hold time information. The predicted creep-fatigue lifetime is validated against experimental measurements of the creep-fatigue lifetime collected using conventional laboratory creep-fatigue tests. High temperature creep-fatigue tests of Alloy 617 were conducted in air at 950°C with a tensile hold period of up to 1800 s in a cycle at total strain ranges of 0.3% and 0.6%. It was observed that the proposed method is conservative in that the predicted lifetime is less than the experimentally determined values. The approach would be relevant for calculating the remaining useful life of a component, like a steam generator, that might fail by the creep-fatigue mechanism.
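
    One natural reading of the "nominal constant average stress" is the time average of the relaxing stress over the tensile hold; the sketch below computes that average by trapezoidal integration. The relaxation curve and its parameters are hypothetical, and the report's exact definition may differ.

      import numpy as np

      def nominal_average_stress(time_s, stress_mpa):
          """Time-averaged stress over the hold, (1/t_hold) * integral sigma(t) dt,
          evaluated with the trapezoidal rule."""
          return np.trapz(stress_mpa, time_s) / (time_s[-1] - time_s[0])

      # Hypothetical relaxation during an 1800 s hold: 150 MPa relaxing toward 60 MPa.
      t = np.linspace(0.0, 1800.0, 200)
      sigma = 60.0 + 90.0 * np.exp(-t / 300.0)
      print(nominal_average_stress(t, sigma))   # ~75 MPa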

  18. Industrial realization of a direct Fourier transform in automated experimental data processing systems

    NASA Technical Reports Server (NTRS)

    Lyubashevskiy, G. S.

    1973-01-01

    Fourier processing of automatic signals transforms direct current voltage into a numerical form through bandpass filtration in time-pulse multiplying devices. It is shown that the ratio of the interference energy to the useful signal energy is inversely proportional to the square of the product of the depth of the width modulation and the ratio of the time constant averaging to the cross-multiplied signals.

  19. Salaries of Teachers.

    ERIC Educational Resources Information Center

    Education Statistics Quarterly, 2000

    2000-01-01

    Examines changes in teacher salaries from 1971 to 1998 among teachers in different age groups. Also compares teacher salaries with the salaries of all bachelor's degree recipients. The annual median salaries (in constant 1998 dollars) of full-time teachers decreased between 1971 and 1998 by about $500-$700 per year on average in each age group.…

  20. Motional timescale predictions by molecular dynamics simulations: case study using proline and hydroxyproline sidechain dynamics.

    PubMed

    Aliev, Abil E; Kulke, Martin; Khaneja, Harmeet S; Chudasama, Vijay; Sheppard, Tom D; Lanigan, Rachel M

    2014-02-01

    We propose a new approach for force field optimizations which aims at reproducing dynamics characteristics using biomolecular MD simulations, in addition to improved prediction of motionally averaged structural properties available from experiment. As the source of experimental data for dynamics fittings, we use 13C NMR spin-lattice relaxation times T1 of backbone and sidechain carbons, which allow us to determine correlation times of both overall molecular and intramolecular motions. For structural fittings, we use motionally averaged experimental values of NMR J couplings. The proline residue and its derivative 4-hydroxyproline, with relatively simple cyclic structure and sidechain dynamics, were chosen for the assessment of the new approach in this work. Initially, grid search and simplexed MD simulations identified a large number of parameter sets which fit the experimental J couplings equally well. Using the Arrhenius-type relationship between the force constant and the correlation time, the available MD data for a series of parameter sets were analyzed to predict the value of the force constant that best reproduces the experimental timescale of the sidechain dynamics. Verification of the new force field (termed AMBER99SB-ILDNP) against NMR J couplings and correlation times showed consistent and significant improvements compared to the original force field in reproducing both structural and dynamics properties. The results suggest that matching experimental timescales of motions together with motionally averaged characteristics is a valid approach for force field parameter optimization. Such a comprehensive approach is not restricted to cyclic residues and can be extended to other amino acid residues, as well as to the backbone. Copyright © 2013 Wiley Periodicals, Inc.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isotalo, Aarno

    A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.

  2. Unified halo-independent formalism from convex hulls for direct dark matter searches

    NASA Astrophysics Data System (ADS)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2017-12-01

    Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either a dark matter (1) speed distribution F(v) in Earth's frame or (2) Galactic velocity distribution f_gal(u⃗), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements and the maximum number of delta functions is (N−1), where N is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate and the maximum number of terms is N. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃_0^BF(v_min) (which is an integral of the speed distribution) with at most (N−1) downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u⃗), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃^BF(v_min, t) which at any fixed time is a piecewise constant function of v_min with at most N downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent η̃^BF(v_min, t) function (and a time-averaged η̃_0^BF(v_min)) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.

  3. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO

    PubMed Central

    Zhu, Zhichuan; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the algorithm of Multiple Kernel Learning Support Vector Machine (MKL-SVM) has achieved good results therein. Based on grid search, however, the MKL-SVM algorithm requires a long optimization time during parameter optimization, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, so as to realize rapid global optimization of parameters. In order to obtain the global optimal solution, different inertia weights such as constant inertia weight, linear inertia weight, and nonlinear inertia weight are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm, achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Through statistical analysis of the average of 20 runs with different inertia weights, it can be seen that the dynamic inertia weight is superior to the constant inertia weight in the MKL-SVM-PSO algorithm. In the dynamic inertia weight algorithm, the parameter optimization time of the nonlinear inertia weight is shorter, and the average fitness value after convergence is much closer to the optimal fitness value, which is better than the linear inertia weight. Besides, a better nonlinear inertia weight is verified. PMID:29853983

  4. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO.

    PubMed

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results in this task. When based on grid search, however, the MKL-SVM algorithm requires a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced: Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm into an MKL-SVM-PSO algorithm that performs rapid global optimization of the parameters. To obtain the global optimal solution, different inertia weights, namely constant, linear, and nonlinear inertia weights, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the grid-search MKL-SVM algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of a normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of results averaged over 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the nonlinear inertia weight requires a shorter parameter optimization time, and its average fitness value after convergence is much closer to the optimal fitness value, outperforming the linear inertia weight. In addition, a better nonlinear inertia weight is verified.
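
    A minimal sketch of the particle swarm component discussed in the two records above: a global-best PSO minimizes a generic objective (standing in for the cross-validation error of the MKL-SVM) under a constant and a linearly decreasing inertia weight. The objective, swarm size, coefficients, and weight schedules are illustrative assumptions, not the actual MKL-SVM-PSO configuration of the paper.

    ```python
    import numpy as np

    def pso(objective, dim, inertia_schedule, n_particles=20, iters=100, seed=0):
        """Basic global-best PSO; inertia_schedule(it, iters) returns the inertia weight w."""
        rng = np.random.default_rng(seed)
        c1 = c2 = 2.0                                     # cognitive / social coefficients
        pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
        vel = np.zeros((n_particles, dim))
        pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for it in range(iters):
            w = inertia_schedule(it, iters)
            r1, r2 = rng.random((2, n_particles, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([objective(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Stand-in objective for the MKL-SVM validation error (sphere function, assumption)
    sphere = lambda x: float(np.sum(x ** 2))

    constant_w = lambda it, iters: 0.7                        # constant inertia weight
    linear_w = lambda it, iters: 0.9 - 0.5 * it / iters       # linearly decreasing weight

    for name, sched in [("constant", constant_w), ("linear", linear_w)]:
        best, val = pso(sphere, dim=4, inertia_schedule=sched)
        print(f"{name:>8} inertia: best objective = {val:.2e}")
    ```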

  5. The Stagger-grid: A grid of 3D stellar atmosphere models. II. Horizontal and temporal averaging and spectral line formation

    NASA Astrophysics Data System (ADS)

    Magic, Z.; Collet, R.; Hayek, W.; Asplund, M.

    2013-12-01

    Aims: We study the implications of averaging methods with different reference depth scales for 3D hydrodynamical model atmospheres computed with the Stagger-code. The temporally and spatially averaged (hereafter denoted as ⟨3D⟩) models are explored in the light of local thermodynamic equilibrium (LTE) spectral line formation by comparing spectrum calculations using full 3D atmosphere structures with those from ⟨3D⟩ averages. Methods: We explored methods for computing mean ⟨3D⟩ stratifications from the Stagger-grid time-dependent 3D radiative hydrodynamical atmosphere models by considering four different reference depth scales (geometrical depth, column-mass density, and two optical depth scales). Furthermore, we investigated the influence of alternative averages (logarithmic, enforced hydrostatic equilibrium, flux-weighted temperatures). For the line formation we computed curves of growth for Fe i and Fe ii lines in LTE. Results: The resulting ⟨3D⟩ stratifications for the four reference depth scales can be very different. We typically find that in the upper atmosphere and in the superadiabatic region just below the optical surface, where the temperature and density fluctuations are highest, the differences become considerable and increase for higher Teff, lower log g, and lower [Fe / H]. The differential comparison of spectral line formation shows distinctive differences depending on which ⟨3D⟩ model is applied. The averages over layers of constant column-mass density yield the best mean ⟨3D⟩ representation of the full 3D models for LTE line formation, while the averages on layers at constant geometrical height are the least appropriate. Unexpectedly, the usually preferred averages over layers of constant optical depth are prone to increasing interference by reversed granulation towards higher effective temperature, in particular at low metallicity. Appendix A is available in electronic form at http://www.aanda.org. Mean ⟨3D⟩ models are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/560/A8 as well as at http://www.stagger-stars.net

  6. Energy laws in human travel behaviour

    NASA Astrophysics Data System (ADS)

    Kölbl, Robert; Helbing, Dirk

    2003-05-01

    We show that energy concepts can contribute to the understanding of human travel behaviour. First, the average journey times for different modes of transport are inversely proportional to the energy consumption rates measured for the respective human physical activities. Second, when daily travel-time distributions for different modes of transport such as walking, cycling, bus or car travel are appropriately scaled, they turn out to have a universal functional relationship. This corresponds to a canonical-like energy distribution with exceptions for short trips, which can be theoretically explained. Combined, this points to a law of constant average energy consumption for the physical activity of daily travel. Applying these natural laws could help to improve long-term urban and transport planning.

  7. Hydrodynamic and Nonhydrodynamic Contributions to the Bimolecular Collision Rates of Solute Molecules in Supercooled Bulk Water

    PubMed Central

    2015-01-01

    Bimolecular collision rate constants of a model solute are measured in water at T = 259–303 K, a range encompassing both normal and supercooled water. A stable, spherical nitroxide spin probe, perdeuterated 2,2,6,6-tetramethyl-4-oxopiperidine-1-oxyl, is studied using electron paramagnetic resonance spectroscopy (EPR), taking advantage of the fact that the rotational correlation time, τR, the mean time between successive spin exchanges within a cage, τRE, and the long-time-averaged spin exchange rate constants, Kex, of the same solute molecule may be measured independently. Thus, long- and short-time translational diffusion behavior may be inferred from Kex and τRE, respectively. In order to measure Kex, the effects of dipole–dipole interactions (DD) on the EPR spectra must be separated, yielding as a bonus the DD broadening rate constants that are related to the dephasing rate constant due to DD, Wdd. We find that both Kex and Wdd behave hydrodynamically; that is to say they vary monotonically with T/η or η/T, respectively, where η is the shear viscosity, as predicted by the Stokes–Einstein equation. The same is true of the self-diffusion of water. In contrast, τRE does not follow hydrodynamic behavior, varying rather as a linear function of the density reaching a maximum at 276 ± 2 K near where water displays a maximum density. PMID:24874024

  8. Short time Fourier analysis of the electromyogram - Fast movements and constant contraction

    NASA Technical Reports Server (NTRS)

    Hannaford, Blake; Lehman, Steven

    1986-01-01

    Short-time Fourier analysis was applied to surface electromyograms (EMG) recorded during rapid movements and during isometric contractions at constant force. A portion of the data was selected and multiplied by a Hamming window, and the discrete Fourier transform of the windowed segment was computed. By shifting the window along the data record, a new spectrum was computed every 10 ms. The transformed data were displayed as spectrograms or 'voiceprints'. This short-time technique made it possible to see time dependencies in the EMG that are normally averaged out in Fourier analysis of these signals. Spectra of EMGs during isometric contractions at constant force vary over the short (10-20 ms) term. Short-time spectra from EMGs recorded during rapid movements were much less variable. The windowing technique picked out the typical 'three-burst pattern' in EMGs from both wrist and head movements. Spectra during the bursts were more consistent than those during isometric contractions. Furthermore, there was a consistent shift in spectral statistics over the course of the three bursts: both the center frequency and the variance of the spectral energy distribution grew from the first burst to the second burst in the same muscle. The analogy between EMGs and speech signals is extended to argue for future applicability of short-time spectral analysis of EMG.
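
    A minimal sketch of the short-time analysis described above: a Hamming window of fixed length is slid along a simulated EMG-like signal in 10 ms steps and a discrete Fourier transform is computed at each position. The synthetic signal, sampling rate, and window length are illustrative assumptions, not the recording parameters used in the study.

    ```python
    import numpy as np

    fs = 1000                       # assumed sampling rate, Hz
    t = np.arange(0, 2.0, 1 / fs)   # 2 s of synthetic "EMG": amplitude-modulated noise
    rng = np.random.default_rng(0)
    signal = rng.standard_normal(t.size) * (1 + np.sin(2 * np.pi * 1.5 * t) ** 2)

    win_len = 128                   # ~128 ms Hamming window (assumption)
    hop = int(0.010 * fs)           # shift the window every 10 ms, as in the abstract
    window = np.hamming(win_len)

    spectra = []
    for start in range(0, signal.size - win_len, hop):
        segment = signal[start:start + win_len] * window   # window the segment
        spectrum = np.abs(np.fft.rfft(segment)) ** 2       # power spectrum
        spectra.append(spectrum)

    spectrogram = np.array(spectra)          # rows: time frames, columns: frequency bins
    freqs = np.fft.rfftfreq(win_len, 1 / fs)

    # Spectral statistic per frame, analogous to the center frequency tracked in the paper
    center_freq = (spectrogram * freqs).sum(axis=1) / spectrogram.sum(axis=1)
    print(center_freq[:5])
    ```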

  9. A comparative study on the effect of Curcumin and Chlorin-p6 on the diffusion of two organic cations across a negatively charged lipid bilayer probed by second harmonic spectroscopy

    NASA Astrophysics Data System (ADS)

    Saini, R. K.; Varshney, G. K.; Dube, A.; Gupta, P. K.; Das, K.

    2014-09-01

    The influence of Curcumin and Chlorin-p6 (Cp6) on the real-time diffusion kinetics of two organic cations, LDS-698 (LDS) and Malachite Green (MG), across a negatively charged phospholipid bilayer is investigated by second harmonic (SH) spectroscopy. The diffusion time constant of LDS at neutral pH in liposomes containing either Curcumin or Cp6 is significantly reduced, the effect being more pronounced with Curcumin. At acidic pH, the magnitude of the reduction in the diffusion time constant of MG was similar for both drugs. The relative changes in the average diffusion time constants of the cations with increasing drug concentration at pH 5.0 and 7.4 show a substantial pH effect for Curcumin-induced membrane permeability, while only a modest pH effect was observed for Cp6-induced membrane permeability. Based on the available evidence, this can be attributed to the increased interaction between the drug and the polar head groups of the lipid at pH 7.4, where the drug resides closer to the lipid-water interface.

  10. Modified large number theory with constant G

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Recami, E.

    1983-03-01

    The inspiring ''numerology'' uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the ''gravitational world'' (cosmos) with the ''strong world'' (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the ''Large Number Theory,'' cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R-bar/r-bar of the cosmos typical length R-bar to the hadron typical length r-bar is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the ''cyclical big-bang'' hypothesis, then R-bar and r-bar can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic.

  11. A modified large number theory with constant G

    NASA Astrophysics Data System (ADS)

    Recami, Erasmo

    1983-03-01

    The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the “cyclical big-bang” hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.

  12. Determining modes for the 3D Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Cheskidov, Alexey; Dai, Mimi; Kavlie, Landon

    2018-07-01

    We introduce a determining wavenumber for the forced 3D Navier-Stokes equations (NSE) defined for each individual solution. Even though this wavenumber blows up if the solution blows up, its time average is uniformly bounded for all solutions on the weak global attractor. The bound is compared to Kolmogorov's dissipation wavenumber and the Grashof constant.

  13. Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings

    NASA Astrophysics Data System (ADS)

    Hodgkinson, P.; Holmes, K. J.; Hore, P. J.

    Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J by model fitting turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial, especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
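
    As an illustration of the Cramér-Rao reasoning above, the sketch below computes the lower bound on the variance of an estimate of J from noisy samples of a decaying sin(πJt) modulation, comparing uniform sampling with sampling concentrated near the first node t = 1/J. The signal model (including the added T2 decay), noise level, and sampling grids are illustrative assumptions, not the acquisition parameters of the paper.

    ```python
    import numpy as np

    def crlb_J(t, J, T2, sigma):
        """Cramer-Rao lower bound on Var(J_hat) for the model
        s(t) = sin(pi*J*t) * exp(-t/T2), observed with Gaussian noise (std sigma)."""
        ds_dJ = np.pi * t * np.cos(np.pi * J * t) * np.exp(-t / T2)  # sensitivity to J
        fisher = np.sum(ds_dJ ** 2) / sigma ** 2                     # Fisher information
        return 1.0 / fisher

    J, T2, sigma, n = 7.0, 0.15, 0.1, 16        # all values are illustrative assumptions

    uniform = np.linspace(0.0, 2.0 / J, n)                      # evenly spaced sampling
    node = 1.0 / J + np.linspace(-0.02, 0.02, n)                # clustered near t = 1/J

    print("CRLB, uniform sampling     :", crlb_J(uniform, J, T2, sigma))
    print("CRLB, sampling near t = 1/J:", crlb_J(node, J, T2, sigma))
    ```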

  14. Intra-individual gait patterns across different time-scales as revealed by means of a supervised learning model using kernel-based discriminant regression.

    PubMed

    Horst, Fabian; Eekhoff, Alexander; Newell, Karl M; Schöllhorn, Wolfgang I

    2017-01-01

    Traditionally, gait analysis has been centered on the idea of average behavior and normality. On one hand, clinical diagnoses and therapeutic interventions typically assume that average gait patterns remain constant over time. On the other hand, it is well known that all our movements are accompanied by a certain amount of variability, which does not allow us to make two identical steps. The purpose of this study was to examine changes in the intra-individual gait patterns across different time-scales (i.e., tens-of-mins, tens-of-hours). Nine healthy subjects performed 15 gait trials at a self-selected speed on 6 sessions within one day (duration between two subsequent sessions from 10 to 90 mins). For each trial, time-continuous ground reaction forces and lower body joint angles were measured. A supervised learning model using a kernel-based discriminant regression was applied for classifying sessions within individual gait patterns. Discernable characteristics of intra-individual gait patterns could be distinguished between repeated sessions by classification rates of 67.8 ± 8.8% and 86.3 ± 7.9% for the six-session-classification of ground reaction forces and lower body joint angles, respectively. Furthermore, the one-on-one-classification showed that increasing classification rates go along with increasing time durations between two sessions and indicate that changes of gait patterns appear at different time-scales. Discernable characteristics between repeated sessions indicate continuous intrinsic changes in intra-individual gait patterns and suggest a predominant role of deterministic processes in human motor control and learning. Natural changes of gait patterns without any externally induced injury or intervention may reflect continuous adaptations of the motor system over several time-scales. Accordingly, the modelling of walking by means of average gait patterns that are assumed to be near constant over time needs to be reconsidered in the context of these findings, especially towards more individualized and situational diagnoses and therapy.

  15. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel; Fita, Ignacio

    As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence the interaction energies (i.e., the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically vary more smoothly with the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data for a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by several tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependence of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased substantially by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.

  16. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    NASA Astrophysics Data System (ADS)

    Goethe, Martin; Fita, Ignacio; Rubi, J. Miguel

    2016-03-01

    As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence the interaction energies (i.e., the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically vary more smoothly with the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data for a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by several tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependence of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased substantially by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
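
    A minimal numerical illustration of the thermal smoothing effect described in the two records above: for a Lennard-Jones pair whose separation fluctuates thermally around its mean, the time-averaged energy ⟨V(r)⟩ is compared with V(⟨r⟩). The LJ parameters and the Gaussian model for the distance fluctuations are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def lennard_jones(r, epsilon=0.1, sigma=3.5):
        """Lennard-Jones pair potential; epsilon in kcal/mol, sigma in Angstrom
        (illustrative parameter values)."""
        sr6 = (sigma / r) ** 6
        return 4.0 * epsilon * (sr6 ** 2 - sr6)

    rng = np.random.default_rng(1)
    r_mean = 4.0          # average inter-atomic distance, Angstrom (assumption)
    r_rms = 0.3           # rms thermal fluctuation of the distance (assumption)

    # Model the fluctuating distance as Gaussian around its mean
    r_samples = rng.normal(r_mean, r_rms, size=200_000)
    r_samples = r_samples[r_samples > 3.0]          # discard unphysically short distances

    v_at_mean = lennard_jones(r_mean)               # potential evaluated at <r>
    v_time_avg = lennard_jones(r_samples).mean()    # time-averaged potential <V(r)>

    print(f"V(<r>)        = {v_at_mean * 1000:.1f} cal/mol")
    print(f"<V(r)>        = {v_time_avg * 1000:.1f} cal/mol")
    print(f"smoothing gap = {(v_time_avg - v_at_mean) * 1000:.1f} cal/mol")
    ```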

  17. Flow convergence caused by a salinity minimum in a tidal channel

    USGS Publications Warehouse

    Warner, John C.; Schoellhamer, David H.; Burau, Jon R.; Schladow, S. Geoffrey

    2006-01-01

    Residence times of dissolved substances and sedimentation rates in tidal channels are affected by residual (tidally averaged) circulation patterns. One influence on these circulation patterns is the longitudinal density gradient. In most estuaries the longitudinal density gradient typically maintains a constant direction. However, a junction of tidal channels can create a local reversal (change in sign) of the density gradient. This can occur due to a difference in the phase of tidal currents in each channel. In San Francisco Bay, the phasing of the currents at the junction of Mare Island Strait and Carquinez Strait produces a local salinity minimum in Mare Island Strait. At the location of a local salinity minimum the longitudinal density gradient reverses direction. This paper presents four numerical models that were used to investigate the circulation caused by the salinity minimum: (1) A simple one-dimensional (1D) finite difference model demonstrates that a local salinity minimum is advected into Mare Island Strait from the junction with Carquinez Strait during flood tide. (2) A three-dimensional (3D) hydrodynamic finite element model is used to compute the tidally averaged circulation in a channel that contains a salinity minimum (a change in the sign of the longitudinal density gradient) and compares that to a channel that contains a longitudinal density gradient in a constant direction. The tidally averaged circulation produced by the salinity minimum is characterized by converging flow at the bed and diverging flow at the surface, whereas the circulation produced by the constant direction gradient is characterized by converging flow at the bed and downstream surface currents. These velocity fields are used to drive both a particle tracking and a sediment transport model. (3) A particle tracking model demonstrates a 30 percent increase in the residence time of neutrally buoyant particles transported through the salinity minimum, as compared to transport through a constant direction density gradient. (4) A sediment transport model demonstrates increased deposition at the near-bed null point of the salinity minimum, as compared to the constant direction gradient null point. These results are corroborated by historically noted large sedimentation rates and a local maximum of selenium accumulation in clams at the null point in Mare Island Strait.

  18. Biotransformation of trace organic chemicals during groundwater recharge: How useful are first-order rate constants?

    PubMed

    Regnery, J; Wing, A D; Alidina, M; Drewes, J E

    2015-08-01

    This study developed relationships between the attenuation of emerging trace organic chemicals (TOrC) during managed aquifer recharge (MAR) as a function of retention time, system characteristics, and operating conditions using controlled laboratory-scale soil column experiments simulating MAR. The results revealed that MAR performance in terms of TOrC attenuation is primarily determined by key environmental parameters (i.e., redox, primary substrate). Soil columns with suboxic and anoxic conditions performed poorly (i.e., less than 30% attenuation of moderately degradable TOrC) in comparison to oxic conditions (on average between 70-100% attenuation for the same compounds) within a residence time of three days. Given this dependency on redox conditions, it was investigated if key parameter-dependent rate constants are more suitable for contaminant transport modeling to properly capture the dynamic TOrC attenuation under field-scale conditions. Laboratory-derived first-order removal kinetics were determined for 19 TOrC under three different redox conditions and rate constants were applied to MAR field data. Our findings suggest that simplified first-order rate constants will most likely not provide any meaningful results if the target compounds exhibit redox dependent biotransformation behavior or if the intention is to exactly capture the decline in concentration over time and distance at field-scale MAR. However, if the intention is to calculate the percent removal after an extended time period and subsurface travel distance, simplified first-order rate constants seem to be sufficient to provide a first estimate on TOrC attenuation during MAR. Copyright © 2015 Elsevier B.V. All rights reserved.
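
    To make the first-order picture discussed above concrete, the short sketch below computes the percent removal of a trace organic chemical after a given subsurface residence time from a first-order rate constant, C(t) = C0·exp(-k·t). The rate constants listed for the different redox conditions are made-up placeholders, not the laboratory-derived values from the study.

    ```python
    import math

    def percent_removal(k_per_day: float, residence_time_days: float) -> float:
        """Percent removal predicted by first-order kinetics C(t) = C0 * exp(-k t)."""
        remaining = math.exp(-k_per_day * residence_time_days)
        return 100.0 * (1.0 - remaining)

    # Hypothetical first-order rate constants (1/day) for one compound under
    # different redox conditions -- placeholders, not values from the paper.
    rate_constants = {"oxic": 0.6, "suboxic": 0.1, "anoxic": 0.05}

    residence_time = 3.0  # days, as in the column experiments described above
    for condition, k in rate_constants.items():
        print(f"{condition:>8}: {percent_removal(k, residence_time):5.1f}% removal")
    ```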

  19. Heating of tissues by microwaves: a model analysis.

    PubMed

    Foster, K R; Lozano-Nieto, A; Riu, P J; Ely, T S

    1998-01-01

    We consider the thermal response times for heating of tissue subject to nonionizing (microwave or infrared) radiation. The analysis is based on a dimensionless form of the bioheat equation. The thermal response is governed by two time constants: one (tau1) pertains to heat convection by blood flow, and is of the order of 20-30 min for physiologically normal perfusion rates; the second (tau2) characterizes heat conduction and varies as the square of a distance that characterizes the spatial extent of the heating. Two idealized cases are examined. The first is a tissue block with an insulated surface, subject to irradiation with an exponentially decreasing specific absorption rate, which models a large surface area of tissue exposed to microwaves. The second is a hemispherical region of tissue exposed at a spatially uniform specific absorption rate, which models localized exposure. In both cases, the steady-state temperature increase can be written as the product of the incident power density and an effective time constant tau(eff), which is defined for each geometry as an appropriate function of tau1 and tau2. In appropriate limits of the ratio of these time constants, the local temperature rise is dominated by conductive or convective heat transport. Predictions of the block model agree well with recent data for the thresholds for perception of warmth or pain from exposure to microwave energy. Using these concepts, we developed a thermal averaging time that might be used in standards for human exposure to microwave radiation, to limit the temperature rise in tissue from radiation by pulsed sources. We compare the ANSI exposure standards for microwaves and infrared laser radiation with respect to the maximal increase in tissue temperature that would be allowed at the maximal permissible exposures. A historical appendix presents the origin of the 6-min averaging time used in the microwave standard.

  20. The solar wind effect on cosmic rays and solar activity

    NASA Technical Reports Server (NTRS)

    Fujimoto, K.; Kojima, H.; Murakami, K.

    1985-01-01

    The relation of cosmic ray intensity to solar wind velocity is investigated using neutron monitor data from Kiel and Deep River. The analysis shows that the regression coefficient of the average intensity for a time interval on the corresponding average velocity is negative and that its absolute value increases monotonically with the averaging interval tau, from -0.5% per 100 km/s for tau = 1 day to -1.1% per 100 km/s for tau = 27 days. For tau greater than about 27 days the coefficient becomes almost constant, independent of the value of tau. The analysis also shows that this tau-dependence of the regression coefficient varies with solar activity.

  1. Paramecia Swim with a constant propulsion in Solutions of Varying Viscosity

    NASA Astrophysics Data System (ADS)

    Valles, James M., Jr.; Jung, Ilyong; Mickalide, Harry; Park, Hojin; Powers, Thomas

    2012-02-01

    Paramecia swim through the coordinated beating of the thousands of cilia covering their body. We have measured the swimming speed of populations of Paramecium caudatum in solutions of different viscosity, η, to see how their propulsion changes with increased drag. We find the average instantaneous speed v to decrease monotonically with increasing η. The product ηv is roughly constant over a factor of 7 change in viscosity, suggesting that paramecia swim at constant propulsion force. The distribution of swimming speeds is Gaussian. Its width appears proportional to the average speed, implying that both fast and slow swimmers exert a constant propulsion. We discuss the possibility that this behavior implies that the body cilia beat at constant force with varying viscosity.

  2. Comparison of Measured and Calculated Stresses in Built-up Beams

    NASA Technical Reports Server (NTRS)

    Levin, L Ross; Nelson, David H

    1946-01-01

    Web stresses and flange stresses were measured in three built-up beams: one of constant depth with flanges of constant cross-section, one linearly tapered in depth with flanges of constant cross section, and one linearly tapered in depth with tapered flanges. The measured stresses were compared with the calculated stresses obtained by the methods outlined in order to determine the degree of accuracy that may be expected from the stress analysis formulas. These comparisons indicated that the average measured stresses for all points in the central section of the beams did not exceed the average calculated stresses by more than 5 percent. It also indicated that the difference between average measured flange stresses and average calculated flange stresses on the net area and a fully effective web did not exceed 6.1 percent.

  3. Rationale choosing interval of a piecewise-constant approximation of input rate of non-stationary queue system

    NASA Astrophysics Data System (ADS)

    Korelin, Ivan A.; Porshnev, Sergey V.

    2018-01-01

    The paper demonstrates the possibility of calculating the characteristics of the flow of visitors passing through checkpoints at venues hosting mass events. The mathematical model is based on a non-stationary queueing system (NQS) in which the time dependence of the request input rate is described by a function chosen so that its properties resemble the real arrival-rate profiles of visitors entering a stadium for football matches. A piecewise-constant approximation of this function is used for statistical modelling of the NQS. The authors calculated how the queue length and the waiting time for service (time in queue) depend on time for different input-rate laws, as well as the time required to serve the entire queue and the number of visitors who have entered the stadium by the start of the match. The dependence of these macroscopic quantitative characteristics of the NQS on the number of piecewise-constant averaging sections of the input rate was also determined.
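
    A minimal sketch of the kind of model described above: a single checkpoint queue is simulated with a time-varying arrival rate approximated by piecewise-constant sections, and the queue length over time is tracked. The arrival-rate profile, service rate, and number of sections are illustrative assumptions, not the parameters used by the authors.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    T = 120.0                 # minutes before kickoff covered by the simulation (assumption)
    n_sections = 12           # number of piecewise-constant sections of the input rate
    edges = np.linspace(0.0, T, n_sections + 1)

    # Smooth "true" arrival rate (visitors per minute) peaking before kickoff (assumption)
    true_rate = lambda t: 40.0 * np.exp(-((t - 90.0) / 25.0) ** 2)
    # Piecewise-constant approximation: rate evaluated at the midpoint of each section
    section_rates = [true_rate(0.5 * (a + b)) for a, b in zip(edges[:-1], edges[1:])]

    service_rate = 35.0       # visitors per minute the checkpoint can process (assumption)
    dt = 0.1                  # time step, minutes
    queue, queue_history = 0.0, []

    for t in np.arange(0.0, T, dt):
        rate = section_rates[min(int(t / T * n_sections), n_sections - 1)]
        arrivals = rng.poisson(rate * dt)            # stochastic arrivals in this step
        served = min(queue + arrivals, service_rate * dt)   # serve up to capacity
        queue = queue + arrivals - served
        queue_history.append(queue)

    print("maximum queue length:", int(max(queue_history)))
    print("average queue length:", round(float(np.mean(queue_history)), 1))
    ```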

  4. Accounting for elite indoor 200 m sprint results.

    PubMed

    Usherwood, James R; Wilson, Alan M

    2006-03-22

    Times for indoor 200 m sprint races are notably worse than those for outdoor races. In addition, there is a considerable bias against competitors drawn in inside lanes (with smaller bend radii). Centripetal acceleration requirements increase average forces during sprinting around bends. These increased forces can be modulated by changes in duty factor (the proportion of stride the limb is in contact with the ground). If duty factor is increased to keep limb forces constant, and protraction time and distance travelled during stance are unchanging, bend-running speeds are reduced. Here, we use results from the 2004 Olympics and World Indoor Championships to show quantitatively that the decreased performances in indoor competition, and the bias by lane number, are consistent with this 'constant limb force' hypothesis. Even elite athletes appear constrained by limb forces.

  5. Determinants of translation speed are randomly distributed across transcripts resulting in a universal scaling of protein synthesis times

    NASA Astrophysics Data System (ADS)

    Sharma, Ajeet K.; Ahmed, Nabeel; O'Brien, Edward P.

    2018-02-01

    Ribosome profiling experiments have found greater than 100-fold variation in ribosome density along mRNA transcripts, indicating that individual codon elongation rates can vary to a similar degree. This wide range of elongation times, coupled with differences in codon usage between transcripts, suggests that the average codon translation-rate per gene can vary widely. Yet, ribosome run-off experiments have found that the average codon translation rate for different groups of transcripts in mouse stem cells is constant at 5.6 AA/s. How these seemingly contradictory results can be reconciled is the focus of this study. Here, we combine knowledge of the molecular factors shown to influence translation speed with genomic information from Escherichia coli, Saccharomyces cerevisiae and Homo sapiens to simulate the synthesis of cytosolic proteins in these organisms. The model recapitulates a near constant average translation rate, which we demonstrate arises because the molecular determinants of translation speed are distributed nearly randomly amongst most of the transcripts. Consequently, codon translation rates are also randomly distributed and fast-translating segments of a transcript are likely to be offset by equally probable slow-translating segments, resulting in similar average elongation rates for most transcripts. We also show that the codon usage bias does not significantly affect the near random distribution of codon translation rates because only about 10 % of the total transcripts in an organism have high codon usage bias while the rest have little to no bias. Analysis of Ribo-Seq data and an in vivo fluorescent assay supports these conclusions.
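
    The averaging argument made above, that fast- and slow-translating segments distributed roughly at random largely cancel within a transcript, can be illustrated with a small Monte Carlo sketch: per-codon elongation times are drawn independently from a broad distribution, and the resulting transcript-averaged rates cluster tightly around a common value. The distribution and transcript lengths are illustrative assumptions, not the genomic parameters used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n_transcripts = 2000
    lengths = rng.integers(150, 1500, size=n_transcripts)   # codons per transcript (assumption)

    # Broad lognormal distribution of individual codon translation times (seconds),
    # giving well over 100-fold variation between slow and fast codons (assumption).
    avg_rates = []
    for L in lengths:
        codon_times = rng.lognormal(mean=np.log(0.18), sigma=1.0, size=L)
        avg_rates.append(L / codon_times.sum())              # codons (AA) per second

    avg_rates = np.array(avg_rates)
    print(f"codon-time spread (last transcript): {codon_times.max() / codon_times.min():.0f}-fold")
    print(f"mean transcript-average rate: {avg_rates.mean():.2f} AA/s")
    print(f"std of transcript-average rates: {avg_rates.std():.2f} AA/s")
    ```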

  6. Cole-cole analysis and electrical conduction mechanism of N{sup +} implanted polycarbonate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chawla, Mahak; Shekhawat, Nidhi; Aggarwal, Sanjeev, E-mail: write2sa@gmail.com

    2014-05-14

    In this paper, we present the analysis of the dielectric properties (dielectric constant, dielectric loss, a.c. conductivity) and electrical properties (I–V characteristics) of pristine and nitrogen-ion-implanted polycarbonate. The samples of polycarbonate were implanted with 100 keV N⁺ ions with fluences ranging from 1 × 10^15 to 1 × 10^17 ions cm^-2. The dielectric measurements of these samples were performed in the frequency range of 100 kHz to 100 MHz. It has been observed that the dielectric constant decreases whereas the dielectric loss and a.c. conductivity increase with increasing ion fluence. An analysis of the real and imaginary parts of the dielectric permittivity has been elucidated using a Cole-Cole plot of the complex permittivity. With the help of the Cole-Cole plot, we determined the values of the static dielectric constant (ε_s), optical dielectric constant (ε_∞), spreading factor (α), average relaxation time (τ_0), and molecular relaxation time (τ). The I–V characteristics were studied using a Keithley 6517 electrometer. The electrical conduction behaviour of the pristine and implanted polycarbonate specimens has been explained using various models of conduction.
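
    A small sketch of the Cole-Cole analysis mentioned above: the standard Cole-Cole expression for the complex permittivity, ε*(ω) = ε_∞ + (ε_s - ε_∞)/(1 + (iωτ_0)^(1-α)), is evaluated over the measured frequency range so that ε' and ε'' can be inspected or plotted as a Cole-Cole arc. The parameter values below are illustrative, not those extracted for the implanted polycarbonate, and conventions for the sign of the exponent on α vary in the literature.

    ```python
    import numpy as np

    def cole_cole(omega, eps_s, eps_inf, tau0, alpha):
        """Cole-Cole complex permittivity:
        eps*(w) = eps_inf + (eps_s - eps_inf) / (1 + (1j*w*tau0)**(1 - alpha)).
        alpha = 0 reduces to a single Debye relaxation; larger alpha corresponds to a
        broader distribution of relaxation times."""
        return eps_inf + (eps_s - eps_inf) / (1.0 + (1j * omega * tau0) ** (1.0 - alpha))

    # Illustrative parameter values (not those reported for the implanted polycarbonate)
    eps_s, eps_inf, tau0, alpha = 3.2, 2.6, 5e-9, 0.15

    freq = np.logspace(5, 8, 200)            # 100 kHz to 100 MHz, as in the measurements
    omega = 2.0 * np.pi * freq
    eps = cole_cole(omega, eps_s, eps_inf, tau0, alpha)

    eps_real, eps_imag = eps.real, -eps.imag  # eps' and eps'' for a Cole-Cole plot
    print("eps' at 100 kHz:", round(eps_real[0], 3))
    print("eps'' peak     :", round(eps_imag.max(), 3),
          "at", f"{freq[eps_imag.argmax()]:.2e}", "Hz")
    ```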

  7. Two-Photon Coherent State Light - Its Generation and Potential Applications

    DTIC Science & Technology

    1984-05-31

    photodetection statistics are sought. Although we shall neglect internal time constant and noise effects, which are present in real detectors, our direct...four times the average number of received photons. Furthermore, the observed fluctuation behavior for any laser, light-emitting diode, or ordinary...mode d. In a real experiment, this is not quite so because of the assumptions made in arriving at (11.3). In particular, the pump modes B1 and B2 cannot

  8. Motion of kinesin in a viscoelastic medium

    NASA Astrophysics Data System (ADS)

    Knoops, Gert; Vanderzande, Carlo

    2018-05-01

    Kinesin is a molecular motor that transports cargo along microtubules. The results of many in vitro experiments on kinesin-1 are described by kinetic models in which one transition corresponds to the forward motion and subsequent binding of the tethered motor head. We argue that in a viscoelastic medium like the cytosol of a cell this step is not Markov and has to be described by a nonexponential waiting time distribution. We introduce a semi-Markov kinetic model for kinesin that takes this effect into account. We calculate, for arbitrary waiting time distributions, the moment generating function of the number of steps made, and determine from this the average velocity and the diffusion constant of the motor. We illustrate our results for the case of a waiting time distribution that is Weibull. We find that for realistic parameter values, viscoelasticity decreases the velocity and the diffusion constant, but increases the randomness (or Fano factor).
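
    A small sketch of the renewal-process picture underlying the model above: if each forward step of size d completes after a waiting time drawn from a distribution with mean ⟨τ⟩ and variance Var(τ), standard renewal results give the long-time velocity v = d/⟨τ⟩, diffusion constant D = d²·Var(τ)/(2⟨τ⟩³), and randomness parameter r = Var(τ)/⟨τ⟩². The sketch simply evaluates these formulas for an exponential and a Weibull waiting-time distribution of equal mean; the step size and Weibull shape are arbitrary illustrative choices and do not reproduce the paper's kinesin model.

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(3)
    d = 8.0e-9           # step size along the microtubule, ~8 nm
    mean_wait = 0.01     # mean waiting time per step, seconds (assumption)

    def transport_coefficients(waits, d):
        """Long-time velocity, diffusion constant and randomness for a renewal walk."""
        m, var = waits.mean(), waits.var()
        v = d / m
        D = d ** 2 * var / (2.0 * m ** 3)
        randomness = var / m ** 2            # Fano-factor-like randomness parameter
        return v, D, randomness

    # Exponential waiting times (memoryless, classic Markov kinetics)
    exp_waits = rng.exponential(mean_wait, size=500_000)

    # Weibull waiting times with shape k < 1 (broader than exponential), same mean
    k = 0.6
    scale = mean_wait / math.gamma(1.0 + 1.0 / k)
    weib_waits = scale * rng.weibull(k, size=500_000)

    for name, waits in [("exponential", exp_waits), ("Weibull k=0.6", weib_waits)]:
        v, D, r = transport_coefficients(waits, d)
        print(f"{name:>14}: v = {v*1e9:6.1f} nm/s, "
              f"D = {D*1e18:8.1f} nm^2/s, randomness = {r:.2f}")
    ```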

  9. Simulating X-ray bursts during a transient accretion event

    NASA Astrophysics Data System (ADS)

    Johnston, Zac; Heger, Alexander; Galloway, Duncan K.

    2018-06-01

    Modelling of thermonuclear X-ray bursts on accreting neutron stars has to date focused on stable accretion rates. However, bursts are also observed during episodes of transient accretion. During such events, the accretion rate can evolve significantly between bursts, and this regime provides a unique test for burst models. The accretion-powered millisecond pulsar SAX J1808.4-3658 exhibits accretion outbursts every 2-3 yr. During the well-sampled month-long outburst of 2002 October, four helium-rich X-ray bursts were observed. Using this event as a test case, we present the first multizone simulations of X-ray bursts under a time-dependent accretion rate. We investigate the effect of using a time-dependent accretion rate in comparison to constant, averaged rates. Initial results suggest that using a constant, average accretion rate between bursts may underestimate the recurrence time when the accretion rate is decreasing, and overestimate it when the accretion rate is increasing. Our model, with an accreted hydrogen fraction of X = 0.44 and a CNO metallicity of Z_CNO = 0.02, reproduces the observed burst arrival times and fluences with root mean square (rms) errors of 2.8 h and 0.11 × 10^-6 erg cm^-2, respectively. Our results support previous modelling that predicted two unobserved bursts and indicate that additional bursts were also missed by observations.

  10. Motional timescale predictions by molecular dynamics simulations: Case study using proline and hydroxyproline sidechain dynamics

    PubMed Central

    Aliev, Abil E; Kulke, Martin; Khaneja, Harmeet S; Chudasama, Vijay; Sheppard, Tom D; Lanigan, Rachel M

    2014-01-01

    We propose a new approach for force field optimization which aims at reproducing dynamics characteristics using biomolecular MD simulations, in addition to improved prediction of motionally averaged structural properties available from experiment. As the source of experimental data for the dynamics fitting, we use 13C NMR spin-lattice relaxation times T1 of backbone and sidechain carbons, which allow us to determine correlation times of both overall molecular and intramolecular motions. For structural fitting, we use motionally averaged experimental values of NMR J couplings. The proline residue and its derivative 4-hydroxyproline, with relatively simple cyclic structure and sidechain dynamics, were chosen for the assessment of the new approach in this work. Initially, grid search and simplexed MD simulations identified a large number of parameter sets which fit experimental J couplings equally well. Using the Arrhenius-type relationship between the force constant and the correlation time, the available MD data for a series of parameter sets were analyzed to predict the value of the force constant that best reproduces the experimental timescale of the sidechain dynamics. Verification of the new force field (termed AMBER99SB-ILDNP) against NMR J couplings and correlation times showed consistent and significant improvements compared to the original force field in reproducing both structural and dynamics properties. The results suggest that matching experimental timescales of motions together with motionally averaged characteristics is a valid approach for force field parameter optimization. Such a comprehensive approach is not restricted to cyclic residues and can be extended to other amino acid residues, as well as to the backbone. Proteins 2014; 82:195–215. © 2013 Wiley Periodicals, Inc. PMID:23818175

  11. Heat transfer characteristics of an emergent strand

    NASA Technical Reports Server (NTRS)

    Simon, W. E.; Witte, L. C.; Hedgcoxe, P. G.

    1974-01-01

    A mathematical model was developed to describe the heat transfer characteristics of a hot strand emerging into a surrounding coolant. A stable strand of constant efflux velocity is analyzed, with a constant (average) heat transfer coefficient on the sides and leading surface of the strand. After developing a suitable governing equation to provide an adequate description of the physical system, the dimensionless governing equation is solved with Laplace transform methods. The solution yields the temperature within the strand as a function of axial distance and time. Generalized results for a wide range of parameters are presented, and the relationship of the results and experimental observations is discussed.

  12. Femtosecond soliton source with fast and broad spectral tunability.

    PubMed

    Masip, Martin E; Rieznik, A A; König, Pablo G; Grosz, Diego F; Bragas, Andrea V; Martinez, Oscar E

    2009-03-15

    We present a complete set of measurements and numerical simulations of a femtosecond soliton source with fast and broad spectral tunability and nearly constant pulse width and average power. Solitons generated in a photonic crystal fiber, at the low-power coupling regime, can be tuned in a broad range of wavelengths, from 850 to 1200 nm using the input power as the control parameter. These solitons keep almost constant time duration (approximately 40 fs) and spectral widths (approximately 20 nm) over the entire measured spectra regardless of input power. Our numerical simulations agree well with measurements and predict a wide working wavelength range and robustness to input parameters.

  13. Chemical weathering rates of a soil chronosequence on granitic alluvium: I. Quantification of mineralogical and surface area changes and calculation of primary silicate reaction rates

    USGS Publications Warehouse

    White, A.F.; Blum, A.E.; Schulz, M.S.; Bullen, T.D.; Harden, J.W.; Peterson, M.L.

    1996-01-01

    Mineral weathering rates are determined for a series of soils ranging in age from 0.2-3000 Ky developed on alluvial terraces near Merced in the Central Valley of California. Mineralogical and elemental abundances exhibit time-dependent trends documenting the chemical evolution of granitic sand to residual kaolinite and quartz. Mineral losses with time occur in the order: hornblende > plagioclase > K-feldspar. Maximum volume decreases of >50% occur in the older soils. BET surface areas of the bulk soils increase with age, as do specific surface areas of aluminosilicate mineral fractions such as plagioclase, which increases from 0.4-1.5 m^2 g^-1 over 600 Ky. Quartz surface areas are lower and change less with time (0.11-0.23 m^2 g^-1). BET surface areas correspond to increasing external surface roughness (roughness factor 10-600) and relatively constant internal surface area (approximately 1.3 m^2 g^-1). SEM observations confirm both surface pitting and development of internal porosity. A numerical model describes aluminosilicate dissolution rates as a function of changes in residual mineral abundance, grain size distributions, and mineral surface areas with time. A simple geometric treatment, assuming spherical grains and no surface roughness, predicts average dissolution rates (plagioclase, 10^-17.4; K-feldspar, 10^-17.8; and hornblende, 10^-17.5 mol cm^-2 s^-1) that are constant with time and comparable to previous estimates of soil weathering. Average rates, based on BET surface area measurements and variable surface roughnesses, are much slower (plagioclase, 10^-19.9; K-feldspar, 10^-20.5; and hornblende 10^-20.1 mol cm^-2 s^-1). Rates for individual soil horizons decrease by a factor of 10^1.5 over 3000 Ky, indicating that the surface reactivities of minerals decrease as the physical surface areas increase. Rate constants based on BET estimates for the Merced soils are factors of 10^3-10^4 slower than reported experimental dissolution rates determined from freshly prepared silicates with low surface roughness (roughness factor <10). This study demonstrates that the utility of experimental rate constants to predict weathering in soils is limited without consideration of variable surface areas and processes that control the evolution of surface reactivity with time.

  14. A Markovian model of evolving world input-output network

    PubMed Central

    Isacchini, Giulio

    2017-01-01

    The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains, such as mixing time, Kemeny constant, steady state probabilities, and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of the structural power of the economies, which is comparable to GDP shares of economies as the traditional index of economic welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money. PMID:29065145
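
    A small sketch of two of the Markov-chain quantities used above: for an ergodic transition matrix P, the stationary distribution π, the Kemeny constant (in one common convention, K = Σ_{i≥2} 1/(1-λ_i) over the non-unit eigenvalues of P), and a spectral proxy for the mixing time, 1/(1-|λ₂|), can all be obtained from an eigendecomposition. The 3-state matrix below is a toy example, not data from the world input-output database.

    ```python
    import numpy as np

    # Toy 3-sector transition matrix (rows sum to 1) -- illustrative, not real IO data
    P = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])

    eigvals, eigvecs = np.linalg.eig(P.T)

    # Stationary distribution: left eigenvector of P for eigenvalue 1, normalized
    idx = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, idx])
    pi = pi / pi.sum()

    # Kemeny constant: sum of 1/(1 - lambda_i) over eigenvalues other than the unit one
    lam = np.linalg.eigvals(P)
    lam_rest = np.delete(lam, np.argmin(np.abs(lam - 1.0)))
    kemeny = np.sum(1.0 / (1.0 - lam_rest)).real

    # Spectral proxy for the mixing time: relaxation time 1 / (1 - |lambda_2|)
    lam_sorted = sorted(np.abs(lam), reverse=True)
    relaxation_time = 1.0 / (1.0 - lam_sorted[1])

    print("stationary distribution:", np.round(pi, 3))
    print("Kemeny constant        :", round(kemeny, 3))
    print("relaxation (mixing) time proxy:", round(relaxation_time, 3))
    ```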

  15. Filter-based multiscale entropy analysis of complex physiological time series.

    PubMed

    Xu, Yuesheng; Zhao, Liang

    2013-08-01

    Multiscale entropy (MSE) has been widely and successfully used in analyzing the complexity of physiological time series. We reinterpret the averaging process in MSE as filtering a time series by a filter of a piecewise constant type. From this viewpoint, we introduce filter-based multiscale entropy (FME), which filters a time series to generate multiple frequency components, and then we compute the blockwise entropy of the resulting components. By choosing filters adapted to the feature of a given time series, FME is able to better capture its multiscale information and to provide more flexibility for studying its complexity. Motivated by the heart rate turbulence theory, which suggests that the human heartbeat interval time series can be described in piecewise linear patterns, we propose piecewise linear filter multiscale entropy (PLFME) for the complexity analysis of the time series. Numerical results from PLFME are more robust to data of various lengths than those from MSE. The numerical performance of the adaptive piecewise constant filter multiscale entropy without prior information is comparable to that of PLFME, whose design takes prior information into account.
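
    A compact sketch of the standard multiscale entropy procedure reinterpreted above as piecewise-constant filtering: the series is coarse-grained by averaging non-overlapping windows of length equal to the scale (the piecewise-constant filter), and the sample entropy of each coarse-grained series is computed. The parameters m and r and the white-noise test signal are conventional illustrative choices, not those of the paper.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Sample entropy SampEn(m, r) with tolerance r = r_factor * std(x)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()
        n = len(x)

        def count_matches(length):
            # Count template pairs of the given length whose Chebyshev distance <= r
            templates = np.array([x[i:i + length] for i in range(n - length)])
            count = 0
            for i in range(len(templates)):
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(dist <= r)
            return count

        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def coarse_grain(x, scale):
        """Piecewise-constant (averaging) filter: mean of non-overlapping windows."""
        n = (len(x) // scale) * scale
        return np.asarray(x[:n]).reshape(-1, scale).mean(axis=1)

    rng = np.random.default_rng(0)
    signal = rng.standard_normal(3000)          # white noise as an illustrative test signal

    for scale in (1, 2, 4, 8):
        mse = sample_entropy(coarse_grain(signal, scale))
        print(f"scale {scale}: sample entropy = {mse:.3f}")
    ```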

  16. Constant Stress Drop Fits Earthquake Surface Slip-Length Data

    NASA Astrophysics Data System (ADS)

    Shaw, B. E.

    2011-12-01

    Slip at the surface of the Earth provides a direct window into the earthquake source. A longstanding controversy surrounds the scaling of average surface slip with rupture length, which shows the puzzling feature of continuing to increase with rupture length for lengths many times the seismogenic width. Here we show that a more careful treatment of how ruptures transition from small circular ruptures to large rectangular ruptures combined with an assumption of constant stress drop provides a new scaling law for slip versus length which (1) does an excellent job fitting the data, (2) gives an explanation for the large crossover lengthscale at which slip begins to saturate, and (3) supports constant stress drop scaling which matches that seen for small earthquakes. We additionally discuss how the new scaling can be usefully applied to seismic hazard estimates.

  17. Seventeen-year trends in spring and autumn phenophases of Betula pubescens in a boreal environment.

    PubMed

    Poikolainen, Jarmo; Tolvanen, Anne; Karhu, Jouni; Kubin, Eero

    2016-08-01

    Trends in the timing of spring and autumn phenophases of Betula pubescens were investigated in the southern, middle, and northern boreal zones in Finland. The field observations were carried out at 21 sites in the Finnish National Phenological Network in 1997-2013. The effective temperature sum of the thermal growth period, i.e. the sum of the positive differences between diurnal mean temperatures and 5 °C (ETS1), increased annually on average by 6-7 degree day units. Timing of bud burst remained constant in the southern and middle boreal zones but advanced annually by 0.5 day in the northern boreal zone. The effective temperature sum at bud burst (ETS2) showed no trend in the southern and middle boreal zones, whereas ETS2 increased on average from 20-30 to 50 degree day units in the northern boreal zone, almost to the same level as in the other zones. Increase in ETS2 indicates that the trees did not start their growth in very early spring despite warmer spring temperatures. The timing of leaf colouring and leaf fall remained almost constant in the southern boreal zones, whereas these advanced annually by 0.3 and 0.6 day in the middle boreal zone and by 0.6 and 0.4 day in the northern boreal zone, respectively. The duration of the growth period remained constant in all boreal zones. The results indicate high buffering capacity of B. pubescens against temperature changes. The study also shows the importance of the duration of phenological studies: some trends in spring phenophases had levelled out, while new trends in autumn phases had emerged after earlier studies in the same network for a shorter observation period.
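
    For readers unfamiliar with the degree-day quantity used above, the sketch below computes an effective temperature sum as the accumulated positive excess of the diurnal mean temperature over the +5 °C threshold. The one-year temperature series is synthetic and purely illustrative; the threshold follows the definition quoted in the abstract.

    ```python
    import numpy as np

    def effective_temperature_sum(daily_mean_temps, threshold=5.0):
        """Sum of positive differences between diurnal mean temperature and the
        threshold (degree-day units), as in the ETS definition quoted above."""
        temps = np.asarray(daily_mean_temps, dtype=float)
        return np.sum(np.clip(temps - threshold, 0.0, None))

    # Synthetic one-year daily mean temperature curve for a boreal site (assumption)
    days = np.arange(365)
    daily_means = 2.0 + 14.0 * np.sin(2 * np.pi * (days - 110) / 365)

    ets = effective_temperature_sum(daily_means)
    print(f"effective temperature sum: {ets:.0f} degree days")
    ```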

  18. Interfacial charge separation and recombination in InP and quasi-type II InP/CdS core/shell quantum dot-molecular acceptor complexes.

    PubMed

    Wu, Kaifeng; Song, Nianhui; Liu, Zheng; Zhu, Haiming; Rodríguez-Córdoba, William; Lian, Tianquan

    2013-08-15

    Recent studies of group II-VI colloidal semiconductor heterostructures, such as CdSe/CdS core/shell quantum dots (QDs) or dot-in-rod nanorods, show that type II and quasi-type II band alignment can facilitate electron transfer and slow down charge recombination in QD-molecular electron acceptor complexes. To explore the general applicability of this wave function engineering approach for controlling charge transfer properties, we investigate exciton relaxation and dissociation dynamics in InP (a group III-V semiconductor) and InP/CdS core/shell (a heterostructure between group III-V and II-VI semiconductors) QDs by transient absorption spectroscopy. We show that InP/CdS QDs exhibit a quasi-type II band alignment with the 1S electron delocalized throughout the core and shell and the 1S hole confined in the InP core. In InP-methylviologen (MV(2+)) complexes, excitons in the QD can be dissociated by ultrafast electron transfer to MV(2+) from the 1S electron level (with an average time constant of 11.4 ps) as well as 1P and higher electron levels (with a time constant of 0.39 ps), which is followed by charge recombination to regenerate the complex in its ground state (with an average time constant of 47.1 ns). In comparison, InP/CdS-MV(2+) complexes show similar ultrafast charge separation and 5-fold slower charge recombination rates, consistent with the quasi-type II band alignment in these heterostructures. This result demonstrates that wave function engineering in nanoheterostructures of group III-V and II-VI semiconductors provides a promising approach for optimizing their light harvesting and charge separation for solar energy conversion applications.

  19. An Inverse Square Law Variation for Hubble's Constant

    NASA Astrophysics Data System (ADS)

    Day, Orville W., Jr.

    1999-11-01

    The solution to Einstein's gravitational field equations is examined, using a Robertson-Walker metric with positive curvature, when Hubble's parameter, H_0, is taken to be a constant divided by R^2. R is the cosmic scale factor for the universe treated as a three-dimensional hypersphere in a four-dimensional Euclidean space. This solution produces a self-energy of the universe, W^(0)_self, proportional to the square of the total mass times the universal gravitational constant divided by the cosmic scale factor, R. This result is totally analogous to the self-energy of the electromagnetic field of a charged particle, W^(0)_self = ke^2/2r, where the total charge e is squared, k is the universal electric constant, and r is the scale factor, usually identified as the radius of the particle. It is shown that this choice for H_0 leads to physically meaningful results for the average mass density and pressure, and a deceleration parameter q_0 = 1.

  20. First passage times for a tracer particle in single file diffusion and fractional Brownian motion.

    PubMed

    Sanders, Lloyd P; Ambjörnsson, Tobias

    2012-05-07

    We investigate the full functional form of the first passage time density (FPTD) of a tracer particle in a single-file diffusion (SFD) system whose population is (i) homogeneous, i.e., all particles having the same diffusion constant, and (ii) heterogeneous, with diffusion constants drawn from a heavy-tailed power-law distribution. In parallel, the full FPTD for fractional Brownian motion [fBm, defined by the Hurst parameter H ∈ (0, 1)] is studied, of interest here as fBm and SFD systems belong to the same universality class. Extensive stochastic (non-Markovian) SFD and fBm simulations are performed and compared to two analytical Markovian techniques: the method of images approximation (MIA) and the Willemski-Fixman approximation (WFA). We find that the MIA cannot approximate well any temporal scale of the SFD FPTD. Our exact inversion of the Willemski-Fixman integral equation captures the long-time power-law exponent, when H ≥ 1/3, as predicted by Molchan [Commun. Math. Phys. 205, 97 (1999)] for fBm. When H < 1/3, which includes homogeneous SFD (H = 1/4) and heterogeneous SFD (H < 1/4), the WFA fails to agree with any temporal scale of the simulations and Molchan's long-time result. SFD systems are compared to their fBm counterparts; in the homogeneous system both scaled FPTDs agree on all temporal scales, including the result by Molchan, thus affirming that SFD and fBm dynamics belong to the same universality class. In the heterogeneous case, SFD and fBm results for heterogeneity-averaged FPTDs agree in the asymptotic time limit. The non-averaged heterogeneous SFD systems display a lack of self-averaging. An exponential with a power-law argument, multiplied by a power-law pre-factor, is shown to describe the FPTD well for all times for homogeneous SFD and sub-diffusive fBm systems.

  1. Periodical capacity setting methods for make-to-order multi-machine production systems

    PubMed Central

    Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert

    2014-01-01

    The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times, with the aim of improving service level and tardiness. These methods are developed as decision support for situations where capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, whereby all are based on the cumulated capacity demand at each machine. In a simulation study, the methods' impact on service level and tardiness is compared to that of a constant provided capacity for a single-machine and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on the processing time and customer required lead time distributions perform best. The results found in this paper can help practitioners make efficient use of their flexible capacity. PMID:27226649

  2. Using the Mean Shift Algorithm to Make Post Hoc Improvements to the Accuracy of Eye Tracking Data Based on Probable Fixation Locations

    DTIC Science & Technology

    2010-08-01

    astigmatism and other sources, and stay constant from time to time (LC Technologies, 2000). Systematic errors can sometimes reach many degrees of visual angle...Taking the average of all disparities would mean treating each as equally important regardless of whether they are from correct or incorrect mappings. In...likely stop somewhere near the centroid because the large hM basically treats every point equally (or nearly equally if using the multivariate

  3. Piezoelectric and Electrostrictive Materials for Transducer Applications. Volume 1

    DTIC Science & Technology

    1990-01-31

    by non-stoichiometry or by doping with aliovalent ions. For doped materials the aging is very similar to that in PLZT, again affecting the dispersive...during aging looks similar to that in MnO-doped PMN-PT. Figure 8 shows the Cole-Cole plot for different aging times in the quenched sample. ... parameters show that the angle of tilt of the arc from the real axis α and the average time constant τ decrease during aging. The Cole-Cole plot become

  4. Graph transformation method for calculating waiting times in Markov chains.

    PubMed

    Trygubenko, Semen A; Wales, David J

    2006-06-21

    We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
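
    The graph transformation method itself is the paper's exact, noniterative scheme; as a point of comparison, for a small chain the mean first-passage time into a target set can also be obtained from the standard linear system (I - Q)t = 1 over the non-target states, where Q is the transition matrix restricted to those states. The sketch below shows only that textbook route, on a made-up four-state chain, and can serve as a cross-check for an implementation of the approach described above.

```python
# Sketch: mean first-passage time (MFPT) into a target set for a small
# discrete-time Markov chain, via the standard linear system (I - Q) t = 1,
# where Q is the transition matrix restricted to non-target states.
# This is a cross-check route, not the graph transformation method itself;
# the 4-state chain below is a made-up example.
import numpy as np

P = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.1, 0.6, 0.2, 0.1],
              [0.0, 0.2, 0.5, 0.3],
              [0.2, 0.0, 0.3, 0.5]])   # row-stochastic transition matrix
assert np.allclose(P.sum(axis=1), 1.0)

target = {3}                            # absorbing target set
trans = [s for s in range(P.shape[0]) if s not in target]

Q = P[np.ix_(trans, trans)]             # dynamics among non-target states
t = np.linalg.solve(np.eye(len(trans)) - Q, np.ones(len(trans)))

for s, mfpt in zip(trans, t):
    print(f"MFPT from state {s} into {target}: {mfpt:.3f} steps")
```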

  5. Absolute Timing of the Crab Pulsar with RXTE

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.

    2004-01-01

    We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.

  6. Accounting for elite indoor 200 m sprint results

    PubMed Central

    Usherwood, James R; Wilson, Alan M

    2005-01-01

    Times for indoor 200 m sprint races are notably worse than those for outdoor races. In addition, there is a considerable bias against competitors drawn in inside lanes (with smaller bend radii). Centripetal acceleration requirements increase average forces during sprinting around bends. These increased forces can be modulated by changes in duty factor (the proportion of stride the limb is in contact with the ground). If duty factor is increased to keep limb forces constant, and protraction time and distance travelled during stance are unchanging, bend-running speeds are reduced. Here, we use results from the 2004 Olympics and World Indoor Championships to show quantitatively that the decreased performances in indoor competition, and the bias by lane number, are consistent with this ‘constant limb force’ hypothesis. Even elite athletes appear constrained by limb forces. PMID:17148323

  7. Minimally invasive plate osteosynthesis technique for displaced midshaft clavicular fracture using the clavicle reductor.

    PubMed

    Zhang, Tao; Chen, Wei; Sun, Jiayuan; Zhang, Qi; Zhang, Yingze

    2017-08-01

    This study aims to introduce a self-designed clavicle reductor and to test the effectiveness of an alternative minimally invasive plate osteosynthesis (MIPO) technique for displaced midshaft clavicular fractures (DMCFs) using this self-designed clavicle reductor. From October 2012 to February 2013, 27 male patients who suffered unilateral displaced midshaft clavicular fractures were included in our study. Patients were treated by the MIPO technique with the application of our self-designed clavicle reductor and followed up regularly. The Constant-Murley score was used to assess functional outcomes at one year's follow-up. The average follow-up time for the 27 patients was 15.8 months (range, 13-18 months). The average age of all patients was 32.6 years (range, 21 to 48). According to the OTA classification system, 12 cases were simple fractures (15-B1), ten cases were wedge fractures (15-B2) and five cases were comminuted fractures (15-B3). With the application of the clavicle reductor, the MIPO technique could be performed without difficulty in all 27 cases. The average operative duration was 48.1 minutes (range, 35-65 minutes) and the average fluoroscopy time was 12.8 seconds (range, 7 to 22 seconds). All 27 cases healed four to six months post-operatively. The average Constant-Murley score of the 27 patients was 92.7 ± 5.88 (range, 80 to 100). No complications were noted. The self-designed clavicle reductor can effectively pave the way for the application of the MIPO technique in the treatment of DMCFs. The MIPO technique with a locking reconstruction plate is a feasible and worthwhile alternative for DMCFs.

  8. Prospects for an Improved Measurement of Experimental Limit on G-dot

    NASA Technical Reports Server (NTRS)

    Sanders, Alvin J.

    2003-01-01

    The orbital motion of an ultra-drag-free satellite, such as the large test body of the SEE (Satellite Energy Exchange) satellite, known as the "Shepherd," may possibly provide the best test for time variation of the gravitational constant G at the level of parts in 10^14. Scarcely anything could be more significant scientifically than the incontestable discovery that a fundamental "constant" of Nature is not constant. A finding of non-zero (G-dot)/G would clearly mark the boundaries where general relativity is valid, and specify the onset of new physics. The requirements for measuring G-dot at the level proposed by SEE will require great care in treating perturbation forces. In the present paper we concentrate on the methods for dealing with the gravitational field due to possible large manufacturing defects in the SEE observatory. We find that, with adequate modeling of the perturbation forces and cancellation methods, the effective time-averaged acceleration on the SEE Shepherd will be approx. 10^-18 g (10^-17 m/s^2).

  9. Real time microcontroller implementation of an adaptive myoelectric filter.

    PubMed

    Bagwell, P J; Chappell, P H

    1995-03-01

    This paper describes a real-time digital adaptive filter for processing myoelectric signals. The filter time constant is automatically selected by the adaptation algorithm, giving a significant improvement over linear filters for estimating the muscle force and controlling a prosthetic device. Interference from mains sources often produces problems for myoelectric processing, so 50 Hz and all harmonic frequencies are reduced by an averaging filter and a differential process. This makes practical electrode placement and contact less critical and less time-consuming. An economical real-time implementation is essential for a prosthetic controller, and this is achieved using an Intel 80C196KC microcontroller.
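
    The abstract states that 50 Hz mains interference and all its harmonics are reduced by an averaging filter and a differential process. One generic way an averaging filter achieves this is to average over exactly one mains period (fs/50 samples), since a boxcar of that length has spectral nulls at 50 Hz and every harmonic. The sketch below demonstrates only that generic idea with an assumed 1 kHz sampling rate and synthetic signals; it is not the paper's 80C196KC implementation or its adaptation algorithm.

```python
# Sketch: a moving-average (boxcar) filter whose window spans exactly one
# 50 Hz mains period has frequency-response nulls at 50 Hz and every harmonic.
# Sampling rate and signals are assumed for illustration; this is the generic
# idea only, not the paper's microcontroller implementation.
import numpy as np

fs = 1000                      # assumed sampling rate [Hz]
N = fs // 50                   # window of one mains period -> 20 samples
t = np.arange(0, 2.0, 1.0 / fs)

rng = np.random.default_rng(1)
emg_like = rng.standard_normal(t.size)          # stand-in for broadband EMG
mains = 0.8 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
x = emg_like + mains

# Moving average over one mains period (zeros at 50, 100, 150, ... Hz).
kernel = np.ones(N) / N
y = np.convolve(x, kernel, mode="same")
print("signal std before/after averaging:", round(x.std(), 3), round(y.std(), 3))

# Verify the nulls from the filter's frequency response.
freqs = np.fft.rfftfreq(4096, 1.0 / fs)
H = np.abs(np.fft.rfft(kernel, 4096))
for f0 in (50, 100, 150):
    k = np.argmin(np.abs(freqs - f0))
    print(f"|H({freqs[k]:.1f} Hz)| = {H[k]:.4f}")   # close to zero
```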

  10. Optimal Configurations for Rotating Spacecraft Formations

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Hall, Christopher D.

    2000-01-01

    In this paper a new class of formations that maintain a constant shape as viewed from the Earth is introduced. An algorithm is developed to place n spacecraft in a constant shape formation spaced equally in time using the classical orbital elements. To first order, the dimensions of the formation are shown to be simple functions of orbit eccentricity and inclination. The performance of the formation is investigated over a Keplerian orbit using a performance measure based on a weighted average of the angular separations between spacecraft in formation. Analytic approximations are developed that yield optimum configurations for different values of n. The analytic approximations are shown to be in excellent agreement with the exact solutions.

  11. Resistance exercise countermeasures for space flight: implications of training specificity

    NASA Technical Reports Server (NTRS)

    Bamman, M. M.; Caruso, J. F.

    2000-01-01

    While resistance exercise should be a logical choice for prevention of strength loss during unloading, the principle of training specificity cannot be overlooked. Our purpose was to explore training specificity in describing the effect of our constant load exercise countermeasure on isokinetic strength performance. Twelve healthy men (mean +/- SD: 28.0 +/- 5.2 years, 179.4 +/- 3.9 cm, 77.5 +/- 13.6 kg) were randomly assigned to no exercise or resistance exercise (REX) during 14 days of bed rest. REX performed five sets of leg press exercise to volitional fatigue (6-10 repetitions) every other day. Unilateral isokinetic concentric-eccentric knee extension testing performed before and on day 15 prior to reambulation included torque-velocity and power-velocity relationships at four velocities (0.52, 1.75, 2.97, and 4.19 rad s^-1), the torque-position relationship, and contractile work capacity (10 repetitions at 1.05 rad s^-1). A 2 (group) x 2 (time) ANOVA revealed no group x time interactions; thus, groups were combined. Across velocities, angle-specific torque fell 18% and average power fell 20% (p < 0.05). No velocity x time or mode (concentric/eccentric) x time interactions were noted. Torque x position decreased on average 24% (p < 0.05). Total contractile work dropped 27% (p < 0.05). Results indicate bed rest induces rapid and marked reductions in strength and our constant load resistance training protocol did not prevent isokinetic strength losses. Differences between closed-chain training and open-chain testing may explain the lack of protection.

  12. Voicing produced by a constant velocity lung source

    PubMed Central

    Howe, M. S.; McGowan, R. S.

    2013-01-01

    An investigation is made of the influence of subglottal boundary conditions on the prediction of voiced sounds. It is generally assumed in mathematical models of voicing that vibrations of the vocal folds are maintained by a constant subglottal mean pressure p_I, whereas voicing is actually initiated by contraction of the chest cavity until the subglottal pressure becomes large enough to separate the vocal folds. The problem is reformulated to determine voicing characteristics in terms of a prescribed volumetric flow rate Q_o of air from the lungs—the evolution of the resulting time-dependent subglottal mean pressure p̄(t) is then governed by glottal mechanics, the aeroacoustics of the vocal tract, and the influence of continued contraction of the lungs. The new problem is analyzed in detail for an idealized mechanical vocal system that permits precise specification of all boundary conditions. Predictions of the glottal volume velocity pulse shape are found to be in good general agreement with the traditional constant-p_I theory when p_I is set equal to the time-averaged value of p̄(t). But, in all cases examined the constant-p_I approximation yields values of the mean flow rate Q_o and sound pressure levels that are smaller by as much as 10%. PMID:23556600

  13. Fate of a mutation in a fluctuating environment

    PubMed Central

    Cvijović, Ivana; Good, Benjamin H.; Jerison, Elizabeth R.; Desai, Michael M.

    2015-01-01

    Natural environments are never truly constant, but the evolutionary implications of temporally varying selection pressures remain poorly understood. Here we investigate how the fate of a new mutation in a fluctuating environment depends on the dynamics of environmental variation and on the selective pressures in each condition. We find that even when a mutation experiences many environmental epochs before fixing or going extinct, its fate is not necessarily determined by its time-averaged selective effect. Instead, environmental variability reduces the efficiency of selection across a broad parameter regime, rendering selection unable to distinguish between mutations that are substantially beneficial and substantially deleterious on average. Temporal fluctuations can also dramatically increase fixation probabilities, often making the details of these fluctuations more important than the average selection pressures acting on each new mutation. For example, mutations that result in a trade-off between conditions but are strongly deleterious on average can nevertheless be more likely to fix than mutations that are always neutral or beneficial. These effects can have important implications for patterns of molecular evolution in variable environments, and they suggest that it may often be difficult for populations to maintain specialist traits, even when their loss leads to a decline in time-averaged fitness. PMID:26305937

  14. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    When investing in stocks, investors face risk because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. A portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses mean-variance optimization of a stock portfolio when the mean and volatility are not constant, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization process is performed by using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyse some Islamic stocks in Indonesia. The expected result is to obtain the proportion of investment in each Islamic stock analysed.
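
    As a sketch of the Lagrangian multiplier step alone, the code below minimizes portfolio variance subject to full investment and a target expected return by solving the associated KKT linear system. The expected-return vector and covariance matrix are made-up placeholders standing in for ARMA (mean) and GARCH (volatility) forecasts; none of the numbers comes from the paper.

```python
# Sketch: the Lagrangian step of a mean-variance portfolio problem --
# minimize (1/2) w' Sigma w subject to sum(w) = 1 and w' mu = r_target --
# solved through its KKT linear system. The vector mu and matrix Sigma are
# made-up placeholders standing in for ARMA (mean) and GARCH (volatility)
# forecasts; they are not values from the paper.
import numpy as np

mu = np.array([0.010, 0.012, 0.008, 0.011])          # forecast expected returns
Sigma = np.array([[0.040, 0.006, 0.004, 0.002],
                  [0.006, 0.050, 0.005, 0.003],
                  [0.004, 0.005, 0.030, 0.002],
                  [0.002, 0.003, 0.002, 0.045]])      # forecast covariance
r_target = 0.0105

n = len(mu)
ones = np.ones(n)

# KKT system:  [Sigma  1  mu ] [w  ]   [0       ]
#              [1'     0  0  ] [l1 ] = [1       ]
#              [mu'    0  0  ] [l2 ]   [r_target]
K = np.zeros((n + 2, n + 2))
K[:n, :n] = Sigma
K[:n, n] = ones
K[:n, n + 1] = mu
K[n, :n] = ones
K[n + 1, :n] = mu
rhs = np.concatenate([np.zeros(n), [1.0, r_target]])

sol = np.linalg.solve(K, rhs)
w = sol[:n]
print("weights:", np.round(w, 4), " sum =", w.sum())
print("portfolio return:", float(w @ mu), " variance:", float(w @ Sigma @ w))
```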

  15. Genetic polymorphisms in varied environments.

    PubMed

    Powell, J R

    1971-12-03

    Thirteen experimental populations of Drosophila willistoni were maintained in cages, in some of which the environments were relatively constant and in others varied. After 45 weeks, the populations were assayed by gel electrophoresis for polymorphisms at 22 protein loci. The average heterozygosity per individual and the average number of alleles per locus were higher in populations maintained in heterogeneous environments than in populations in more constant environments.

  16. Cosmological measure with volume averaging and the vacuum energy problem

    NASA Astrophysics Data System (ADS)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero.

  17. Anesthetic management with sevoflurane combined with alfaxalone-medetomidine constant rate infusion in a Thoroughbred racehorse undergoing a long-time orthopedic surgery

    PubMed Central

    WAKUNO, Ai; MAEDA, Tatsuya; KODAIRA, Kazumichi; KIKUCHI, Takuya; OHTA, Minoru

    2017-01-01

    A three-year-old Thoroughbred racehorse was anesthetized with sevoflurane and oxygen inhalation anesthesia combined with constant rate infusion (CRI) of alfaxalone-medetomidine for internal fixation of a third metacarpal bone fracture. After premedication with intravenous (IV) injections of medetomidine (6.0 µg/kg IV), butorphanol (25 µg/kg IV), and midazolam (20 µg/kg IV), anesthesia was induced with 5% guaifenesin (500 ml/head IV) followed immediately by alfaxalone (1.0 mg/kg IV). Anesthesia was maintained with sevoflurane and CRIs of alfaxalone (1.0 mg/kg/hr) and medetomidine (3.0 µg/kg/hr). The total surgical time was 180 min, and the total inhalation anesthesia time was 230 min. The average end-tidal sevoflurane concentration during surgery was 1.8%. The mean arterial blood pressure was maintained above 70 mmHg throughout anesthesia, and the recovery time was 65 min. In conclusion, this anesthetic technique may be clinically applicable for Thoroughbred racehorses undergoing a long-time orthopedic surgery. PMID:28955163

  18. Quantifying the Behavior of Stock Correlations Under Market Stress

    PubMed Central

    Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-01-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios. PMID:23082242
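
    A minimal sketch of the kind of quantity examined here, the average pairwise Pearson correlation in a rolling window set against the window's normalized index return, is given below. It uses synthetic returns with a common market factor rather than the DJIA data, so the numbers are purely illustrative.

```python
# Sketch: average pairwise Pearson correlation among a set of stocks in a
# rolling window, compared against the (normalized) index return of the same
# window -- the quantity whose linear relation with market stress the paper
# reports. All price data below are synthetic; this is not the DJIA dataset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_days, n_stocks, window = 2000, 10, 20

# Synthetic log-returns with a common market factor.
market = 0.01 * rng.standard_normal(n_days)
idio = 0.02 * rng.standard_normal((n_days, n_stocks))
returns = pd.DataFrame(market[:, None] + idio,
                       columns=[f"S{i}" for i in range(n_stocks)])
index_ret = returns.mean(axis=1)          # equal-weight "index" return

rows = []
for start in range(0, n_days - window, window):
    chunk = returns.iloc[start:start + window]
    corr = chunk.corr().to_numpy()
    iu = np.triu_indices(n_stocks, k=1)
    rows.append({"mean_corr": corr[iu].mean(),
                 "index_return": index_ret.iloc[start:start + window].sum()})

df = pd.DataFrame(rows)
# Normalize the window index return (a crude stand-in for "market stress").
df["norm_index_return"] = (df.index_return - df.index_return.mean()) / df.index_return.std()
print(df.head())
print("corr(mean pairwise corr, |normalized index return|):",
      round(df.mean_corr.corr(df.norm_index_return.abs()), 3))
```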

  19. Natural convection in a vertical plane channel: DNS results for high Grashof numbers

    NASA Astrophysics Data System (ADS)

    Kiš, P.; Herwig, H.

    2014-07-01

    The turbulent natural convection of a gas (Pr = 0.71) between two vertical infinite walls at different but constant temperatures is investigated by means of direct numerical simulation for a wide range of Grashof numbers (6.0 × 10^6 > Gr > 1.0 × 10^3). The maximum Grashof number is almost one order of magnitude higher than those of computations reported in the literature so far. Results for the turbulent transport equations are presented and compared to previous studies with special attention to the study of Versteegh and Nieuwstadt (Int J Heat Fluid Flow 19:135-149, 1998). All turbulence statistics are available on the TUHH homepage (http://www.tu-harburg.de/tt/dnsdatabase/dbindex.en.html). Accuracy considerations are based on the time-averaged balance equations for kinetic and thermal energy. Using the second law of thermodynamics, Nusselt numbers can be determined both by evaluating time-averaged wall temperature gradients and by a volumetric time-averaged integration. Comparing the results of both approaches leads to a direct measure of the physical consistency.

  20. Limitations of heterogeneous models of liquid dynamics: very slow rate exchange in the excess wing.

    PubMed

    Samanta, Subarna; Richert, Ranko

    2014-02-07

    For several molecular glass formers, the nonlinear dielectric effects (NDE's) are investigated for the so-called excess wing regime, i.e., for the relatively high frequencies between 10^2 and 10^7 times the peak loss frequency. It is found that significant nonlinear behavior persists across the entire frequency window of this study, and that its magnitude traces the temperature dependence of the activation energy. A time resolved measurement of the dielectric loss at fields up to 480 kV/cm across tens of thousands of periods reveals that it takes an unexpectedly long time for the steady state NDE to develop. For various materials and at different temperatures and frequencies, it is found that the average structural relaxation with time scale τ_α governs the equilibration of these fast modes that are associated with time constants τ which are up to 10^7 times shorter than τ_α. It is argued that true indicators of structural relaxation (such as rate exchange and aging) of these fast modes are slaved to macroscopic softening on the time scale of τ_α, and thus many orders of magnitude slower than the time constant of the mode itself.

  1. Diurnal variation in ruminal pH on the digestibility of highly digestible perennial ryegrass during continuous culture fermentation.

    PubMed

    Wales, W J; Kolver, E S; Thorne, P L; Egan, A R

    2004-06-01

    Dairy cows grazing high-digestibility pastures exhibit pronounced diurnal variation in ruminal pH, with pH being below values considered optimal for digestion. Using a dual-flow continuous culture system, the hypothesis that minimizing diurnal variation in pH would improve digestion of pasture when pH was low, but not at a higher pH, was tested. Four treatments were imposed, with pH either allowed to exhibit normal diurnal variation around an average pH of 6.1 or 5.6, or maintained at constant pH. Digesta samples were collected during the last 3 d of each of four 9-d experimental periods. A constant pH of 5.6 compared with a constant pH of 6.1 reduced the digestibility of organic matter (OM), neutral detergent fiber (NDF), and acid detergent fiber (ADF) by 7, 14, and 21%, respectively. When pH was allowed to vary (averaging 5.6), digestion of OM, NDF, and ADF was reduced by 15, 30, and 36%, respectively, compared with pH varying around 6.1. There was little difference in digestion parameters when pH was either constant or varied with an average pH of 6.1. However, when average pH was 5.6, maintaining a constant pH significantly increased digestion of OM, NDF, and ADF by 5, 25, and 24% compared with a pH that exhibited normal diurnal variation. These in vitro results show that gains in digestibility and potential milk production can be made by minimizing diurnal variation in ruminal pH, but only when ruminal pH is low (5.6). However, larger gains in productivity can be achieved by increasing the average daily ruminal pH from 5.6 to 6.1.

  2. Myoplasmic calcium transients in intact frog skeletal muscle fibers monitored with the fluorescent indicator furaptra

    PubMed Central

    1991-01-01

    Furaptra (Raju, B., E. Murphy, L. A. Levy, R. D. Hall, and R. E. London. 1989. Am. J. Physiol. 256:C540-C548) is a "tri-carboxylate" fluorescent indicator with a chromophore group similar to that of fura-2 (Grynkiewicz, G., M. Poenie, and R. Y. Tsien. 1985. J. Biol. Chem. 260:3440-3450). In vitro calibrations indicate that furaptra reacts with Ca2+ and Mg2+ with 1:1 stoichiometry, with dissociation constants of 44 microM and 5.3 mM, respectively (16-17 degrees C; ionic strength, 0.15 M; pH, 7.0). Thus, in a frog skeletal muscle fiber stimulated electrically, the indicator is expected to respond to the change in myoplasmic free [Ca2+] (delta[Ca2+]) with little interference from changes in myoplasmic free [Mg2+]. The apparent longitudinal diffusion constant of furaptra in myoplasm was found to be 0.68 (+/- 0.02, SEM) x 10^-6 cm^2 s^-1 (16-16.5 degrees C), a value which suggests that about half of the indicator was bound to myoplasmic constituents of large molecular weight. Muscle membranes (surface and/or transverse-tubular) appear to have some permeability to furaptra, as the total quantity of indicator contained within a fiber decreased after injection; the average time constant of the loss was 302 (+/- 145, SEM) min. In fibers containing less than 0.5 mM furaptra and stimulated by a single action potential, the calibrated peak value of delta[Ca2+] averaged 5.1 (+/- 0.3, SEM) microM. This value is about half that reported in the preceding paper (9.4 microM; Konishi, M., and S. M. Baylor. 1991. J. Gen. Physiol. 97:245-270) for fibers injected with purpurate-diacetic acid (PDAA). The latter difference may be explained, at least in part, by the likelihood that the effective dissociation constant of furaptra for Ca2+ is larger in vivo than in vitro, owing to the binding of the indicator to myoplasmic constituents. The time course of furaptra's delta[Ca2+], with average values (+/- SEM) for time to peak and half-width of 6.3 (+/- 0.1) and 9.5 (+/- 0.4) ms, respectively, is very similar to that of delta[Ca2+] recorded with PDAA. Since furaptra's delta[Ca2+] can be recorded at a single excitation wavelength (e.g., 420 nm) with little interference from fiber intrinsic changes, movement artifacts, or delta[Mg2+], furaptra represents a useful myoplasmic Ca2+ indicator, with properties complementary to those of other available indicators. PMID:2016581

  3. A cosmic Ray Muon Experiment: a Way to Teach Standard Model of Particles at Community Colleges

    NASA Astrophysics Data System (ADS)

    Barazandeh, C.; Gutarra-Leon, A.; Rivas, R.; Glaser, H.; Majewski, W.

    2016-11-01

    This experiment is an example of research for early undergraduate students and of its benefits and challenges as an accessible strategy for community colleges, in the spirit of the report on improving undergraduate STEM education from the US President's Council of Advisors on Science and Technology. The goals of this project include measuring the average low-energy muon flux, the day/night flux difference, time dilation, the energy spectra of electrons and muons in arbitrary units, the muon decay curve, and the average lifetime of muons. From the lifetime data we calculate the weak coupling constant g_W, the electric charge e, and the Higgs energy density.

  4. Persistent collective trend in stock markets

    NASA Astrophysics Data System (ADS)

    Balogh, Emeric; Simonsen, Ingve; Nagy, Bálint Zs.; Néda, Zoltán

    2010-12-01

    Empirical evidence is given for a significant difference in the collective trend of the share prices during the stock index rising and falling periods. Data on the Dow Jones Industrial Average and its stock components are studied between 1991 and 2008. Pearson-type correlations are computed between the stocks and averaged over stock pairs and time. The results indicate a general trend: whenever the stock index is falling, the stock prices change in a more correlated manner than when the stock index is ascending. A thorough statistical analysis of the data shows that the observed difference is significant, suggesting a constant fear factor among stockholders.

  5. Application of the backward extrapolation method to pulsed neutron sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, Alberto; Gohar, Yousry

    Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time it is more difficult to take into account the dead-time effect. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. produced by a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows one to obtain from the measured detector counts both the dead-time value and the real detector counts.
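
    For the simple case mentioned above, where the count rate is constant in time, a standard analytical correction (for a non-paralyzable detector model) is n_true = n_measured / (1 - n_measured * τ). The sketch below implements only that textbook formula with assumed numbers; the backward extrapolation method itself, which handles the sharply varying count rates of a pulsed source, is more involved and is not reproduced here.

```python
# Sketch: the simple analytical dead-time correction the abstract alludes to
# for a count rate that is constant in time (non-paralyzable detector model):
#     n_true = n_measured / (1 - n_measured * tau)
# The backward extrapolation method itself, needed when the count rate changes
# sharply (pulsed source), is more involved and is not reproduced here.

def correct_nonparalyzable(measured_rate_cps: float, dead_time_s: float) -> float:
    """True count rate from measured rate for a non-paralyzable detector."""
    loss = measured_rate_cps * dead_time_s
    if loss >= 1.0:
        raise ValueError("measured rate inconsistent with this dead time")
    return measured_rate_cps / (1.0 - loss)

# Example with assumed numbers: 1e5 counts/s measured, 2 microseconds dead time.
m, tau = 1.0e5, 2.0e-6
print(f"measured {m:.3g} cps -> corrected {correct_nonparalyzable(m, tau):.3g} cps")
```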

  6. Using Time-on-Task Measurements to Understand Student Performance in a Physics Class: A Ten-Year Study

    NASA Astrophysics Data System (ADS)

    Stewart, John

    2015-04-01

    The amount of time spent on out-of-class activities such as working homework, reading, and studying for examinations is presented for 10 years of an introductory, calculus-based physics class at a large public university. While the class underwent significant change in the 10 years studied, the amount of time invested by students in weeks not containing an in-semester examination was constant and did not vary with the length of the reading or homework assignments. The amount of time spent preparing for examinations did change as the course was modified. The time spent on class assignments, both reading and homework, did not scale linearly with the length of the assignment. The time invested in both reading and homework per length of the assignment decreased as the assignments became longer. The class average time invested in examination preparation did change with the average performance on previous examinations in the same class, with more time spent in preparation for lower previous examination scores (R^2 = 0.70).

  7. Application of the backward extrapolation method to pulsed neutron sources

    DOE PAGES

    Talamo, Alberto; Gohar, Yousry

    2017-09-23

    Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time it is more difficult to take into account the dead-time effect. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. produced by a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows one to obtain from the measured detector counts both the dead-time value and the real detector counts.

  8. New solution to the problem of the tension between the high-redshift and low-redshift measurements of the Hubble constant

    NASA Astrophysics Data System (ADS)

    Bolejko, Krzysztof

    2018-01-01

    During my talk I will present results suggesting that the phenomenon of emerging spatial curvature could resolve the conflict between Planck's (high-redshift) and Riess et al.'s (low-redshift) measurements of the Hubble constant. The phenomenon of emerging spatial curvature is absent in the Standard Cosmological Model, which has a flat and fixed spatial curvature (small perturbations are considered in the Standard Cosmological Model but their global average vanishes, leading to spatial flatness at all times). In my talk I will show that with the nonlinear growth of cosmic structures the global average deviates from zero. As a result, the spatial curvature evolves from spatial flatness of the early universe to a negatively curved universe at the present day, with Omega_K ~ 0.1. Consequently, the present day expansion rate, as measured by the Hubble constant, is a few percent higher compared to the high-redshift constraints. This provides an explanation why there is a tension between high-redshift (Planck) and low-redshift (Riess et al.) measurements of the Hubble constant. In the presence of emerging spatial curvature these two measurements should in fact be different: high-redshift measurements should be slightly lower than the Hubble constant inferred from the low-redshift data. The presentation will be based on the results described in arXiv:1707.01800 and arXiv:1708.09143 (which discuss the phenomenon of emerging spatial curvature) and on a paper that is still work in progress but is expected to be posted on arXiv by the AAS meeting; this paper uses mock low-redshift data to show that, starting from Planck's cosmological model in the early universe but with the emerging spatial curvature taken into account, the low-redshift Hubble constant should be 72.4 km/s/Mpc.

  9. SmB6 electron-phonon coupling constant from time- and angle-resolved photoelectron spectroscopy

    NASA Astrophysics Data System (ADS)

    Sterzi, A.; Crepaldi, A.; Cilento, F.; Manzoni, G.; Frantzeskakis, E.; Zacchigna, M.; van Heumen, E.; Huang, Y. K.; Golden, M. S.; Parmigiani, F.

    2016-08-01

    SmB6 is a mixed valence Kondo system resulting from the hybridization between localized f electrons and delocalized d electrons. We have investigated its out-of-equilibrium electron dynamics by means of time- and angle-resolved photoelectron spectroscopy. The transient electronic population above the Fermi level can be described by a time-dependent Fermi-Dirac distribution. By solving a two-temperature model that well reproduces the relaxation dynamics of the effective electronic temperature, we estimate the electron-phonon coupling constant λ to range from 0.13 ± 0.03 to 0.04 ± 0.01. These extremes are obtained assuming a coupling of the electrons with either a phonon mode at 10 or 19 meV. A realistic value of the average phonon energy will give an actual value of λ within this range. Our results provide an experimental report on the material's electron-phonon coupling, which contributes to both the electronic transport and the macroscopic thermodynamic properties of SmB6.

  10. The Discounted Method and Equivalence of Average Criteria for Risk-Sensitive Markov Decision Processes on Borel Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m

    2010-04-15

    This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.

  11. The Weighted-Average Lagged Ensemble.

    PubMed

    DelSole, T; Trenary, L; Tippett, M K

    2017-11-01

    A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
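
    For bias-corrected forecasts with error covariance C across lead times, the sum-to-one weights that minimize the mean-square error of the weighted ensemble mean have the closed form w = C^-1 1 / (1' C^-1 1). The sketch below evaluates this formula for an assumed covariance (error variance growing with lead time, strong cross-lead correlation) chosen so that the least skillful leads can pick up negative weights, illustrating the behavior discussed above; the covariance is not taken from the paper.

```python
# Sketch: sum-to-one weights that minimize the mean-square error of a
# weighted-average lagged ensemble, given the error covariance C across lead
# times: w = C^{-1} 1 / (1' C^{-1} 1).  The covariance below is an assumed
# example (error variance growing with lead time, strong cross-lead
# correlation) chosen to show how some optimal weights can turn negative.
import numpy as np

def optimal_lagged_weights(C: np.ndarray) -> np.ndarray:
    """Minimize w' C w subject to sum(w) = 1."""
    ones = np.ones(C.shape[0])
    x = np.linalg.solve(C, ones)
    return x / (ones @ x)

lead_sd = np.array([1.0, 1.3, 1.8, 2.6])      # error std dev vs lead time (assumed)
rho = 0.9                                      # strong correlation across leads (assumed)
C = np.outer(lead_sd, lead_sd) * (rho * np.ones((4, 4)) + (1 - rho) * np.eye(4))

w = optimal_lagged_weights(C)
print("optimal weights:", np.round(w, 3), " sum =", round(float(w.sum()), 6))
print("MSE weighted:", float(w @ C @ w), " vs equal weights:",
      float(np.full(4, 0.25) @ C @ np.full(4, 0.25)))
```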

  12. Penetration Effects of the Compound Vortex in Gas Metal-Arc Welding

    DTIC Science & Technology

    1988-05-01

    steel plate using constant current GMAW equipment and argon + 2% oxygen shielding gas. After welding, the plates were cut, ground, polished and etched... Typical time plot of current used in pulsed GMAW... The experimental apparatus... this phenomenon could be employed in some manner to yield high penetration welds with low average current. 2. Pulsed GMAW. Kolodziejczak [26] studied

  13. Method of deposition by molecular beam epitaxy

    DOEpatents

    Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.

    1995-01-01

    A method is described for reproducibly controlling layer thickness and varying layer composition in an MBE deposition process. In particular, the present invention includes epitaxially depositing a plurality of layers of material on a substrate with a plurality of growth cycles whereby the average of the instantaneous growth rates for each growth cycle and from one growth cycle to the next remains substantially constant as a function of time.

  14. Method of deposition by molecular beam epitaxy

    DOEpatents

    Chalmers, S.A.; Killeen, K.P.; Lear, K.L.

    1995-01-10

    A method is described for reproducibly controlling layer thickness and varying layer composition in an MBE deposition process. In particular, the present invention includes epitaxially depositing a plurality of layers of material on a substrate with a plurality of growth cycles whereby the average of the instantaneous growth rates for each growth cycle and from one growth cycle to the next remains substantially constant as a function of time. 9 figures.

  15. Atomic force microscopic study of step bunching and macrostep formation during the growth of L-arginine phosphate monohydrate single crystals

    NASA Astrophysics Data System (ADS)

    Sangwal, K.; Torrent-Burgues, J.; Sanz, F.; Gorostiza, P.

    1997-02-01

    The experimental results of the formation of step bunches and macrosteps on the {100} face of L-arginine phosphate monohydrate crystals grown from aqueous solutions at different supersaturations, studied by using atomic force microscopy, are described and discussed. It was observed that (1) the step height does not remain constant with increasing time but fluctuates within a particular range of heights, which depends on the region of step bunches, (2) the maximum height and the slope of bunched steps increase with growth time as well as with the supersaturation used for growth, and that (3) the slope of steps of relatively small heights is usually low, with a value of about 8°, and does not depend on the region of formation of step bunches, but the slope of steps of large heights is up to 21°. Analysis of the experimental results showed that (1) at a particular value of supersaturation the ratio of the average step height to the average step spacing is a constant, suggesting that growth of the {100} face of L-arginine phosphate monohydrate crystals occurs by direct integration of growth entities into growth steps, and that (2) the formation of step bunches and macrosteps follows the dynamic theory of faceting advanced by Vlachos et al.

  16. FDTD analysis of human body-core temperature elevation due to RF far-field energy prescribed in the ICNIRP guidelines.

    PubMed

    Hirata, Akimasa; Asano, Takayuki; Fujiwara, Osamu

    2007-08-21

    This study investigated the relationship between the specific absorption rate and temperature elevation in an anatomically-based model named NORMAN for exposure to radio-frequency far fields in the ICNIRP guidelines (1998 Health Phys. 74 494-522). The finite-difference time-domain method is used for analyzing the electromagnetic absorption and temperature elevation in NORMAN. In order to consider the variability of human thermoregulation, parameters for sweating are derived and incorporated into a conventional sweating formula. First, we investigated the effect of blood temperature variation modeling on body-core temperature. The computational results show that the modeling of blood temperature variation was the dominant factor influencing the body-core temperature. This is because the temperature in the inner tissues is elevated via the circulation of blood whose temperature was elevated due to EM absorption. Even at different frequencies, the body-core temperature elevation at an identical whole-body average specific absorption rate (SAR) was almost the same, suggesting the effectiveness of the whole-body average SAR as a measure in the ICNIRP guidelines. Next, we discussed the effect of sweating on the temperature elevation and the thermal time constant of blood. The variability of temperature elevation caused by the sweating rate was found to be 30%. The blood temperature elevation at the ICNIRP basic restriction of 0.4 W kg^-1 is 0.25 degrees C even for a low sweating rate. The thermal time constant of blood temperature elevation was 23 min and 52 min for a man with a lower and a higher sweating rate, respectively, which is longer than the averaging time of the SAR in the ICNIRP guidelines. Thus, the whole-body average SAR required for a blood temperature elevation of 1 degree C was 4.5 W kg^-1 in the model of a human with the lower sweating coefficients for 60 min exposure. From a comparison of this value with the basic restriction in the ICNIRP guidelines of 0.4 W kg^-1, the safety factor was 11.

  17. Extra compressibility terms for Favre-averaged two-equation models of inhomogeneous turbulent flows

    NASA Technical Reports Server (NTRS)

    Rubesin, Morris W.

    1990-01-01

    Forms of extra-compressibility terms that result from the use of Favre averaging of the turbulence transport equations for kinetic energy and dissipation are derived. These forms introduce three new modeling constants: a polytropic coefficient that defines the interrelationships of the pressure, density, and enthalpy fluctuations, and two constants in the dissipation equation that account for the non-zero pressure-dilatation and mean pressure gradients.

  18. Circadian and Fatigue Effects on the Dynamics of the Pupillary Light Reflex

    NASA Technical Reports Server (NTRS)

    Tyson, Terence L.; Flynn-Evans, Erin E.; Stone, Leland S.

    2017-01-01

    The pupillary light reflex (PLR) is known to be driven by the photo-entrainment of intrinsically-photosensitive retinal ganglion cells. These ganglion cells are known to have retino-hypothalamic projections to the suprachiasmatic nuclei (SCN), which regulate circadian rhythms, and bilateral retinal projections to the pretectal area, which mediates the PLR (Dacey et al., 2005; Hattar et al., 2002, 2006). The magnitude of the PLR has previously been shown to exhibit circadian variation (Münch et al., 2012). In this study, we used a constant routine protocol (Mills et al., 1978) to examine circadian and fatigue effects on the dynamics of the PLR. We characterized the PLR (pupil size as a function of time) in response to a square-wave change in the luminance of a white display background, at ten different times over a single circadian cycle. Twelve subjects participated in three daytime baseline runs followed by 7 nighttime runs each separated by an hour (17-23 hours after awakening). The constriction and dilation phases of the PLR waveform were fit separately with a single exponential model (Longtin & Milton, 1988; Milton & Longtin, 1990) with time constants estimated using a least-squares method. The dilation time constant exhibited a distinct sinusoidal modulation across the circadian cycle and, after 23 hours of wakefulness, decreased on average by 82 ms (paired t-test, p < 0.05) relative to baseline (mean: 543 ms). The constriction time constant, however, did not show an overall decrease with increased wakefulness. We conclude that the dynamics of the PLR show circadian variation and that, in addition, the briskness of the dilation response to a step-decrease in luminance shows a homeostatic enhancement with increased wakefulness.
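
    A minimal sketch of the fitting step described above, estimating a time constant by least squares from one phase of a pupil trace with a single-exponential model p(t) = p_inf + (p_0 - p_inf) exp(-t/τ), is given below. The synthetic trace, sampling rate, and noise level are assumptions; only the 543 ms baseline value is borrowed from the abstract as the "true" time constant.

```python
# Sketch: least-squares extraction of a time constant from one phase of a
# pupillary light reflex trace using a single-exponential model,
#     p(t) = p_inf + (p_0 - p_inf) * exp(-t / tau),
# as the abstract describes for the constriction and dilation phases fitted
# separately. The synthetic trace and all parameter values are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, p_inf, p0, tau):
    return p_inf + (p0 - p_inf) * np.exp(-t / tau)

rng = np.random.default_rng(3)
fs = 60.0                                     # assumed sampling rate [Hz]
t = np.arange(0, 3.0, 1.0 / fs)

# Synthetic dilation phase: pupil re-dilates from 3 mm toward 5 mm, tau ~ 0.543 s.
true = dict(p_inf=5.0, p0=3.0, tau=0.543)
trace = single_exp(t, **true) + 0.05 * rng.standard_normal(t.size)

popt, pcov = curve_fit(single_exp, t, trace, p0=[4.0, 3.5, 0.3])
perr = np.sqrt(np.diag(pcov))
print(f"tau = {popt[2]*1000:.0f} +/- {perr[2]*1000:.0f} ms "
      f"(true {true['tau']*1000:.0f} ms)")
```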

  19. Nonadiabatic rate constants for proton transfer and proton-coupled electron transfer reactions in solution: Effects of quadratic term in the vibronic coupling expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soudackov, Alexander V.; Hammes-Schiffer, Sharon

    2015-11-21

    Rate constant expressions for vibronically nonadiabatic proton transfer and proton-coupled electron transfer reactions are presented and analyzed. The regimes covered include electronically adiabatic and nonadiabatic reactions, as well as high-frequency and low-frequency proton donor-acceptor vibrational modes. These rate constants differ from previous rate constants derived with the cumulant expansion approach in that the logarithmic expansion of the vibronic coupling in terms of the proton donor-acceptor distance includes a quadratic as well as a linear term. The analysis illustrates that inclusion of this quadratic term in the framework of the cumulant expansion may significantly impact the rate constants at high temperatures for proton transfer interfaces with soft proton donor-acceptor modes that are associated with small force constants and weak hydrogen bonds. The effects of the quadratic term may also become significant in these regimes when using the vibronic coupling expansion in conjunction with a thermal averaging procedure for calculating the rate constant. In this case, however, the expansion of the coupling can be avoided entirely by calculating the couplings explicitly for the range of proton donor-acceptor distances sampled. The effects of the quadratic term for weak hydrogen-bonding systems are less significant for more physically realistic models that prevent the sampling of unphysically short proton donor-acceptor distances. Additionally, the rigorous relation between the cumulant expansion and thermal averaging approaches is clarified. In particular, the cumulant expansion rate constant includes effects from dynamical interference between the proton donor-acceptor and solvent motions and becomes equivalent to the thermally averaged rate constant when these dynamical effects are neglected. This analysis identifies the regimes in which each rate constant expression is valid and thus will be important for future applications to proton transfer and proton-coupled electron transfer in chemical and biological processes.

  20. Characterizing Detrended Fluctuation Analysis of multifractional Brownian motion

    NASA Astrophysics Data System (ADS)

    Setty, V. A.; Sharma, A. S.

    2015-02-01

    The Hurst exponent (H) is widely used to quantify long range dependence in time series data and is estimated using several well known techniques. Recognizing its ability to remove trends, the Detrended Fluctuation Analysis (DFA) is used extensively to estimate a Hurst exponent in non-stationary data. Multifractional Brownian motion (mBm) broadly encompasses a set of models of non-stationary data exhibiting time-varying Hurst exponents, H(t), as opposed to a constant H. Recently, there has been a growing interest in the time dependence of H(t), and sliding window techniques have been used to estimate a local time average of the exponent. This brought to the fore the ability of DFA to estimate scaling exponents in systems with time-varying H(t), such as mBm. This paper characterizes the performance of DFA on mBm data with linearly varying H(t) and further tests the robustness of the estimated time average with respect to data and technique related parameters. Our results serve as a benchmark for using DFA as a sliding window estimator to obtain H(t) from time series data.
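
    For reference, a compact DFA-1 estimator (profile, windowed linear detrending, RMS fluctuation F(n), log-log slope) is sketched below. It is applied to white noise and to its cumulative sum as simple sanity checks, where the expected slopes are about 0.5 and 1.5; it is a generic implementation, not the paper's sliding-window estimator for time-varying H(t) in mBm.

```python
# Sketch: a compact DFA-1 (linear detrending) estimator -- profile, windowing,
# per-window polynomial detrend, RMS fluctuation F(n), and the log-log slope.
# Applied here to white noise and to its cumulative sum as simple checks
# (expected slopes near 0.5 and 1.5); this is a generic implementation, not the
# paper's sliding-window estimator for time-varying H(t) in mBm.
import numpy as np

def dfa(x, scales, order=1):
    """Return the DFA fluctuation function F(n) for the given window sizes."""
    profile = np.cumsum(x - np.mean(x))
    F = []
    for n in scales:
        n_win = len(profile) // n
        rms = []
        for k in range(n_win):
            seg = profile[k * n:(k + 1) * n]
            i = np.arange(n)
            trend = np.polyval(np.polyfit(i, seg, order), i)
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(4)
x = rng.standard_normal(2**14)                 # white noise
scales = np.unique(np.logspace(np.log10(8), np.log10(1024), 15).astype(int))

for name, series in [("white noise", x), ("its cumulative sum", np.cumsum(x))]:
    F = dfa(series, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print(f"DFA exponent for {name}: {alpha:.2f}")
```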

  1. Universality of spectrum of passive scalar variance at very high Schmidt number in isotropic steady turbulence

    NASA Astrophysics Data System (ADS)

    Gotoh, Toshiyuki

    2012-11-01

    The spectrum of passive scalar variance at very high Schmidt number, up to 1000, in isotropic steady turbulence has been studied by using very high resolution DNS. A Gaussian random force and scalar source which are isotropic and white in time are applied in a low wavenumber band. Since the Schmidt number is very large, the system was integrated for 72 large-eddy turnover times so that it forgets the initial state. It is found that the scalar spectrum attains the asymptotic k^-1 spectrum in the viscous-convective range and the constant C_B is found to be 5.7, which is larger than the value of 4.9 obtained by DNS under a uniform mean scalar gradient. Possible reasons for the difference include the Reynolds number effect, anisotropy, differences in the scalar injection, and the duration of the time average; the universality of the constant is discussed. The constant C_B is also compared with the prediction by the Lagrangian statistical theory for the passive scalar. The scalar spectrum in the far diffusive range is found to be exponential, which is consistent with Kraichnan's spectrum. However, the Kraichnan spectrum was derived under the assumption that the velocity field is white in time, so a theoretical explanation of the agreement remains to be explored. Grant-in-Aid for Scientific Research No. 21360082, Ministry of Education, Culture, Sports, Science and Technology of Japan.

  2. Theory of Turing Patterns on Time Varying Networks.

    PubMed

    Petit, Julien; Lauwens, Ben; Fanelli, Duccio; Carletti, Timoteo

    2017-10-06

    The process of pattern formation for a multispecies model anchored on a time-varying network is studied. A nonhomogeneous perturbation superposed on a homogeneous stable fixed point can be amplified following the Turing mechanism of instability, solely instigated by the network dynamics. By properly tuning the frequency of the imposed network evolution, one can make the examined system behave as its averaged counterpart, over a finite time window. This is the key observation to derive a closed analytical prediction for the onset of the instability in the time-dependent framework. Continuous and piecewise-constant periodic time-varying networks are analyzed, setting the framework for the proposed approach. The extension to nonperiodic settings is also discussed.

  3. Multidimensional discrete compactons in nonlinear Schrödinger lattices with strong nonlinearity management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Ambroise, J.; Salerno, M.; Kevrekidis, P. G.

    The existence of multidimensional lattice compactons in the discrete nonlinear Schrödinger equation in the presence of fast periodic time modulations of the nonlinearity is demonstrated. By averaging over the period of the fast modulations, an effective averaged dynamical equation arises with coupling constants involving Bessel functions of the first and zeroth kinds. We show that these terms allow one to solve, at this averaged level, for exact discrete compacton solution configurations in the corresponding stationary equation. We focus on seven types of compacton solutions. Single-site and vortex solutions are found to be always stable in the parametric regimes we examined. We also found that other solutions such as double-site in- and out-of-phase, four-site symmetric and antisymmetric, and a five-site compacton solution have regions of stability and instability in two-dimensional parametric planes, involving variations of the strength of the coupling and of the nonlinearity. We also explore the time evolution of the solutions and compare the dynamics according to the averaged equations with those of the original dynamical system. Finally, the possible observation of compactons in Bose-Einstein condensates loaded in a deep two-dimensional optical lattice with interactions modulated periodically in time is also discussed.

  4. Multidimensional discrete compactons in nonlinear Schrödinger lattices with strong nonlinearity management

    DOE PAGES

    D'Ambroise, J.; Salerno, M.; Kevrekidis, P. G.; ...

    2015-11-19

    The existence of multidimensional lattice compactons in the discrete nonlinear Schrödinger equation in the presence of fast periodic time modulations of the nonlinearity is demonstrated. By averaging over the period of the fast modulations, an effective averaged dynamical equation arises with coupling constants involving Bessel functions of the first and zeroth kinds. We show that these terms allow one to solve, at this averaged level, for exact discrete compacton solution configurations in the corresponding stationary equation. We focus on seven types of compacton solutions. Single-site and vortex solutions are found to be always stable in the parametric regimes we examined. We also found that other solutions such as double-site in- and out-of-phase, four-site symmetric and antisymmetric, and a five-site compacton solution have regions of stability and instability in two-dimensional parametric planes, involving variations of the strength of the coupling and of the nonlinearity. We also explore the time evolution of the solutions and compare the dynamics according to the averaged equations with those of the original dynamical system. Finally, the possible observation of compactons in Bose-Einstein condensates loaded in a deep two-dimensional optical lattice with interactions modulated periodically in time is also discussed.

  5. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
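
    In the spirit of the WLS-ICE idea, the sketch below restricts attention to a linear model: the ensemble-averaged MSD of simulated Brownian trajectories is fitted to 2Dt by weighted least squares, and the parameter error is then propagated through the full covariance of the averaged data with the sandwich formula Cov(θ) = (J'WJ)^-1 J'W Σ W J (J'WJ)^-1. This is a minimal illustration under those assumptions, not the authors' published software.

```python
# Sketch: weighted least squares for a linear model msd(t) = 2*D*t, with the
# parameter error propagated through the FULL covariance of the time-dependent
# ensemble average (sandwich formula), in the spirit of the WLS-ICE idea:
#     Cov(theta) = (J'WJ)^-1 J'W Sigma W J (J'WJ)^-1 .
# This is a minimal linear-model sketch with simulated Brownian trajectories,
# not the authors' published software.
import numpy as np

rng = np.random.default_rng(5)
n_traj, n_steps, dt, D_true = 200, 200, 0.01, 1.0

# Simulate 1-D Brownian trajectories and their squared displacements.
steps = np.sqrt(2 * D_true * dt) * rng.standard_normal((n_traj, n_steps))
x = np.cumsum(steps, axis=1)
sq_disp = x**2                                  # per-trajectory squared displacement
t = dt * np.arange(1, n_steps + 1)

msd = sq_disp.mean(axis=0)                      # time-dependent ensemble average
Sigma = np.cov(sq_disp, rowvar=False) / n_traj  # full covariance of the average

# Linear model msd = 2*D*t  ->  design "Jacobian" J = 2*t, weights W = 1/var.
J = (2 * t)[:, None]
W = np.diag(1.0 / np.diag(Sigma))
A = J.T @ W @ J
D_hat = np.linalg.solve(A, J.T @ W @ msd)[0]

cov_D = np.linalg.solve(A, J.T @ W @ Sigma @ W @ J @ np.linalg.inv(A))[0, 0]
print(f"D = {D_hat:.4f} +/- {np.sqrt(cov_D):.4f}  (true {D_true})")
```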

  6. Molecular Weight Effects on the Viscoelastic Response of a Polyimide

    NASA Technical Reports Server (NTRS)

    Nicholson, Lee M.; Whitley, Karen S.; Gates, Thomas S.

    2000-01-01

    The effect of molecular weight on the viscoelastic performance of an advanced polymer (LaRC -SI) was investigated through the use of creep compliance tests. Testing consisted of short-term isothermal creep and recovery with the creep segments performed under constant load. The tests were conducted at three temperatures below the glass transition temperature of each material with different molecular weight. Through the use of time-aging-time superposition procedures, the material constants, material master curves and aging-related parameters were evaluated at each temperature for a given molecular weight. The time-temperature superposition technique helped to describe the effect of temperature on the timescale of the viscoelastic response of each molecular weight. It was shown that the low molecular weight materials have increased creep compliance and creep compliance rate, and are more sensitive to temperature than the high molecular weight materials. Furthermore, a critical molecular weight transition was observed to occur at a weight-average molecular weight of approximately 25000 g/mol below which, the temperature sensitivity of the time-temperature superposition shift factor increases rapidly.

  7. Spatial search on a two-dimensional lattice with long-range interactions

    NASA Astrophysics Data System (ADS)

    Osada, Tomo; Sanaka, Kaoru; Munro, William J.; Nemoto, Kae

    2018-06-01

    Quantum-walk-based algorithms that search for a marked location among N locations on a d-dimensional lattice succeed in time O(√N) for d > 2, while this is not found to be possible when d = 2. In this paper, we consider a spatial search algorithm using a continuous-time quantum walk on a two-dimensional square lattice with additional long-range edges. We examined such a search on a probabilistic graph model where an edge connecting non-nearest-neighbor lattice points i and j, separated by a distance |i − j|, is added with probability p_ij = |i − j|^(−α) (α ≥ 0). Through numerical analysis, we found that the search succeeds in time O(√N) when α ≤ α_c = 2.4 ± 0.1. For α > 2, the expected number of additional long-range edges on each node scales as a constant when N → ∞, which means that a search time of O(√N) is achieved on a graph with average degree scaling as a constant.
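
    A small sketch of how such a probabilistic graph could be sampled (an illustrative assumption, not the authors' code): long-range edges between lattice sites separated by Euclidean distance greater than 1 are added with probability |i − j|^(−α), and the average number of added edges per node is reported.

```python
import itertools
import numpy as np

def long_range_edges(L=20, alpha=3.0, seed=1):
    """Add an edge between non-nearest-neighbour sites i, j of an L x L lattice
    with probability |i - j|**(-alpha), |i - j| being the Euclidean distance."""
    rng = np.random.default_rng(seed)
    sites = list(itertools.product(range(L), range(L)))
    edges = []
    for a in range(len(sites)):
        for b in range(a + 1, len(sites)):
            d = np.hypot(sites[a][0] - sites[b][0], sites[a][1] - sites[b][1])
            if d <= 1.0:          # skip nearest neighbours (already lattice edges)
                continue
            if rng.random() < d ** (-alpha):
                edges.append((a, b))
    return edges, 2 * len(edges) / len(sites)   # edge list, average added degree

edges, avg_extra_degree = long_range_edges()
print(f"average number of long-range edges per node: {avg_extra_degree:.2f}")
```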

  8. Reactive-transport modeling of iron diagenesis and associated organic carbon remineralization in a Florida (USA) subterranean estuary

    USGS Publications Warehouse

    Roy, Moutusi; Martin, Jonathan B.; Smith, Christopher G.; Cable, Jaye E.

    2011-01-01

    Iron oxides are important terminal electron acceptors for organic carbon (OC) remineralization in subterranean estuaries, particularly where oxygen and nitrate concentrations are low. In Indian River Lagoon, Florida, USA, terrestrial Fe-oxides dissolve at the seaward edge of the seepage face, and the dissolved Fe flows upward into overlying marine sediments where it precipitates as Fe-sulfides. The dissolved Fe concentrations vary by over three orders of magnitude, but Fe-oxide dissolution rates are similar across the 25-m wide seepage face, averaging around 0.21 mg/cm²/yr. The constant dissolution rate, despite the differing concentrations, indicates that Fe dissolution is controlled by a combination of increasing lability of dissolved organic carbon (DOC) and slower porewater flow velocities with distance offshore. In contrast, the average rate constants of Fe-sulfide precipitation decrease from 21.9 × 10⁻⁸ s⁻¹ to 0.64 × 10⁻⁸ s⁻¹ from the shoreline to the seaward edge of the seepage face as more oxygenated surface water circulates through the sediment. The amount of OC remineralized by Fe-oxides varies little across the seepage face, averaging 5.34 × 10⁻² mg/cm²/yr. These rates suggest about 3.4 kg of marine DOC was remineralized in a 1-m wide, shore-perpendicular strip of the seepage face as the terrestrial sediments were transgressed over the past 280 years. During this time, about 10 times more marine solid organic carbon (SOC) accumulated in marine sediments than was removed from the underlying terrestrial sediments. Indian River Lagoon thus appears to be a net sink for marine OC.

  9. Stability Investigation of a Blunted Cone and a Blunted Ogive with a Flared Cylinder Afterbody at Mach Numbers from 0.30 to 2.85

    NASA Technical Reports Server (NTRS)

    Coltrane, Lucille C.

    1959-01-01

    A cone with a blunt nose tip and a 10.7 deg cone half angle and an ogive with a blunt nose tip and a 20 deg flared cylinder afterbody have been tested in free flight over a Mach number range of 0.30 to 2.85 and a Reynolds number range of 1 × 10⁶ to 23 × 10⁶. Time histories, cross plots of force and moment coefficients, and plots of the longitudinal-force coefficient, rolling velocity, aerodynamic center, normal-force-curve slope, and dynamic stability are presented. With the center-of-gravity location at about 50 percent of the model length, the models were both statically and dynamically stable throughout the Mach number range. For the cone, the average aerodynamic center moved slightly forward with decreasing speeds and the normal-force-curve slope was fairly constant throughout the speed range. For the ogive, the average aerodynamic center remained practically constant and the normal-force-curve slope remained practically constant to a Mach number of approximately 1.6, where a rising trend is noted. Maximum drag coefficient for the cone, with reference to the base area, was approximately 0.6, and for the ogive, with reference to the area of the cylindrical portion, was approximately 2.1.

  10. Enhancing the NS-2 Network Simulator for Near Real-Time Control Feedback and Distributed Simulation

    DTIC Science & Technology

    2009-03-21

    Index terms from the report include: Communication Mediator (see mediator), Constant Bit Rate (see cbr), Emulation, Georgia Tech Network Simulator (see GT-NetS), Global Mobile Information ...

  11. Stresses and elastic constants of crystalline sodium, from molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiferl, S.K.

    1985-02-01

    The stresses and the elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures to T = 340 K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. The resulting expressions involve canonical ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration, and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages, and to test for symmetry. 45 refs., 10 figs., 4 tabs.

  12. Temperature dependence of (+)-catechin pyran ring proton coupling constants as measured by NMR and modeled using GMMX search methodology

    Treesearch

    Fred L. Tobiason; Stephen S. Kelley; M. Mark Midland; Richard W. Hemingway

    1997-01-01

    The pyran ring proton coupling constants for (+)-catechin have been experimentally determined in deuterated methanol over a temperature range of 213 K to 313 K. The experimental coupling constants were simulated to 0.04 Hz on the average at a 90 percent confidence limit using a LAOCOON method. The temperature dependence of the coupling constants was reproduced from the...

  13. Convergence Studies of Mass Transport in Disks with Gravitational Instabilities. I. The Constant Cooling Time Case

    NASA Astrophysics Data System (ADS)

    Michael, Scott; Steiman-Cameron, Thomas Y.; Durisen, Richard H.; Boley, Aaron C.

    2012-02-01

    We conduct a convergence study of a protostellar disk, subject to a constant global cooling time and susceptible to gravitational instabilities (GIs), at a time when heating and cooling are roughly balanced. Our goal is to determine the gravitational torques produced by GIs, the level to which transport can be represented by a simple α-disk formulation, and to examine fragmentation criteria. Four simulations are conducted, identical except for the number of azimuthal computational grid points used. A Fourier decomposition of non-axisymmetric density structures in cos(mφ) and sin(mφ) is performed to evaluate the amplitudes A_m of these structures. The A_m, gravitational torques, and the effective Shakura & Sunyaev α arising from gravitational stresses are determined for each resolution. We find nonzero A_m for all m-values and that A_m summed over all m is essentially independent of resolution. Because the number of measurable m-values is limited to half the number of azimuthal grid points, higher-resolution simulations have a larger fraction of their total amplitude in higher-order structures. These structures act more locally than lower-order structures. Therefore, as the resolution increases the total gravitational stress decreases as well, leading higher-resolution simulations to experience weaker average gravitational torques than lower-resolution simulations. The effective α also depends upon the magnitude of the stresses, thus α_eff also decreases with increasing resolution. Our converged α_eff is consistent with predictions from an analytic local theory for thin disks by Gammie, but only over many dynamic times when averaged over a substantial volume of the disk.
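
    As a hedged illustration of the Fourier decomposition step (the toy density profile and all names are assumptions, not the simulation's data), the azimuthal amplitudes A_m at a fixed radius can be read off an FFT of the surface density:

```python
import numpy as np

# Toy azimuthal density profile at one radius: an m = 2 spiral perturbation plus a weak
# m = 5 component on a uniform background (stand-in for the disk's surface density).
n_phi = 256
phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
sigma = 1.0 + 0.3 * np.cos(2 * phi) + 0.05 * np.cos(5 * phi)

# Fourier decomposition in cos(m*phi), sin(m*phi): A_m is the amplitude of the m-th
# component normalized by the axisymmetric (m = 0) part.
c = np.fft.rfft(sigma) / n_phi
A_m = 2.0 * np.abs(c[1:]) / np.abs(c[0])   # factor 2 for the one-sided spectrum, m >= 1

for m in (1, 2, 5):
    print(f"A_{m} = {A_m[m - 1]:.3f}")     # expect ~0.3 for m = 2 and ~0.05 for m = 5
```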

  14. Ash level meter for a fixed-bed coal gasifier

    DOEpatents

    Fasching, George E.

    1984-01-01

    An ash level meter for a fixed-bed coal gasifier is provided which utilizes the known ash level temperature profile to monitor the ash bed level. A bed stirrer which travels up and down through the extent of the bed ash level is modified by installing thermocouples to measure the bed temperature as the stirrer travels through the stirring cycle. The temperature measurement signals are transmitted to an electronic signal processing system by an FM/FM telemetry system. The processing system uses the temperature signals together with an analog stirrer position signal, taken from a position transducer disposed to measure the stirrer position, to compute the vertical location of the ash zone upper boundary. The circuit determines the fraction of each total stirrer cycle time the stirrer-derived bed temperature is below a selected set point, multiplies this fraction by the average stirrer signal level, multiplies this result by an appropriate constant and adds another constant such that a 1 to 5 volt signal from the processor corresponds to a 0 to 30 inch span of the ash upper boundary level. Three individual counters in the processor store clock counts that are representative of: (1) the time the stirrer temperature is below the set point (500 °F), (2) the time duration of the corresponding stirrer travel cycle, and (3) the corresponding average stirrer vertical position. The inputs to all three counters are disconnected during any period that the stirrer is stopped, eliminating corruption of the measurement by stirrer stoppage.
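
    A minimal sketch of the signal arithmetic described above, for one stirrer cycle (the 500 °F set point comes from the description; the gain and offset constants, function name, and synthetic cycle are illustrative assumptions, not the patented processor's calibration):

```python
import numpy as np

def ash_level_signal(temps_f, stirrer_pos, set_point_f=500.0, gain=4.0, offset=1.0):
    """Reproduce the processing chain described above for one stirrer cycle.

    temps_f     : stirrer-mounted thermocouple readings over one cycle (deg F)
    stirrer_pos : stirrer position signal over the same cycle (arbitrary units)
    Returns a voltage intended to lie in the 1-5 V range, mapping to a 0-30 inch
    ash-boundary span.  The gain/offset values here are placeholders.
    """
    temps_f = np.asarray(temps_f, dtype=float)
    fraction_below = np.mean(temps_f < set_point_f)   # fraction of cycle below set point
    avg_position = float(np.mean(stirrer_pos))        # average stirrer signal level
    return gain * fraction_below * avg_position + offset

# One synthetic cycle: temperature dips below 500 F in the lower third of the travel.
pos = np.linspace(0.0, 1.0, 100)
temp = np.where(pos < 0.3, 450.0, 900.0)
print(f"ash-level output: {ash_level_signal(temp, pos):.2f} V")
```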

  15. Optimization and universality of Brownian search in a basic model of quenched heterogeneous media

    NASA Astrophysics Data System (ADS)

    Godec, Aljaž; Metzler, Ralf

    2015-05-01

    The kinetics of a variety of transport-controlled processes can be reduced to the problem of determining the mean time needed to arrive at a given location for the first time, the so-called mean first-passage time (MFPT) problem. The occurrence of occasional large jumps or intermittent patterns combining various types of motion are known to outperform the standard random walk with respect to the MFPT, by reducing oversampling of space. Here we show that a regular but spatially heterogeneous random walk can significantly and universally enhance the search in any spatial dimension. In a generic minimal model we consider a spherically symmetric system comprising two concentric regions with piecewise constant diffusivity. The MFPT is analyzed under the constraint of conserved average dynamics, that is, the spatially averaged diffusivity is kept constant. Our analytical calculations and extensive numerical simulations demonstrate the existence of an optimal heterogeneity minimizing the MFPT to the target. We prove that the MFPT for a random walk is completely dominated by what we term direct trajectories towards the target and reveal a remarkable universality of the spatially heterogeneous search with respect to target size and system dimensionality. In contrast to intermittent strategies, which are most profitable in low spatial dimensions, the spatially inhomogeneous search performs best in higher dimensions. Discussing our results alongside recent experiments on single-particle tracking in living cells, we argue that the observed spatial heterogeneity may be beneficial for cellular signaling processes.

  16. Development of a Type I gluten-free sourdough.

    PubMed

    Picozzi, C; Mariotti, M; Cappa, C; Tedesco, B; Vigentini, I; Foschino, R; Lucisano, M

    2016-02-01

    The aim of this study was the setting up of a gluten-free sourdough from selected lactobacilli and yeasts isolated from a traditional wheat-based Type I sourdough. A gluten-free matrix was inoculated with Lactobacillus sanfranciscensis and Candida humilis, fermented to pH 4.0, and constantly propagated ten times. A stable association between micro-organisms was observed from the second refreshment with mean values of 9.08 ± 0.25 log CFU g⁻¹ for lactobacilli and 7.81 ± 0.07 log CFU g⁻¹ for yeasts. In order to have a good workability of the dough, a 230 BU consistency was considered. Rheofermentographic indices remained constant over the ten refreshments, showing an average value of 23.2 mm dough height in about 7.5 h. The CO2 production and retention volumes reached average values of 1430 and 1238 ml respectively. The microbiological and technological data obtained highlighted that a GF sourdough was effectively developed. Type I sourdough has a long tradition as a leavening agent of baked goods as its use results in an improved texture, flavour, taste and extended shelf-life of the final products. In this study a Type I gluten-free sourdough was obtained. After a few refreshments in controlled conditions, the sourdough presented a stable association between Lactobacillus sanfranciscensis and Candida humilis, constant fermentation times and technological properties (in terms of dough consistency, dough maximum height, CO2 production and retention). The results showed that the gluten-free sourdough developed in this study can improve the overall quality of gluten-free baked products. © 2015 The Society for Applied Microbiology.

  17. Life History Characteristics of Frankliniella occidentalis and Frankliniella intonsa (Thysanoptera: Thripidae) in Constant and Fluctuating Temperatures.

    PubMed

    Ullah, Mohammad Shaef; Lim, Un Taek

    2015-06-01

    Frankliniella occidentalis (Pergande) and Frankliniella intonsa (Trybom) are sympatric pests of many greenhouse and field crops in Korea. We compared the influence of constant (27.3°C) and fluctuating temperatures (23.8-31.5°C, with an average of 27.3°C) on the life table characteristics of F. occidentalis and F. intonsa held at a photoperiod of 16:8 (L:D) h and 45 ± 5% relative humidity. The development times of both F. occidentalis and F. intonsa were significantly affected by temperature fluctuation, species, and sex. The development time from egg to adult of F. intonsa was shorter than that for F. occidentalis at both constant and fluctuating temperatures. Survival of immature life stages was higher under fluctuating than constant temperature for both thrips species. The total and daily production of first instars was higher in F. intonsa (90.4 and 4.2 at constant temperature, and 95.7 and 3.9 at fluctuating temperatures) than in F. occidentalis (58.7 and 3.3 at constant temperature, and 60.5 and 3.1 at fluctuating temperatures) under both temperature regimes. The percentage of female offspring was greater in F. intonsa (72.1-75.7%) than in F. occidentalis (57.4-58.7%) under both temperature regimes. The intrinsic rate of natural increase (r_m) was higher at constant temperature than at fluctuating temperature for both thrips species. F. intonsa had a higher r_m value (0.2146 and 0.2004) than did F. occidentalis (0.1808 and 0.1733) under constant and fluctuating temperatures, respectively. The biological response of F. occidentalis and F. intonsa to constant and fluctuating temperature was found to be interspecifically different, and F. intonsa may have higher pest potential than F. occidentalis based on the life table parameters reported here for the first time. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Average luminosity distance in inhomogeneous universes

    NASA Astrophysics Data System (ADS)

    Kostov, Valentin Angelov

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), thus it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (1) have approximately constant densities in their interior and walls, (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision and suggests a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.

  19. Switching synchronization in one-dimensional memristive networks

    NASA Astrophysics Data System (ADS)

    Slipko, Valeriy A.; Shumovskyi, Mykola; Pershin, Yuriy V.

    2015-11-01

    We report on a switching synchronization phenomenon in one-dimensional memristive networks, which occurs when several memristive systems with different switching constants are switched from the high- to low-resistance state. Our numerical simulations show that such a collective behavior is especially pronounced when the applied voltage slightly exceeds the combined threshold voltage of memristive systems. Moreover, a finite increase in the network switching time is found compared to the average switching time of individual systems. An analytical model is presented to explain our observations. Using this model, we have derived asymptotic expressions for memory resistances at short and long times, which are in excellent agreement with results of our numerical simulations.

  20. Correcting GOES-R Magnetometer Data for Stray Fields

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Freesland, Douglas C.; Tadikonda, Sivakumara K.; Kronenwetter, Jeffrey; Todirita, Monica; Dahya, Melissa; Chu, Donald

    2016-01-01

    Time-varying spacecraft magnetic fields or stray fields are a problem for magnetometer systems. While constant fields can be removed with zero offset calibration, stray fields are difficult to distinguish from ambient field variations. Putting two magnetometers on a long boom and solving for both the ambient and stray fields can be a good idea, but this gradiometer solution is even more susceptible to noise than a single magnetometer. Unless the stray fields are larger than the magnetometer noise, simply averaging the two measurements is a more accurate approach. If averaging is used, it may be worthwhile to explicitly estimate and remove stray fields. Models and estimation algorithms are provided for solar array, arcjet and reaction wheel fields.
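
    A single-axis toy comparison of the two approaches mentioned above (the dipole fall-off model, sensor distances, field values, and noise level are assumptions for illustration, not the GOES-R flight algorithm):

```python
import numpy as np

# Illustrative model: the stray field falls off like a dipole, B_stray(r) = m / r**3,
# and both boom-mounted sensors see the same ambient field plus independent noise.
r_in, r_out = 4.0, 8.0               # inboard / outboard sensor distances from the bus (m)
B_ambient, m_dipole = 100.0, 640.0   # "true" ambient field (nT) and stray dipole strength
noise = np.random.default_rng(3).normal(scale=0.5, size=2)   # sensor noise (nT)

b_in = B_ambient + m_dipole / r_in**3 + noise[0]
b_out = B_ambient + m_dipole / r_out**3 + noise[1]

# Gradiometer: solve the 2x2 system for (ambient field, dipole strength).
# Exact in this model, but it amplifies the sensor noise.
A = np.array([[1.0, 1.0 / r_in**3],
              [1.0, 1.0 / r_out**3]])
B_grad, _ = np.linalg.solve(A, np.array([b_in, b_out]))

# Simple average: biased by the residual stray field, but less sensitive to noise.
B_avg = 0.5 * (b_in + b_out)

print(f"gradiometer estimate: {B_grad:.2f} nT, averaged estimate: {B_avg:.2f} nT")
```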

  1. High Grazing Angle and High Resolution Sea Clutter: Correlation and Polarisation Analyses

    DTIC Science & Technology

    2007-03-01

    the azimuthal correlation. The correlation between the HH and VV sea clutter data is low. A CA-CFAR (cell-average constant false-alarm rate) ... to calculate the power spectra of correlation profiles. The frequency interval of the traditional Discrete Fourier Transform is (NT)⁻¹ Hz, where N and ... sea spikes, the Entropy-Alpha decomposition of sea spikes is shown in Figure 30. The process first locates spikes using a cell-average constant false

  2. α‐Conotoxin M1 (CTx) blocks αδ binding sites of adult nicotinic receptors while ACh binding at αε sites elicits only small and short quantal synaptic currents

    PubMed Central

    Dudel, Josef

    2014-01-01

    Abstract In ‘embryonic’ nicotinic receptors, low CTx concentrations are known to block only the αδ binding site, whereas binding of ACh at the αγ‐site elicits short single channel openings and short bursts. In adult muscles the αγ‐ is replaced by the αε‐site. Quantal EPSCs (qEPSCs) were elicited in adult muscles by depolarization pulses and recorded through a perfused macropatch electrode. One to 200 nmol L⁻¹ CTx reduced amplitudes and decay time constants of qEPSCs, but increased their rise times. CTx block at the αδ binding sites was incomplete: the qEPSCs still contained long bursts from not yet blocked receptors, whereas their average decay time constants were reduced by a short burst component generated by ACh binding to the αε‐site. Two nanomolar CTx applied for 3 h reduced the amplitudes of qEPSCs to less than half with a constant slope. The equilibrium concentration of the block is below 1 nmol L⁻¹ and lower than that of embryonic receptors. CTx block increased in proportion to CTx concentrations (average rate 2 × 10⁴ s⁻¹ mol⁻¹ L). Thus, the reactions of ‘embryonic’ and of adult nicotinic receptors to block by CTx are qualitatively the same. The study of the effects of higher CTx concentrations or of longer periods of application of CTx was limited by presynaptic effects of CTx. Even low CTx concentrations severely reduced the release of quanta by activating presynaptic M2 receptors at a maximal rate of 6 × 10⁵ s⁻¹ mol⁻¹ L. When this dominant inhibition was prevented by blocking the M2 receptors with methoctramine, activation of M1 receptors was unmasked and facilitated release. PMID:25501436

  3. The Influence of Aircraft Speed Variations on Sensible Heat-Flux Measurements by Different Airborne Systems

    NASA Astrophysics Data System (ADS)

    Martin, Sabrina; Bange, Jens

    2014-01-01

    Crawford et al. (Boundary-Layer Meteorol 66:237-245, 1993) showed that the time average is inappropriate for airborne eddy-covariance flux calculations. The aircraft's ground speed through a turbulent field is not constant. One reason can be a correlation with vertical air motion, so that some types of structures are sampled more densely than others. To avoid this, the time-sampled data are adjusted for the varying ground speed so that the modified estimates are equivalent to spatially-sampled data. A comparison of sensible heat-flux calculations using temporal and spatial averaging methods is presented and discussed. Data from three airborne measurement systems, a small unmanned aerial vehicle (UAV), the Helipod, and the Dornier 128-6, are used for the analysis. These systems vary in size, weight and aerodynamic characteristics: the first is a small UAV, the Helipod is a helicopter-borne turbulence probe and the Dornier 128-6 is a manned research aircraft. The systematic bias anticipated in covariance computations due to speed variations was not found when averaging over Dornier, Helipod or UAV flight legs. However, the random differences between spatial and temporal averaging fluxes were found to be up to 30% on the individual flight legs.

  4. Infrared radiation of thin plastic films.

    NASA Technical Reports Server (NTRS)

    Tien, C. L.; Chan, C. K.; Cunnington, G. R.

    1972-01-01

    A combined analytical and experimental study is presented for infrared radiation characteristics of thin plastic films with and without a metal substrate. On the basis of the thin-film analysis, a simple analytical technique is developed for determining band-averaged optical constants of thin plastic films from spectral normal transmittance data for two different film thicknesses. Specifically, the band-averaged optical constants of polyethylene terephthalate and polyimide were obtained from transmittance measurements of films with thicknesses in the range of 0.25 to 3 mil. The spectral normal reflectance and total normal emittance of the film side of singly aluminized films are calculated by use of optical constants; the results compare favorably with measured values.

  5. The stretch to stray on time: Resonant length of random walks in a transient

    NASA Astrophysics Data System (ADS)

    Falcke, Martin; Friedhoff, Victor Nicolai

    2018-05-01

    First-passage times in random walks have a vast number of diverse applications in physics, chemistry, biology, and finance. In general, environmental conditions for a stochastic process are not constant on the time scale of the average first-passage time, or control might be applied to reduce noise. We investigate moments of the first-passage time distribution under an exponential transient describing relaxation of environmental conditions. We solve the Laplace-transformed (generalized) master equation analytically using a novel method that is applicable to general state schemes. The first-passage time from one end to the other of a linear chain of states is our application for the solutions. The dependence of its average on the relaxation rate obeys a power law for slow transients. The exponent ν depends on the chain length N like ν = −N/(N + 1) to leading order. Slow transients substantially reduce the noise of first-passage times expressed as the coefficient of variation (CV), even if the average first-passage time is much longer than the transient. The CV has a pronounced minimum for some lengths, which we call resonant lengths. These results also suggest a simple and efficient noise control strategy and are closely related to the timing of repetitive excitations, coherence resonance, and information transmission by noisy excitable systems. A resonant number of steps from the inhibited state to the excitation threshold and slow recovery from negative feedback provide optimal timing noise reduction and information transmission.
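
    For orientation, a minimal sketch of the constant-environment baseline (an illustrative assumption, not the paper's Laplace-transform solution of the transient problem): the mean and coefficient of variation of the first-passage time across a linear chain follow from the standard absorbing-generator moment formulas.

```python
import numpy as np

def chain_fpt_moments(N=8, k=1.0):
    """First-passage time (state 0 -> state N) moments for a nearest-neighbour random
    walk on a linear chain with constant hopping rate k and a reflecting start."""
    # Generator restricted to the transient states 0..N-1; state N is absorbing.
    Q = np.zeros((N, N))
    for i in range(N):
        if i > 0:
            Q[i, i - 1] = k
        if i < N - 1:
            Q[i, i + 1] = k
        Q[i, i] = -(k if i == 0 else 2 * k)   # state 0 can only hop forward
    Minv = np.linalg.inv(-Q)
    t1 = Minv @ np.ones(N)          # mean first-passage times from each state
    t2 = 2.0 * Minv @ t1            # second moments
    mean, var = t1[0], t2[0] - t1[0] ** 2
    return mean, np.sqrt(var) / mean   # mean FPT and its coefficient of variation

for n in (2, 4, 8, 16):
    mean, cv = chain_fpt_moments(n)
    print(f"N = {n:2d}: mean FPT = {mean:7.1f}, CV = {cv:.2f}")
```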

  6. Spatial variability in the trends in extreme storm surges and weekly-scale high water levels in the eastern Baltic Sea

    NASA Astrophysics Data System (ADS)

    Soomere, Tarmo; Pindsoo, Katri

    2016-03-01

    We address the possibilities of a separation of the overall increasing trend in maximum water levels of semi-enclosed water bodies into associated trends in the heights of local storm surges and basin-scale components of the water level based on recorded and modelled local water level time series. The test area is the Baltic Sea. Sequences of strong storms may substantially increase its water volume and raise the average sea level by almost 1 m for a few weeks. Such events are singled out from the water level time series using a weekly-scale average. The trends in the annual maxima of the weekly average have an almost constant value along the entire eastern Baltic Sea coast for averaging intervals longer than 4 days. Their slopes are ~4 cm/decade for 8-day running average and decrease with an increase of the averaging interval. The trends for maxima of local storm surge heights represent almost the entire spatial variability in the water level maxima. Their slopes vary from almost zero for the open Baltic Proper coast up to 5-7 cm/decade in the eastern Gulf of Finland and Gulf of Riga. This pattern suggests that an increase in wind speed in strong storms is unlikely in this area but storm duration may have increased and wind direction may have rotated.

  7. Foundational Performance Analyses of Pressure Gain Combustion Thermodynamic Benefits for Gas Turbines

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Kaemming, Thomas A.

    2012-01-01

    A methodology is described whereby the work extracted by a turbine exposed to the fundamentally nonuniform flowfield from a representative pressure gain combustor (PGC) may be assessed. The method uses an idealized constant volume cycle, often referred to as an Atkinson or Humphrey cycle, to model the PGC. Output from this model is used as input to a scalable turbine efficiency function (i.e., a map), which in turn allows for the calculation of useful work throughout the cycle. Integration over the entire cycle yields mass-averaged work extraction. The unsteady turbine work extraction is compared to steady work extraction calculations based on various averaging techniques for characterizing the combustor exit pressure and temperature. It is found that averages associated with momentum flux (as opposed to entropy or kinetic energy) provide the best match. This result suggests that momentum-based averaging is the most appropriate figure-of-merit to use as a PGC performance metric. Using the mass-averaged work extraction methodology, it is also found that the design turbine pressure ratio for maximum work extraction is significantly higher than that for a turbine fed by a constant pressure combustor with similar inlet conditions and equivalence ratio. Limited results are presented whereby the constant volume cycle is replaced by output from a detonation-based PGC simulation. The results in terms of averaging techniques and design pressure ratio are similar.

  8. Using solid phase micro extraction to determine salting-out (Setschenow) constants for hydrophobic organic chemicals.

    PubMed

    Jonker, Michiel T O; Muijs, Barry

    2010-06-01

    With increasing ionic strength, the aqueous solubility and activity of organic chemicals are altered. This so-called salting-out effect causes the hydrophobicity of the chemicals to be increased and sorption in the marine environment to be more pronounced than in freshwater systems. The process can be described with empirical salting-out or Setschenow constants, which traditionally are determined by comparing aqueous solubilities in freshwater and saline water. Aqueous solubilities of hydrophobic organic chemicals (HOCs) however are difficult to determine, which might partly explain the limited size of the existing data base on Setschenow constants for these chemicals. In this paper, we propose an alternative approach for determining the constants, which is based on the use of solid phase micro extraction (SPME) fibers. Partitioning of polycyclic aromatic hydrocarbons (PAHs) to SPME fibers increased about 1.7 times when going from de-ionized water to seawater. From the log-linear relationship between SPME fiber-water partition coefficients and ionic strength, Setschenow constants were derived, which measured on average 0.35 L mol⁻¹. These values agreed with literature values existing for some of the investigated PAHs and were independent of solute hydrophobicity or molar volume. Based on the present data, SPME seems to be a convenient and suitable alternative technique to determine Setschenow constants for HOCs. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
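
    A hedged sketch of the log-linear fit used to extract a Setschenow constant from fiber-water partition coefficients (the numbers below are invented for illustration, not the paper's measurements):

```python
import numpy as np

# Illustrative SPME fiber-water partition coefficients measured at several salt
# concentrations (mol/L); the values are made up for the sketch.
c_salt = np.array([0.0, 0.2, 0.4, 0.6])
K_fw = np.array([1.00e4, 1.18e4, 1.39e4, 1.63e4])

# Setschenow (salting-out) fit:  log10 K_fw(c) = log10 K_fw(0) + K_s * c
slope, intercept = np.polyfit(c_salt, np.log10(K_fw), 1)
print(f"Setschenow constant K_s = {slope:.2f} L/mol")   # ~0.35 L/mol for these numbers
```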

  9. Diffusion and binding constants for acetylcholine derived from the falling phase of miniature endplate currents.

    PubMed Central

    Land, B R; Harris, W V; Salpeter, E E; Salpeter, M M

    1984-01-01

    In previous papers we studied the rising phase of a miniature endplate current (MEPC) to derive diffusion and forward rate constants controlling acetylcholine (AcCho) in the intact neuromuscular junction. The present study derives similar values (but with smaller error ranges) for these constants by including experimental results from the falling phase of the MEPC. We find diffusion to be 4 × 10⁻⁶ cm² s⁻¹, slightly slower than free diffusion, forward binding to be 3.3 × 10⁷ M⁻¹ s⁻¹, and the distance from an average release site to the nearest exit from the cleft to be 1.6 μm. We also estimate the back reaction rates. From our values we can accurately describe the shape of MEPCs under different conditions of receptor and esterase concentration. Since we suggest that unbinding is slower than isomerization, we further predict that there should be several short "closing flickers" during the total open time for an AcCho-ligated receptor channel. PMID:6584895

  10. Household spending on health care.

    PubMed

    Chaplin, R; Earl, L

    2000-10-01

    This article examines changes in household spending on health care between 1978 and 1998. It also provides a detailed look at household spending on health care in 1998. Data on household spending are from Statistics Canada's Family Expenditure Survey for survey years between 1978 and 1996, and from the annual Survey of Household Spending for 1997 and 1998. Proportion of after-tax spending was calculated by subtracting average personal income taxes from average total expenditures and then dividing health care expenditures by this figure. Per capita spending was calculated by dividing average household spending by average household size. Constant dollar figures and adjustments for inflation were calculated using the Consumer Price Index (1998 = 100) to control for the effect of inflation over time. Almost every Canadian household (98.2%) reported health care expenditures in 1998, spending an average of close to $1,200, up from around $900 in 1978. In 1998, households dedicated a larger share of their average after-tax spending (2.9%) to health care than they did 20 years earlier (2.3%). Health insurance premiums claimed the largest share (29.8%) of average health care expenditures, followed by dental care, then prescription medications and pharmaceutical products.
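
    A worked example of the derived measures described above, with invented numbers rather than Statistics Canada figures (all variable names and values are assumptions):

```python
# Shares of after-tax spending, per-capita amounts, and constant-dollar conversion.
avg_total_expenditure = 52_000.0   # average total household expenditure ($)
avg_income_tax        = 11_000.0   # average personal income taxes ($)
avg_health_spending   = 1_200.0    # average household health care spending ($)
avg_household_size    = 2.6
cpi_ratio             = 100.0 / 63.0   # CPI(base year = 100) / CPI(earlier year), illustrative

after_tax_spending = avg_total_expenditure - avg_income_tax
share_after_tax    = avg_health_spending / after_tax_spending
per_capita         = avg_health_spending / avg_household_size
constant_dollars   = 900.0 * cpi_ratio   # an earlier-year figure restated in base-year dollars

print(f"share of after-tax spending: {share_after_tax:.1%}")
print(f"per-capita health spending:  ${per_capita:,.0f}")
print(f"earlier figure in constant dollars: ${constant_dollars:,.0f}")
```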

  11. A simple method relating specific rate constants k(E,J) and Thermally averaged rate constants k(infinity)(T) of unimolecular bond fission and the reverse barrierless association reactions.

    PubMed

    Troe, J; Ushakov, V G

    2006-06-01

    This work describes a simple method linking specific rate constants k(E,J) of bond fission reactions AB → A + B with thermally averaged capture rate constants k_cap(T) of the reverse barrierless combination reactions A + B → AB (or the corresponding high-pressure dissociation or recombination rate constants k_∞(T)). Practical applications are given for ionic and neutral reaction systems. The method, in the first stage, requires a phase-space theoretical treatment with the most realistic minimum energy path potential available, either from reduced dimensionality ab initio or from model calculations of the potential, providing the centrifugal barriers E_0(J). The effects of the anisotropy of the potential afterward are expressed in terms of specific and thermal rigidity factors f_rigid(E,J) and f_rigid(T), respectively. Simple relationships provide a link between f_rigid(E,⟨J⟩) and f_rigid(T), where ⟨J⟩ is an average value of J related to J_max(E), i.e., the maximum J value compatible with E ≥ E_0(J), and f_rigid(E,J) applies to the transitional modes. Methods for constructing f_rigid(E,J) from f_rigid(E,⟨J⟩) are also described. The derived relationships are adaptable and can be used at the level of information which is available either from more detailed theoretical calculations or from limited experimental information on specific or thermally averaged rate constants. The examples used for illustration are the systems C6H6+ ⇌ C6H5+ + H, C8H10+ → C7H7+ + CH3, n-C9H12+ ⇌ C7H7+ + C2H5, n-C10H14+ ⇌ C7H7+ + C3H7, HO2 ⇌ H + O2, HO2 ⇌ HO + O, and H2O2 ⇌ 2HO.

  12. Examining a scaled dynamical system of telomere shortening

    NASA Astrophysics Data System (ADS)

    Cyrenne, Benoit M.; Gooding, Robert J.

    2015-02-01

    A model of telomere dynamics is proposed and examined. Our model, which extends a previously introduced model that incorporates stem cells as progenitors of new cells, imposes the Hayflick limit, the maximum number of cell divisions that are possible. This new model leads to cell populations for which the average telomere length is not necessarily a monotonically decreasing function of time, in contrast to previously published models. We provide a phase diagram indicating where such results would be expected via the introduction of scaled populations, rate constants and time. The application of this model to available leukocyte baboon data is discussed.

  13. The effects of ground hydrology on climate sensitivity to solar constant variations

    NASA Technical Reports Server (NTRS)

    Chou, S. H.; Curran, R. J.; Ohring, G.

    1979-01-01

    The effects of two different evaporation parameterizations on the climate sensitivity to solar constant variations are investigated by using a zonally averaged climate model. The model is based on a two-level quasi-geostrophic zonally averaged annual mean model. One of the evaporation parameterizations tested is a nonlinear formulation with the Bowen ratio determined by the predicted vertical temperature and humidity gradients near the earth's surface. The other is the linear formulation with the Bowen ratio essentially determined by the prescribed linear coefficient.

  14. Non-uniformly weighted sampling for faster localized two-dimensional correlated spectroscopy of the brain in vivo

    NASA Astrophysics Data System (ADS)

    Verma, Gaurav; Chawla, Sanjeev; Nagarajan, Rajakumar; Iqbal, Zohaib; Albert Thomas, M.; Poptani, Harish

    2017-04-01

    Two-dimensional localized correlated spectroscopy (2D L-COSY) offers greater spectral dispersion than conventional one-dimensional (1D) MRS techniques, yet long acquisition times and limited post-processing support have slowed its clinical adoption. Improving acquisition efficiency and developing versatile post-processing techniques can bolster the clinical viability of 2D MRS. The purpose of this study was to implement a non-uniformly weighted sampling (NUWS) scheme for faster acquisition of 2D-MRS. A NUWS 2D L-COSY sequence was developed for 7T whole-body MRI. A phantom containing metabolites commonly observed in the brain at physiological concentrations was scanned ten times with both the NUWS scheme of 12:48 duration and a 17:04 constant eight-average sequence using a 32-channel head coil. 2D L-COSY spectra were also acquired from the occipital lobe of four healthy volunteers using both the proposed NUWS and the conventional uniformly-averaged L-COSY sequence. The NUWS 2D L-COSY sequence facilitated 25% shorter acquisition time while maintaining comparable SNR in humans (+0.3%) and phantom studies (+6.0%) compared to uniform averaging. NUWS schemes successfully demonstrated improved efficiency of L-COSY, by facilitating a reduction in scan time without affecting signal quality.

  15. Encoding of sound envelope transients in the auditory cortex of juvenile rats and adult rats.

    PubMed

    Lu, Qi; Jiang, Cuiping; Zhang, Jiping

    2016-02-01

    Accurate neural processing of time-varying sound amplitude and spectral information is vital for species-specific communication. During postnatal development, cortical processing of sound frequency undergoes progressive refinement; however, it is not clear whether cortical processing of sound envelope transients also undergoes age-related changes. We determined the dependence of neural response strength and first-spike latency on sound rise-fall time across sound levels in the primary auditory cortex (A1) of juvenile (P20-P30) rats and adult (8-10 weeks) rats. A1 neurons were categorized as "all-pass", "short-pass", or "mixed" ("all-pass" at high sound levels to "short-pass" at lower sound levels) based on the normalized response strength vs. rise-fall time functions across sound levels. The proportions of A1 neurons within each of the three categories in juvenile rats were similar to that in adult rats. In general, with increasing rise-fall time, the average response strength decreased and the average first-spike latency increased in A1 neurons of both groups. At a given sound level and rise-fall time, the average normalized neural response strength did not differ significantly between the two age groups. However, the A1 neurons in juvenile rats showed greater absolute response strength, longer first-spike latency compared to those in adult rats. In addition, at a constant sound level, the average first-spike latency of juvenile A1 neurons was more sensitive to changes in rise-fall time. Our results demonstrate the dependence of the responses of rat A1 neurons on sound rise-fall time, and suggest that the response latency exhibit some age-related changes in cortical representation of sound envelope rise time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Control of High-Speed Flows Using Helium Injection

    DTIC Science & Technology

    2005-01-21

    Gordeyev et al. 2003). The original instrument was described by Malley et al. (1992), and has been developed further by Jumper at Notre Dame (Hugo & Jumper ... n, described by the Gladstone-Dale relation (Jumper and Fitzgerald, 2001), n = 1 + ρK_GD, where K_GD is the Gladstone-Dale 'constant' which depends on ... aberrations (Jumper and Fitzgerald, 2001). According to the large-aperture approximation, an estimate for the time-averaged SR for a given optical phase

  17. Implementation of GAMMON - An efficient load balancing strategy for a local computer system

    NASA Technical Reports Server (NTRS)

    Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.

    1989-01-01

    GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.

  18. Exact combinatorial approach to finite coagulating systems

    NASA Astrophysics Data System (ADS)

    Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr

    2018-02-01

    This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.
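
    A brute-force Monte Carlo check of the kind of finite-system average discussed above (a continuous-time, constant-kernel sketch with assumed rates, not the paper's exact discrete combinatorial treatment):

```python
import numpy as np

def constant_kernel_clusters(n_monomers=64, n_runs=2000, t_obs=1.0, rate=1.0, seed=7):
    """Continuous-time Monte Carlo of finite binary aggregation with a constant kernel:
    every pair of clusters merges at the same rate, so only the cluster count matters
    for this observable.  Returns the mean and standard deviation of the number of
    clusters present at time t_obs, averaged over independent runs."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_runs):
        n, t = n_monomers, 0.0
        while n > 1:
            total_rate = rate * n * (n - 1) / 2        # constant kernel: all pairs equal
            t += rng.exponential(1.0 / total_rate)
            if t > t_obs:
                break
            n -= 1                                     # one merger removes one cluster
        counts.append(n)
    return float(np.mean(counts)), float(np.std(counts))

mean_clusters, std_clusters = constant_kernel_clusters()
print(f"<number of clusters at t = 1> = {mean_clusters:.2f} +/- {std_clusters:.2f}")
```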

  19. Three-dimensional scanning force/tunneling spectroscopy at room temperature.

    PubMed

    Sugimoto, Yoshiaki; Ueda, Keiichi; Abe, Masayuki; Morita, Seizo

    2012-02-29

    We simultaneously measured the force and tunneling current in three-dimensional (3D) space on the Si(111)-(7 × 7) surface using scanning force/tunneling microscopy at room temperature. The observables, the frequency shift and the time-averaged tunneling current were converted to the physical quantities of interest, i.e. the interaction force and the instantaneous tunneling current. Using the same tip, the local density of states (LDOS) was mapped on the same surface area at constant height by measuring the time-averaged tunneling current as a function of the bias voltage at every lateral position. LDOS images at negative sample voltages indicate that the tip apex is covered with Si atoms, which is consistent with the Si-Si covalent bonding mechanism for AFM imaging. A measurement technique for 3D force/current mapping and LDOS imaging on the equivalent surface area using the same tip was thus demonstrated.

  20. Voter dynamics on an adaptive network with finite average connectivity

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Abhishek; Schmittmann, Beate

    2009-03-01

    We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of the interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity (``degree'') of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.

  1. On the spray pulsations of the effervescent atomizers

    NASA Astrophysics Data System (ADS)

    Mlkvik, Marek; Knizat, Branislav

    2018-06-01

    The presented paper focuses on the comparison of two effervescent atomizer configurations—the outside-in-gas (OIG) and the outside-in-liquid (OIL). The comparison was based on the spray pulsation assessment by different methods. The atomizers were tested under the same operating conditions given by the constant injection pressure (0.14 MPa) and the gas-to-liquid mass ratio (GLR) varying from 2.5 to 5%. The aqueous maltodextrin solution was used as the working liquid (μ = 60 and 146 mPa·s). We found that the time-averaging method does not provide a sufficient description of the spray quality. Based on the cumulative distribution function (CDF), we found that the OIG atomizer generated a spray with non-uniform droplet size distribution at all investigated GLRs. Exceptionally large droplets were present even in sprays that appeared stable when analyzed by the time-averaging method.

  2. Statistical effects related to low numbers of reacting molecules analyzed for a reversible association reaction A + B = C in ideally dispersed systems: An apparent violation of the law of mass action

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, R., E-mail: rszymans@cbmm.lodz.pl; Sosnowski, S.; Maślanka, Ł.

    2016-03-28

    Theoretical analysis and computer simulations (Monte Carlo and numerical integration of differential equations) show that the statistical effect of a small number of reacting molecules depends on the way the molecules are distributed among the small volume nano-reactors (droplets in this study). A simple reversible association A + B = C was chosen as a model reaction, enabling observation of both thermodynamic (apparent equilibrium constant) and kinetic effects of a small number of reactant molecules. When substrates are distributed uniformly among droplets, all containing the same equal number of substrate molecules, the apparent equilibrium constant of the association is higher than the chemical one (observed in a macroscopic, large-volume system). The average rate of the association, being initially independent of the numbers of molecules, becomes (at higher conversions) higher than that in a macroscopic system: the lower the number of substrate molecules in a droplet, the higher is the rate. This results in the correspondingly higher apparent equilibrium constant. A quite opposite behavior is observed when reactant molecules are distributed randomly among droplets: the apparent association rate and equilibrium constants are lower than those observed in large volume systems, being the lower, the lower is the average number of reacting molecules in a droplet. The random distribution of reactant molecules corresponds to ideal (equal sizes of droplets) dispersing of a reaction mixture. Our simulations have shown that when the equilibrated large volume system is dispersed, the resulting droplet system is already at equilibrium and no changes of proportions of droplets differing in reactant compositions can be observed upon prolongation of the reaction time.
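
    A hedged Gillespie-type sketch of the uniform-distribution case described above (rate constants, droplet volume, molecule numbers, and names are arbitrary illustrative assumptions, not the authors' simulation): with two A and two B molecules per droplet, the ensemble-averaged apparent equilibrium constant comes out above the chemical value k_f/k_r.

```python
import numpy as np

def apparent_K(nA0=2, nB0=2, kf=1.0, kr=1.0, volume=1.0,
               t_end=20.0, n_droplets=2000, seed=11):
    """Gillespie simulation of A + B <-> C in many identical droplets, each starting
    with the same small number of substrate molecules (the uniform-distribution case).
    Returns the apparent equilibrium constant <nC> V / (<nA> <nB>) from the states
    sampled at t_end, long after equilibration."""
    rng = np.random.default_rng(seed)
    finals = np.zeros((n_droplets, 3))
    for d in range(n_droplets):
        nA, nB, nC, t = nA0, nB0, 0, 0.0
        while True:
            a_f = kf * nA * nB / volume        # association propensity
            a_r = kr * nC                      # dissociation propensity
            a_tot = a_f + a_r
            t += rng.exponential(1.0 / a_tot)
            if t > t_end:
                break
            if rng.random() < a_f / a_tot:
                nA, nB, nC = nA - 1, nB - 1, nC + 1
            else:
                nA, nB, nC = nA + 1, nB + 1, nC - 1
        finals[d] = (nA, nB, nC)
    mA, mB, mC = finals.mean(axis=0)
    return mC * volume / (mA * mB)

print(f"apparent K in 2+2-molecule droplets: {apparent_K():.2f}")   # exceeds kf/kr = 1
```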

  3. Clinical outcomes after arthroscopic release for recalcitrant frozen shoulder.

    PubMed

    Ebrahimzadeh, Mohammad H; Moradi, Ali; Pour, Mostafa Khalili; Moghadam, Mohammad Hallaj; Kachooei, Amir Reza

    2014-09-01

    To explain the role of arthroscopic release in intractable frozen shoulders. We used different questionnaires and measuring tools to understand whether arthroscopic release is the superior modality to treat patients with intractable frozen shoulders. Between 2007 and 2013, in a prospective study, we enrolled 80 patients (52 females and 28 males) with recalcitrant frozen shoulder, who underwent arthroscopic release at Ghaem Hospital, a tertiary referral center, in Mashhad, Iran. Before operation, all patients filled out the Disability of Arm, Shoulder and Hand (DASH), Constant, University of California Los Angeles (UCLA), ROWE and Visual Analogue Scale (VAS) for pain questionnaires. We measured the difference in range of motion between both the normal and the frozen shoulders in each patient. The average age of the patients was 50.8±7.1 years. In 49 patients, the right shoulder was affected and in the remaining 31 the left side was affected. Before surgery, the patients had been suffering from this disease on average for 11.7±10.3 months. The average time to follow-up was 47.2±6.8 months (14 to 60 months). Diabetes mellitus (38%) and history of shoulder trauma (23%) were the most common comorbidities in our patients. We did not find any significant differences in baseline characteristics between diabetic and non-diabetic patients. After surgery, the average times to achieve maximum pain improvement and range of motion were 3.6±2.1 and 3.6±2 months, respectively. The VAS score, Constant shoulder score, Rowe score, UCLA shoulder score, and DASH score showed significant improvement in shoulder function after surgery, and shoulder range of motion improved in all directions compared to pre-operation range of motion. According to our results, arthroscopic release of recalcitrant frozen shoulder is a valuable modality in treating this disease. This method could decrease pain and improve both subjective and objective mid-term outcomes.

  4. Clinical Outcomes after Arthroscopic Release for Recalcitrant Frozen Shoulder

    PubMed Central

    Ebrahimzadeh, Mohammad H; Moradi, Ali; Pour, Mostafa Khalili; Moghadam, Mohammad Hallaj; Kachooei, Amir Reza

    2014-01-01

    Background: To explain the role of arthroscopic release in intractable frozen shoulders. We used different questionnaires and measuring tools to understand whether arthroscopic release is the superior modality to treat patients with intractable frozen shoulders. Methods: Between 2007 and 2013, in a prospective study, we enrolled 80 patients (52 females and 28 males) with recalcitrant frozen shoulder, who underwent arthroscopic release at Ghaem Hospital, a tertiary referral center, in Mashhad, Iran. Before operation, all patients filled out the Disability of Arm, Shoulder and Hand (DASH), Constant, University of California Los Angeles (UCLA), ROWE and Visual Analogue Scale (VAS) for pain questionnaires. We measured the difference in range of motion between both the normal and the frozen shoulders in each patient. Results: The average age of the patients was 50.8±7.1 years. In 49 patients, the right shoulder was affected and in the remaining 31 the left side was affected. Before surgery, the patients had been suffering from this disease on average for 11.7±10.3 months. The average time to follow-up was 47.2±6.8 months (14 to 60 months). Diabetes mellitus (38%) and history of shoulder trauma (23%) were the most common comorbidities in our patients. We did not find any significant differences in baseline characteristics between diabetic and non-diabetic patients. After surgery, the average times to achieve maximum pain improvement and range of motion were 3.6±2.1 and 3.6±2 months, respectively. The VAS score, Constant shoulder score, Rowe score, UCLA shoulder score, and DASH score showed significant improvement in shoulder function after surgery, and shoulder range of motion improved in all directions compared to pre-operation range of motion. Conclusions: According to our results, arthroscopic release of recalcitrant frozen shoulder is a valuable modality in treating this disease. This method could decrease pain and improve both subjective and objective mid-term outcomes. PMID:25386586

  5. Bound state potential energy surface construction: ab initio zero-point energies and vibrationally averaged rotational constants.

    PubMed

    Bettens, Ryan P A

    2003-01-15

    Collins' method of interpolating a potential energy surface (PES) from quantum chemical calculations for reactive systems (Jordan, M. J. T.; Thompson, K. C.; Collins, M. A. J. Chem. Phys. 1995, 102, 5647. Thompson, K. C.; Jordan, M. J. T.; Collins, M. A. J. Chem. Phys. 1998, 108, 8302. Bettens, R. P. A.; Collins, M. A. J. Chem. Phys. 1999, 111, 816) has been applied to a bound state problem. The interpolation method has been combined for the first time with quantum diffusion Monte Carlo calculations to obtain an accurate ground state zero-point energy, the vibrationally averaged rotational constants, and the vibrationally averaged internal coordinates. In particular, the system studied was fluoromethane using a composite method approximating the QCISD(T)/6-311++G(2df,2p) level of theory. The approach adopted in this work (a) is fully automated, (b) is fully ab initio, (c) includes all nine nuclear degrees of freedom, (d) requires no assumption of the functional form of the PES, (e) possesses the full symmetry of the system, (f) does not involve fitting any parameters of any kind, and (g) is generally applicable to any system amenable to quantum chemical calculations and Collins' interpolation method. The calculated zero-point energy agrees to within 0.2% of its current best estimate. A_0 and B_0 are within 0.9 and 0.3%, respectively, of experiment.

  6. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    PubMed

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
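
    To illustrate the kind of constant-time, constant-memory online segmentation discussed above, the sketch below implements a classic swing-filter-style piecewise linear approximation with a per-segment maximum-error guarantee. It is a generic textbook-style approach written for this summary, not the algorithm proposed in the paper; the function name and the eps parameter are illustrative.

        # Hedged sketch: swing-filter-style online piecewise linear approximation.
        # O(1) work and O(1) memory per incoming sample; every sample in a segment
        # is guaranteed to lie within +/- eps of that segment's line.

        def swing_filter(samples, eps):
            """Yield (t_start, y_start, t_end, y_end) line segments.

            samples: iterable of (t, y) pairs with strictly increasing t
            eps:     maximum allowed absolute deviation per segment
            """
            it = iter(samples)
            try:
                t0, y0 = next(it)            # anchor of the current segment
            except StopIteration:
                return
            slope_hi = float("inf")          # admissible slope upper bound
            slope_lo = float("-inf")         # admissible slope lower bound
            t_prev, y_prev = t0, y0
            for t, y in it:
                dt = t - t0
                hi = (y + eps - y0) / dt
                lo = (y - eps - y0) / dt
                if lo > slope_hi or hi < slope_lo:
                    # No single line through the anchor fits all samples seen so
                    # far: close the segment at the previous sample.
                    slope = 0.5 * (slope_hi + slope_lo)
                    yield (t0, y0, t_prev, y0 + slope * (t_prev - t0))
                    t0, y0 = t_prev, y_prev  # start a new segment
                    dt = t - t0
                    slope_hi = (y + eps - y0) / dt
                    slope_lo = (y - eps - y0) / dt
                else:
                    slope_hi = min(slope_hi, hi)
                    slope_lo = max(slope_lo, lo)
                t_prev, y_prev = t, y
            slope = 0.5 * (slope_hi + slope_lo) if slope_hi != float("inf") else 0.0
            yield (t0, y0, t_prev, y0 + slope * (t_prev - t0))

    A usage example: list(swing_filter(zip(timestamps, values), eps=0.05)) returns the approximating segments; a segment is emitted as soon as the first sample arrives that no line through the segment's anchor can cover within eps.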

  7. Enhanced electrohydrodynamic force generation in a two-stroke cycle dielectric-barrier-discharge plasma actuator

    NASA Astrophysics Data System (ADS)

    Sato, Shintaro; Takahashi, Masayuki; Ohnishi, Naofumi

    2017-05-01

    An approach for electrohydrodynamic (EHD) force production is proposed with a focus on a charge cycle on a dielectric surface. The cycle, consisting of positive-charging and neutralizing strokes, is completely different from the conventional methodology, which involves a negative-charging stroke, in that the dielectric surface charge is constantly positive. The two-stroke charge cycle is realized by applying a DC voltage combined with repetitive pulses. Simulation results indicate that the negative pulse eliminates the surface charge accumulated during constant voltage phase, resulting in repetitive EHD force generation. The time-averaged EHD force increases almost linearly with increasing repetitive pulse frequency and becomes one order of magnitude larger than that driven by the sinusoidal voltage, which has the same peak-to-peak voltage.

  8. Evolution of the Carter constant for inspirals into a black hole: Effect of the black hole quadrupole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flanagan, Eanna E.; Laboratory for Elementary Particle Physics, Cornell University, Ithaca, New York 14853; Hinderer, Tanja

    2007-06-15

    We analyze the effect of gravitational radiation reaction on generic orbits around a body with an axisymmetric mass quadrupole moment Q to linear order in Q, to the leading post-Newtonian order, and to linear order in the mass ratio. This system admits three constants of the motion in absence of radiation reaction: energy, angular momentum along the symmetry axis, and a third constant analogous to the Carter constant. We compute instantaneous and time-averaged rates of change of these three constants. For a point particle orbiting a black hole, Ryan has computed the leading order evolution of the orbit's Carter constant, which is linear in the spin. Our result, when combined with an interaction quadratic in the spin (the coupling of the black hole's spin to its own radiation reaction field), gives the next to leading order evolution. The effect of the quadrupole, like that of the linear spin term, is to circularize eccentric orbits and to drive the orbital plane towards antialignment with the symmetry axis. In addition we consider a system of two point masses where one body has a single mass multipole or current multipole of order l. To linear order in the mass ratio, to linear order in the multipole, and to the leading post-Newtonian order, we show that there does not exist an analog of the Carter constant for such a system (except for the cases of an l=1 current moment and an l=2 mass moment). Thus, the existence of the Carter constant in Kerr depends on interaction effects between the different multipoles. With mild additional assumptions, this result falsifies the conjecture that all vacuum, axisymmetric spacetimes possess a third constant of the motion for geodesic motion.

  9. Development of a New Time-Resolved Laser-Induced Fluorescence Technique

    NASA Astrophysics Data System (ADS)

    Durot, Christopher; Gallimore, Alec

    2012-10-01

    We are developing a time-resolved laser-induced fluorescence (LIF) technique to interrogate the ion velocity distribution function (VDF) of EP thruster plumes down to the microsecond time scale. Better measurements of dynamic plasma processes will lead to improvements in simulation and prediction of thruster operation and erosion. We present the development of the new technique and results of initial tests. Signal-to-noise ratio (SNR) is often a challenge for LIF studies, and it is only more challenging for time-resolved measurements since a lock-in amplifier cannot be used with a long time constant. The new system uses laser modulation on the order of MHz, which enables the use of electronic filtering and phase-sensitive detection to improve SNR while preserving time-resolved information. Statistical averaging over many cycles to further improve SNR is done in the frequency domain. This technique can have significant advantages, including (1) larger spatial maps enabled by shorter data acquisition time and (2) the ability to average data without creating a phase reference by modifying the thruster operating condition with a periodic cutoff in discharge current, which can modify the ion velocity distribution.

  10. Charged structure constants from modularity

    NASA Astrophysics Data System (ADS)

    Das, Diptarka; Datta, Shouvik; Pal, Sridip

    2017-11-01

    We derive a universal formula for the average heavy-heavy-light structure constants for 2d CFTs with non-vanishing u(1) charge. The derivation utilizes the modular properties of one-point functions on the torus. Refinements in N=2 SCFTs show that the resulting Cardy-like formula for the structure constants has precisely the same shifts in the central charge as that of the thermodynamic entropy found earlier. This analysis generalizes the recent results by Kraus and Maloney for CFTs with an additional global u(1) symmetry [1]. Our results at large central charge are also shown to match computations from the holographic dual, which suggests that the averaged CFT three-point coefficient also serves as a useful probe of detecting black hole hair.

  11. Non-Contact Thrust Stand Calibration Method for Repetitively-Pulsed Electric Thrusters

    NASA Technical Reports Server (NTRS)

    Wong, Andrea R.; Toftul, Alexandra; Polzin, Kurt A.; Pearson, J. Boise

    2011-01-01

    A thrust stand calibration technique for use in testing repetitively-pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoidal coil to produce a pulsed magnetic field that acts against the magnetic field produced by a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasisteady average deflection of the thrust stand arm away from the unforced or zero position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other, as the constant relating average deflection and average thrust matches within the errors on the linear regression curve fit of the data. Quantitatively, the error on the calibration coefficient is roughly 1% of the coefficient value.
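
    As a minimal illustration of the Hooke's-law calibration step described above, the sketch below fits average deflection against average applied force by linear regression and inverts the slope to obtain a calibration coefficient. The numbers are hypothetical placeholders, not data from the report.

        # Hedged sketch: linear (Hooke's-law) thrust stand calibration.
        import numpy as np

        avg_force = np.array([10.0, 20.0, 40.0, 80.0, 160.0])      # applied force, arbitrary units
        avg_deflection = np.array([0.52, 1.01, 2.05, 4.02, 8.10])  # quasi-steady deflection, arbitrary units

        slope, intercept = np.polyfit(avg_force, avg_deflection, 1)
        k_cal = 1.0 / slope                      # calibration coefficient: force per unit average deflection
        residuals = avg_deflection - (slope * avg_force + intercept)
        rel_error = residuals.std() / avg_deflection.mean()  # rough check on fit quality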

  12. Error modeling for differential GPS. M.S. Thesis - MIT, 12 May 1995

    NASA Technical Reports Server (NTRS)

    Blerman, Gregory S.

    1995-01-01

    Differential Global Positioning System (DGPS) positioning is used to accurately locate a GPS receiver based upon the well-known position of a reference site. In utilizing this technique, several error sources contribute to position inaccuracy. This thesis investigates the error in DGPS operation and attempts to develop a statistical model for the behavior of this error. The model for DGPS error is developed using GPS data collected by Draper Laboratory. The Marquardt method for nonlinear curve-fitting is used to find the parameters of a first order Markov process that models the average errors from the collected data. The results show that a first order Markov process can be used to model the DGPS error as a function of baseline distance and time delay. The model's time correlation constant is 3847.1 seconds (1.07 hours) for the mean square error. The distance correlation constant is 122.8 kilometers. The total process variance for the DGPS model is 3.73 sq meters.
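
    The first-order Markov (exponentially correlated) error process described above is straightforward to simulate; the sketch below uses the reported time constant and variance for illustration, while the sampling interval and the discretization are assumptions rather than details from the thesis.

        # Hedged sketch: first-order Gauss-Markov model of the DGPS error.
        import numpy as np

        def gauss_markov(n_steps, dt, tau, variance, rng=None):
            """Return n_steps samples of a zero-mean first-order Gauss-Markov process."""
            rng = np.random.default_rng() if rng is None else rng
            phi = np.exp(-dt / tau)                        # one-step autocorrelation
            sigma_w = np.sqrt(variance * (1.0 - phi**2))   # driving-noise standard deviation
            x = np.empty(n_steps)
            x[0] = rng.normal(0.0, np.sqrt(variance))
            for k in range(1, n_steps):
                x[k] = phi * x[k - 1] + rng.normal(0.0, sigma_w)
            return x

        # Values quoted in the abstract: tau = 3847.1 s, process variance = 3.73 m^2.
        errors = gauss_markov(n_steps=3600, dt=1.0, tau=3847.1, variance=3.73)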

  13. First Demonstration of Electrostatic Damping of Parametric Instability at Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Blair, Carl; Gras, Slawek; Abbott, Richard; Aston, Stuart; Betzwieser, Joseph; Blair, David; DeRosa, Ryan; Evans, Matthew; Frolov, Valera; Fritschel, Peter; Grote, Hartmut; Hardwick, Terra; Liu, Jian; Lormand, Marc; Miller, John; Mullavey, Adam; O'Reilly, Brian; Zhao, Chunnong; Abbott, B. P.; Abbott, T. D.; Adams, C.; Adhikari, R. X.; Anderson, S. B.; Ananyeva, A.; Appert, S.; Arai, K.; Ballmer, S. W.; Barker, D.; Barr, B.; Barsotti, L.; Bartlett, J.; Bartos, I.; Batch, J. C.; Bell, A. S.; Billingsley, G.; Birch, J.; Biscans, S.; Biwer, C.; Bork, R.; Brooks, A. F.; Ciani, G.; Clara, F.; Countryman, S. T.; Cowart, M. J.; Coyne, D. C.; Cumming, A.; Cunningham, L.; Danzmann, K.; Da Silva Costa, C. F.; Daw, E. J.; DeBra, D.; DeSalvo, R.; Dooley, K. L.; Doravari, S.; Driggers, J. C.; Dwyer, S. E.; Effler, A.; Etzel, T.; Evans, T. M.; Factourovich, M.; Fair, H.; Fernández Galiana, A.; Fisher, R. P.; Fulda, P.; Fyffe, M.; Giaime, J. A.; Giardina, K. D.; Goetz, E.; Goetz, R.; Gray, C.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, E. D.; Hammond, G.; Hanks, J.; Hanson, J.; Harry, G. M.; Heintze, M. C.; Heptonstall, A. W.; Hough, J.; Izumi, K.; Jones, R.; Kandhasamy, S.; Karki, S.; Kasprzack, M.; Kaufer, S.; Kawabe, K.; Kijbunchoo, N.; King, E. J.; King, P. J.; Kissel, J. S.; Korth, W. Z.; Kuehn, G.; Landry, M.; Lantz, B.; Lockerbie, N. A.; Lundgren, A. P.; MacInnis, M.; Macleod, D. M.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martin, I. W.; Martynov, D. V.; Mason, K.; Massinger, T. J.; Matichard, F.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McIntyre, G.; McIver, J.; Mendell, G.; Merilh, E. L.; Meyers, P. M.; Mittleman, R.; Moreno, G.; Mueller, G.; Munch, J.; Nuttall, L. K.; Oberling, J.; Oppermann, P.; Oram, Richard J.; Ottaway, D. J.; Overmier, H.; Palamos, J. R.; Paris, H. R.; Parker, W.; Pele, A.; Penn, S.; Phelps, M.; Pierro, V.; Pinto, I.; Principe, M.; Prokhorov, L. G.; Puncken, O.; Quetschke, V.; Quintero, E. A.; Raab, F. J.; Radkins, H.; Raffai, P.; Reid, S.; Reitze, D. H.; Robertson, N. A.; Rollins, J. G.; Roma, V. J.; Romie, J. H.; Rowan, S.; Ryan, K.; Sadecki, T.; Sanchez, E. J.; Sandberg, V.; Savage, R. L.; Schofield, R. M. S.; Sellers, D.; Shaddock, D. A.; Shaffer, T. J.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sigg, D.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Sorazu, B.; Staley, A.; Strain, K. A.; Tanner, D. B.; Taylor, R.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Torrie, C. I.; Traylor, G.; Vajente, G.; Valdes, G.; van Veggel, A. A.; Vecchio, A.; Veitch, P. J.; Venkateswara, K.; Vo, T.; Vorvick, C.; Walker, M.; Ward, R. L.; Warner, J.; Weaver, B.; Weiss, R.; Weßels, P.; Willke, B.; Wipf, C. C.; Worden, J.; Wu, G.; Yamamoto, H.; Yancey, C. C.; Yu, Hang; Yu, Haocun; Zhang, L.; Zucker, M. E.; Zweizig, J.; LSC Instrument Authors

    2017-04-01

    Interferometric gravitational wave detectors operate with high optical power in their arms in order to achieve high shot-noise limited strain sensitivity. A significant limitation to increasing the optical power is the phenomenon of three-mode parametric instabilities, in which the laser field in the arm cavities is scattered into higher-order optical modes by acoustic modes of the cavity mirrors. The optical modes can further drive the acoustic modes via radiation pressure, potentially producing an exponential buildup. One proposed technique to stabilize parametric instability is active damping of acoustic modes. We report here the first demonstration of damping a parametrically unstable mode using active feedback forces on the cavity mirror. A 15 538 Hz mode that grew exponentially with a time constant of 182 sec was damped using electrostatic actuation, with a resulting decay time constant of 23 sec. An average control force of 0.03 nN was required to maintain the acoustic mode at its minimum amplitude.

  14. Relations among passive electrical properties of lumbar alpha-motoneurones of the cat.

    PubMed Central

    Gustafsson, B; Pinter, M J

    1984-01-01

    The relations among passive membrane properties have been examined in cat motoneurones utilizing exclusively electrophysiological techniques. A significant relation was found to exist between the input resistance and the membrane time constant. The estimated electrotonic length showed no evident tendency to vary with input resistance but did show a tendency to decrease with increasing time constant. Detailed analysis of this trend suggests, however, that a variation in dendritic geometry is likely to exist among cat motoneurones, such that the dendritic trees of motoneurones projecting to fast-twitch muscle units are relatively more expansive than those of motoneurones projecting to slow-twitch units. Utilizing an expression derived from the Rall neurone model, the total capacitance of the equivalent cylinder corresponding to a motoneurone has been estimated. With the assumption of a constant and uniform specific capacitance of 1 μF/cm2, the resulting values have been used as estimates of cell surface area. These estimates agree well with morphologically obtained measurements from cat motoneurones reported by others. Both membrane time constant (and thus likely specific membrane resistivity) and electrotonic length showed little tendency to vary with surface area. However, after-hyperpolarization (a.h.p.) duration showed some tendency to vary such that cells with brief a.h.p. duration were, on average, larger than those with longer a.h.p. durations. Apart from motoneurones with the lowest values, axonal conduction velocity was only weakly related to variations in estimated surface area. Input resistance and membrane time constant were found to vary systematically with the a.h.p. duration. Analysis suggested that the major part of the increase in input resistance with a.h.p. duration was related to an increase in membrane resistivity and a variation in dendritic geometry rather than to differences in surface area among the motoneurones. The possible effects of imperfect electrode seals have been considered. According to an analysis of a passive membrane model, soma leaks caused by impalement injury will result in underestimates of input resistance and time constant and over-estimates of electrotonic length and total capacitance. Assuming a non-injured resting potential of -80 mV, a comparison of membrane potentials predicted by various relative leaks (leak conductance/input conductance) with those actually observed suggests that the magnitude of these errors in the present material will not unduly affect the presented results. PMID:6520792

  15. Measuring Solar Radiation Incident on Earth: Solar Constant-3 (SOLCON-3)

    NASA Technical Reports Server (NTRS)

    Crommelynck, Dominique; Joukoff, Alexandre; Dewitte, Steven

    2002-01-01

    Life on Earth is possible because the climate conditions on Earth are relatively mild. One element of the climate on Earth, the temperature, is determined by the heat exchanges between the Earth and its surroundings, outer space. The heat exchanges take place in the form of electromagnetic radiation. The Earth gains energy because it absorbs solar radiation, and it loses energy because it emits thermal infrared radiation to cold space. The heat exchanges are in balance: the heat gained by the Earth through solar radiation equals the heat lost through thermal radiation. When the balance is perturbed, a temperature change and hence a climate change of the Earth will occur. One possible perturbation of the balance is the CO2 greenhouse effect: when the amount of CO2 in the atmosphere increases, this will reduce the loss of thermal infrared radiation to cold space. Earth will gain more heat and hence the temperature will rise. Another perturbation of the balance can occur through variation of the amount of energy emitted by the sun. When the sun emits more energy, this will directly cause a rise of temperature on Earth. For a long time scientists believed that the energy emitted by the sun was constant. The 'solar constant' is defined as the amount of solar energy received per unit surface at a distance of one astronomical unit (the average distance of Earth's orbit) from the sun. Accurate measurements of the variations of the solar constant have been made since 1978. From these we know that the solar constant varies approximately with the 11-year solar cycle observed in other solar phenomena, such as the occurrence of sunspots, dark spots that are sometimes visible on the solar surface. When a sunspot occurs on the sun, since the spot is dark, the radiation (light) emitted by the sun drops instantaneously. Oddly, periods of high solar activity, when sunspot numbers increase, correspond to periods when the average solar constant is high. This indicates that the background on which the sunspots occur becomes brighter during high solar activity.

  16. Dielectric method of high-resolution gas hydrate estimation

    NASA Astrophysics Data System (ADS)

    Sun, Y. F.; Goldberg, D.

    2005-02-01

    In-situ dielectric properties of natural gas hydrate are measured for the first time in the Mallik 5L-38 Well in the Mackenzie Delta, Canada. The average dielectric constant of the hydrate zones is 9, ranging from 5 to 20. The average resistivity is >5 ohm.m in the hydrate zones, ranging from 2 to 10 ohm.m at a 1.1 GHz dielectric tool frequency. The dielectric logs show trends similar to the sonic and induction resistivity logs, but exhibit inherently higher vertical resolution (<5 cm). The average in-situ hydrate saturation in the well is about 70%, ranging from 20% to 95%. The dielectric estimates are overall in agreement with the induction estimates, but the induction log tends to overestimate hydrate content by up to 15%. Dielectric estimates could be used as a better proxy of in-situ hydrate saturation in modeling hydrate dynamics. The fine-scale structure in hydrate zones could help reveal hydrate formation history.

  17. Modeling of mixing processes: Fluids, particulates, and powders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ottino, J.M.; Hansen, S.

    Work under this grant involves two main areas: (1) Mixing of Viscous Liquids, this first area comprising aggregation, fragmentation and dispersion, and (2) Mixing of Powders. In order to produce a coherent self-contained picture, we report primarily on results obtained under (1), and within this area, mostly on computational studies of particle aggregation in regular and chaotic flows. Numerical simulations show that the average cluster size of compact clusters grows algebraically, while the average cluster size of fractal clusters grows exponentially; companion mathematical arguments are used to describe the initial growth of average cluster size and polydispersity. It is found that when the system is well mixed and the capture radius independent of mass, the polydispersity is constant for long-times and the cluster size distribution is self-similar. Furthermore, our simulations indicate that the fractal nature of the clusters is dependent upon the mixing.

  18. Predicting heterocyclic ring coupling constants through a conformational search of tetra-O-methyl-(+)-catechin

    Treesearch

    Fred L. Tobiason; Richard W. Hemingway

    1994-01-01

    A GMMX conformational search routine gives a family of conformations that reflects the Boltzmann-averaged heterocyclic ring conformation as evidenced by accurate prediction of all three coupling constants observed for tetra-O-methyl-(+)-catechin.

  19. Predicting heterocyclic ring coupling constants through a conformational search of tetra-o-methyl-(+)-catechin

    Treesearch

    Fred L. Tobiason; Richard W. Hemingway

    1994-01-01

    A GMMX conformational search routine gives a family of conformations that reflects the Boltzmann-averaged heterocyclic ring conformation as evidenced by accurate prediction of all three coupling constants observed for tetra-O-methyl-(+)-catechin.

  20. Observing Climate with GNSS Radio Occultation: Characterization and Mitigation of Systematic Errors

    NASA Astrophysics Data System (ADS)

    Foelsche, U.; Scherllin-Pirscher, B.; Danzer, J.; Ladstädter, F.; Schwarz, J.; Steiner, A. K.; Kirchengast, G.

    2013-05-01

    GNSS Radio Occultation (RO) data are very well suited for climate applications, since they do not require external calibration and only short-term measurement stability over the occultation event duration (1 - 2 min), which is provided by the atomic clocks onboard the GPS satellites. With this "self-calibration", it is possible to combine data from different sensors and different missions without need for inter-calibration and overlap (which is extremely hard to achieve for conventional satellite data). Using the same retrieval for all datasets we obtained monthly refractivity and temperature climate records from multiple radio occultation satellites, which are consistent within 0.05 % and 0.05 K in almost any case (taking global averages over the altitude range 10 km to 30 km). Longer-term average deviations are even smaller. Even though the RO record is still short, its high quality already allows us to see statistically significant temperature trends in the lower stratosphere. The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We started to look at different error sources, like the influence of the quality control and the high altitude initialization. We will focus on recent results regarding (apparent) constants used in the retrieval and systematic ionospheric errors. (1) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity with atmospheric parameters. With the increasing quality of RO climatologies, errors in these constants are not negligible anymore. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows computing sensitivities to changes in atmospheric composition. We found that changes caused by the anthropogenic CO2 increase are still almost exactly offset by the concurrent O2 decrease. (2) Since the ionospheric correction of RO data is an approximation to first order, we have to consider an ionospheric residual, which can be expected to be larger when the ionization is high (day vs. night, high vs. low solar activity). In climate applications this could lead to a time dependent bias, which could induce wrong trends in atmospheric parameters at high altitudes. We studied this systematic ionospheric residual by analyzing the bending angle bias characteristics of CHAMP and COSMIC RO data from the years 2001 to 2011. We found that the night time bending angle bias stays constant over the whole period of 11 years, while the day time bias increases from low to high solar activity. As a result, the difference between night and day time bias increases from -0.05 μrad to -0.4 μrad. This behavior paves the way to correct the (small) solar cycle dependent bias of large ensembles of day time RO profiles.
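
    The "classic" refractivity relation referred to in point (1) is conventionally written in the three-term form below; this is the standard textbook expression, shown only for orientation, and the abstract does not specify which refinement of the constants is actually used in the retrieval.

        $$ N \;=\; k_1 \, \frac{p_d}{T} \;+\; k_2 \, \frac{e}{T} \;+\; k_3 \, \frac{e}{T^2} $$

    Here $p_d$ is the dry-air partial pressure, $e$ the water vapor partial pressure, $T$ the temperature, and $k_1$, $k_2$, $k_3$ the empirically determined refractivity constants whose uncertainties and physical underpinnings are discussed above.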

  1. Textural changes of FER-A peridotite in time series piston-cylinder experiments at 1.0 GPa, 1300°C

    NASA Astrophysics Data System (ADS)

    Schwab, B. E.; Mercer, C. N.; Johnston, A.

    2012-12-01

    A series of eight 1.0 GPa, 1300°C partial melting experiments were performed using FER-A peridotite starting material to investigate potential textural changes in the residual crystalline phases over time. Powdered peridotite with a layer of vitreous carbon spheres as a melt sink were sealed in graphite-lined Pt capsules and run in CaF2 furnace assemblies in 1.27cm piston-cylinder apparatus at the University of Oregon. Run durations ranged from 4 to 128 hours. Experimental charges were mounted in epoxy, cut, and polished for analysis. In a first attempt to quantify the mineral textures, individual 500x BSE images were collected from selected, representative locations on each of the experimental charges using the FEI Quanta 250 ESEM at Humboldt State University. Noran System Seven (NSS) EDS system was used to collect x-ray maps (spectral images) to aid in identification of phases. A combination of image analysis techniques within NSS and ImageJ software are being used to process the images and quantify the mineral textures observed. The goals are to quantify the size, shape, and abundance of residual olivine (ol), orthopyroxene (opx), clinopyroxene (cpx), and spinel crystals within the selected sample areas of the run products. Additional work will be done to compare the results of the selected areas with larger (lower magnification) images acquired using the same techniques. Preliminary results indicate that measurements of average grain area, minimum grain area, and average, maximum, and minimum grain perimeter show the greatest change (generally decreasing) in measurements for ol, opx, and cpx between the shortest-duration, 4-hour, experiment and the subsequent, 8-hour, experiment. The largest relative change in nearly all of these measurements appears to be for cpx. After the initial decrease, preliminary measurements remain relatively constant for ol, opx, and cpx, respectively, in experiments from 8 to 128 hours in duration. In contrast, measured parameters of spinel grains increase from the 4-hour to 8-hour experiment and continue to fluctuate over the time interval investigated. Spinel also represents the smallest number of individual grains (average n = 25) in any experiment. Average aspect ratios for all minerals remain relatively constant (~1.5-2) throughout the time series. Additional measurements and refinements are underway.

  2. Proportional Feedback Control of Energy Intake During Obesity Pharmacotherapy.

    PubMed

    Hall, Kevin D; Sanghvi, Arjun; Göbel, Britta

    2017-12-01

    Obesity pharmacotherapies result in an exponential time course for energy intake whereby large early decreases dissipate over time. This pattern of declining drug efficacy to decrease energy intake results in a weight loss plateau within approximately 1 year. This study aimed to elucidate the physiology underlying the exponential decay of drug effects on energy intake. Placebo-subtracted energy intake time courses were examined during long-term obesity pharmacotherapy trials for 14 different drugs or drug combinations within the theoretical framework of a proportional feedback control system regulating human body weight. Assuming each obesity drug had a relatively constant effect on average energy intake and did not affect other model parameters, our model correctly predicted that long-term placebo-subtracted energy intake was linearly related to early reductions in energy intake according to a prespecified equation with no free parameters. The simple model explained about 70% of the variance between drug studies with respect to the long-term effects on energy intake, although a significant proportional bias was evident. The exponential decay over time of obesity pharmacotherapies to suppress energy intake can be interpreted as a relatively constant effect of each drug superimposed on a physiological feedback control system regulating body weight. © 2017 The Obesity Society.
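
    A toy version of the proportional feedback interpretation can make the exponential decay explicit. The sketch below superimposes a constant drug effect on a linear intake feedback proportional to weight change and ignores expenditure changes; all parameter values are assumptions chosen only to illustrate the qualitative behavior, not the paper's prespecified equation.

        # Hedged sketch: constant drug effect + proportional intake feedback.
        import numpy as np

        rho = 7700.0          # assumed energy density of weight change, kcal per kg
        k_fb = 95.0           # assumed feedback gain, kcal/day per kg of weight change
        drug_effect = -800.0  # assumed constant drug effect on intake, kcal/day

        dt = 1.0                          # days
        t = np.arange(0.0, 365.0, dt)
        dW = np.zeros_like(t)             # weight change relative to placebo, kg
        dEI = np.zeros_like(t)            # placebo-subtracted energy intake, kcal/day
        for i in range(1, len(t)):
            dEI[i - 1] = drug_effect - k_fb * dW[i - 1]
            dW[i] = dW[i - 1] + (dEI[i - 1] / rho) * dt
        dEI[-1] = drug_effect - k_fb * dW[-1]
        # In this toy model dEI starts near drug_effect and relaxes exponentially
        # with time constant rho / k_fb (about 80 days here), producing the
        # plateau-like behavior described in the abstract.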

  3. Health care in the CIS countries : the case of hospitals in Ukraine.

    PubMed

    Pilyavsky, Anatoly; Staat, Matthias

    2006-09-01

    The study analyses the technical efficiency of community hospitals in Ukraine during 1997-2001. Hospital costs amount to two-thirds of Ukrainian spending on health care. Data are available on the number of beds, physicians and nurses employed, surgical procedures performed, and admissions and patient days. We employ data envelopment analysis to calculate the efficiency of hospitals and to assess productivity changes over time. The scores calculated with an output-oriented model assuming constant returns to scale range from 150% to 110%. Average relative inefficiency of the hospitals is initially above 30% and later drops to 15% or below. The average productivity change is positive but below 1%; a Malmquist index decomposition reveals that negative technological progress is overcompensated by positive catching-up.

  4. Viscosity Relaxation in Molten HgZnTe

    NASA Technical Reports Server (NTRS)

    Su, Ching-Hua; Lehoczky, S. L.; Kim, Yeong Woo; Baird, James K.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Rotating cup measurements of the viscosity of the pseudo-binary melt, HgZnTe have shown that the isothermal liquid with zinc mole fraction 0.16 requires tens of hours of equilibration time before a steady viscous state can be achieved. Over this relaxation period, the viscosity at 790 C increases by a factor of two, while the viscosity at 810 C increases by 40%. Noting that the Group VI elements tend to polymerize when molten, we suggest that the viscosity of the melt is enhanced by the slow formation of Te atom chains. To explain the build-up of linear Te n-mers, we propose a scheme, which contains formation reactions with second order kinetics that increase the molecular weight, and decomposition reactions with first order kinetics that inactivate the chains. The resulting rate equations can be solved for the time dependence of each molecular weight fraction. Using these molecular weight fractions, we calculate the time dependence of the average molecular weight. Using the standard semi-empirical relation between polymer average molecular weight and viscosity, we then calculate the viscosity relaxation curve. By curve fitting, we find that the data imply that the rate constant for n-mer formation is much smaller than the rate constant for n-mer deactivation, suggesting that Te atoms only weakly polymerize in molten HgZnTe. The steady state toward which the melt relaxes occurs as the rate of formation of an n-mer becomes exactly balanced by the sum of the rate for its deactivation and the rate for its polymerization to form an (n+1)-mer.
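
    A lumped (moment) version of the proposed scheme reproduces the qualitative relaxation: active chains combine with second-order kinetics, are deactivated with first-order kinetics, and the viscosity is tied to the average chain length through a semi-empirical power law. The rate constants, the exponent, and the monomer viscosity below are assumptions for illustration only, not fitted values from the study.

        # Hedged sketch: moment equations for chain formation/deactivation.
        import numpy as np
        from scipy.integrate import solve_ivp

        k_f, k_d = 5.0, 0.05           # assumed formation (2nd-order) and deactivation (1st-order) rate constants
        alpha, eta_mono = 1.0, 1.0e-3  # assumed viscosity exponent and monomer-melt viscosity

        def moments(t, y):
            n_act, n_dead = y                          # active / deactivated chain counts (normalized)
            dn_act = -k_f * n_act**2 - k_d * n_act     # each combination event removes one active chain
            dn_dead = k_d * n_act
            return [dn_act, dn_dead]

        sol = solve_ivp(moments, (0.0, 100.0), [1.0, 0.0], dense_output=True)
        t = np.linspace(0.0, 100.0, 200)
        n_act, n_dead = sol.sol(t)
        mean_length = 1.0 / (n_act + n_dead)           # total monomer mass is conserved (normalized to 1)
        viscosity = eta_mono * mean_length**alpha      # semi-empirical length-viscosity relation
        # The viscosity rises and then levels off as formation is exhausted by
        # deactivation, mirroring the slow approach to a steady viscous state
        # described above.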

  5. Anatomically contoured plates for fixation of rib fractures.

    PubMed

    Bottlang, Michael; Helzel, Inga; Long, William B; Madey, Steven

    2010-03-01

    Intraoperative contouring of long bridging plates for stabilization of flail chest injuries is difficult and time consuming. This study implemented for the first time biometric parameters to derive anatomically contoured rib plates. These plates were tested on a range of cadaveric ribs to quantify plate fit and to extract a best-fit plating configuration. Three left and three right rib plates were designed, which accounted for anatomic parameters required when conforming a plate to the rib surface. The length lP over which each plate could trace the rib surface was evaluated on 109 cadaveric ribs. For each rib level 3-9, the plate design with the highest lP value was extracted to determine a best-fit plating configuration. Furthermore, the characteristic twist of rib surfaces was measured on 49 ribs to determine the surface congruency of anatomic plates with a constant twist. The tracing length lP of the best-fit plating configuration ranged from 12.5 cm to 14.7 cm for ribs 3-9. The corresponding range for standard plates was 7.1-13.7 cm. The average twist of ribs over 8-cm, 12-cm, and 16-cm segments was 8.3 degrees, 20.6 degrees, and 32.7 degrees, respectively. The constant twist of anatomic rib plates was not significantly different from the average rib twist. A small set of anatomic rib plates can minimize the need for intraoperative plate contouring for fixation of ribs 3-9. Anatomic rib plates can therefore reduce the time and complexity of flail chest stabilization and facilitate spanning of flail segments with long plates.

  6. Effect of Different Loading Conditions on the Nucleation and Development of Shear Zones Around Material Heterogeneities

    NASA Astrophysics Data System (ADS)

    Rybacki, E.; Nardini, L.; Morales, L. F.; Dresen, G.

    2017-12-01

    Rock deformation at depth in the Earth's crust is often localized in high temperature shear zones, which occur in the field at different scales and in a variety of lithologies. The presence of material heterogeneities has long been recognized to be an important cause of shear zone evolution, but the mechanisms controlling initiation and development of localization are not fully understood, and the question of which loading conditions (constant stress or constant deformation rate) are most favourable is still open. To better understand the effect of boundary conditions on shear zone nucleation around heterogeneities, we performed a series of torsion experiments under constant twist rate (CTR) and constant torque (CT) conditions in a Paterson-type deformation apparatus. The sample assemblage consisted of copper-jacketed Carrara marble hollow cylinders with one weak inclusion of Solnhofen limestone. The CTR experiments were performed at maximum bulk strain rates of 1.8-1.9*10-4 s-1, yielding shear stresses of 19-20 MPa. CT tests were conducted at shear stresses between 18.4 and 19.8 MPa resulting in shear strain rates of 1-2*10-4 s-1. All experiments were run at a temperature of 900 °C and a confining pressure of 400 MPa. Maximum bulk shear strains (γ) were ca. 0.3 and 1. Strain localized within the host marble in front of the inclusion in an area termed the process zone. Here, grain size reduction is intense and local shear strain (estimated from markers on the jackets) is up to 8 times higher than the applied bulk strain, rapidly dropping to 2 times higher at larger distance from the inclusion. The evolution of key microstructural parameters such as average grain size and average grain orientation spread (GOS, a measure of lattice distortion) within the process zone, determined by electron backscatter diffraction analysis, differs significantly as a function of loading conditions. Both parameters indicate that, independent of bulk strain and distance from the inclusion, the contribution of small strain-free recrystallized grains is larger in CTR than in CT samples. Our results suggest that loading conditions substantially affect material heterogeneity-induced localization in its nucleation and transient stages.

  7. Metapopulation extinction risk: dispersal's duplicity.

    PubMed

    Higgins, Kevin

    2009-09-01

    Metapopulation extinction risk is the probability that all local populations are simultaneously extinct during a fixed time frame. Dispersal may reduce a metapopulation's extinction risk by raising its average per-capita growth rate. By contrast, dispersal may raise a metapopulation's extinction risk by reducing its average population density. Which effect prevails is controlled by habitat fragmentation. Dispersal in mildly fragmented habitat reduces a metapopulation's extinction risk by raising its average per-capita growth rate without causing any appreciable drop in its average population density. By contrast, dispersal in severely fragmented habitat raises a metapopulation's extinction risk because the rise in its average per-capita growth rate is more than offset by the decline in its average population density. The metapopulation model used here shows several other interesting phenomena. Dispersal in sufficiently fragmented habitat reduces a metapopulation's extinction risk to that of a constant environment. Dispersal between habitat fragments reduces a metapopulation's extinction risk insofar as local environments are asynchronous. Grouped dispersal raises the effective habitat fragmentation level. Dispersal search barriers raise metapopulation extinction risk. Nonuniform dispersal may reduce the effective fraction of suitable habitat fragments below the extinction threshold. Nonuniform dispersal may make demographic stochasticity a more potent metapopulation extinction force than environmental stochasticity.

  8. Continuous estimates of Survival through Eight Years of Service Using FY 1979 Cross-Sectional Data.

    DTIC Science & Technology

    1981-07-01

    performed for Class A school attendees and non-A school attendees, holding constant the effects of age, educational level, and mental group. Mean...through eight years of service for non-prior service male recruits. Average survival times by education, mental group, and age are calculated from...attendees is 35 months and for non-A school attendees is 28 months. As expected, we found that educational level has the greatest impact on survival

  9. On the time-splitting scheme used in the Princeton Ocean Model

    NASA Astrophysics Data System (ADS)

    Kamenkovich, V. M.; Nechaev, D. A.

    2009-05-01

    The analysis of the time-splitting procedure implemented in the Princeton Ocean Model (POM) is presented. The time-splitting procedure uses different time steps to describe the evolution of interacting fast and slow propagating modes. In the general case the exact separation of the fast and slow modes is not possible. The main idea of the analyzed procedure is to split the system of primitive equations into two systems of equations for interacting external and internal modes. By definition, the internal mode varies slowly and the crux of the problem is to determine the proper filter, which excludes the fast component of the external mode variables in the relevant equations. The objective of this paper is to examine properties of the POM time-splitting procedure applied to equations governing the simplest linear non-rotating two-layer model of constant depth. The simplicity of the model makes it possible to study these properties analytically. First, the time-split system of differential equations is examined for two types of the determination of the slow component based on an asymptotic approach or time-averaging. Second, the differential-difference scheme is developed and some criteria of its stability are discussed for centered, forward, or backward time-averaging of the external mode variables. Finally, the stability of the POM time-splitting schemes with centered and forward time-averaging is analyzed. The effect of the Asselin filter on solutions of the considered schemes is studied. It is assumed that questions arising in the analysis of the simplest model are inherent in the general model as well.
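
    For reference, the Asselin (Robert-Asselin) filter mentioned at the end is conventionally applied to leapfrog time stepping in the form below; this is the standard textbook expression rather than a formula quoted from the paper.

        $$ \bar{u}^{\,n} \;=\; u^{n} + \alpha \left( u^{\,n+1} - 2\,u^{n} + \bar{u}^{\,n-1} \right) $$

    Here $\bar{u}$ denotes the filtered field and $\alpha$ is a small filter coefficient (typically of order 0.05-0.1) that damps the computational mode of the leapfrog scheme, which is why its effect on the stability of the split scheme is worth analyzing.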

  10. Anisotropic k-essence cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chimento, Luis P.; Forte, Monica

    We investigate a Bianchi type-I cosmology with k-essence and find the set of models which dissipate the initial anisotropy. There are cosmological models with extended tachyon fields and k-essence having a constant barotropic index. We obtain the conditions leading to a regular bounce of the average geometry and the residual anisotropy on the bounce. For constant potential, we develop purely kinetic k-essence models which are dust dominated in their early stages, dissipate the initial anisotropy, and end in a stable de Sitter accelerated expansion scenario. We show that linear k-field and polynomial kinetic function models evolve asymptotically to Friedmann-Robertson-Walker cosmologies. The linear case is compatible with an asymptotic potential interpolating between $V_l \propto \phi^{-\gamma_l}$, in the shear dominated regime, and $V_l \propto \phi^{-2}$ at late time. In the polynomial case, the general solution contains cosmological models with an oscillatory average geometry. For linear k-essence, we find the general solution in the Bianchi type-I cosmology when the k field is driven by an inverse square potential. This model shares the same geometry as a quintessence field driven by an exponential potential.

  11. DETERMINATION OF HENRY'S LAW CONSTANTS OF SELECTED PRIORITY POLLUTANTS

    EPA Science Inventory

    The Henry's law constants (H) for 41 selected priority pollutants were determined to characterize these pollutants and provide information on their fate as they pass through wastewater treatment systems. All experimental values presented for H are averages of two or more replicat...

  12. Surface Plasmon Resonance Evaluation of Colloidal Metal Aerogel Filters

    NASA Technical Reports Server (NTRS)

    Smith, David D.; Sibille, Laurent; Cronise, Raymond J.; Noever, David A.

    1997-01-01

    Surface plasmon resonance imaging has in the past been applied to the characterization of thin films. In this study we apply the surface plasmon technique not to determine macroscopic spatial variations but rather to determine average microscopic information. Specifically, we deduce the dielectric properties of the surrounding gel matrix and information concerning the dynamics of the gelation process from the visible absorption characteristics of colloidal metal nanoparticles contained in aerogel pores. We have fabricated aerogels containing gold and silver nanoparticles. Because the dielectric constant of the metal particles is linked to that of the host matrix at the surface plasmon resonance, any change in the dielectric constant of the material surrounding the metal nanoparticles results in a shift in the surface plasmon wavelength. During gelation the surface plasmon resonance shifts to the red as the average or effective dielectric constant of the matrix increases. Conversely, formation of an aerogel or xerogel through supercritical extraction or evaporation of the solvent produces a blue shift in the resonance indicating a decrease in the dielectric constant of the matrix. From the magnitude of this shift we deduce the average fraction of air and of silica in contact with the metal particles. The surface area of metal available for catalytic gas reaction may thus be determined.

  13. Fracture in Westerly granite under AE feedback and constant strain rate loading: Nucleation, quasi-static propagation, and the transition to unstable fracture propagation

    USGS Publications Warehouse

    Thompson, B.D.; Young, R.P.; Lockner, D.A.

    2006-01-01

    New observations of fracture nucleation are presented from three triaxial compression experiments on intact samples of Westerly granite, using Acoustic Emission (AE) monitoring. By conducting the tests under different loading conditions, the fracture process is demonstrated for quasi-static fracture (under AE Feedback load), a slowly developing unstable fracture (loaded at a 'slow' constant strain rate of 2.5 × 10-6/s) and an unstable fracture that develops near instantaneously (loaded at a 'fast' constant strain rate of 5 × 10-5/s). By recording a continuous ultrasonic waveform during the critical period of fracture, the entire AE catalogue can be captured and the exact time of fracture defined. Under constant strain loading, three stages are observed: (1) An initial nucleation or stable growth phase at a rate of ~1.3 mm/s, (2) a sudden increase to a constant or slowly accelerating propagation speed of ~18 mm/s, and (3) unstable, accelerating propagation. In the ~100 ms before rupture, the high level of AE activity (as seen on the continuous record) prevented the location of discrete AE events. A lower bound estimate of the average propagation velocity (using the time-to-rupture and the existing fracture length) suggests values of a few m/s. However from a low gain acoustic record, we infer that in the final few ms, the fracture propagation speed increased to 175 m/s. These results demonstrate similarities between fracture nucleation in intact rock and the nucleation of dynamic instabilities in stick slip experiments. It is suggested that the ability to constrain the size of an evolving fracture provides a crucial tool in further understanding the controls on fracture nucleation. © Birkhäuser Verlag, Basel, 2006.

  14. Seasonal ammonia losses from spray-irrigation with secondary-treated recycled water.

    PubMed

    Saez, Jose A; Harmon, Thomas C; Doshi, Sarika; Guerrero, Francisco

    2012-01-01

    This work examines ammonia volatilization associated with agricultural irrigation employing recycled water. Effluent from a secondary wastewater treatment plant was applied using a center pivot irrigation system on a 12 ha agricultural site in Palmdale, California. Irrigation water was captured in shallow pans and ammonia concentrations were quantified in four seasonal events. The average ammonia loss ranged from 15 to 35% (averaging 22%) over 2-h periods. Temporal mass losses were well-fit using a first-order model. The resulting rate constants correlated primarily with temperature and secondarily with wind speed. The observed application rates and timing were projected over an entire irrigation season using meteorological time series data from the site, which yielded volatilization estimates of 0.03 to 0.09 metric tons NH(3)-N/ha per year. These rates are consistent with average rates (0.04 to 0.08 MT NH(3)-N/ha per year) based on 10 to 20 mg NH(3)-N/L effluent concentrations and a 22% average removal. As less than 10% of the treated effluent in California is currently reused, there is potential for this source to increase, but the increase may be offset by a corresponding reduction in synthetic fertilizers usage. This point is a factor for consideration with respect to nutrient management using recycled water.
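
    The first-order fit described above amounts to regressing the log of the remaining ammonia concentration against time. The sketch below shows that calculation on hypothetical pan measurements (the numbers are illustrative, not the study's data).

        # Hedged sketch: estimating a first-order volatilization rate constant.
        import numpy as np

        t_hours = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # elapsed time, h
        conc = np.array([20.0, 18.4, 16.9, 15.6, 14.3])      # NH3-N, mg/L (hypothetical)

        k = -np.polyfit(t_hours, np.log(conc / conc[0]), 1)[0]   # first-order rate constant, 1/h
        fraction_lost_2h = 1.0 - np.exp(-k * 2.0)                # falls in the reported 15-35% range here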

  15. Understanding the exposure-time effect on speckle contrast measurements for laser displays

    NASA Astrophysics Data System (ADS)

    Suzuki, Koji; Kubota, Shigeo

    2018-02-01

    To evaluate the influence of exposure time on speckle noise in laser displays, a speckle contrast measurement method observable at the response time of the human eye was developed using a high-sensitivity camera with a signal-multiplying function. The nonlinearity of the camera's light sensitivity was calibrated to measure accurate speckle contrasts, and the lower measurement noise limit of the speckle contrast was improved by applying a spatial-frequency low-pass filter to the captured images. Three commercially available laser displays were measured over a wide range of exposure times, from tens of milliseconds to several seconds, without adjusting the brightness of the displays. The speckle contrast of a raster-scanned mobile projector without any speckle-reduction device was nearly constant over the various exposure times. In contrast, for full-frame projection type laser displays equipped with a temporally-averaging speckle-reduction device, some speckle contrasts close to the lower noise limit increased slightly at the shorter exposure times because of that noise. As a result, the exposure-time effect on speckle contrast could not be observed in our measurements, although it is more reasonable to think that the speckle contrasts of laser displays equipped with a temporally-averaging speckle-reduction device depend on the exposure time. This discrepancy may be attributed to an underestimation of the temporal averaging factor. We expect that this method will be useful for evaluating various laser displays and for clarifying the relationship between speckle noise and exposure time in further verification of speckle reduction.
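
    The expected exposure-time dependence can be stated compactly with the usual definition of speckle contrast and the standard temporal-averaging argument; the relation below is generic background, not a formula taken from the paper.

        $$ C \;=\; \frac{\sigma_I}{\langle I \rangle}, \qquad C(T) \;\approx\; \frac{C_0}{\sqrt{N}} \;\approx\; C_0 \sqrt{\frac{\tau_c}{T}} \quad (T \gg \tau_c) $$

    Here $T$ is the exposure time, $\tau_c$ the decorrelation time of the speckle patterns produced by the temporally-averaging reduction device, $C_0$ the single-pattern contrast, and $N \approx T/\tau_c$ the number of effectively independent patterns averaged during the exposure; this is why a residual exposure-time effect is expected for the full-frame displays even though it fell below the measurement noise here.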

  16. Age-dependence of the average and equivalent refractive indices of the crystalline lens

    PubMed Central

    Charman, W. Neil; Atchison, David A.

    2013-01-01

    Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
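
    One parametrization consistent with the description (index varying as a power of the normalized distance ρ from the lens center) is shown below, together with the axial average obtained by integration; the exact functional form used in the paper may differ, so this is only a sketch of the modeling step.

        $$ n(\rho) \;=\; n_c - (n_c - n_e)\,\rho^{\,p}, \qquad \bar{n} \;=\; \int_0^1 n(\rho)\, d\rho \;=\; n_c - \frac{n_c - n_e}{p+1} $$

    With the assumed fixed values $n_c = 1.415$ and $n_e = 1.37$, increasing the exponent $p$ (a flatter central plateau with a steeper edge decline) pushes $\bar{n}$ toward $n_c$, consistent with the reported rise of the average axial index from 1.408 to 1.411 with age.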

  17. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach for time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses for a fraction of the pulse period so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
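
    One way to write the averaging step described above: if $N_i$ is the total counted with the $i$-th fractional-period delay and $T_{\mathrm{osc}}$ is the period of the constant-frequency oscillator, the measured interval is taken as

        $$ \Delta t \;\approx\; \bar{N}\, T_{\mathrm{osc}} \;=\; \frac{T_{\mathrm{osc}}}{m} \sum_{i=1}^{m} N_i $$

    with $m$ delay stages; averaging the $m$ totals suppresses the one-count quantization error that a single gated count would incur depending on where the gate opens within the pulse cycle. (The notation is ours; the patent describes the procedure in words.)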

  18. Propagation of gaseous detonation waves in a spatially inhomogeneous reactive medium

    NASA Astrophysics Data System (ADS)

    Mi, XiaoCheng; Higgins, Andrew J.; Ng, Hoi Dick; Kiyanda, Charles B.; Nikiforakis, Nikolaos

    2017-05-01

    Detonation propagation in a compressible medium wherein the energy release has been made spatially inhomogeneous is examined via numerical simulation. The inhomogeneity is introduced via step functions in the reaction progress variable, with the local value of energy release correspondingly increased so as to maintain the same average energy density in the medium and thus a constant Chapman-Jouguet (CJ) detonation velocity. A one-step Arrhenius rate governs the rate of energy release in the reactive zones. The resulting dynamics of a detonation propagating in such systems with one-dimensional layers and two-dimensional squares are simulated using a Godunov-type finite-volume scheme. The resulting wave dynamics are analyzed by computing the average wave velocity and one-dimensional averaged wave structure. In the case of sufficiently inhomogeneous media wherein the spacing between reactive zones is greater than the inherent reaction zone length, average wave speeds significantly greater than the corresponding CJ speed of the homogenized medium are obtained. If the shock transit time between reactive zones is less than the reaction time scale, then the classical CJ detonation velocity is recovered. The spatiotemporal averaged structure of the waves in these systems is analyzed via a Favre-averaging technique, with terms associated with the thermal and mechanical fluctuations being explicitly computed. The analysis of the averaged wave structure identifies the super-CJ detonations as weak detonations owing to the existence of mechanical nonequilibrium at the effective sonic point embedded within the wave structure. The correspondence of the super-CJ behavior identified in this study with real detonation phenomena that may be observed in experiments is discussed.

  19. Definition of SMOS Level 3 Land Products for the Villafranca del Castillo Data Processing Centre (CP34)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, E.; Monsoriu Torres, A.; Font, J.; Alonso, O.

    2009-04-01

    The ESA SMOS (Soil Moisture and Ocean Salinity) Mission is planned to be launched in July 2009. The satellite will measure soil moisture over the continents and surface salinity of the oceans at resolutions that are sufficient for climatological-type studies. This paper describes the procedure to be used at the Spanish SMOS Level 3 and 4 Data Processing Centre (CP34) to generate Soil Moisture and other Land Surface Product maps from SMOS Level 2 data. This procedure can be used to map Soil Moisture, Vegetation Water Content and Soil Dielectric Constant data into different pre-defined spatial grids with fixed temporal frequency. The L3 standard Land Surface Products to be generated at CP34 are as follows. Soil Moisture products: maximum spatial resolution with no spatial averaging and temporal averaging of 3 days, daily generation; maximum spatial resolution with no spatial averaging and temporal averaging of 10 days, generation once every 10 days; maximum spatial resolution with no spatial averaging and temporal averaging over monthly decades (1st to 10th of the month, 11th to 20th of the month, 21st to last day of the month), generation once every decade; monthly average, temporal averaging from the L3 decade averages, monthly generation; seasonal average, temporal averaging from the L3 monthly averages, seasonal generation; yearly average, temporal averaging from the L3 monthly averages, yearly generation. Vegetation Water Content products: maximum spatial resolution with no spatial averaging and temporal averaging of 10 days, generation once every 10 days; maximum spatial resolution with no spatial averaging and temporal averaging over monthly decades (1st to 10th of the month, 11th to 20th of the month, 21st to last day of the month) using a simple averaging method over the L2 products in the ISEA grid, generation once every decade; monthly average, temporal averaging from the L3 decade averages, monthly generation; seasonal average, temporal averaging from the L3 monthly averages, seasonal generation; yearly average, temporal averaging from the L3 monthly averages, yearly generation. Dielectric Constant products (delivered together with the soil moisture products, with the same averaging periods and generation frequency): maximum spatial resolution with no spatial averaging and temporal averaging of 3 days, daily generation; maximum spatial resolution with no spatial averaging and temporal averaging of 10 days, generation once every 10 days; maximum spatial resolution with no spatial averaging and temporal averaging over monthly decades (1st to 10th of the month, 11th to 20th of the month, 21st to last day of the month), generation once every decade; monthly average, temporal averaging from the L3 decade averages, monthly generation; seasonal average, temporal averaging from the L3 monthly averages, seasonal generation; yearly average, temporal averaging from the L3 monthly averages, yearly generation.

  20. Relationship between neighbor number and vibrational spectra in disordered colloidal clusters with attractive interactions

    NASA Astrophysics Data System (ADS)

    Yunker, Peter J.; Zhang, Zexin; Gratale, Matthew; Chen, Ke; Yodh, A. G.

    2013-03-01

    We study connections between vibrational spectra and average nearest neighbor number in disordered clusters of colloidal particles with attractive interactions. Measurements of displacement covariances between particles in each cluster permit calculation of the stiffness matrix, which contains effective spring constants linking pairs of particles. From the cluster stiffness matrix, we derive vibrational properties of corresponding "shadow" glassy clusters, with the same geometric configuration and interactions as the "source" cluster but without damping. Here, we investigate the stiffness matrix to elucidate the origin of the correlations between the median frequency of cluster vibrational modes and average number of nearest neighbors in the cluster. We find that the mean confining stiffness of particles in a cluster, i.e., the ensemble-averaged sum of nearest neighbor spring constants, correlates strongly with average nearest neighbor number, and even more strongly with median frequency. Further, we find that the average oscillation frequency of an individual particle is set by the total stiffness of its nearest neighbor bonds; this average frequency increases as the square root of the nearest neighbor bond stiffness, in a manner similar to the simple harmonic oscillator.
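
    A minimal sketch of the "shadow" construction described above, assuming equal particle masses and a scalar spring-constant (stiffness) matrix K: the undamped mode frequencies follow from the eigenvalues of K/m, and each particle's confining stiffness is the corresponding diagonal entry of K. The 3x3 matrix below is a made-up toy example, not data from the experiment:

      import numpy as np

      m = 1.0                                    # assumed equal particle mass
      K = np.array([[ 2.0, -1.0, -1.0],          # toy spring-network stiffness matrix
                    [-1.0,  1.5, -0.5],          # (symmetric, rows sum to zero)
                    [-1.0, -0.5,  1.5]])

      eigenvalues = np.linalg.eigvalsh(K / m)                 # dynamical-matrix eigenvalues
      frequencies = np.sqrt(np.clip(eigenvalues, 0.0, None))  # undamped "shadow" mode frequencies
      median_frequency = np.median(frequencies)

      confining_stiffness = np.diag(K)           # total stiffness of each particle's bonds
      print(frequencies, median_frequency, confining_stiffness.mean())

    The square-root relation quoted at the end of the abstract is just the harmonic-oscillator scaling omega ~ sqrt(k_total/m) applied particle by particle.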

  1. The effective propagation constants of SH wave in composites reinforced by dispersive parallel nanofibers

    NASA Astrophysics Data System (ADS)

    Qiang, FangWei; Wei, PeiJun; Li, Li

    2012-07-01

    In the present paper, the effective propagation constants of elastic SH waves in composites with randomly distributed parallel cylindrical nanofibers are studied. The surface stress effects are considered based on the surface elasticity theory and non-classical interfacial conditions between the nanofiber and the host are derived. The scattering waves from individual nanofibers embedded in an infinite elastic host are obtained by the plane wave expansion method. The scattering waves from all fibers are summed up to obtain the multiple scattering waves. The interactions among random dispersive nanofibers are taken into account by the effective field approximation. The effective propagation constants are obtained by the configurational average of the multiple scattering waves. The effective speed and attenuation of the averaged wave and the associated dynamical effective shear modulus of composites are numerically calculated. Based on the numerical results, the size effects of the nanofibers on the effective propagation constants and the effective modulus are discussed.

  2. Feasibility of using limited-population-based average R10 for pharmacokinetic modeling of osteosarcoma dynamic contrast-enhanced magnetic resonance imaging data.

    PubMed

    Huang, Wei; Wang, Ya; Panicek, David M; Schwartz, Lawrence H; Koutcher, Jason A

    2009-07-01

    Retrospective analyses of clinical dynamic contrast-enhanced (DCE) MRI studies may be limited by failure to measure the longitudinal relaxation rate constant (R1) initially, which is necessary for quantitative analysis. In addition, errors in R1 estimation in each individual experiment can cause inconsistent results in derivations of pharmacokinetic parameters, Ktrans and ve, by kinetic modeling of the DCE-MRI time course data. A total of 18 patients with lower extremity osteosarcomas underwent multislice DCE-MRI prior to surgery. For the individual R1 measurement approach, the R1 time course was obtained using the two-point R1 determination method. For the average R10 (precontrast R1) approach, the R1 time course was derived using the DCE-MRI pulse sequence signal intensity equation and the average R10 value of this population. The whole tumor and histogram median Ktrans (0.57 ± 0.37 and 0.45 ± 0.32 min^-1) and ve (0.59 ± 0.20 and 0.56 ± 0.17) obtained with the individual R1 measurement approach are not significantly different (paired t test) from those (Ktrans: 0.61 ± 0.46 and 0.44 ± 0.33 min^-1; ve: 0.61 ± 0.19 and 0.55 ± 0.14) obtained with the average R10 approach. The results suggest that it is feasible, as well as practical, to use a limited-population-based average R10 for pharmacokinetic modeling of osteosarcoma DCE-MRI data.
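
    For context, kinetic modeling of DCE-MRI time courses in terms of Ktrans and ve is most commonly done with the standard Tofts model; the abstract does not state which model was used, so the following is only the generic form such an analysis assumes, with Cp the arterial input function and Ct the tissue tracer concentration:

      C_t(t) \;=\; K^{\mathrm{trans}} \int_0^t C_p(\tau)\,
      \exp\!\left(-\frac{K^{\mathrm{trans}}}{v_e}\,(t-\tau)\right) d\tau .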

  3. Measurement of the J = 0-1 rotational transitions of three isotopes of ArD(+)

    NASA Technical Reports Server (NTRS)

    Bowman, W. C.; Plummer, G. M.; Herbst, E.; De Lucia, F. C.

    1983-01-01

    The rotational transitions of all three isotopic species of ArD(+) in samples containing the Ar isotopes in their natural abundances have been measured by means of millimeter and submillimeter techniques that employ a magnetically enhanced abnormal glow discharge. All three transition frequency measurements were made from digitally averaged signals detected through a lock-in amplifier with a 10-msec time constant. The Ar-40D(+) transition was easily visible in real time on an oscilloscope with an SNR of about 15. It is noted that the observed transition of Ar-38D(+) is more than five orders of magnitude weaker than that due to HCO(+).

  4. Capital dissipation minimization for a class of complex irreversible resource exchange processes

    NASA Astrophysics Data System (ADS)

    Xia, Shaojun; Chen, Lingen

    2017-05-01

    A model of a class of irreversible resource exchange processes (REPes) between a firm and a producer, with commodity flow leakage from the producer to a competitive market, is established in this paper. The REPes are assumed to obey the linear commodity transfer law (LCTL). Optimal price paths for capital dissipation minimization (CDM), a measure of the irreversibility of the economic process, are obtained using averaged optimal control theory. The optimal REP strategy is compared with other strategies, such as constant-firm-price operation and constant-commodity-flow operation, and the effects of the amount of commodity transferred and of the commodity flow leakage on the optimal REP strategy are analyzed. For the CDM of REPes with commodity flow leakage, the commodity prices of both the producer and the firm change exponentially with time.

  5. Efficient production of ultrapure manganese oxides via electrodeposition.

    PubMed

    Cheney, Marcos A; Joo, Sang Woo; Banerjee, Arghya; Min, Bong-Ki

    2012-08-01

    A new process for the production of electrolytic amorphous nanomanganese oxides (EAMD) with uniform size and morphology is described. EAMD are produced for the first time by cathodic deposition from a basic aqueous solution of potassium permanganate at a constant temperature of 16°C. The synthesized materials are characterized by XRD, SEM, TEM, and HRTEM. The materials produced at 5.0 V at constant temperature are amorphous, with homogeneous size and morphology and an average particle size of around 20 nm, considerably smaller than previously reported anodic EAMD. Potentiostatic electrodeposition at a much lower deposition rate (relative to previously reported anodic depositions) is considered to be the reason for the narrow and homogeneous particle size distribution, owing to the reduced agglomeration of the as-synthesized nanoparticles. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture

    NASA Astrophysics Data System (ADS)

    Hassan, Ezeldin A.

    Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions; and liquid plug propagation and rupture in an airway. Gaseous non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived from stationary flow. The benefit of the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, since resolving the flow field accurately would then require prohibitively expensive grids. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to be highly varying in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, by allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to the eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30 degree injection, showing improved results over the constant turbulent Schmidt model both in mean and variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.
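
    The turbulent Schmidt number enters such eddy-viscosity models through the usual gradient-diffusion closure for the scalar (fuel) flux; in generic form (not necessarily the thesis's exact formulation),

      \overline{\rho u_j'' Y''} \;\approx\; -\,\frac{\mu_t}{Sc_t}\,
      \frac{\partial \widetilde{Y}}{\partial x_j}, \qquad D_t \;=\; \frac{\nu_t}{Sc_t},

    so allowing Sc_t to vary in space, as the adaptive approach above does, redistributes the modeled scalar mixing without altering the momentum closure.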

  7. R-Matrix Analysis of Structures in Economic Indices: from Nuclear Reactions to High-Frequency Trading

    NASA Astrophysics Data System (ADS)

    Firk, Frank W. K.

    2014-03-01

    It is shown that the R-matrix theory of nuclear reactions is a viable mathematical theory for the description of the fine, intermediate and gross structure observed in the time-dependence of economic indices in general, and the daily Dow Jones Industrial Average in particular. A Lorentzian approximation to R-matrix theory is used to analyze the complex structures observed in the Dow Jones Industrial Average on a typical trading day. Resonant structures in excited nuclei are characterized by the values of their fundamental strength function, (average total width of the states)/(average spacing between adjacent states). Here, values of the ratios (average lifetime of individual states of a given component of the daily Dow Jones Industrial Average)/(average interval between the adjacent states) are determined. The ratios for the observed fine and intermediate structure of the index are found to be essentially constant throughout the trading day. These quantitative findings are characteristic of the highly statistical nature of many-body, strongly interacting systems, typified by daily trading. It is therefore proposed that the values of these ratios, determined in the first hour-or-so of trading, be used to provide valuable information concerning the likely performance of the fine and intermediate components of the index for the remainder of the trading day.

  8. Single-channel activations and concentration jumps: comparison of recombinant NR1a/NR2A and NR1a/NR2D NMDA receptors

    PubMed Central

    Wyllie, David J A; Béhé, Philippe; Colquhoun, David

    1998-01-01

    We have expressed recombinant NR1a/NR2A and NR1a/NR2D N-methyl-D-aspartate (NMDA) receptor channels in Xenopus oocytes and made recordings of single-channel and macroscopic currents in outside-out membrane patches. For each receptor type we measured (a) the individual single-channel activations evoked by low glutamate concentrations in steady-state recordings, and (b) the macroscopic responses elicited by brief concentration jumps with high agonist concentrations, and we explore the relationship between these two sorts of observation. Low concentration (5–100 nM) steady-state recordings of NR1a/NR2A and NR1a/NR2D single-channel activity generated shut-time distributions that were best fitted with a mixture of five and six exponential components, respectively. Individual activations of either receptor type were resolved as bursts of openings, which we refer to as ‘super-clusters’. During a single activation, NR1a/NR2A receptors were open for 36 % of the time, but NR1a/NR2D receptors were open for only 4 % of the time. For both, distributions of super-cluster durations were best fitted with a mixture of six exponential components. Their overall mean durations were 35.8 and 1602 ms, respectively. Steady-state super-clusters were aligned on their first openings and averaged. The average was well fitted by a sum of exponentials with time constants taken from fits to super-cluster length distributions. It is shown that this is what would be expected for a channel that shows simple Markovian behaviour. The current through NR1a/NR2A channels following a concentration jump from zero to 1 mM glutamate for 1 ms was well fitted by three exponential components with time constants of 13 ms (rising phase), 70 ms and 350 ms (decaying phase). Similar concentration jumps on NR1a/NR2D channels were well fitted by two exponentials with means of 45 ms (rising phase) and 4408 ms (decaying phase) components. During prolonged exposure to glutamate, NR1a/NR2A channels desensitized with a time constant of 649 ms, while NR1a/NR2D channels exhibited no apparent desensitization. We show that under certain conditions, the time constants for the macroscopic jump response should be the same as those for the distribution of super-cluster lengths, though the resolution of the latter is so much greater that it cannot be expected that all the components will be resolvable in a macroscopic current. Good agreement was found for jumps on NR1a/NR2D receptors, and for some jump experiments on NR1a/NR2A. However, the latter were rather variable and some were slower than predicted. Slow decays were associated with patches that had large currents. PMID:9625862
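
    The macroscopic jump responses above are characterized by fitting sums of exponentials. A generic sketch of such a fit on synthetic data (with time constants of the same order as the 70 ms and 350 ms decay components reported above; the amplitudes and noise level are invented) could look like this:

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp_decay(t, a1, tau1, a2, tau2):
          """Sum of two decaying exponentials, e.g. a two-component deactivation phase."""
          return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 2000.0, 4000)                      # time in ms
      clean = biexp_decay(t, 0.7, 70.0, 0.3, 350.0)           # synthetic 70 ms and 350 ms components
      noisy = clean + rng.normal(0.0, 0.02, t.size)

      p0 = [0.5, 50.0, 0.5, 500.0]                            # rough initial guesses
      popt, _ = curve_fit(biexp_decay, t, noisy, p0=p0)
      print("fitted amplitudes and time constants:", popt)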

  9. Quenching of I(2P1/2) by NO2, N2O4, and N2O.

    PubMed

    Kabir, Md Humayun; Azyazov, Valeriy N; Heaven, Michael C

    2007-10-11

    Quenching of excited iodine atoms (I(5p5, 2P1/2)) by nitrogen oxides is a process of relevance to discharge-driven oxygen iodine lasers. Rate constants at ambient and elevated temperatures (293-380 K) for quenching of I(2P1/2) atoms by NO2, N2O4, and N2O have been measured using time-resolved I(2P1/2) --> I(2P3/2) 1315 nm emission. The excited atoms were generated by pulsed laser photodissociation of CF3I at 248 nm. The rate constants for I(2P1/2) quenching by NO2 and N2O were found to be independent of temperature over the range examined, with average values of (2.9 ± 0.3) × 10^-15 and (1.4 ± 0.1) × 10^-15 cm^3 s^-1, respectively. The rate constant for quenching of I(2P1/2) by N2O4 was found to be (3.5 ± 0.5) × 10^-13 cm^3 s^-1 at ambient temperature.
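
    For reference, rate constants of this kind are normally extracted from the pseudo-first-order decay of the 1315 nm emission: the measured decay rate is linear in the quencher concentration, and kq is the slope. This is the standard analysis for such experiments, stated here in our notation rather than quoted from the paper:

      [\mathrm{I}^*](t) \;=\; [\mathrm{I}^*]_0\, e^{-\Gamma t}, \qquad
      \Gamma \;=\; k_0 + k_q\,[\mathrm{Q}],

    where k_0 collects the quencher-independent losses and [Q] is the NO2, N2O4 or N2O concentration.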

  10. The clavicle hook plate for Neer type II lateral clavicle fractures.

    PubMed

    Renger, R J; Roukema, G R; Reurings, J C; Raams, P M; Font, J; Verleisdonk, E J M M

    2009-09-01

    To evaluate functional and radiologic outcome in patients with a Neer type II lateral clavicle fracture treated with the clavicle hook plate. Multicenter retrospective study. Five level I and II trauma centers. Forty-four patients, average age 38.4 years (18-66 years), with a Neer type II lateral clavicle fracture treated with the clavicle hook plate between January 1, 2003, and December 31, 2006. Open reduction and internal fixation with the clavicle hook plate. Removal of all 44 implants after consolidation at a mean of 8.4 months (2-33 months) postoperatively. At an average follow-up of 27.4 months (13-48 months), functional outcome was assessed with the Constant-Murley scoring system. Radiographs were taken to evaluate consolidation and to determine the distance between the coracoid process and the clavicle. The average Constant score was 92.4 (74-100). The average distance between the coracoid process and the clavicle was 9.8 mm (7.3-14.8 mm) compared with 9.4 mm (6.9-14.3 mm) on the contralateral nonoperative side. We observed 1 dislocation of an implant (2.2%), 2 cases of pseudarthrosis (4.5%), 2 superficial wound infections (4.5%), 2 patients with hypertrophic scar tissue (4.5%), and 3 cases of acromial osteolysis (6.8%). Thirty patients (68%) reported discomfort due to the implant. These implant-related complaints and the acromial osteolysis disappeared after removal of the hook plate. Immediate functional aftercare was possible for all patients. The clavicle hook plate is a suitable implant for Neer type II clavicle fractures. The advantage of this osteosynthesis is the possibility of immediate functional aftercare. We observed a high percentage of discomfort due to the implant; therefore, we advise removing the implant as soon as consolidation has taken place.

  11. Noise adaptation in integrate-and-fire neurons.

    PubMed

    Rudd, M E; Brown, L G

    1997-07-01

    The statistical spiking response of an ensemble of identically prepared stochastic integrate-and-fire neurons to a rectangular input current plus gaussian white noise is analyzed. It is shown that, on average, integrate-and-fire neurons adapt to the root-mean-square noise level of their input. This phenomenon is referred to as noise adaptation. Noise adaptation is characterized by a decrease in the average neural firing rate and an accompanying decrease in the average value of the generator potential, both of which can be attributed to noise-induced resets of the generator potential mediated by the integrate-and-fire mechanism. A quantitative theory of noise adaptation in stochastic integrate-and-fire neurons is developed. It is shown that integrate-and-fire neurons, on average, produce transient spiking activity whenever there is an increase in the level of their input noise. This transient noise response is either reduced or eliminated over time, depending on the parameters of the model neuron. Analytical methods are used to prove that nonleaky integrate-and-fire neurons totally adapt to any constant input noise level, in the sense that their asymptotic spiking rates are independent of the magnitude of their input noise. For leaky integrate-and-fire neurons, the long-run noise adaptation is not total, but the response to noise is partially eliminated. Expressions for the probability density function of the generator potential and the first two moments of the potential distribution are derived for the particular case of a nonleaky neuron driven by gaussian white noise of mean zero and constant variance. The functional significance of noise adaptation for the performance of networks comprising integrate-and-fire neurons is discussed.
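
    A minimal Euler-Maruyama sketch of the model class discussed above - a leaky integrate-and-fire neuron driven by a constant current plus Gaussian white noise - reproduces the transient response to a step increase in noise amplitude. All parameter values here are arbitrary illustrations, and sigma is scaled so that it equals the stationary standard deviation of the free generator potential:

      import numpy as np

      def lif_spike_times(T=2.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0,
                          mu=0.8, sigma_before=0.3, sigma_after=0.6, t_step=1.0):
          """Leaky integrate-and-fire neuron; the input noise amplitude steps up at t_step (seconds)."""
          rng = np.random.default_rng(0)
          v, spikes = 0.0, []
          for i in range(int(T / dt)):
              t = i * dt
              sigma = sigma_before if t < t_step else sigma_after
              v += ((mu - v) / tau) * dt + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
              if v >= v_th:
                  spikes.append(t)
                  v = v_reset      # noise-induced resets pull down the average generator potential,
                                   # which is the adaptation mechanism described in the abstract
          return np.array(spikes)

      s = lif_spike_times()
      print("spikes/s before noise step:", int((s < 1.0).sum()), "  after:", int((s >= 1.0).sum()))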

  12. Cost associated with stroke: outpatient rehabilitative services and medication.

    PubMed

    Godwin, Kyler M; Wasserman, Joan; Ostwald, Sharon K

    2011-10-01

    This study aimed to capture direct costs of outpatient rehabilitative stroke care and medications for a 1-year period after discharge from inpatient rehabilitation. Outpatient rehabilitative services and medication costs for 1 year, during the time period of 2001 to 2005, were calculated for 54 first-time stroke survivors. Costs for services were based on Medicare reimbursement rates. Medicaid reimbursement rates and average wholesale price were used to estimate medication costs. Of the 54 stroke survivors, 40 (74.1%) were categorized as independent, 12 (22.2%) had modified dependence, and 2 (3.7%) were dependent at the time of discharge from inpatient rehabilitation. Average cost for outpatient stroke rehabilitation services and medications the first year post inpatient rehabilitation discharge was $17,081. The corresponding average yearly cost of medication was $5,392, while the average cost of yearly rehabilitation service utilization was $11,689. Cost attributed to medication remained relatively constant throughout the groups. Outpatient rehabilitation service utilization constituted a large portion of cost within each group: 69.7% (dependent), 72.5% (modified dependence), and 66.7% (independent). Stroke survivors continue to incur significant costs associated with their stroke for the first 12 months following discharge from an inpatient rehabilitation setting. Changing public policies affect the cost and availability of care. This study provides a snapshot of outpatient medication and therapy costs prior to the enactment of major changes in federal legislation and serves as a baseline for future studies.

  13. Correcting GOES-R Magnetometer Data for Stray Fields

    NASA Technical Reports Server (NTRS)

    Carter, Delano; Freesland, Douglas; Tadikonda, Sivakumar; Kronenwetter, Jeffrey; Todirita, Monica; Dahya, Melissa; Chu, Donald

    2016-01-01

    Time-varying spacecraft magnetic fields, i.e. stray fields, are a problem for magnetometer systems. While constant fields can be removed by calibration, stray fields are difficult to distinguish from ambient field variations. Putting two magnetometers on a long boom and solving for both the ambient and stray fields can help, but this gradiometer solution is more sensitive to noise than a single magnetometer. As shown here for the R-series Geostationary Operational Environmental Satellites (GOES-R), unless the stray fields are larger than the noise, simply averaging the two magnetometer readings gives a more accurate solution. If averaging is used, it may be worthwhile to estimate and remove stray fields explicitly. Models and estimation algorithms to do so are provided for solar array, arcjet and reaction wheel fields.
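
    A toy version of the trade-off described above, under the simplifying assumption that the stray field appears at the outboard sensor scaled by a known factor alpha relative to the inboard sensor (e.g. dipole fall-off); the field values, alpha and noise level are invented:

      import numpy as np

      rng = np.random.default_rng(1)
      B_ambient, stray_inboard = 100.0, 2.0   # nT: ambient field, stray field at the inboard sensor
      alpha = 0.3                             # assumed stray-field attenuation at the outboard sensor
      noise = 0.5                             # nT: per-sensor measurement noise (1 sigma)

      m_in = B_ambient + stray_inboard + noise * rng.standard_normal(10000)
      m_out = B_ambient + alpha * stray_inboard + noise * rng.standard_normal(10000)

      simple_average = 0.5 * (m_in + m_out)                    # biased by the stray field, lower noise
      gradiometer = (m_out - alpha * m_in) / (1.0 - alpha)     # unbiased, but amplifies the noise

      for name, est in [("average", simple_average), ("gradiometer", gradiometer)]:
          print(f"{name:12s} bias = {est.mean() - B_ambient:+.2f} nT   std = {est.std():.2f} nT")

    With these numbers the simple average carries a bias of about 1.3 nT but roughly half the noise of the gradiometer estimate, which is the regime in which the abstract recommends averaging.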

  14. How Do Changes in Speed Affect the Perception of Duration?

    ERIC Educational Resources Information Center

    Matthews, William J.

    2011-01-01

    Six experiments investigated how changes in stimulus speed influence subjective duration. Participants saw rotating or translating shapes in three conditions: constant speed, accelerating motion, and decelerating motion. The distance moved and average speed were the same in all three conditions. In temporal judgment tasks, the constant-speed…

  15. Carnegie Hubble Program: A Mid-Infrared Calibration of the Hubble Constant

    NASA Technical Reports Server (NTRS)

    Freedman, Wendy L.; Madore, Barry F.; Scowcroft, Victoria; Burns, Chris; Monson, Andy; Persson, S. Eric; Seibert, Mark; Rigby, Jane

    2012-01-01

    Using a mid-infrared calibration of the Cepheid distance scale based on recent observations at 3.6 micrometers with the Spitzer Space Telescope, we have obtained a new, high-accuracy calibration of the Hubble constant. We have established the mid-IR zero point of the Leavitt law (the Cepheid period-luminosity relation) using time-averaged 3.6 micrometer data for 10 high-metallicity, Milky Way Cepheids having independently measured trigonometric parallaxes. We have adopted the slope of the PL relation using time-averaged 3.6 micrometer data for 80 long-period Large Magellanic Cloud (LMC) Cepheids falling in the period range 0.8 < log(P) < 1.8. We find a new reddening-corrected distance to the LMC of 18.477 ± 0.033 (systematic) mag. We re-examine the systematic uncertainties in H_0, also taking into account new data over the past decade. In combination with the new Spitzer calibration, the systematic uncertainty in H_0 over that obtained by the Hubble Space Telescope Key Project has decreased by over a factor of three. Applying the Spitzer calibration to the Key Project sample, we find a value of H_0 = 74.3 with a systematic uncertainty of ±2.1 km s^-1 Mpc^-1, corresponding to a 2.8% systematic uncertainty in the Hubble constant. This result, in combination with WMAP7 measurements of the cosmic microwave background anisotropies and assuming a flat universe, yields a value of the equation of state for dark energy, w_0 = -1.09 ± 0.10. Alternatively, relaxing the constraints on flatness and the numbers of relativistic species, and combining our results with those of WMAP7, Type Ia supernovae and baryon acoustic oscillations yield w_0 = -1.08 ± 0.10 and a value of N_eff = 4.13 ± 0.67, mildly consistent with the existence of a fourth neutrino species.
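
    As a reminder of how such a calibration propagates to H_0 (standard relations, not equations quoted from the paper): the Leavitt law gives the Cepheid absolute magnitude from its period, the distance modulus gives the distance, and H_0 follows from the recession velocities of the calibrated distant hosts,

      M_{3.6} \;=\; a\left(\log_{10} P - 1\right) + b, \qquad
      \mu \;=\; m_{3.6} - M_{3.6} \;=\; 5\log_{10}\!\frac{d}{10\,\mathrm{pc}}, \qquad
      H_0 \;=\; \frac{v}{d}.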

  16. Performance of the fixed-bed of granular activated carbon for the removal of pesticides from water supply.

    PubMed

    Alves, Alcione Aparecida de Almeida; Ruiz, Giselle Louise de Oliveira; Nonato, Thyara Campos Martins; Müller, Laura Cecilia; Sens, Maurício Luiz

    2018-02-26

    The application of a fixed-bed adsorption column of granular activated carbon (FBAC-GAC) to the removal of carbaryl, methomyl and carbofuran, each at a concentration of 25 μg L^-1, from the public water supply was investigated. For the determination of the presence of pesticides in the water supply, the analytical technique of high-performance liquid chromatography with post-column derivatization was used. Under conditions of constant diffusivity, the FBAC-GAC was saturated after 196 h of operation on a pilot scale. The exhaustion rate of the granular activated carbon (GAC) in the FBAC-GAC up to the point of saturation was 0.02 kg GAC m^-3 of treated water. By comparing a rapid small-scale column test and the FBAC-GAC, it was confirmed that the predominant intraparticle diffusivity in the adsorption column was constant diffusivity. Based on the results obtained on a pilot scale, it was possible to estimate the design values to be applied in a full-scale FBAC-GAC to remove the pesticides: GAC particle size with an average diameter of 1.5 mm; a ratio of column internal diameter to average GAC diameter of ≥50, in order to avoid preferential flow near the adsorption column wall; a surface application rate of 240 m^3 m^-2 d^-1; and an empty bed contact time of 3 min. Abbreviations: BV: bed volume; CD: constant diffusivity; EBCT: empty bed contact time; FBAC-GAC: fixed bed adsorption column of granular activated carbon; GAC: granular activated carbon; MPV: maximum permitted values; NOM: natural organic matter; PD: proportional diffusivity; pH_PCZ: pH of the zero charge point; SAR: surface application rate; RSSCT: rapid small-scale column test; WTCS: water treated conventional system.
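
    The two full-scale design values quoted above are internally consistent with a short carbon bed. Using the usual definitions EBCT = bed volume / flow rate = L / SAR, the implied bed depth is SAR x EBCT; this back-of-the-envelope check is ours, not a figure from the paper:

      SAR = 240.0        # surface application rate, m3 m-2 d-1 (superficial velocity of 240 m/d)
      EBCT_min = 3.0     # empty bed contact time, minutes

      bed_depth_m = SAR * (EBCT_min / (24 * 60))   # L = SAR * EBCT, with EBCT converted to days
      print(f"implied GAC bed depth: {bed_depth_m:.2f} m")   # 0.50 m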

  17. Ab initio multiple spawning dynamics study of dimethylnitramine and dimethylnitramine-Fe complex to model their ultrafast nonadiabatic chemistry

    NASA Astrophysics Data System (ADS)

    Bera, Anupam; Ghosh, Jayanta; Bhattacharya, Atanu

    2017-07-01

    Conical intersections are now firmly established to be the key features in the excited electronic state processes of polyatomic energetic molecules. In the present work, we have explored conical intersection-mediated nonadiabatic chemical dynamics of a simple analogue nitramine molecule, dimethylnitramine (DMNA, containing one N-NO2 energetic group), and its complex with an iron atom (DMNA-Fe). For this task, we have used the ab initio multiple spawning (AIMS) dynamics simulation at the state-averaged complete active space self-consistent field (8,5)/6-31G(d) level of theory. We have found that DMNA relaxes back to the ground (S0) state following electronic excitation to the S1 excited state [which is an (n,π*) excited state] with a time constant of approximately 40 fs. This AIMS result is in very good agreement with previous surface-hopping and femtosecond laser spectroscopy results. DMNA does not dissociate during this fast internal conversion from the S1 to the S0 state. DMNA-Fe also undergoes extremely fast relaxation from the upper S1 state to the S0 state; however, this relaxation pathway is dissociative in nature. DMNA-Fe undergoes initial Fe-O, N-O, and N-N bond dissociations during relaxation from the upper S1 state to the ground S0 state through the respective conical intersection. The AIMS simulation reveals the branching ratio of these three channels as N-N:Fe-O:N-O = 6:3:1 (based on 100 independent simulations). Furthermore, the AIMS simulation reveals that the Fe-O bond dissociation channel exhibits the fastest (time constant 24 fs) relaxation, while the N-N bond dissociation pathway features the slowest (time constant 128 fs) relaxation. An intermediate time constant (30 fs) is found for the N-O bond dissociation channel. This is the first nonadiabatic chemical dynamics study of metal-containing energetic molecules through conical intersections.

  18. A flowing partially penetrating well in a finite-thickness aquifer: a mixed-type initial boundary value problem

    NASA Astrophysics Data System (ADS)

    Chang, Chien-Chieh; Chen, Chia-Shyun

    2003-02-01

    An analytical approach using integral transform techniques is developed to deal with a well hydraulics model involving a mixed boundary of a flowing partially penetrating well, where constant drawdown is stipulated along the well screen and a no-flux condition along the remaining unscreened part. The aquifer is confined and of finite thickness. First, the mixed boundary is changed into a homogeneous Neumann boundary by discretizing the well screen into a finite number of segments, each of which is at constant drawdown and subject to an a priori unknown well bore flux. Then, the Laplace and the finite Fourier transforms are used to solve this modified model. Finally, the prescribed constant drawdown condition is reinstated to uniquely determine the well bore flux function, and to restore the relation between the solution and the original model. The transient and the steady-state solutions for infinite aquifer thickness can be derived from the semi-analytical solution, complementing the currently available dual integral solution. If the distance from the edge of the well screen to the bottom/top of the aquifer is 100 times greater than the well screen length, aquifer thickness can be assumed infinite for times of practical significance, and groundwater flow can reach a steady-state condition, where the well will continuously supply water under a constant discharge. However, if aquifer thickness is smaller, the well discharge decreases with time. The partial penetration effect is most pronounced in the vicinity of the flowing well, decreases with increasing horizontal distance, and vanishes at distances larger than 1-2 times the aquifer thickness divided by the square root of aquifer anisotropy. The horizontal hydraulic conductivity and the specific storage coefficient can be determined from vertically averaged drawdown as measured by fully penetrating observation wells. The vertical hydraulic conductivity can be determined from the well discharge under two particular partial penetration conditions.

  19. Estimating the timing of quantal releases during end-plate currents at the frog neuromuscular junction.

    PubMed Central

    Van der Kloot, W

    1988-01-01

    1. Following motor nerve stimulation there is a period of greatly enhanced quantal release, called the early release period or ERP (Barrett & Stevens, 1972b). Until now, measurements of the probability of quantal releases at different points in the ERP have come from experiments in which quantal output was greatly reduced, so that the time of release of individual quanta could be detected or so that the latency to the release of the first quantum could be measured. 2. A method has been developed to estimate the timing of quantal release during the ERP that can be used at much higher levels of quantal output. The assumption is made that each quantal release generates an end-plate current (EPC) that rises instantaneously and then decays exponentially. The peak amplitude of the quantal currents and the time constant for their decay are measured from miniature end-plate currents (MEPCs). Then a number of EPCs are averaged, and the times of release of the individual quanta during the ERP estimated by a simple mathematical method for deconvolution derived by Cohen, Van der Kloot & Attwell (1981). 3. The deconvolution method was tested using data from preparations in high-Mg2+ low-Ca2+ solution. One test was to reconstitute the averaged EPCs from the estimated times of quantal release and the quantal currents, by using Fourier convolution. The reconstructions fit well to the originals. 4. Reconstructions were also made from averaged MEPCs which do not rise instantaneously and the estimated times of quantal release.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2466987
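
    The deconvolution step described above rests on the model EPC = (release rate) convolved with (quantal current), with the quantal current rising instantaneously and decaying as a single exponential. A sketch of recovering the release rate by Fourier deconvolution on synthetic data (all parameters invented, and with a small regularization constant that the original method may not need):

      import numpy as np

      dt, n = 0.05, 2048                        # sample interval (ms) and number of samples
      t = np.arange(n) * dt
      q = 1.0 * np.exp(-t / 1.0)                # quantal current: instant rise, 1 ms exponential decay

      r_true = 50.0 * np.exp(-0.5 * ((t - 1.0) / 0.3) ** 2)   # synthetic early-release-period rate

      epc = np.real(np.fft.ifft(np.fft.fft(r_true) * np.fft.fft(q))) * dt   # EPC = r * q (convolution)

      Q = np.fft.fft(q) * dt
      r_est = np.real(np.fft.ifft(np.fft.fft(epc) / (Q + 1e-6)))            # Fourier deconvolution
      print("peak release rate, recovered vs true:", round(r_est.max(), 2), round(r_true.max(), 2))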

  20. Anaerobic co-digestion technology in solid wastes treatment for biomethane generation

    NASA Astrophysics Data System (ADS)

    Al Mamun, Muhammad Rashed; Torii, Shuichi

    2017-05-01

    Anaerobic co-digestion is considered to be an efficient way of disposing of solid wastes which can not only reduce the environmental burden, but also produce bioenergy. Co-digestion of solid wastes without added bacterial inoculum, with variable mixing ratios of three wastes, was experimentally tested over a 35-day digestion time to determine the biogas potential. The temperature remained relatively constant in a mesophilic range of 29-36°C throughout the study. An average pH of 7.4 was recorded from all digesters. The average biogas yields obtained from the four digesters (D1, D2, D3 and D4) were 13.31, 15.67, 16.52 and 19.12 L/day, respectively. The cumulative results showed that co-digestion in D4 produced 43.67%, 22.02% and 15.71% more biogas than D1, D2 and D3, respectively. The maximum and average COD reductions were 57% and 31%, respectively, in the co-digested wastes. The biogas comprised, on average, 61% CH4, 33.5% CO2, 222 ppm H2S, and 4.7% H2O.

  1. Computational IR spectroscopy of water: OH stretch frequencies, transition dipoles, and intermolecular vibrational coupling constants

    NASA Astrophysics Data System (ADS)

    Choi, Jun-Ho; Cho, Minhaeng

    2013-05-01

    The Hessian matrix reconstruction method initially developed to extract the basis mode frequencies, vibrational coupling constants, and transition dipoles of the delocalized amide I, II, and III vibrations of polypeptides and proteins from quantum chemistry calculation results is used to obtain those properties of delocalized O-H stretch modes in liquid water. Considering the water symmetric and asymmetric O-H stretch modes as basis modes, we here develop theoretical models relating vibrational frequencies, transition dipoles, and coupling constants of basis modes to the local water configuration and solvent electric potential. Molecular dynamics simulation was performed to generate an ensemble of water configurations that was in turn used to construct vibrational Hamiltonian matrices. Obtaining the eigenvalues and eigenvectors of the matrices and using the time-averaging approximation method, which was developed by the Skinner group for calculating the vibrational spectra of coupled oscillator systems, we could numerically simulate the O-H stretch IR spectrum of liquid water. The asymmetric line shape and weak shoulder bands were quantitatively reproduced by the present computational procedure based on the vibrational exciton model, where the polarization effects on basis mode transition dipoles and inter-mode coupling constants were found to be crucial in quantitatively simulating the vibrational spectra of the hydrogen-bonded network of liquid water.
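
    The vibrational exciton treatment referred to above amounts to diagonalizing, for each sampled configuration, a one-exciton Hamiltonian built from the fluctuating basis-mode frequencies and the inter-mode couplings; schematically (our notation, not the paper's),

      H(t) \;=\; \sum_i \hbar\,\omega_i(t)\, b_i^{\dagger} b_i
      \;+\; \sum_{i \neq j} J_{ij}(t)\, b_i^{\dagger} b_j ,

    with the IR line shape then obtained from the transition dipoles projected onto the eigenmodes and averaged over the MD ensemble, here via the time-averaging approximation mentioned in the abstract.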

  2. Jitter compensation circuit

    DOEpatents

    Sullivan, James S.; Ball, Don G.

    1997-01-01

    The instantaneous V_co signal on a charging capacitor is sampled and the charge voltage on capacitor C_o is captured just prior to its discharge into the first stage of the magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V_co signal is split between a gain stage (G = 0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high-speed comparator. The 97.5% portion of the averaged V_co signal is applied to the negative input of a differential amplifier gain stage (G = 10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V_co signal from the instantaneous value of the sampled V_co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high-speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V_co values squared divided by the total volt-second product of the magnetic compression circuit.
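
    The error-signal arithmetic described above can be restated numerically in a few lines; this is only a paraphrase of the description (with a simple running average standing in for the long-time-constant averaging circuit), not a model of the actual hardware:

      def jitter_error_signal(v_samples, gain=10.0, fraction=0.975):
          """Per-shot error: gain * (instantaneous V_co - fraction * long-term average of V_co)."""
          errors, running_avg = [], v_samples[0]
          for v in v_samples:
              running_avg = 0.99 * running_avg + 0.01 * v      # long-time-constant average of V_co
              errors.append(gain * (v - fraction * running_avg))
          return errors

      print(jitter_error_signal([10.0, 10.2, 9.9, 10.1]))      # error signals fed to the comparator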

  3. Jitter compensation circuit

    DOEpatents

    Sullivan, J.S.; Ball, D.G.

    1997-09-09

    The instantaneous V_co signal on a charging capacitor is sampled and the charge voltage on capacitor C_o is captured just prior to its discharge into the first stage of the magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V_co signal is split between a gain stage (G = 0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high-speed comparator. The 97.5% portion of the averaged V_co signal is applied to the negative input of a differential amplifier gain stage (G = 10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V_co signal from the instantaneous value of the sampled V_co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high-speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V_co values squared divided by the total volt-second product of the magnetic compression circuit. 11 figs.

  4. Effect of temperature on postillumination isoprene emission in oak and poplar.

    PubMed

    Li, Ziru; Ratliff, Ellen A; Sharkey, Thomas D

    2011-02-01

    Isoprene emission from broadleaf trees is highly temperature dependent, accounts for much of the hydrocarbon emission from plants, and has a profound effect on atmospheric chemistry. We studied the temperature response of postillumination isoprene emission in oak (Quercus robur) and poplar (Populus deltoides) leaves in order to understand the regulation of isoprene emission. Upon darkening a leaf, isoprene emission fell nearly to zero but then increased for several minutes before falling back to nearly zero. Time of appearance of this burst of isoprene was highly temperature dependent, occurring sooner at higher temperatures. We hypothesize that this burst represents an intermediate pool of metabolites, probably early metabolites in the methylerythritol 4-phosphate pathway, accumulated upstream of dimethylallyl diphosphate (DMADP). The amount of this early metabolite(s) averaged 2.9 times the amount of plastidic DMADP. DMADP increased with temperature up to 35°C before starting to decrease; in contrast, the isoprene synthase rate constant increased up to 40°C, the highest temperature at which it could be assessed. During a rapid temperature switch from 30°C to 40°C, isoprene emission increased transiently. It was found that an increase in isoprene synthase activity is primarily responsible for this transient increase in emission levels, while DMADP level stayed constant during the switch. One hour after switching to 40°C, the amount of DMADP fell but the rate constant for isoprene synthase remained constant, indicating that the high temperature falloff in isoprene emission results from a reduction in the supply of DMADP rather than from changes in isoprene synthase activity.

  5. An IPv6 routing lookup algorithm using weight-balanced tree based on prefix value for virtual router

    NASA Astrophysics Data System (ADS)

    Chen, Lingjiang; Zhou, Shuguang; Zhang, Qiaoduo; Li, Fenghua

    2016-10-01

    Virtual router enables the coexistence of different networks on the same physical facility and has lately attracted a great deal of attention from researchers. As the number of IPv6 addresses is rapidly increasing in virtual routers, designing an efficient IPv6 routing lookup algorithm is of great importance. In this paper, we present an IPv6 lookup algorithm called weight-balanced tree (WBT). WBT merges Forwarding Information Bases (FIBs) of virtual routers into one spanning tree, and compresses the space cost. WBT's average time complexity and the worst case time complexity of lookup and update process are both O(logN) and space complexity is O(cN) where N is the size of routing table and c is a constant. Experiments show that WBT helps reduce more than 80% Static Random Access Memory (SRAM) cost in comparison to those separation schemes. WBT also achieves the least average search depth comparing with other homogeneous algorithms.
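
    For orientation only: the quantity any such FIB lookup must return is the longest prefix that matches the destination address. The sketch below is a naive baseline (one hash table per prefix length, scanned from longest to shortest), not the weight-balanced-tree scheme of the paper; it simply fixes what "lookup" means:

      import ipaddress

      class NaiveFib:
          """Baseline IPv6 longest-prefix match: one dict per prefix length, scanned longest first."""

          def __init__(self):
              self.tables = {}                          # prefix length -> {network int: next hop}

          def insert(self, prefix, nexthop):
              net = ipaddress.ip_network(prefix)
              self.tables.setdefault(net.prefixlen, {})[int(net.network_address)] = nexthop

          def lookup(self, addr):
              a = int(ipaddress.ip_address(addr))
              for plen in sorted(self.tables, reverse=True):            # longest prefix wins
                  masked = (a >> (128 - plen)) << (128 - plen)
                  hop = self.tables[plen].get(masked)
                  if hop is not None:
                      return hop
              return None

      fib = NaiveFib()
      fib.insert("2001:db8::/32", "A")
      fib.insert("2001:db8:1::/48", "B")
      print(fib.lookup("2001:db8:1::42"))               # "B": the /48 is the longest match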

  6. Variations in the chemical properties of landfill leachate

    NASA Astrophysics Data System (ADS)

    Chu, L. M.; Cheung, K. C.; Wong, M. H.

    1994-01-01

    Landfill leachates were collected and their chemical properties analyzed once every two months over a ten-month period from the Gin Drinkers' Bay (GDB) and Junk Bay (JB) landfills. The contents of solids, and inorganic and organic components fluctuated considerably with time. In general, the chemical properties of the two leachates correlated negatively (P < 0.05) with the amounts of rainfall prior to the sampling periods. However, magnesium and pH of the leachates remained relatively constant with respect to sampling time. The JB leachate contained higher average contents of solids and inorganic and organic matter than those of GDB with the exception of trace metals. Trace metals were present in the two leachates in trace quantities (<1.0 mg/liter). The average concentrations of ammoniacal nitrogen were 1040 and 549 mg/liter, while chemical oxygen demand (COD) values were 767 and 695 mg/liter for JB and GDB leachates, respectively. These results suggest that the leachates need further treatment before they can be discharged to the coastal waters.

  7. Some General Laws of Chemical Elements Composition Dynamics in the Hydrosphere

    NASA Astrophysics Data System (ADS)

    Korzh, V.

    2012-12-01

    The biophysical oceanic composition is a result of substance migration and transformation at the river-sea and ocean-atmosphere boundaries. The chemical composition of oceanic water is a fundamental multi-dimensional constant for our planet. Detailed studies revealed three types of chemical element distribution in the ocean: 1) Conservative: concentration normalized to salinity is constant in space and time; 2) Nutrient-type: element concentration in the surface waters decreases due to consumption by the biosphere; and 3) Litho-generative: a complex distribution of elements that enter the ocean with the river runoff and are buried almost entirely in sediments (Fig. 1). The correlation between the chemical compositions of the river and oceanic water is high (r = 0.94). We conclude that the biogeochemical features of each element are determined by the relationship between its average concentration in the ocean and the intensity of its migration through hydrosphere boundary zones. In Fig. 1 we show intensities of global migration and average concentrations in the ocean in the coordinates lgC - lg τ, where C is an average element concentration and τ is its residence time in the ocean. Fig. 1 shows a relationship between three main geochemical parameters of the dissolved forms of chemical elements in the hydrosphere: 1) average concentration in the ocean, 2) average concentration in the river runoff and 3) the type of distribution in oceanic water. Knowledge of any two of these parameters allows the third to be estimated theoretically. The System covers all chemical elements for the entire range of observed concentrations. It even allows prediction of the values of the annual river transport of dissolved Be, C, N, Ge, Tl and Re, refinement of such estimates for P, V, Zn, Br and I, and determination of the character of the distribution in the ocean for Au and U. Furthermore, the System allowed estimation of the natural (unaffected by anthropogenic influence) mean concentrations of elements in the river runoff and their use as ecological reference data. Finally, due to the long response time of the ocean, the mean concentrations of elements and patterns of their distribution in the ocean can be used to determine pre-technogenic concentrations of elements in the river runoff. An example of such studies for the Northern Eurasia Arctic Rivers will be presented at the conference. References: Korzh (1974) J. de Recher. Atmos., 8, 653-660; Korzh (2008) J. Ecol., 15, 13-21; Korzh (2012) Water: Chem. & Ecol., No. 1, 56-62. Fig. 1 caption: The System of chemical elements distribution in the hydrosphere; types of distribution in the ocean: 1) conservative; 2) nutrient-type; 3) litho-generative.
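
    The residence time tau that underlies the lgC - lg tau coordinates above is, in the usual steady-state box-model sense (our statement of the standard definition, not a formula from the abstract), the oceanic inventory of an element divided by its river input flux:

      \tau \;=\; \frac{C_{\mathrm{oc}}\, V_{\mathrm{oc}}}{C_{\mathrm{riv}}\, Q_{\mathrm{riv}}} ,

    which is why fixing any two of the three parameters (ocean concentration, river-runoff concentration, distribution type with its characteristic residence time) constrains the third.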

  8. Quantification aspects of constant pressure (ultra) high pressure liquid chromatography using mass-sensitive detectors with a nebulizing interface.

    PubMed

    Verstraeten, M; Broeckhoven, K; Lynen, F; Choikhet, K; Landt, K; Dittmann, M; Witt, K; Sandra, P; Desmet, G

    2013-01-25

    The present contribution investigates the quantitation aspects of mass-sensitive detectors with a nebulizing interface (ESI-MSD, ELSD, CAD) in the constant pressure gradient elution mode. In this operation mode, the pressure is controlled and maintained at a set value and the liquid flow rate varies according to the inverse of the mobile phase viscosity. As the pressure is continuously kept at the allowable maximum during the entire gradient run, the average liquid flow rate is higher compared to that in the conventional constant flow rate operation mode, thus shortening the analysis time. The following three mass-sensitive detectors were investigated: the mass spectrometry detector (MS), the evaporative light scattering detector (ELSD) and the charged aerosol detector (CAD), and a wide variety of samples (phenones, polyaromatic hydrocarbons, wine, cocoa butter) has been considered. It was found that the nebulizing efficiency of the LC interfaces of the three detectors under consideration changes with the increasing liquid flow rate. For the MS, the increasing flow rate leads to a lower peak area, whereas for the ELSD the peak area increases compared to the constant flow rate mode. The peak area obtained with a CAD is rather insensitive to the liquid flow rate. The reproducibility of the peak area remains similar in both modes, although variation in system permeability compromises the 'long-term' reproducibility. This problem can, however, be overcome by running a flow rate program with an optimized flow rate and composition profile obtained from the constant pressure mode. In this case, the quantification remains reproducible, despite any occurring variations of the system permeability. Furthermore, the same fragmentation pattern (MS) has been found in the constant pressure mode as in the customary constant flow rate mode. Copyright © 2012 Elsevier B.V. All rights reserved.
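
    The inverse-viscosity flow behaviour invoked above follows from Darcy-type flow through the packed bed at a fixed pressure drop (a standard relation; the notation, with Kv the column permeability term and eta the composition-dependent mobile-phase viscosity, is ours):

      F(t) \;=\; \frac{K_v\,\Delta P}{\eta\big(\phi(t)\big)} ,

    so as the gradient composition phi(t) changes, the delivered flow rate tracks the inverse of the mobile-phase viscosity, which is what alters the nebulizer loading and hence the detector responses compared above.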

  9. A novel method for bacterial inactivation using electrosprayed water nanostructures

    NASA Astrophysics Data System (ADS)

    Pyrgiotakis, Georgios; McDevitt, James; Yamauchi, Toshiyuki; Demokritou, Philip

    2012-08-01

    This is a study focusing on the potential to deactivate biological agents (bacteria and endospores) using engineered water nanostructures (EWNS). The EWNS were generated using an electrospray device that collects water by condensing atmospheric water vapor on a Peltier-cooled electrode. A high voltage is applied between the collection electrode and a grounded electrode, resulting in aerosolization of the condensed water and a constant generation of EWNS. Gram-negative Serratia marcescens, gram-positive Staphylococcus aureus, and Bacillus atrophaeus endospores were placed on stainless steel coupons and exposed to the generated EWNS at multiple time intervals. After exposure, the bacteria were recovered and placed on nutrient agar to grow, and the colony forming units were counted. Ozone levels as well as air temperature and relative humidity were monitored during the experiments. Qualitative confirmation of bacterial destruction was also obtained by transmission electron microscopy. In addition, important EWNS aerosol properties such as particle number concentration as a function of size as well as the average surface charge of the generated EWNS were measured using real-time instrumentation. It was shown that the novel electrospray method can generate a constant flux of EWNS over time. The EWNS have a peak number concentration of 8,000 particles per cubic centimeter with a modal peak size around 20 nm. The average surface charge of the generated EWNS was found to be 10 ± 2 electrons per particle. In addition, it was shown that the EWNS have the potential to deactivate both bacteria types from surfaces. At the same administered dose, however, the endospores were not inactivated. This novel method and the unique properties of the generated EWNS could potentially be used to develop an effective, environmentally friendly, and inexpensive method for bacteria inactivation.

  10. ON THE STAR FORMATION LAW FOR SPIRAL AND IRREGULAR GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmegreen, Bruce G., E-mail: bge@us.ibm.com

    2015-12-01

    A dynamical model for star formation on a galactic scale is proposed in which the interstellar medium is constantly condensing to star-forming clouds on the dynamical time of the average midplane density, and the clouds are constantly being disrupted on the dynamical timescale appropriate for their higher density. In this model, the areal star formation rate scales with the 1.5 power of the total gas column density throughout the main regions of spiral galaxies, and with a steeper power, 2, in the far outer regions and in dwarf irregular galaxies because of the flaring disks. At the same time, there is a molecular star formation law that is linear in the main and outer parts of disks and in dIrrs because the duration of individual structures in the molecular phase is also the dynamical timescale, canceling the additional 0.5 power of surface density. The total gas consumption time scales directly with the midplane dynamical time, quenching star formation in the inner regions if there is no accretion, and sustaining star formation for ∼100 Gyr or more in the outer regions with no qualitative change in gas stability or molecular cloud properties. The ULIRG track follows from high densities in galaxy collisions.
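
    The 1.5 power quoted above follows directly from the premise that gas converts to stars on the midplane dynamical (free-fall) time; writing the standard scaling argument out, with a fixed gas scale height H assumed,

      \Sigma_{\mathrm{SFR}} \;\propto\; \frac{\Sigma_{\mathrm{gas}}}{t_{\mathrm{dyn}}}, \qquad
      t_{\mathrm{dyn}} \;\propto\; (G\rho)^{-1/2}, \qquad
      \rho \;\sim\; \frac{\Sigma_{\mathrm{gas}}}{2H}
      \;\;\Longrightarrow\;\;
      \Sigma_{\mathrm{SFR}} \;\propto\; \Sigma_{\mathrm{gas}}^{3/2} ,

    while in the flaring outer disks and dwarf irregulars H grows with radius, which is what steepens the effective power toward 2 as stated above.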

  11. Comparison of Accuracy and Performance for Lattice Boltzmann and Finite Difference Simulations of Steady Viscous Flow

    NASA Astrophysics Data System (ADS)

    Noble, David R.; Georgiadis, John G.; Buckius, Richard O.

    1996-07-01

    The lattice Boltzmann method (LBM) is used to simulate flow in an infinite periodic array of octagonal cylinders. Results are compared with those obtained by a finite difference (FD) simulation solved in terms of streamfunction and vorticity using an alternating direction implicit scheme. Computed velocity profiles are compared along lines common to both the lattice Boltzmann and finite difference grids. Along all such slices, both streamwise and transverse velocity predictions agree to within 0.5% of the average streamwise velocity. The local shear on the surface of the cylinders also compares well, with the only deviations occurring in the vicinity of the corners of the cylinders, where the slope of the shear is discontinuous. When a constant dimensionless relaxation time is maintained, LBM exhibits the same convergence behaviour as the FD algorithm, with the time step increasing as the square of the grid size. By adjusting the relaxation time such that a constant Mach number is achieved, the time step of LBM varies linearly with the grid size. The efficiency of LBM on the CM-5 parallel computer at the National Center for Supercomputing Applications (NCSA) is evaluated by examining each part of the algorithm. Overall, a speed of 13.9 GFLOPS is obtained using 512 processors for a domain size of 2176×2176.
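
    The two time-step scalings quoted above are the standard consequence of how the lattice viscosity and the Mach number depend on the relaxation time tau and the grid spacing (generic lattice-BGK relations, not taken from the paper):

      \nu \;=\; c_s^2\left(\tau - \tfrac{1}{2}\right)\frac{\Delta x^2}{\Delta t}, \qquad
      \mathrm{Ma} \;=\; \frac{u_{\mathrm{lattice}}}{c_s} \;\propto\; u\,\frac{\Delta t}{\Delta x} ,

    so holding tau (and the physical viscosity) fixed forces Delta t proportional to Delta x^2, whereas holding the Mach number fixed gives Delta t proportional to Delta x.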

  12. Quantum supremacy in constant-time measurement-based computation: A unified architecture for sampling and verification

    NASA Astrophysics Data System (ADS)

    Miller, Jacob; Sanders, Stephen; Miyake, Akimasa

    2017-12-01

    While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.

  13. Changes in animal performance and profitability of Holstein dairy operations after introduction of crossbreeding with Montbéliarde, Normande, and Scandinavian Red.

    PubMed

    Dezetter, C; Bareille, N; Billon, D; Côrtes, C; Lechartier, C; Seegers, H

    2017-10-01

    An individual-based mechanistic, stochastic, and dynamic simulation model was developed to assess economic effects resulting from changes in performance for milk yield and solid contents, reproduction, health, and replacement, induced by the introduction of crossbreeding in Holstein dairy operations. Three crossbreeding schemes, Holstein × Montbéliarde, Holstein × Montbéliarde × Normande, and Holstein × Montbéliarde × Scandinavian Red, were implemented in Holstein dairy operations and compared with Holstein pure breeding. Sires were selected based on their estimated breeding value for milk. Two initial operations were simulated according to the prevalence (average or high) of reproductive and health disorders in the lactating herd. Evolution of operations was simulated during 15 yr under 2 alternative managerial goals (constant number of cows or constant volume of milk sold). After 15 yr, breed percentages reached equilibrium for the 2-breed but not for the 3-breed schemes. After 5 yr of simulation, all 3 crossbreeding schemes reduced average milk yield per cow-year compared with the pure Holstein scheme. Changes in other animal performance (milk solid contents, reproduction, udder health, and longevity) were always in favor of the crossbreeding schemes. Under an objective of constant number of cows, the margin over variable costs (in average discounted value over the 15 yr of simulation) was slightly increased by the crossbreeding schemes in operations with an average prevalence of disorders, by up to €32/cow-year. In operations with a high prevalence of disorders, crossbreeding schemes increased the margin over variable costs by up to €91/cow-year. Under an objective of constant volume of milk sold, crossbreeding schemes improved margin over variable costs by up to €10/1,000 L (corresponding to around €96/cow-year) for average prevalence of disorders, and by up to €13/1,000 L (corresponding to around €117/cow-year) for high prevalence of disorders. Under an objective of constant number of cows, an unfavorable pricing context (milk price vs. concentrates price) slightly increased the positive effects of crossbreeding on margin over variable costs. Under an objective of constant volume of milk, only very limited changes in the differences of margins were found between the breeding schemes. Our results, obtained conditionally on the parameterization values used here, suggest that dairy crossbreeding should be considered as a relevant option for Holstein dairy operations with a production level of up to 9,000 kg/cow-year in France, and possibly in other countries. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. USE OF PELTIER COOLERS AS SOIL HEAT FLUX TRANSDUCERS.

    USGS Publications Warehouse

    Weaver, H.L.; Campbell, G.S.

    1985-01-01

    Peltier coolers were modified and calibrated to serve as soil heat flux transducers. The modification was to fill their interiors with epoxy. The average calibration constant on 21 units was 13.6 ± 0.8 kW m⁻² V⁻¹ at 20 °C. This sensitivity is about eight times that of the two thermopile transducers with which comparisons were made. The thermal conductivity of the Peltier cooler transducers was 0.4 W m⁻¹ °C⁻¹, which is comparable to that of dry soil.
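
    As a small illustration of how such a calibration constant is applied in practice (a sketch in Python, not code from the study; the function name and the example reading are ours):

    ```python
    # Convert a Peltier-transducer voltage reading to soil heat flux using the
    # average calibration constant reported above (13.6 kW m^-2 V^-1 at 20 degrees C).
    # Individual units varied by about +/-0.8 kW m^-2 V^-1 around this average.

    CAL_CONSTANT_KW_PER_M2_PER_V = 13.6  # average over the 21 calibrated units

    def soil_heat_flux_w_per_m2(voltage_v: float) -> float:
        """Return soil heat flux in W m^-2 for a measured transducer voltage in volts."""
        return CAL_CONSTANT_KW_PER_M2_PER_V * 1e3 * voltage_v

    # Example: a 5 mV reading corresponds to roughly 68 W m^-2.
    print(soil_heat_flux_w_per_m2(0.005))
    ```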

  15. Optical Characterization of the SPT-3G Camera

    NASA Astrophysics Data System (ADS)

    Pan, Z.; Ade, P. A. R.; Ahmed, Z.; Anderson, A. J.; Austermann, J. E.; Avva, J. S.; Thakur, R. Basu; Bender, A. N.; Benson, B. A.; Carlstrom, J. E.; Carter, F. W.; Cecil, T.; Chang, C. L.; Cliche, J. F.; Cukierman, A.; Denison, E. V.; de Haan, T.; Ding, J.; Dobbs, M. A.; Dutcher, D.; Everett, W.; Foster, A.; Gannon, R. N.; Gilbert, A.; Groh, J. C.; Halverson, N. W.; Harke-Hosemann, A. H.; Harrington, N. L.; Henning, J. W.; Hilton, G. C.; Holzapfel, W. L.; Huang, N.; Irwin, K. D.; Jeong, O. B.; Jonas, M.; Khaire, T.; Kofman, A. M.; Korman, M.; Kubik, D.; Kuhlmann, S.; Kuo, C. L.; Lee, A. T.; Lowitz, A. E.; Meyer, S. S.; Michalik, D.; Montgomery, J.; Nadolski, A.; Natoli, T.; Nguyen, H.; Noble, G. I.; Novosad, V.; Padin, S.; Pearson, J.; Posada, C. M.; Rahlin, A.; Ruhl, J. E.; Saunders, L. J.; Sayre, J. T.; Shirley, I.; Shirokoff, E.; Smecher, G.; Sobrin, J. A.; Stark, A. A.; Story, K. T.; Suzuki, A.; Tang, Q. Y.; Thompson, K. L.; Tucker, C.; Vale, L. R.; Vanderlinde, K.; Vieira, J. D.; Wang, G.; Whitehorn, N.; Yefremenko, V.; Yoon, K. W.; Young, M. R.

    2018-05-01

    The third-generation South Pole Telescope camera is designed to measure the cosmic microwave background across three frequency bands (centered at 95, 150 and 220 GHz) with ~16,000 transition-edge sensor (TES) bolometers. Each multichroic array element on a detector wafer has a broadband sinuous antenna that couples power to six TESs, one for each of the three observing bands and both polarizations, via lumped element filters. Ten detector wafers populate the detector array, which is coupled to the sky via a large-aperture optical system. Here we present the frequency band characterization with Fourier transform spectroscopy, measurements of optical time constants, beam properties, and optical and polarization efficiencies of the detector array. The detectors have frequency bands consistent with our simulations and have high average optical efficiencies of 86%, 77% and 66% for the 95, 150 and 220 GHz detectors, respectively. The time constants of the detectors are mostly between 0.5 and 5 ms. The beam is round with the correct size, and the polarization efficiency is more than 90% for most of the bolometers.

  16. Effects of a parallel resistor on electrical characteristics of a piezoelectric transformer in open-circuit transient state.

    PubMed

    Chang, Kuo-Tsai

    2007-01-01

    This paper investigates electrical transient characteristics of a Rosen-type piezoelectric transformer (PT), including maximum voltages, time constants, energy losses and average powers, and their improvements immediately after turning OFF. A parallel resistor connected to both input terminals of the PT is needed to improve the transient characteristics. An equivalent circuit for the PT is first given. Then, an open-circuit voltage, involving a direct current (DC) component and an alternating current (AC) component, and its related energy losses are derived from the equivalent circuit with initial conditions. Moreover, an AC power control system, including a DC-to-AC resonant inverter, a control switch and electronic instruments, is constructed to determine the electrical characteristics of the OFF transient state. Furthermore, the effects of the parallel resistor on the transient characteristics at different parallel resistances are measured. The advantages of adding the parallel resistor also are discussed. From the measured results, the DC time constant is greatly decreased from 9 to 0.04 ms by a 10 kΩ parallel resistance under open output.

  17. Killing approximation for vacuum and thermal stress-energy tensor in static space-times

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, V.P.; Zel'nikov, A.I.

    1987-05-15

    The problem of the vacuum polarization of conformal massless fields in static space-times is considered. A tensor $T_{\mu\nu}$ constructed from the curvature, the Killing vector, and their covariant derivatives is proposed which can be used to approximate the average value of the stress-energy tensor $\langle T_{\mu\nu} \rangle^{\mathrm{ren}}$ in such spaces. It is shown that if (i) its trace $T^{\epsilon}{}_{\epsilon}$ coincides with the trace anomaly $\langle T^{\epsilon}{}_{\epsilon} \rangle^{\mathrm{ren}}$, (ii) it satisfies the conservation law $T^{\mu\epsilon}{}_{;\epsilon} = 0$, and (iii) it has the correct behavior under the scale transformations, then it is uniquely defined up to a few arbitrary constants. These constants must be chosen to satisfy the boundary conditions. In the case of a static black hole in a vacuum these conditions single out the unique tensor $T_{\mu\nu}$ which provides a good approximation for $\langle T_{\mu\nu} \rangle^{\mathrm{ren}}$ in the Hartle-Hawking vacuum. The relation between this approach and the Page-Brown-Ottewill approach is discussed.

  18. Optical Characterization of the SPT-3G Focal Plane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Z.; et al.

    The third-generation South Pole Telescope camera is designed to measure the cosmic microwave background across three frequency bands (95, 150 and 220 GHz) with ~16,000 transition-edge sensor (TES) bolometers. Each multichroic pixel on a detector wafer has a broadband sinuous antenna that couples power to six TESs, one for each of the three observing bands and both polarization directions, via lumped element filters. Ten detector wafers populate the focal plane, which is coupled to the sky via a large-aperture optical system. Here we present the frequency band characterization with Fourier transform spectroscopy, measurements of optical time constants, beam properties, and optical and polarization efficiencies of the focal plane. The detectors have frequency bands consistent with our simulations, and have high average optical efficiency which is 86%, 77% and 66% for the 95, 150 and 220 GHz detectors. The time constants of the detectors are mostly between 0.5 ms and 5 ms. The beam is round with the correct size, and the polarization efficiency is more than 90% for most of the bolometers.

  19. Equivalent Electromagnetic Constants for Microwave Application to Composite Materials for the Multi-Scale Problem

    PubMed Central

    Fujisaki, Keisuke; Ikeda, Tomoyuki

    2013-01-01

    To connect models at different scales in the multi-scale problem of microwave use, equivalent material constants were investigated numerically by three-dimensional electromagnetic field analysis, taking eddy current and displacement current into account. A volume averaged method and a standing wave method were used to derive the equivalent material constants; water particles and aluminum particles were used as the composite materials. Consumed electrical power was used for the evaluation. Water particles gave the same equivalent material constants for both methods, and the same electrical power was obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles gave dissimilar equivalent material constants for the two methods, and different electric power was obtained for the two models. The differing electromagnetic behavior stems from how the eddy current is represented. For a small electrical conductivity such as that of water, the macro-current flowing in the macro-model and the micro-current flowing in the micro-model express the same electromagnetic phenomena. However, for a large electrical conductivity such as that of aluminum, the macro-current and micro-current express different electromagnetic phenomena: the eddy current observed in the micro-model is not captured by the macro-model. Therefore, the equivalent material constants derived from the volume averaged method and the standing wave method are applicable to water, with its small electrical conductivity, but not to aluminum, with its large electrical conductivity. PMID:28788395

  20. Relative importance of first and second derivatives of nuclear magnetic resonance chemical shifts and spin-spin coupling constants for vibrational averaging.

    PubMed

    Dracínský, Martin; Kaminský, Jakub; Bour, Petr

    2009-03-07

    The relative importance of anharmonic corrections to molecular vibrational energies, nuclear magnetic resonance (NMR) chemical shifts, and J-coupling constants was assessed for a model set of methane derivatives, differently charged alanine forms, and sugar models. Molecular quartic force fields and NMR parameter derivatives were obtained quantum mechanically by numerical differentiation. In most cases the harmonic vibrational function combined with the property second derivatives provided the largest correction to the equilibrium values, while anharmonic corrections (third and fourth energy derivatives) were found to be less important. The most computationally expensive off-diagonal quartic energy derivatives, involving four different coordinates, provided a negligible contribution. The vibrational corrections to the NMR shifts were small and yielded a convincing improvement only for very accurate wave function calculations. For the indirect spin-spin coupling constants the averaging significantly improved even the equilibrium values obtained at the density functional theory level. Both the first and the complete second shielding derivatives were found to be important for the shift corrections, while for the J-coupling constants the vibrational parts were dominated by the diagonal second derivatives. The vibrational corrections were also applied to some isotopic effects, where the corrected values reproduced the experiment reasonably well, but only if a full second-order expansion of the NMR parameters was included. Contributions of individual vibrational modes to the averaging are discussed. Similar behavior was found for the methane derivatives and for the larger and polar molecules. Vibrational averaging thus facilitates interpretation of previous experimental results and suggests that it can make future molecular structural studies more reliable. Because of the lengthy numerical differentiation required to compute the NMR parameter derivatives, their analytical implementation in future quantum chemistry packages is desirable.
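
    For orientation, the expansion underlying this kind of averaging can be written schematically as below (a generic second-order vibrational-averaging expression over normal coordinates q_i, not the authors' exact working equations). The finding above corresponds to the diagonal second-derivative term dominating for J-couplings, with the first-derivative term, which enters only through anharmonicity, also mattering for the shift corrections.

    ```latex
    % Generic second-order vibrational averaging of an NMR property P (shift or J-coupling).
    % <q_i> is nonzero only through cubic (anharmonic) force constants, whereas <q_i^2>
    % is already nonzero in the harmonic approximation.
    \langle P \rangle \;\approx\; P_{\mathrm{eq}}
      + \sum_i \left(\frac{\partial P}{\partial q_i}\right)_{\!\mathrm{eq}} \langle q_i \rangle
      + \frac{1}{2} \sum_i \left(\frac{\partial^2 P}{\partial q_i^2}\right)_{\!\mathrm{eq}} \langle q_i^2 \rangle
    ```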

  1. Multi-specie isothermal flow calculations of widely-spaced co-axial jets in a confined sudden expansion, with the central jet dominant

    NASA Astrophysics Data System (ADS)

    Sturgess, G. J.; Syed, S. A.

    1982-06-01

    A numerical simulation is made of the flow in the Wright Aeronautical Propulsion Laboratory diffusion flame research combustor operating with a strong central jet of carbon dioxide in a weak and widely spaced co-axial jet of air. The simulation is based on a finite difference solution of the time-averaged, steady-state, elliptic form of the Reynolds equations. Closure for these equations is provided by a two-equation turbulence model. Comparisons between measurements and predictions are made for centerline axial velocities and radial profiles of CO2 concentration. The earlier finding for a single-species, constant-density, single-jet flow, namely that a confined jet with a large expansion ratio initially behaves as if it were unconfined, is confirmed for the multiple-species, variable-density, multiple-jet system. The lack of universality in the turbulence model constants and the turbulent Schmidt/Prandtl number is discussed.

  2. Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation

    NASA Astrophysics Data System (ADS)

    Bedi, Amrit Singh; Rajawat, Ketan

    2018-05-01

    Stochastic network optimization problems entail finding resource allocation policies that are optimal on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well suited to heterogeneous networks as it allows the computationally challenged or energy-starved nodes to, at times, postpone their updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with a constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
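
    To make the update structure concrete, here is a hedged toy sketch (not the authors' algorithm or problem setup): a projected stochastic dual subgradient ascent with a constant step size, in which each constraint gradient is applied only after a random delay, mimicking nodes that postpone their updates. All names and numbers are illustrative.

    ```python
    import numpy as np

    # Toy constraint E[g(x, state)] <= 0 with g(x, state) = state - x and a quadratic cost.
    rng = np.random.default_rng(0)

    def primal_step(lam, state):
        # Primal minimizer of the toy Lagrangian 0.5*x^2 + lam*(state - x):
        # d/dx = x - lam = 0, so x = lam (independent of the observed state here).
        return lam

    def constraint(x, state):
        return state - x  # stochastic constraint "gradient" reported by a node

    lam, eps, max_delay = 0.0, 0.05, 3
    buffer = []  # gradients produced but not yet applied, modelling delayed arrivals

    for k in range(200):
        state = rng.normal(1.0, 0.2)         # random network state (e.g. arrival/channel)
        x = primal_step(lam, state)          # primal update with the current multiplier
        buffer.append(constraint(x, state))  # gradient produced now...
        if len(buffer) > rng.integers(0, max_delay + 1):
            g_delayed = buffer.pop(0)                  # ...applied only after a random delay
            lam = max(0.0, lam + eps * g_delayed)      # projected dual ascent, constant step

    print(f"final multiplier ~ {lam:.3f}")  # should hover near the toy optimum (~1.0)
    ```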

  3. Biodegradation of propylene glycol and associated hydrodynamic effects in sand.

    PubMed

    Bielefeldt, Angela R; Illangasekare, Tissa; Uttecht, Megan; LaPlante, Rosanna

    2002-04-01

    At airports around the world, propylene glycol (PG) based fluids are used to de-ice aircraft for safe operation. PG removal was investigated in 15-cm deep saturated sand columns. Greater than 99% PG biodegradation was achieved for all flow rates and loading conditions tested, which decreased the hydraulic conductivity of the sand by 1-3 orders of magnitude until a steady-state minimum was reached. Under constant loading at 120 mg PG/d for 15-30 d, the hydraulic conductivity (K) decreased by 2-2.5 orders of magnitude when the average linear velocity of the water was 4.9-1.4 cm/h. Variable PG loading in recirculation tests resulted in slower conductivity declines and lower final steady-state conductivity than constant PG feeding. After significant sand plugging, endogenous periods of time without PG resulted in significant but partial recovery of the original conductivity. Biomass growth also increased the dispersivity of the sand.

  4. Methods of generating synthetic acoustic logs from resistivity logs for gas-hydrate-bearing sediments

    USGS Publications Warehouse

    Lee, Myung W.

    1999-01-01

    Methods of predicting acoustic logs from resistivity logs for hydrate-bearing sediments are presented. Modified time-average equations derived from the weighted equation provide a means of relating the velocity of the sediment to its resistivity. These methods can be used to transform resistivity logs into acoustic logs with or without using the gas hydrate concentration in the pore space. All of the parameters necessary for predicting an acoustic log from a resistivity log, except the unconsolidation constants, can be estimated from a cross plot of resistivity versus porosity values. The unconsolidation constants in the equations may be assumed without introducing significant errors into the prediction. These methods were applied to the acoustic and resistivity logs acquired at the Mallik 2L-38 gas hydrate research well drilled in the Mackenzie Delta, northern Canada. The results indicate that the proposed method is simple and accurate.
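
    For readers who want a feel for this kind of transform, the following is a hedged sketch that uses the classical Archie and Wyllie time-average relations rather than the modified/weighted equations of the study; the constants are illustrative placeholders and hydrate saturation is ignored.

    ```python
    # Hedged sketch: one classical route from a resistivity log to a synthetic acoustic log.
    # Archie's law gives porosity from resistivity; the Wyllie time-average equation then
    # gives velocity. All parameter values (a, m, Rw, matrix/fluid velocities) are
    # placeholders, not values from the study.

    def porosity_from_resistivity(Rt, Rw=0.25, a=1.0, m=2.0):
        """Archie's law for fully water-saturated sediment: Rt = a*Rw/phi**m."""
        return (a * Rw / Rt) ** (1.0 / m)

    def velocity_time_average(phi, v_matrix=5500.0, v_fluid=1500.0):
        """Wyllie time average: 1/V = (1 - phi)/V_matrix + phi/V_fluid, velocities in m/s."""
        return 1.0 / ((1.0 - phi) / v_matrix + phi / v_fluid)

    for Rt in (1.0, 3.0, 10.0):  # resistivity log samples, ohm-m
        phi = porosity_from_resistivity(Rt)
        print(f"Rt={Rt:5.1f} ohm-m  phi={phi:.2f}  Vp~{velocity_time_average(phi):7.1f} m/s")
    ```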

  5. Distribution of fine-scale mantle heterogeneity from observations of Pdiff coda

    USGS Publications Warehouse

    Earle, P.S.; Shearer, P.M.

    2001-01-01

    We present stacked record sections of Global Seismic Network data that image the average amplitude and polarization of the high-frequency Pdiff coda and investigate their implications on the depth extent of fine-scale (~10 km) mantle heterogeneity. The extended 1-Hz coda lasts for at least 150 sec and is observed to a distance of 130°. The coda's polarization angle is about the same as the main Pdiff arrival (4.4 sec/deg) and is nearly constant with time. Previous studies show that multiple scattering from heterogeneity restricted to the lowermost mantle generates an extended Pdiff coda with a constant polarization. Here we present an alternative model that satisfies our Pdiff observations. The model consists of single scattering from weak (~1%) fine-scale (~2 km) structures distributed throughout the mantle. Although this model is nonunique, it demonstrates that Pdiff coda observations do not preclude the existence of scattering contributions from the entire mantle.

  6. Magnetic field directional discontinuities - Characteristics between 0.46 and 1.0 AU

    NASA Technical Reports Server (NTRS)

    Lepping, R. P.; Behannon, K. W.

    1986-01-01

    Based on Mariner 10 data, a statistical survey and an application of the Sonnerup-Cahill variance procedure to a visual identification with 1.2-s averages for time intervals corresponding to the equally spaced heliocentric distances of 1.0, 0.72 and 0.46 AU, are employed to study the characteristics of directional discontinuities (DDs) in the interplanetary magnetic field. Analysis using two methods demonstrated that the ratio of tangential discontinuities (TDs) to rotational discontinuities (RDs) decreased with decreasing radial distance. Decreases in average discontinuity thickness of 41 percent between 1.0 and 0.72 AU, and 56 percent between 1.0 and 0.46 AU, were found for both TDs and RDs, in agreement with Pioneer 10 data between 1 and 5 AU. Normalization of the individual DD thicknesses with respect to the estimated local proton gyroradius (R_L) gave a nearly constant average thickness at the three locations, 36 ± 5 R_L, for both RDs and TDs.

  7. The effect of inlet boundary conditions in image-based CFD modeling of aortic flow

    NASA Astrophysics Data System (ADS)

    Madhavan, Sudharsan; Kemmerling, Erica Cherry

    2016-11-01

    CFD of cardiovascular flow is a growing and useful field, but simulations are subject to a number of sources of uncertainty which must be quantified. Our work focuses on the uncertainty introduced by the selection of inlet boundary conditions in an image-based, patient-specific model of the aorta. Specifically, we examined the differences between plug flow, fully developed parabolic flow, linear shear flows, skewed parabolic flow profiles, and Womersley flow. Only the shape of the inlet velocity profile was varied; all other parameters were held constant between simulations, including the physiologically realistic inlet flow rate waveform and outlet flow resistance. We found that flow solutions with different inlet conditions did not exhibit significant differences beyond 1.75 inlet diameters from the aortic root. Time averaged wall shear stress (TAWSS) was also calculated. The linear shear velocity boundary condition solution exhibited the highest spatially averaged TAWSS, about 2.5% higher than the fully developed parabolic velocity boundary condition, which had the lowest spatially averaged TAWSS.

  8. Basic PK/PD principles of drug effects in circular/proliferative systems for disease modelling.

    PubMed

    Jacqmin, Philippe; McFadyen, Lynn; Wade, Janet R

    2010-04-01

    Disease progression modelling can provide information about the time course and outcome of pharmacological intervention on the disease. The basic PK/PD principles of proliferative and circular systems within the context of modelling disease progression, and the effect of treatment thereupon, are illustrated with the goal of better understanding and predicting eventual clinical outcome. Circular/proliferative systems can be very complex. To facilitate the understanding of how a dosing regimen can be defined in such systems, we have shown the derivation of a system parameter named the Reproduction Minimum Inhibitory Concentration (RMIC), which represents the critical concentration at which the system switches from growth to extinction. The RMIC depends on two parameters (RMIC = (R0 - 1) × IC50): the basic reproductive ratio (R0), a fundamental parameter of the circular/proliferative system that represents the number of offspring produced by one replicating species during its lifespan, and the IC50, the potency of the drug to inhibit the proliferation of the system. The RMIC is constant for a given system and a given drug and represents the lowest concentration that needs to be achieved for eradication of the system. When exposure is higher than the RMIC, success can be expected in the long term. Time-varying inhibition of the proliferation of replicating species is a natural consequence of the time-varying inhibitor drug concentrations and, when combined with the dynamics of the circular/proliferative system, makes it difficult to predict the eventual outcome. Time-varying inhibition of proliferative/circular systems can be handled by calculating the equivalent effective constant concentration (ECC), the constant plasma concentration that would give rise to the average inhibition at steady state. When the ECC is higher than the RMIC, eradication of the system can be expected. In addition, it is shown that scenarios that have the same steady-state ECC, whatever the dose, dosage schedule or PK parameters, also have the same average R0 in the presence of the inhibitor (i.e., R0-INH) and therefore lead to the same outcome. This allows equivalent active doses and dosing schedules in circular and proliferative systems to be predicted when the IC50 and pharmacokinetic characteristics of the drugs are known. The results from the simulations performed demonstrate that, for a given system (defined by its RMIC), treatment success depends mainly on the pharmacokinetic characteristics of the drug and the dosing schedule.
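
    A minimal sketch of the decision rule stated above (illustrative numbers only, not values from the paper):

    ```python
    # The system is driven to extinction when the equivalent effective constant
    # concentration (ECC) exceeds RMIC = (R0 - 1) * IC50.

    def rmic(r0: float, ic50: float) -> float:
        """Reproduction Minimum Inhibitory Concentration of a circular/proliferative system."""
        return (r0 - 1.0) * ic50

    def eradication_expected(ecc: float, r0: float, ic50: float) -> bool:
        """True if long-term eradication is expected under this exposure."""
        return ecc > rmic(r0, ic50)

    # Hypothetical example: R0 = 6 offspring per replication cycle, IC50 = 2 concentration
    # units, and a dosing schedule whose time-averaged inhibition corresponds to ECC = 12.
    print(rmic(6.0, 2.0))                        # 10.0
    print(eradication_expected(12.0, 6.0, 2.0))  # True: exposure exceeds the RMIC
    ```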

  9. Constant Current versus Constant Voltage Subthalamic Nucleus Deep Brain Stimulation in Parkinson's Disease.

    PubMed

    Ramirez de Noriega, Fernando; Eitan, Renana; Marmor, Odeya; Lavi, Adi; Linetzky, Eduard; Bergman, Hagai; Israel, Zvi

    2015-02-18

    Background: Subthalamic nucleus (STN) deep brain stimulation (DBS) is an established therapy for advanced Parkinson's disease (PD). Motor efficacy and safety have been established for constant voltage (CV) devices and more recently for constant current (CC) devices. CC devices adjust output voltage to provide CC stimulation irrespective of impedance fluctuation, while the current applied by CV stimulation depends on the impedance, which may change over time. No study has directly compared the clinical effects of these two stimulation modalities. Objective: To compare the safety and clinical impact of CC STN DBS to CV STN DBS in patients with advanced PD 2 years after surgery. Methods: Patients were eligible for inclusion if they had undergone STN DBS surgery for idiopathic PD, had been implanted with a Medtronic Activa PC, and if their stimulation program and medication had been stable for at least 1 year. This single-center trial was designed as a double-blind, randomized, prospective study with crossover after 2 weeks. Motor equivalence of the 2 modalities was confirmed utilizing part III of the Unified Parkinson's Disease Rating Scale (UPDRS). PD diaries and multiple subjective and objective evaluations of quality of life, depression, cognition and emotional processing were assessed on both CV and CC stimulation. Analysis using the paired t test with Bonferroni correction for multiple comparisons was performed to identify any significant difference between the stimulation modalities. Results: Eight patients were recruited (6 men, 2 women); 1 patient did not complete the study. The average age at surgery was 56.7 years (range 47-63). Disease duration at the time of surgery was 7.5 years (range 3-12). Patients were recruited 23.8 months (range 22.5-24) after surgery. At the postoperative study baseline, this patient group showed an average motor improvement of 69% (range 51-97) as measured by the change in UPDRS part III with stimulation alone. Levodopa equivalent medication was reduced on average by 67% (range 15-88). Patients were poorly compliant with PD diaries, and these did not yield useful information. The minor deterioration in quality-of-life scores (Parkinson's Disease Questionnaire-39, Quality of Life Enjoyment and Satisfaction Questionnaire) with CC stimulation was not statistically significant. Two measures of depression (Hamilton Rating Scale D17, Quick Inventory of Depressive Symptomatology - Self-Report) showed a nonsignificantly lower score (less depression) with CC stimulation, but a third (Beck Depression Inventory) showed equivalence. Cognitive testing (Mini Mental State Examination) and emotional processing (Montreal Affective Voices) were equivalent for CC and CV. Conclusion: CC STN DBS is safe. For equivalent motor efficacy, no significant difference could be identified between CC and CV stimulation for nonmotor evaluations in PD patients 2 years after surgery. © 2015 S. Karger AG, Basel.

  10. VO2 kinetics in supra-anaerobic threshold constant tests allow the visualization and quantification of the O2 saving after cytochrome c oxidase inhibition by aerobic training or nitrate administration.

    PubMed

    Maione, D; Cicero, A Fg; Bacchelli, S; Cosentino, E; Degli Esposti, D; Senaldi, R; Strocchi, E; D'Addato, S; Borghi, C

    2013-01-01

    We tested whether the known cytochrome c oxidase (COX) inhibition by nitric oxide (NO) could be quantified by VO2 kinetics during constant-load supra-anaerobic threshold (AT) exercises in healthy trained or untrained subjects following aerobic training or nitrate administration. In constant-load cycle ergometer exercises above the AT identified in previous incremental tests, the VO2 kinetics describe a double exponential curve, one component rapid and one appreciably slower, allowing the area between them to be calculated in litres of O2. After training, with increased NO availability, this area decreases in inverse ratio to treatment efficacy. In fact, in 11 healthy subjects after aerobic training for 6-7 weeks, the area was decreased on average by 51%. In 11 untrained subjects, following intake of an NO donor (20 mg isosorbide 5-mononitrate), the area was decreased on average by 53%. In conclusion, supra-AT VO2 kinetics in constant-load exercises permit quantification of the NO-dependent inhibitory effect on COX after either physical training or nitrate intake.

  11. Antarctic Firn Compaction Rates from Repeat-Track Airborne Radar Data: I. Methods

    NASA Technical Reports Server (NTRS)

    Medley, B.; Ligtenberg, S. R. M.; Joughin, I.; Van Den Broeke, M. R.; Gogineni, S.; Nowicki, S.

    2015-01-01

    While measurements of ice-sheet surface elevation change are increasingly used to assess mass change, the processes that control the elevation fluctuations not related to ice-flow dynamics (e.g. firn compaction and accumulation) remain difficult to measure. Here we use radar data from the Thwaites Glacier (West Antarctica) catchment to measure the rate of thickness change between horizons of constant age over different time intervals: 2009-10, 2010-11 and 2009-11. The average compaction rate to approximately 25 m depth is 0.33 m a⁻¹, with largest compaction rates near the surface. Our measurements indicate that the accumulation rate controls much of the spatio-temporal variations in the compaction rate, while the role of temperature is unclear due to a lack of measurements. Based on a semi-empirical, steady-state densification model, we find that surveying older firn horizons minimizes the potential bias resulting from the variable depth of the constant-age horizon. Our results suggest that the spatiotemporal variations in the firn compaction rate are an important consideration when converting surface elevation change to ice mass change. Compaction rates varied by up to 0.12 m a⁻¹ over distances less than 6 km and were on average greater than 20% larger during the 2010-11 interval than during 2009-10.

  12. [Surgical treatment strategy of the floating shoulder injury].

    PubMed

    Song, Zhe; Xue, Han-Zhong; Li, Zhong; Zhuang, Yan; Wang, Qian; Ma, Teng; Zhang, Kun

    2013-10-18

    To discuss the clinical characteristics and the surgical treatment strategy of the floating shoulder injury. Twenty-six cases of floating shoulder injury treated between January 2006 and January 2012 were retrospectively evaluated. There were 15 males and 11 females with an average age of 35.2 (22-60) years. According to Wong's classification of floating shoulder injury: type IA, 3 cases; type IB, 9 cases; type II, 4 cases; type IIIA, 6 cases; type IIIB, 4 cases. All 26 cases underwent surgical treatment. We observed the postoperative fracture reduction, damage repair, fracture healing and internal fixation on the X-ray films. We also evaluated shoulder function regularly according to the Constant score and the Herscovici evaluation criteria. The 26 cases were followed up for an average of 16.8 (12-24) months. All the fractures healed within a mean time of 2.4 months, and the mean Constant score was 89.4 (60-100). According to the Herscovici evaluation criteria, the results were excellent in 15 cases, good in 8 cases and fair in 3 cases, giving an excellent-to-good rate of 88.5%. Open reduction and internal fixation is an effective method for the treatment of the floating shoulder injury, but the reduction sequence and fixation methods should be selected according to the type of fracture and the degree of displacement.

  13. Effects of Auroral Potential Drops on Field-Aligned Currents and Nightside Reconnection Dynamos

    NASA Astrophysics Data System (ADS)

    Lotko, W.; Xi, S.; Zhang, B.; Wiltberger, M. J.; Lyon, J.

    2016-12-01

    The reaction of the magnetosphere-ionosphere system to dynamic auroral potential drops is investigated using the Lyon-Fedder-Mobarry global model and, for the first time in a global simulation, including the dissipative load of field-aligned potential drops in the low-altitude boundary condition. This extra load reduces the demand for field-aligned current (j||) from nightside reconnection dynamos. The system adapts by forcing the nightside x-line closer to Earth to reduce current lensing (j||/B = constant) at the ionosphere, with the plasma sheet undergoing additional contraction during substorm recovery and steady magnetospheric convection. For steady and moderate solar wind driving and with constant ionospheric conductance, the cross-polar cap potential and hemispheric field-aligned current are lower by approximately the ratio of the peak field-aligned potential drop to the cross polar cap potential (10-15%) when potential drops are included. Hemispheric ionospheric Joule dissipation is less by 8%, while the area-integrated, average work done on the fluid by the reconnecting magnetotail field increases by 50% within |y| < 8 RE. Effects on the nightside plasma sheet include: (1) an average x-line 4 RE closer to Earth; (2) a 12% higher mean reconnection rate; and (3) dawn-dusk asymmetry in reconnection with a 17% higher rate in the premidnight sector.

  14. Effects of auroral potential drops on plasma sheet dynamics

    NASA Astrophysics Data System (ADS)

    Xi, Sheng; Lotko, William; Zhang, Binzheng; Wiltberger, Michael; Lyon, John

    2016-11-01

    The reaction of the magnetosphere-ionosphere system to dynamic auroral potential drops is investigated using the Lyon-Fedder-Mobarry global model including, for the first time in a global simulation, the dissipative load of field-aligned potential drops in the low-altitude boundary condition. This extra load reduces the field-aligned current (j||) supplied by nightside reconnection dynamos. The system adapts by forcing the nightside X line closer to Earth, with a corresponding reduction in current lensing (j||/B = constant) at the ionosphere and additional contraction of the plasma sheet during substorm recovery and steady magnetospheric convection. For steady and moderate solar wind driving and with constant ionospheric conductance, the cross polar cap potential and hemispheric field-aligned current are lower by approximately the ratio of the peak field-aligned potential drop to the cross polar cap potential (10-15%) when potential drops are included. Hemispheric ionospheric Joule dissipation is less by 8%, while the area-integrated, average work done on the fluid by the reconnecting magnetotail field increases by 50% within |y| < 8 RE. Effects on the nightside plasma sheet include (1) an average X line 4 RE closer to Earth; (2) a 12% higher mean reconnection rate; and (3) dawn-dusk asymmetry in reconnection with a 17% higher rate in the premidnight sector.

  15. Critical points of the cosmic velocity field and the uncertainties in the value of the Hubble constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hao; Naselsky, Pavel; Mohayaee, Roya, E-mail: liuhao@nbi.dk, E-mail: roya@iap.fr, E-mail: naselsky@nbi.dk

    2016-06-01

    The existence of critical points for the peculiar velocity field is a natural feature of the correlated vector field. These points appear at the junctions of velocity domains with different orientations of their averaged velocity vectors. Since peculiar velocities are the important cause of the scatter in the Hubble expansion rate, we propose that a more precise determination of the Hubble constant can be made by restricting analysis to a subsample of observational data containing only the zones around the critical points of the peculiar velocity field, associated with voids and saddle points. On large scales the critical points, where the first derivative of the gravitational potential vanishes, can easily be identified using the density field and classified by the behavior of the Hessian of the gravitational potential. We use high-resolution N-body simulations to show that these regions are stable in time and hence are excellent tracers of the initial conditions. Furthermore, we show that the variance of the Hubble flow can be substantially minimized by restricting observations to the subsample of such regions of vanishing velocity instead of aiming at increasing the statistics by averaging indiscriminately using the full data sets, as is the common approach.

  16. FP-LAPW based investigation of structural, electronic and mechanical properties of CePb3 intermetallic compound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pagare, Gitanjali, E-mail: gita-pagare@yahoo.co.in; Jain, Ekta, E-mail: jainekta05@gmail.com; Abraham, Jisha Annie, E-mail: disisjisha@yahoo.com

    A theoretical study of the structural, electronic, elastic and mechanical properties of the CePb3 intermetallic compound has been carried out systematically using first-principles density functional theory. The calculations are performed within three different forms of the generalized gradient approximation (GGA) and the LSDA for the exchange-correlation potential. The ground state properties such as the lattice parameter (a0), bulk modulus (B) and its pressure derivative (B′) are calculated, and the obtained lattice parameter of this compound agrees well with the experimental results. We have calculated the three independent second-order elastic constants (C11, C12 and C44), which have not been calculated or measured before. From the energy dispersion curves, it is found that the studied compound is metallic in nature. The ductility of this compound is analyzed using Pugh's criterion and Cauchy's pressure (C11-C12). The mechanical properties such as Young's modulus, shear modulus, anisotropy ratio and Poisson's ratio have been calculated for the first time using the Voigt-Reuss-Hill (VRH) averaging scheme. The average sound velocity (v_m), density (ρ) and Debye temperature (θ_D) of this compound are also estimated from the elastic constants.
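
    For context, the Voigt-Reuss-Hill averaging mentioned above reduces, for a cubic crystal, to closed-form expressions in C11, C12 and C44; the sketch below applies those standard formulas with placeholder inputs, not the paper's computed elastic constants.

    ```python
    # Voigt-Reuss-Hill (VRH) averages for a cubic crystal from its three independent
    # elastic constants. The input values here are illustrative placeholders in GPa.

    def vrh_cubic(c11: float, c12: float, c44: float):
        """Return (B, G, E, nu) from the VRH scheme for a cubic crystal (moduli in GPa)."""
        b = (c11 + 2.0 * c12) / 3.0                       # bulk modulus (Voigt = Reuss for cubic)
        g_voigt = (c11 - c12 + 3.0 * c44) / 5.0
        g_reuss = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))
        g = 0.5 * (g_voigt + g_reuss)                     # Hill-averaged shear modulus
        e = 9.0 * b * g / (3.0 * b + g)                   # Young's modulus
        nu = (3.0 * b - 2.0 * g) / (2.0 * (3.0 * b + g))  # Poisson's ratio
        return b, g, e, nu

    B, G, E, nu = vrh_cubic(c11=90.0, c12=35.0, c44=30.0)  # placeholder GPa values
    print(f"B={B:.1f} GPa  G={G:.1f} GPa  E={E:.1f} GPa  nu={nu:.2f}  B/G={B/G:.2f} (Pugh)")
    ```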

  17. Simplified Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration threshold of 1×10⁻²⁰ moles/cc. The work presented here results in a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at the initial times for smaller water concentrations. It gives the average chemical kinetic time as a function of the initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used at higher water concentrations, gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane with and without water injection, up to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium with Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water-to-fuel mass ratio, and pressure.
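
    A minimal sketch of the switching rule described above; the correlations themselves are given in the report and are not reproduced here, and only the threshold value quoted in the abstract is used.

    ```python
    # Encode which of the two correlation steps applies at a given local water
    # concentration. The actual correlations (functions of fuel/air ratio, water/fuel
    # mass ratio, temperature and pressure) are in the report and are not reproduced.

    WATER_SWITCH_MOL_PER_CC = 1.0e-20  # threshold quoted in the abstract

    def kinetic_time_step(water_conc_mol_per_cc: float) -> str:
        """Return which correlation step applies for the given water concentration (mol/cc)."""
        if water_conc_mol_per_cc < WATER_SWITCH_MOL_PER_CC:
            return "step one: time-averaged correlation (initial times, low water)"
        return "step two: instantaneous correlation (higher water concentrations)"

    print(kinetic_time_step(1.0e-22))  # early in the burn -> time-averaged step
    print(kinetic_time_step(5.0e-19))  # later, more water  -> instantaneous step
    ```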

  18. Transient effects in ice nucleation of a water drop impacting onto a cold substrate

    NASA Astrophysics Data System (ADS)

    Schremb, Markus; Roisman, Ilia V.; Tropea, Cameron

    2017-02-01

    The impact of water drops onto a solid surface at subfreezing temperatures has been experimentally studied. Drop nucleation has been observed using a high-speed video system. The statistics of nucleation allows the estimation of the average number of nucleation sites per unit area of the wetted part of the substrate. We have discovered that the nucleation rate in the impacting drop is not constant. The observed significant increase of the nucleation rate at small times after impact t <50 ms can be explained by the generation of nanobubbles at early times of drop impact. These bubbles serve as additional nucleation sites and enhance the nucleation rate.

  19. Experimental Study of Heat Transfer to Small Cylinders in a Subsonic, High-temperature Gas Stream

    NASA Technical Reports Server (NTRS)

    Glawe, George E; Johnson, Robert C

    1957-01-01

    A Nusselt-Reynolds number relation for cylindrical thermocouple wires in crossflow was obtained from the experimental determination of time constants. Tests were conducted in exhaust gas over a temperature range of 2000 to 3400 R, a Mach number range of 0.3 to 0.8, and a static-pressure range from 2/3 to 1-1/3 atmospheres, yielding a Reynolds number range of 450 to 3000. The correlation obtained is Nu = (0.428 ± 0.003) √Re*, with average deviations of a single observation of 8.5 percent. This relation is the same as one previously reported for room-temperature conditions.
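
    A direct numerical reading of the reported correlation (the coefficient spread and the 8.5 percent single-observation scatter are not modelled; the range check simply reflects the quoted Reynolds number range):

    ```python
    import math

    def nusselt(re_star: float, coeff: float = 0.428) -> float:
        """Nu = coeff * sqrt(Re*), valid over the tested range Re* ~ 450-3000."""
        if not (450.0 <= re_star <= 3000.0):
            raise ValueError("Re* outside the experimentally correlated range (450-3000)")
        return coeff * math.sqrt(re_star)

    print(f"Nu at Re*=1000: {nusselt(1000.0):.2f}")  # ~13.5
    ```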

  20. Subcritical crack growth in fibrous materials

    NASA Astrophysics Data System (ADS)

    Santucci, S.; Cortet, P.-P.; Deschanel, S.; Vanel, L.; Ciliberto, S.

    2006-05-01

    We present experiments on the slow growth of a single crack in a fax paper sheet submitted to a constant force F. We find that statistically averaged crack growth curves can be described by only two parameters: the mean rupture time τ and a characteristic growth length ζ. We propose a model based on a thermally activated rupture process that takes into account the microstructure of cellulose fibers. The model is able to reproduce the shape of the growth curve, the dependence of ζ on F as well as the effect of temperature on the rupture time τ. We find that the length scale at which rupture occurs in this model is consistently close to the diameter of cellulose microfibrils.

  1. Convective and morphological instabilities during crystal growth: Effect of gravity modulation

    NASA Technical Reports Server (NTRS)

    Coreill, S. R.; Murray, B. T.; Mcfadden, G. B.; Wheeler, A. A.; Saunders, B. V.

    1992-01-01

    During directional solidification of a binary alloy at constant velocity in the vertical direction, morphological and convective instabilities may occur due to the temperature and solute gradients associated with the solidification process. The effect of time-periodic modulation (vibration) is studied by considering a vertical gravitational acceleration which is sinusoidal in time. The conditions for the onset of solutal convection are calculated numerically, employing two distinct computational procedures based on Floquet theory. In general, a stable state can be destabilized by modulation and an unstable state can be stabilized. In the limit of high frequency modulation, the method of averaging and multiple-scale asymptotic analysis can be used to simplify the calculations.

  2. Transient effects in ice nucleation of a water drop impacting onto a cold substrate.

    PubMed

    Schremb, Markus; Roisman, Ilia V; Tropea, Cameron

    2017-02-01

    The impact of water drops onto a solid surface at subfreezing temperatures has been experimentally studied. Drop nucleation has been observed using a high-speed video system. The statistics of nucleation allows the estimation of the average number of nucleation sites per unit area of the wetted part of the substrate. We have discovered that the nucleation rate in the impacting drop is not constant. The observed significant increase of the nucleation rate at small times after impact t<50 ms can be explained by the generation of nanobubbles at early times of drop impact. These bubbles serve as additional nucleation sites and enhance the nucleation rate.

  3. The changing role of internal auditors in health care.

    PubMed

    Edwards, D E; Kusel, J; Oxner, T H

    2000-08-01

    Two surveys of directors of internal auditing in health care conducted in 1990 and 1998 found that healthcare internal auditors are spending proportionately more time on management and operational improvement activities and less time on traditional financial/compliance activities. The average staff size has remained relatively constant, but salaries at all levels of experience have risen. More importantly, the tenure of healthcare internal auditors has increased significantly since 1990. The profile of the healthcare internal auditing director also has changed. The director is older, more experienced, and has held the position for twice as long as was the case in 1990. On the other hand, the director is more stressed and less satisfied with compensation.

  4. Pharmacokinetic Analysis of Dynamic 18F-Fluoromisonidazole PET Data in Non-Small Cell Lung Cancer.

    PubMed

    Schwartz, Jazmin; Grkovski, Milan; Rimner, Andreas; Schöder, Heiko; Zanzonico, Pat B; Carlin, Sean D; Staton, Kevin D; Humm, John L; Nehmeh, Sadek A

    2017-06-01

    Hypoxic tumors exhibit increased resistance to radiation, chemical, and immune therapies. 18F-fluoromisonidazole (18F-FMISO) PET is a noninvasive, quantitative imaging technique used to evaluate the magnitude and spatial distribution of tumor hypoxia. In this study, pharmacokinetic analysis (PKA) of 18F-FMISO dynamic PET extended to 3 h after injection is reported for the first time, to our knowledge, in stage III-IV non-small cell lung cancer (NSCLC) patients. Methods: Sixteen patients diagnosed with NSCLC underwent 2 PET/CT scans (1-3 d apart) before radiation therapy: a 3-min static 18F-FDG and a dynamic 18F-FMISO scan lasting 168 ± 15 min. The latter data were acquired in 3 serial PET/CT dynamic imaging sessions, registered with each other and analyzed using pharmacokinetic modeling software. PKA was performed using a 2-tissue, 3-compartment irreversible model, and kinetic parameters were estimated for the volumes of interest determined using coregistered 18F-FDG images for both the volume-of-interest-averaged and the voxelwise time-activity curves for each patient's lesions, normal lung, and muscle. Results: We derived average values of 18F-FMISO kinetic parameters for NSCLC lesions as well as for normal lung and muscle. We also investigated the correlation between the trapping rate (k3) and the delivery rate (K1), influx rate (Ki) constants, and tissue-to-blood activity concentration ratios (TBRs) for all tissues. Lesions had trapping rates 1.6 times larger, on average, than those of normal lung and 4.4 times larger than those in muscle. Additionally, for almost all cases, k3 and Ki had a significant strong correlation for all tissue types. The TBR-k3 correlation was less straightforward, showing a moderate to strong correlation for only 41% of lesions. Finally, K1-k3 voxelwise correlations for tumors were varied, but negative for 76% of lesions, globally exhibiting a weak inverse relationship (average R = -0.23 ± 0.39). However, both normal tissue types exhibited significant positive correlations for more than 60% of patients, with 41% having moderate to strong correlations (R > 0.5). Conclusion: All lesions showed distinct 18F-FMISO uptake. Variable 18F-FMISO delivery was observed across lesions, as indicated by the variable values of the kinetic rate constant K1. Except for 3 cases, some degree of hypoxia was apparent in all lesions based on their nonzero k3 values. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  5. Comparison of active and passive sampling strategies for the monitoring of pesticide contamination in streams

    NASA Astrophysics Data System (ADS)

    Assoumani, Azziz; Margoum, Christelle; Guillemain, Céline; Coquery, Marina

    2014-05-01

    The monitoring of water bodies for organic contaminants and the determination of reliable estimates of concentrations are challenging issues, in particular for the implementation of the Water Framework Directive. Several strategies can be applied to collect water samples for the determination of their contamination level. Grab sampling is fast, easy, and requires little logistical and analytical effort for low-frequency sampling campaigns. However, this technique lacks representativeness for streams with high variations of contaminant concentrations, such as pesticides in rivers located in small agricultural watersheds. Increasing the representativeness of this sampling strategy implies greater logistical needs and higher analytical costs. Average automated sampling is therefore a solution, as it allows, in a single analysis, the determination of more accurate and more relevant estimates of concentrations. Two types of automatic sampling can be performed: time-related sampling allows the assessment of average concentrations, whereas flow-dependent sampling leads to average flux concentrations. However, the purchase and maintenance of automatic samplers are quite expensive. Passive sampling has recently been developed as an alternative to grab or average automated sampling, to obtain, at lower cost, more realistic estimates of the average concentrations of contaminants in streams. These devices allow the passive accumulation of contaminants from large volumes of water, resulting in ultratrace-level detection and smoothed, integrative sampling over periods ranging from days to weeks. They allow the determination of time-weighted average (TWA) concentrations of the dissolved fraction of target contaminants, but they need to be calibrated in controlled conditions prior to field applications. In other words, the kinetics of the uptake of the target contaminants into the sampler must be studied in order to determine the corresponding sampling rate constants (Rs). Each constant links the mass of a target contaminant accumulated in the sampler to its concentration in water. At the end of the field application, the Rs values are used to calculate the TWA concentration of each target contaminant from the final mass of the contaminant accumulated in the sampler. Stir Bar Sorptive Extraction (SBSE) is a solvent-free sample preparation technique dedicated to the analysis of moderately hydrophobic to hydrophobic compounds in liquid and gas samples. It consists of a magnet enclosed in a glass tube coated with a thick film of polydimethylsiloxane (PDMS). We recently developed the in situ application of SBSE as a passive sampling technique (herein named "passive SBSE") for the monitoring of agricultural pesticides. The aim of this study was to perform the calibration of passive SBSE in the laboratory, and to apply and compare this technique to active sampling strategies for the monitoring of 16 relatively hydrophobic to hydrophobic pesticides in streams during two 1-month sampling campaigns. Time-weighted average concentrations of the target pesticides obtained from passive SBSE were compared to the target pesticide concentrations of grab samples, and of time-related and flow-dependent samples, of the streams. The results showed that passive SBSE is an efficient alternative to conventional active sampling strategies.
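
    A minimal sketch of the standard passive-sampler working equation implied by this calibration, valid in the integrative (linear-uptake) regime; the Rs value and the example numbers below are illustrative, not results from the study.

    ```python
    # In the integrative regime, the mass M accumulated over deployment time t relates to
    # the time-weighted average concentration via the sampling rate Rs: C_TWA = M / (Rs * t).
    # Rs comes from the laboratory calibration of each compound.

    def twa_concentration(mass_ng: float, rs_l_per_day: float, days: float) -> float:
        """Time-weighted average concentration in ng/L from the accumulated mass (ng)."""
        return mass_ng / (rs_l_per_day * days)

    # Example: 85 ng of a pesticide accumulated over a 28-day deployment with Rs = 0.05 L/day.
    print(f"{twa_concentration(85.0, 0.05, 28.0):.1f} ng/L")  # ~60.7 ng/L
    ```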

  6. FAST TRACK COMMUNICATION: Evaluation of the In concentration of an InxGa1-xSb alloy layer in cross-sectional HRTEM images of III-V semiconductor superlattices

    NASA Astrophysics Data System (ADS)

    Quan, Maohua; Guo, Fengyun; Li, Meicheng; Zhao, Liancheng

    2010-08-01

    Atomic-scale, position-resolved lattice spacing measurement is used to study the In concentration of the alloy layer in InAs/InxGa1-xSb superlattices grown by molecular beam epitaxy. The unstrained lattice distance d along three directions, [0 0 1], [1 1 0] and [1 1 1], was measured and the average lattice constant was calculated. The experimental lattice constants of the InAs layers are almost equal to the theoretical ones. We have found that the average lattice constant of the In0.25Ga0.75Sb alloy layers is in good agreement with previously reported Vegard's-law values, being only slightly larger. The results indicate that the In concentration of x = 0.18 deviates more from the designed value.

  7. Assessment of GPS carrier-phase stability for time-transfer applications.

    PubMed

    Larson, K M; Levine, J; Nelson, L M; Parker, T E

    2000-01-01

    We have conducted global positioning system (GPS) carrier-phase time-transfer experiments between the master clock (MC) at the U.S. Naval Observatory (USNO) in Washington, DC and the alternate master clock (AMC) at Schriever Air Force Base near Colorado Springs, Colorado. These clocks are also monitored on an hourly basis with two-way satellite time-transfer (TWSTT) measurements. We compared the performance of the GPS carrier phase and TWSTT systems over a 236-d period. Because of power problems and data outages during the carrier-phase experiment, the longest continuous time span is 96 d. The data from this period show agreement with TWSTT within +/-1 ns, apart from an overall constant time offset (caused by unknown delays in the GPS hardware at both ends). For averaging times of a day, the carrier-phase and TWSTT systems have a frequency uncertainty of 2.5 and 5.5 parts in 10^15, respectively.

  8. Effect of a 5-min cold-water immersion recovery on exercise performance in the heat.

    PubMed

    Peiffer, J J; Abbiss, C R; Watson, G; Nosaka, K; Laursen, P B

    2010-05-01

    This study examined the effect of a 5-min cold-water immersion (14 degrees C) recovery intervention on repeated cycling performance in the heat. 10 male cyclists performed two bouts of a 25-min constant-paced (254 (22) W) cycling session followed by a 4-km time trial in hot conditions (35 degrees C, 40% relative humidity). The two bouts were separated by either 15 min of seated recovery in the heat (control) or the same condition with 5-min cold-water immersion (5th-10th minute), using a counterbalanced cross-over design (CP1TT1 → CWI or CON → CP2TT2). Rectal temperature was measured immediately before and after both the constant-paced sessions and 4-km time trials. Cycling economy and VO2 were measured during the constant-paced sessions, and the average power output and completion times were recorded for each time trial. Compared with control, rectal temperature was significantly lower (0.5 (0.4) degrees C) in cold-water immersion before CP2 until the end of the second 4-km time trial. However, the increase in rectal temperature (0.5 (0.2) degrees C) during CP2 was not significantly different between conditions. During the second 4-km time trial, power output was significantly greater in cold-water immersion (327.9 (55.7) W) compared with control (288.0 (58.8) W), leading to a faster completion time in cold-water immersion (6.1 (0.3) min) compared with control (6.4 (0.5) min). Economy and VO2 were not influenced by the cold-water immersion recovery intervention. 5-min cold-water immersion recovery significantly lowered rectal temperature and maintained endurance performance during subsequent high-intensity exercise. These data indicate that repeated exercise performance in heat may be improved when a short period of cold-water immersion is applied during the recovery period.

  9. Clinic research on the treatment for humeral shaft fracture with minimal invasive plate osteosynthesis: a retrospective study of 128 cases.

    PubMed

    Chen, H; Hu, X; Yang, G; Xiang, M

    2017-04-01

    Minimally invasive plate osteosynthesis (MIPO) is one of the most important techniques in the treatment of humeral shaft fractures. This study was performed to evaluate the efficacy of the MIPO technique for the treatment of humeral shaft fractures. We retrospectively evaluated 128 cases of humeral shaft fracture that were treated with the MIPO technique from March 2005 to August 2008. All the patients were followed up by routine radiological imaging and clinical examinations. The Constant-Murley score and the HSS elbow joint score were used to evaluate the treatment outcome. The average duration of the surgery was 60 min (range 40-95 min) without blood transfusion. All fractures healed without infection. All cases recovered a normal carrying angle except four cases with 10°-15° cubitus varus. After an average follow-up of 23 (13-38) months, satisfactory function was achieved according to the Constant-Murley score and the HSS elbow joint score. The Constant-Murley score was 80 on average (range 68-91). According to the HSS elbow joint score, there were 123 cases with an excellent clinical outcome and five cases with an effective outcome. The MIPO technique appears to be a safe and effective method for managing humeral shaft fractures.

  10. The Combined Influence of Molecular Weight and Temperature on the Aging and Viscoelastic Response of a Glassy Thermoplastic Polyimide

    NASA Technical Reports Server (NTRS)

    Nicholson, Lee M.; Whitley, Karen S.; Gates, Thomas S.

    2000-01-01

    The effect of molecular weight on the viscoelastic performance of an advanced polymer (LaRC-SI) was investigated through the use of creep compliance tests. Testing consisted of short-term isothermal creep and recovery with the creep segments performed under constant load. The tests were conducted at three temperatures below the glass transition temperature of five materials of different molecular weight. Through the use of time-aging-time superposition procedures, the material constants, material master curves and aging-related parameters were evaluated at each temperature for a given molecular weight. The time-temperature superposition technique helped to describe the effect of temperature on the timescale of the viscoelastic response of each molecular weight. It was shown that the low molecular weight materials have higher creep compliance and creep rate, and are more sensitive to temperature than the high molecular weight materials. Furthermore, a critical molecular weight transition was observed to occur at a weight-average molecular weight of Mw ≈ 25,000 g/mol, below which the temperature sensitivity of the time-temperature superposition shift factor increases rapidly. The short-term creep compliance data were used in association with Struik's effective time theory to predict the long-term creep compliance behavior for the different molecular weights. At long timescales, physical aging serves to significantly decrease the creep compliance and creep rate of all the materials tested.

  11. PROMIS Physical Function Computer Adaptive Test Compared With Other Upper Extremity Outcome Measures in the Evaluation of Proximal Humerus Fractures in Patients Older Than 60 Years.

    PubMed

    Morgan, Jordan H; Kallen, Michael A; Okike, Kanu; Lee, Olivia C; Vrahas, Mark S

    2015-06-01

    To compare the PROMIS Physical Function Computer Adaptive Test (PROMIS PF CAT) to commonly used traditional PF measures for the evaluation of patients with proximal humerus fractures. Prospective. Two Level I trauma centers. Forty-seven patients older than 60 years with displaced proximal humerus fractures treated between 2006 and 2009. Evaluation included completion of the PROMIS PF CAT, the Constant Shoulder Score, the Disabilities of the Arm, Shoulder, and Hand (DASH) and the Short Musculoskeletal Functional Assessment (SMFA). Observed correlations among the administered PF outcome measures. On average, patients responded to 86 outcome-related items for this study: 4 for the PROMIS PF CAT (range: 4-8 items), 6 for the Constant Shoulder Score, 30 for the DASH, and 46 for the SMFA. Time to complete the PROMIS PF CAT (median completion time = 98 seconds) was significantly less than that for the DASH (median completion time = 336 seconds, P < 0.001) and for the SMFA (median completion time = 482 seconds, P < 0.001). PROMIS PF CAT scores showed statistically significant, moderate-to-high correlations with all of the other PF outcome measures administered. This study suggests that using the PROMIS PF CAT as the sole PF outcome measure can yield an assessment of upper extremity function similar to those provided by traditional PF measures, while substantially reducing patient assessment time.

  12. [Case control study on the treatment of acromioclavicular dislocation with Endobutton plates combined with an anchor].

    PubMed

    Hu, Jin-Tao; Lu, Jian-Wei; Fu, Li-Feng

    2016-09-25

    To compare the clinical effects of Endobutton plates combined with an anchor versus the clavicle hook plate in the treatment of acromioclavicular dislocation. From January 2012 to August 2014, 83 patients with Rockwood type III acromioclavicular dislocation underwent surgical treatment. Among them, 34 patients were treated with Endobutton plate and anchor repair (Endobutton group), including 23 males and 11 females; the mean age was (39.0±6.3) years old (26 to 51 years old); the average time from injury to operation was (4.1±1.3) days (3 to 7 days); the injured side was left in 14 cases and right in 20; the dislocation was due to a fall in 28 patients and to a traffic accident in 6. There were 49 patients treated with the clavicular hook plate (hook plate group), including 33 males and 16 females; the mean age was (37.9±6.3) years old (27 to 53 years old); the average time from injury to operation was (4.1±1.1) days (2 to 7 days); the injured side was left in 18 cases and right in 31; the dislocation was due to a fall in 36 patients and to a traffic accident in 13. Indexes such as intraoperative bleeding volume, operation time, incision size, postoperative complications, postoperative coracoclavicular space, shoulder joint function, and quality of life were compared between the two groups. In the hook plate group of 49 patients, the plates were removed in 43 patients at a secondary operation, and 32 patients had shoulder pain or limited active range of motion. The 34 patients in the Endobutton group had no pain symptoms or limitation of active range of motion. No patient suffered acromioclavicular dislocation again. There was no significant difference between the two groups in operation time or intraoperative bleeding volume (P>0.05). The incision length in the hook plate group was longer than that in the Endobutton group (P<0.05). The coracoclavicular space of the uninjured and injured sides showed no significant difference within either group, and the coracoclavicular space on the injured side did not differ significantly between the two groups (P>0.05). There were no significant differences in Constant score or SF-36 between the two groups at 2 months after operation (P>0.05). Sixteen months after operation, the Constant score on the injured side of both groups was higher than the 2-month postoperative score, but the Constant score on the injured side in the hook plate group was higher than that in the Endobutton group (P<0.05). The Constant score on the uninjured side showed no significant difference between the two groups (P>0.05). In the hook plate group, the Constant score on the uninjured side was higher than that on the injured side; in the Endobutton group, there was no significant difference in Constant score between the two sides. The SF-36 on the injured side at 16 months after operation was higher than that at 2 months in both groups, but the 16-month postoperative SF-36 in the hook plate group was lower than that in the Endobutton group (P<0.05). The Endobutton plate combined with an anchor can effectively fix Rockwood type III or higher acute acromioclavicular dislocations. The method has fewer complications and avoids a second operation to remove the internal fixation.

  13. Viscoelastic shear zone model of a strike-slip earthquake cycle

    USGS Publications Warehouse

    Pollitz, F.F.

    2001-01-01

    I examine the behavior of a two-dimensional (2-D) strike-slip fault system embedded in a 1-D elastic layer (schizosphere) overlying a uniform viscoelastic half-space (plastosphere) and within the boundaries of a finite width shear zone. The viscoelastic coupling model of Savage and Prescott [1978] considers the viscoelastic response of this system, in the absence of the shear zone boundaries, to an earthquake occurring within the upper elastic layer, steady slip beneath a prescribed depth, and the superposition of the responses of multiple earthquakes with characteristic slip occurring at regular intervals. So formulated, the viscoelastic coupling model predicts that sufficiently long after initiation of the system, (1) average fault-parallel velocity at any point is the average slip rate of that side of the fault and (2) far-field velocities equal the same constant rate. Because of the sensitivity to the mechanical properties of the schizosphere-plastosphere system (i.e., elastic layer thickness, plastosphere viscosity), this model has been used to infer such properties from measurements of interseismic velocity. Such inferences exploit the predicted behavior at a known time within the earthquake cycle. By modifying the viscoelastic coupling model to satisfy the additional constraint that the absolute velocity at prescribed shear zone boundaries is constant, I find that even though the time-averaged behavior remains the same, the spatiotemporal pattern of surface deformation (particularly its temporal variation within an earthquake cycle) is markedly different from that predicted by the conventional viscoelastic coupling model. These differences are magnified as plastosphere viscosity is reduced or as the recurrence interval of periodic earthquakes is lengthened. Application to the interseismic velocity field along the Mojave section of the San Andreas fault suggests that the region behaves mechanically like a ~600-km-wide shear zone accommodating 50 mm/yr fault-parallel motion distributed between the San Andreas fault system and Eastern California Shear Zone. Copyright 2001 by the American Geophysical Union.
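
    For readers unfamiliar with the reference behaviour such models are compared against, the minimal sketch below evaluates the classical elastic (Savage-Burford style) interseismic velocity profile, v(x) = (V/pi) arctan(x/D). The slip rate and locking depth are illustrative values, and this is the elastic end-member, not Pollitz's shear-zone model.

```python
# Minimal sketch, not Pollitz's shear-zone model: the classical elastic
# (Savage-Burford) interseismic profile that viscoelastic coupling models
# reduce to when averaged over many earthquake cycles.  Values are illustrative.
import numpy as np

def interseismic_velocity(x_km, slip_rate_mm_yr=50.0, locking_depth_km=15.0):
    """Fault-parallel surface velocity at distance x from a locked strike-slip fault."""
    return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

for x in (-300, -50, -10, 0, 10, 50, 300):
    print(f"x = {x:+4d} km   v = {interseismic_velocity(x):+6.1f} mm/yr")
```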

  14. On determining dose rate constants spectroscopically.

    PubMed

    Rodriguez, M; Rogers, D W O

    2013-01-01

    To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of (125)I and (103)Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089-6104 (2010)] including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated (125)I and (103)Pd sources. Spectra generated by 14 (125)I and 6 (103)Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm(3) voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council of Radiation Protection and Measurements) Report 58 for the (125)I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for (103)Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. The ratio of the intensity of the 31 keV line relative to that of the main peak in (125)I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with TG-43U1 initial spectrum. The (103)Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different investigations are in much better agreement with the calculations using the NCRP Report 58 and NNDC(2000) initial spectra with average discrepancies of 0.9% and 1.7% for the (125)I and (103)Pd seeds, respectively. However, there are no differences in the calculated TG-43U1 brachytherapy parameters using either initial spectrum in both cases. Similarly, there were no differences outside the statistical uncertainties of 0.1% or 0.2%, in the average energy, air kerma/history, dose rate/history, and dose rate constant when calculated using either the full photon spectrum or the main-peaks-only spectrum. Our calculated dose rate constants based on using the calculated on-axis spectrum and a line or dual-point source model are in excellent agreement (0.5% on average) with the values of Chen and Nath, verifying the accuracy of their more approximate method of going from the spectrum to the dose rate constant. However, the dose rate constants based on full seed models differ by between +4.6% and -1.5% from those based on the line or dual-point source approximations. These results suggest that the main value of spectroscopic measurements is to verify full Monte Carlo models of the seeds by comparison to the calculated spectra.
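
    The bookkeeping behind the TG-43 dose rate constant used in the abstract is a ratio of two per-history Monte Carlo tallies; the sketch below shows that arithmetic with placeholder numbers. The tally values are assumptions for illustration, not BrachyDose output.

```python
# Minimal sketch of the TG-43 dose rate constant bookkeeping: Lambda equals the
# dose rate to water at r0 = 1 cm on the transverse axis divided by the
# air-kerma strength S_K.  Per-history tallies below are illustrative
# placeholders only.
dose_per_history_at_1cm = 2.50e-14      # Gy / history      (assumed tally)
air_kerma_per_history   = 2.20e-14      # Gy cm^2 / history (assumed tally)

# Per-history normalisation cancels, leaving units of cGy h^-1 U^-1 (U = cGy cm^2 h^-1).
dose_rate_constant = dose_per_history_at_1cm / air_kerma_per_history
print(f"Lambda = {dose_rate_constant:.3f} cGy h^-1 U^-1")
```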

  15. Replacing dark energy by silent virialisation

    NASA Astrophysics Data System (ADS)

    Roukema, Boudewijn F.

    2018-02-01

    Context. Standard cosmological N-body simulations have background scale factor evolution that is decoupled from non-linear structure formation. Prior to gravitational collapse, kinematical backreaction (𝒬𝒟) justifies this approach in a Newtonian context. Aims: However, the final stages of a gravitational collapse event are sudden; a globally imposed smooth expansion rate forces at least one expanding region to suddenly and instantaneously decelerate in compensation for the virialisation event. This is relativistically unrealistic. A more conservative hypothesis is to allow non-collapsed domains to continue their volume evolution according to the 𝒬𝒟 Zel'dovich approximation (QZA). We aim to study the inferred average expansion under this "silent" virialisation hypothesis. Methods: We set standard (MPGRAFIC) EdS 3-torus (T3) cosmological N-body initial conditions. Using RAMSES, we partitioned the volume into domains and called the DTFE library to estimate the per-domain initial values of the three invariants of the extrinsic curvature tensor that determine the QZA. We integrated the Raychaudhuri equation in each domain using the INHOMOG library, and adopted the stable clustering hypothesis to represent virialisation (VQZA). We spatially averaged to obtain the effective global scale factor. We adopted an early-epoch-normalised EdS reference-model Hubble constant H1EdS = 37.7 km s-1 /Mpc and an effective Hubble constant Heff,0 = 67.7 km s-1 /Mpc. Results: From 2000 simulations at resolution 256^3, we find that reaching a unity effective scale factor at 13.8 Gyr (16% above EdS) occurs for an averaging scale of L13.8 = 2.5 ± 0.1 Mpc/heff. Relativistically interpreted, this corresponds to strong average negative curvature evolution, with the mean (median) curvature functional Ωℛ𝒟 growing from zero to about 1.5-2 by the present. Over 100 realisations, the virialisation fraction and super-EdS expansion correlate strongly at fixed cosmological time. Conclusions. Thus, starting from EdS initial conditions and averaging on a typical non-linear structure formation scale, the VQZA dark-energy-free average expansion matches ΛCDM expansion to first order. The software packages used here are free-licensed.
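
    A heavily simplified illustration of the averaging step described above follows: per-domain scale factors are combined through a volume-weighted (cube-root-of-mean-cube) average, with collapsed domains frozen at a toy virialisation value as a stand-in for stable clustering. All numbers are invented; this is not the RAMSES/DTFE/INHOMOG pipeline.

```python
# Minimal sketch (invented numbers, not the paper's pipeline): a volume-weighted
# effective scale factor from per-domain scale factors, with collapsed domains
# frozen at their virialisation volume as a stable-clustering stand-in.
import numpy as np

rng = np.random.default_rng(0)
a_domain = rng.normal(loc=1.0, scale=0.2, size=1000)   # per-domain scale factors
a_domain = np.clip(a_domain, 0.3, None)
virialised = a_domain < 0.6                            # toy collapse criterion
a_domain[virialised] = 0.6                             # freeze at virialisation

a_eff = np.mean(a_domain ** 3) ** (1.0 / 3.0)          # volume-weighted average
print(f"virialisation fraction = {virialised.mean():.2%},  a_eff = {a_eff:.3f}")
```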

  16. 47 CFR 15.403 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... power control level. Power must be summed across all antennas and antenna elements. The average must not... symbols, during which the average symbol envelope power is constant. (q) RLAN. Radio Local Area Network. (r) Transmit Power Control (TPC). A feature that enables a U-NII device to dynamically switch between...

  17. Experimental measurement of self-diffusion in a strongly coupled plasma

    DOE PAGES

    Strickler, Trevor S.; Langin, Thomas K.; McQuillen, Paul; ...

    2016-05-17

    Here, we present a study of the collisional relaxation of ion velocities in a strongly coupled, ultracold neutral plasma on short time scales compared to the inverse collision rate. The measured average velocity of a tagged population of ions is shown to be equivalent to the ion-velocity autocorrelation function. We thus gain access to fundamental aspects of the single-particle dynamics in strongly coupled plasmas and to the ion self-diffusion constant under conditions where experimental measurements have been lacking. Nonexponential decay towards equilibrium of the average velocity heralds non-Markovian dynamics that are not predicted by traditional descriptions of weakly coupled plasmas. This demonstrates the utility of ultracold neutral plasmas for studying the effects of strong coupling on collisional processes, which is of interest for dense laboratory and astrophysical plasmas.
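
    The connection between a velocity autocorrelation function and a self-diffusion constant alluded to above is the Green-Kubo integral D = integral of Z(t) from 0 to infinity (per velocity component). The sketch below evaluates it for a synthetic exponential Z(t); the sampling interval, thermal velocity and relaxation time are assumptions, not the measured ultracold-plasma values.

```python
# Minimal Green-Kubo sketch: self-diffusion constant as the time integral of a
# velocity autocorrelation function Z(t).  The exponential Z(t) below is a
# synthetic stand-in for the measured, possibly non-exponential, relaxation.
import numpy as np

dt = 1e-9                                   # s, sampling interval (assumed)
t = np.arange(0.0, 2e-6, dt)
v_thermal_sq = 1.0e-2                       # <v_x^2> in m^2/s^2 (assumed)
tau = 2.0e-7                                # velocity relaxation time, s (assumed)
Z = v_thermal_sq * np.exp(-t / tau)         # one-component VACF, m^2/s^2

D = np.sum(0.5 * (Z[1:] + Z[:-1])) * dt     # trapezoidal integral of Z(t)
print(f"D ~ {D:.2e} m^2/s   (analytic limit: {v_thermal_sq * tau:.2e})")
```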

  18. Relationship between dynamical entropy and energy dissipation far from thermodynamic equilibrium.

    PubMed

    Green, Jason R; Costa, Anthony B; Grzybowski, Bartosz A; Szleifer, Igal

    2013-10-08

    Connections between microscopic dynamical observables and macroscopic nonequilibrium (NE) properties have been pursued in statistical physics since Boltzmann, Gibbs, and Maxwell. The simulations we describe here establish a relationship between the Kolmogorov-Sinai entropy and the energy dissipated as heat from a NE system to its environment. First, we show that the Kolmogorov-Sinai or dynamical entropy can be separated into system and bath components and that the entropy of the system characterizes the dynamics of energy dissipation. Second, we find that the average change in the system dynamical entropy is linearly related to the average change in the energy dissipated to the bath. The constant energy and time scales of the bath fix the dynamical relationship between these two quantities. These results provide a link between microscopic dynamical variables and the macroscopic energetics of NE processes.

  19. Relationship between dynamical entropy and energy dissipation far from thermodynamic equilibrium

    PubMed Central

    Green, Jason R.; Costa, Anthony B.; Grzybowski, Bartosz A.; Szleifer, Igal

    2013-01-01

    Connections between microscopic dynamical observables and macroscopic nonequilibrium (NE) properties have been pursued in statistical physics since Boltzmann, Gibbs, and Maxwell. The simulations we describe here establish a relationship between the Kolmogorov–Sinai entropy and the energy dissipated as heat from a NE system to its environment. First, we show that the Kolmogorov–Sinai or dynamical entropy can be separated into system and bath components and that the entropy of the system characterizes the dynamics of energy dissipation. Second, we find that the average change in the system dynamical entropy is linearly related to the average change in the energy dissipated to the bath. The constant energy and time scales of the bath fix the dynamical relationship between these two quantities. These results provide a link between microscopic dynamical variables and the macroscopic energetics of NE processes. PMID:24065832

  20. Dynamic analysis of a circulating fluidized bed riser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panday, Rupen; Shadle, Lawrence J.; Guenther, Chris

    2012-01-01

    A linear state model is proposed to analyze dynamic behavior of a circulating fluidized bed riser. Different operating regimes were attained with high density polyethylene beads at low and high system inventories. The riser was operated between the classical choking velocity and the upper transport velocity demarcating fast fluidized and transport regimes. At a given riser superficial gas velocity, the aerations fed at the standpipe were modulated resulting in a sinusoidal solids circulation rate that goes into the riser via L-valve. The state model was derived based on the mass balance equation in the riser. It treats the average solids fraction across the entire riser as a state variable. The total riser pressure drop was modeled using Newton’s second law of motion. The momentum balance equation involves contribution from the weight of solids and the wall friction caused by the solids to the riser pressure drop. The weight of solids utilizes the state variable and hence, the riser inventory could be easily calculated. The modeling problem boils down to estimating two parameters including solids friction coefficient and time constant of the riser. It has been shown that the wall friction force acts in the upward direction in fast fluidized regime which indicates that the solids were moving downwards on the average with respect to the riser wall. In transport regimes, the friction acts in the opposite direction. This behavior was quantified based on a sign of Fanning friction factor in the momentum balance equation. The time constant of the riser appears to be much higher in fast fluidized regime than in transport conditions.
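
    A minimal sketch of this kind of first-order state model is given below: the riser-average solids fraction responds to a sinusoidally modulated solids feed with a single time constant, and the pressure drop is taken as solids weight plus a signed wall-friction term. The gain, time constant, geometry and friction fraction are assumptions, not the study's fitted parameters.

```python
# Minimal sketch (assumed parameters, not the fitted model): first-order state
# equation for the riser-average solids fraction driven by a sinusoidal solids
# feed, with pressure drop = solids weight plus a signed wall-friction term.
import math

tau   = 8.0       # riser time constant, s             (assumed)
gain  = 0.002     # solids fraction per (kg/s) of feed (assumed)
f_w   = -0.1      # wall-friction fraction of weight; negative = upward friction
H, g, rho_s = 15.0, 9.81, 950.0     # riser height (m), gravity, HDPE density

eps, dt = 0.0, 0.01
for i in range(int(60.0 / dt)):                           # 60 s of simulated time
    t = i * dt
    feed = 2.0 + 0.5 * math.sin(2 * math.pi * t / 20.0)   # kg/s, modulated L-valve
    eps += dt * (-eps + gain * feed) / tau                # first-order state update
    if i % 1000 == 0:                                     # report every 10 s
        dP = (1.0 + f_w) * rho_s * eps * g * H            # Pa, weight + friction
        print(f"t = {t:5.1f} s   eps = {eps:.4f}   dP = {dP:7.1f} Pa")
```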

  1. Study on coal char ignition by radiant heat flux.

    NASA Astrophysics Data System (ADS)

    Korotkikh, A. G.; Slyusarskiy, K. V.

    2017-11-01

    A study of coal char ignition by a continuous CO2 laser was carried out. Char samples of T-grade bituminous coal and 2B-grade lignite were studied with a CO2-laser ignition setup. Ignition delay times were determined at ambient conditions over a heat flux density range of 90-200 W/cm2. The average ignition delay time for the lignite samples was about 2 times lower than for the bituminous coal char; the difference was larger at high heat fluxes and smaller at low heat fluxes. The kinetic constants of the overall oxidation reaction were determined using an analytic solution of a simplified one-dimensional heat transfer equation with a radiant heat transfer boundary condition. The activation energy for the lignite char was found to be approximately 20% lower than that for the bituminous coal char.

  2. Lyra’s cosmology of hybrid universe in Bianchi-V space-time

    NASA Astrophysics Data System (ADS)

    Yadav, Anil Kumar; Bhardwaj, Vinod Kumar

    2018-06-01

    In this paper we have searched for the existence of Lyra’s cosmology in a hybrid universe with minimal interaction between dark energy and normal matter using Bianchi-V space-time. To derive the exact solution, the average scale factor is taken as a = (t^n e^{kt})^{1/m}, which describes the hybrid nature of the scale factor and generates a model of a universe transitioning from the early deceleration phase to the present acceleration phase. The quintessence model makes the matter content of the derived universe remarkably able to satisfy the null, dominant and strong energy conditions. It has been found that the time-varying displacement β(t) correlates with the nature of the cosmological constant Λ(t). We also discuss some physical and geometrical features of the universe.
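
    As a quick consistency check (not taken from the paper), the deceleration parameter implied by this hybrid scale factor can be written down directly; it is positive at early times and tends to -1 at late times, reproducing the stated deceleration-to-acceleration transition.

```latex
% Hubble and deceleration parameters for the hybrid scale factor above:
\[
  a = \left(t^{\,n} e^{kt}\right)^{1/m},\qquad
  H \equiv \frac{\dot a}{a} = \frac{1}{m}\left(\frac{n}{t}+k\right),\qquad
  q \equiv -\frac{a\ddot a}{\dot a^{2}} = -1 + \frac{mn}{(n+kt)^{2}} .
\]
% q > 0 (deceleration) at early times and q -> -1 at late times, with the
% transition at t = (\sqrt{mn} - n)/k, provided mn > n^2.
```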

  3. Equilibrium energy spectrum of point vortex motion with remarks on ensemble choice and ergodicity

    NASA Astrophysics Data System (ADS)

    Esler, J. G.

    2017-01-01

    The dynamics and statistical mechanics of N chaotically evolving point vortices in the doubly periodic domain are revisited. The selection of the correct microcanonical ensemble for the system is first investigated. The numerical results of Weiss and McWilliams [Phys. Fluids A 3, 835 (1991), 10.1063/1.858014], who argued that the point vortex system with N =6 is nonergodic because of an apparent discrepancy between ensemble averages and dynamical time averages, are shown to be due to an incorrect ensemble definition. When the correct microcanonical ensemble is sampled, accounting for the vortex momentum constraint, time averages obtained from direct numerical simulation agree with ensemble averages within the sampling error of each calculation, i.e., there is no numerical evidence for nonergodicity. Further, in the N →∞ limit it is shown that the vortex momentum no longer constrains the long-time dynamics and therefore that the correct microcanonical ensemble for statistical mechanics is that associated with the entire constant energy hypersurface in phase space. Next, a recently developed technique is used to generate an explicit formula for the density of states function for the system, including for arbitrary distributions of vortex circulations. Exact formulas for the equilibrium energy spectrum, and for the probability density function of the energy in each Fourier mode, are then obtained. Results are compared with a series of direct numerical simulations with N =50 and excellent agreement is found, confirming the relevance of the results for interpretation of quantum and classical two-dimensional turbulence.

  4. The Flux Variability of Markarian 501 in Very High Energy Gamma Rays

    NASA Astrophysics Data System (ADS)

    Quinn, J.; Bond, I. H.; Boyle, P. J.; Bradbury, S. M.; Breslin, A. C.; Buckley, J. H.; Burdett, A. M.; Gordo, J. Bussons; Carter-Lewis, D. A.; Catanese, M.; Cawley, M. F.; Fegan, D. J.; Finley, J. P.; Gaidos, J. A.; Hall, T.; Hillas, A. M.; Krennrich, F.; Lamb, R. C.; Lessard, R. W.; Masterson, C.; McEnery, J. E.; Moriarty, P.; Rodgers, A. J.; Rose, H. J.; Samuelson, F. W.; Sembroski, G. H.; Srinivasan, R.; Vassiliev, V. V.; Weekes, T. C.

    1999-06-01

    The BL Lacertae object Markarian 501 was identified as a source of γ-ray emission at the Whipple Observatory in 1995 March. Here we present a flux variability analysis on several timescales of the 233 hr data set accumulated over 213 nights (from March 1995 to July 1998) with the Whipple Observatory 10 m atmospheric Cerenkov imaging telescope. In 1995, with the exception of a single night, the flux from Markarian 501 was constant on daily and monthly timescales and had an average flux of only 10% that of the Crab Nebula, making it the weakest very high energy source detected to date. In 1996, the average flux was approximately twice the 1995 flux and showed significant month-to-month variability. No significant day-scale variations were detected. The average γ-ray flux above ~350 GeV in the 1997 observing season rose to 1.4 times that of the Crab Nebula--14 times the 1995 discovery level--allowing a search for variability on timescales shorter than 1 day. Significant hour-scale variability was present in the 1997 data, with the shortest, observed on MJD 50,607, having a doubling time of ~2 hr. In 1998 the average emission level decreased considerably from that of 1997 (to ~20% of the Crab Nebula flux), but two significant flaring events were observed. Thus the emission from Markarian 501 shows large amplitude and rapid flux variability at very high energies, as does Markarian 421. It also shows large mean flux level variations on year-to-year timescales, behavior that has not been seen from Markarian 421 so far.

  5. Comparative effects of constant versus fluctuating thermal regimens on yellow perch growth, feed conversion and survival

    USDA-ARS?s Scientific Manuscript database

    The effects of fluctuating or constant thermal regimens on growth, mortality, and feed conversion were determined for juvenile yellow perch (Perca flavescens). Yellow perch averaging 156mm total length and 43g body weight were held in replicate 288L circular tanks for 129 days under: 1) a diel therm...

  6. Mechanistic Studies of Human Spermine Oxidase: Kinetic Mechanism and pH Effects†

    PubMed Central

    Adachi, Maria S.; Juarez, Paul R.; Fitzpatrick, Paul F.

    2009-01-01

    In mammalian cells, the flavoprotein spermine oxidase (SMO) catalyzes the oxidation of spermine to spermidine and 3-aminopropanal. Mechanistic studies have been carried out with the recombinant human enzyme. The initial velocity pattern when the ratio between the concentrations of spermine and oxygen is kept constant establishes the steady-state kinetic pattern as ping-pong. Reduction of SMO by spermine in the absence of oxygen is biphasic. The rate constant for the rapid phase varies with the substrate concentration, with a limiting value (k3) of 49 s−1 and an apparent Kd value of 48 µM at pH 8.3. The rate constant for the slow step is independent of the spermine concentration, with a value of 5.5 s−1, comparable to the kcat value of 6.6 s−1. The kinetics of the oxidative half-reaction depend on the aging time after spermine and enzyme are mixed in a double mixing experiment. At an aging time of 6 s the reaction is monophasic with a second order rate constant of 4.2 mM−1 s−1. At an aging time of 0.3 s the reaction is biphasic with two second order constants equal to 4.0 and 40 mM−1 s−1. Neither is equal to the kcat/KO2 value of 13 mM−1s−1. These results establish the existence of more than one pathway for the reaction of the reduced flavin intermediate with oxygen. The kcat/KM value for spermine exhibits a bell-shaped pH-profile, with an average pKa value of 8.3. This profile is consistent with the active form of spermine having three charged nitrogens. The pH profile for k3 shows a pKa value of 7.4 for a group that must be unprotonated. The pKi-pH profiles for the competitive inhibitors N,N’-dibenzylbutane-1,4-diamine and spermidine show that the fully protonated forms of the inhibitors and the unprotonated form of an amino acid residue with a pKa of about 7.4 in the active site are preferred for binding. PMID:20000632
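
    The saturable fast phase described above follows the usual hyperbolic form k_obs = k3[S]/(Kd + [S]). The short sketch below evaluates it with the k3 and Kd values quoted in the abstract; the spermine concentrations are arbitrary illustration points.

```python
# Minimal sketch of the hyperbolic dependence described in the abstract: the
# fast-phase flavin-reduction rate saturates with spermine concentration,
# k_obs = k3*[S] / (Kd + [S]), using the reported k3 = 49 s^-1 and Kd = 48 uM.
k3, Kd = 49.0, 48.0          # s^-1 and uM, values quoted in the abstract

for S in (10.0, 48.0, 200.0, 1000.0):        # spermine concentration, uM
    k_obs = k3 * S / (Kd + S)
    print(f"[spermine] = {S:6.1f} uM  ->  k_obs = {k_obs:5.1f} s^-1")
```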

  7. Gas sorption and barrier properties of polymeric membranes from molecular dynamics and Monte Carlo simulations.

    PubMed

    Cozmuta, Ioana; Blanco, Mario; Goddard, William A

    2007-03-29

    It is important for many industrial processes to design new materials with improved selective permeability properties. Besides diffusion, the molecule's solubility contributes largely to the overall permeation process. This study presents a method to calculate solubility coefficients of gases such as O2, H2O (vapor), N2, and CO2 in polymeric matrices from simulation methods (Molecular Dynamics and Monte Carlo) using first-principles predictions. The generation and equilibration (annealing) of five polymer models (polypropylene, polyvinyl alcohol, polyvinyl dichloride, polyvinyl chloride-trifluoroethylene, and polyethylene terephthalate) are extensively described. For each polymer, the average density and Hansen solubilities over a set of ten samples compare well with experimental data. For polyethylene terephthalate, the average properties between a small (n = 10) and a large (n = 100) set are compared. Boltzmann averages and probability density distributions of binding and strain energies indicate that the smaller set is biased in sampling configurations with higher energies. However, the sample with the lowest cohesive energy density from the smaller set is representative of the average of the larger set. Density-wise, low molecular weight polymers tend to have on average lower densities. Infinite molecular weight samples do however provide a very good representation of the experimental density. Solubility constants calculated with two ensembles (grand canonical and Henry's constant) are equivalent within 20%. For each polymer sample, the solubility constant is then calculated using the faster (10x) Henry's constant ensemble (HCE) from 150 ps of NPT dynamics of the polymer matrix. The influence of various factors (bad contact fraction, number of iterations) on the accuracy of Henry's constant is discussed. To validate the calculations against experimental results, the solubilities of nitrogen and carbon dioxide in polypropylene are examined over a range of temperatures between 250 and 650 K. The magnitudes of the calculated solubilities agree well with experimental results, and the trends with temperature are predicted correctly. The HCE method is used to predict the solubility constants at 298 K of water vapor and oxygen. The water vapor solubilities follow more closely the experimental trend of permeabilities, both ranging over 4 orders of magnitude. For oxygen, the calculated values do not follow entirely the experimental trend of permeabilities, most probably because at this temperature some of the polymers are in the glassy regime and thus are diffusion dominated. Our study also concludes that large confidence limits are associated with the calculated Henry's constants. By investigating several factors (terminal ends of the polymer chains, void distribution, etc.), we conclude that the large confidence limits are intimately related to the polymer's conformational changes caused by thermal fluctuations and have to be regarded--at least at microscale--as a characteristic of each polymer and the nature of its interaction with the solute. Reducing the mobility of the polymer matrix as well as controlling the distribution of the free (occupiable) volume would act as mechanisms toward lowering both the gas solubility and the diffusion coefficients.
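
    As a generic illustration of the idea behind a Henry's-constant (test-particle) estimate, the sketch below Boltzmann-averages toy insertion energies to obtain an excess chemical potential. The energy distribution and temperature factor are stand-ins; this is not the HCE implementation or force field used in the study.

```python
# Minimal, generic Widom-style sketch of the idea behind a Henry's-constant
# estimate: the solubility follows from the Boltzmann-averaged insertion energy
# of a test solute in frozen polymer snapshots.  The energy model and numbers
# here are stand-ins, not the HCE implementation referenced above.
import math, random

kB_T = 0.593        # kcal/mol at ~298 K
random.seed(1)

def insertion_energy():
    """Toy interaction energy (kcal/mol) of one random test insertion."""
    return random.gauss(1.5, 2.0)        # mostly repulsive, occasional good sites

n_trials = 100_000
boltz_sum = sum(math.exp(-insertion_energy() / kB_T) for _ in range(n_trials))
widom_average = boltz_sum / n_trials     # ~ exp(-mu_excess / kB*T)

mu_excess = -kB_T * math.log(widom_average)
print(f"<exp(-dU/kT)> = {widom_average:.3f}   mu_excess ~ {mu_excess:+.2f} kcal/mol")
```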

  8. Design verification of large time constant thermal shields for optical reference cavities.

    PubMed

    Zhang, J; Wu, W; Shi, X H; Zeng, X Y; Deng, K; Lu, Z H

    2016-02-01

    In order to achieve high frequency stability in ultra-stable lasers, the Fabry-Pérot reference cavities need to be placed inside vacuum chambers with large thermal time constants to reduce the sensitivity to external temperature fluctuations. Currently, the determination of thermal time constants of vacuum chambers is based either on theoretical calculation or time-consuming experiments. The first method applies only to simple systems, while the second requires a great deal of time to try out different designs. To overcome these limitations, we present thermal time constant simulation using finite element analysis (FEA) based on complete vacuum chamber models and verify the results with measured time constants. We measure the thermal time constants using ultra-stable laser systems and a frequency comb. The thermal expansion coefficients of optical reference cavities are precisely measured to reduce the measurement error of time constants. The simulation results and the experimental results agree very well. With this knowledge, we simulate several simplified design models using FEA to obtain larger vacuum thermal time constants at room temperature, taking into account vacuum pressure, shielding layers, and support structure. We adopt the Taguchi method for shielding layer optimization and demonstrate that layer material and layer number dominate the contributions to the thermal time constant, compared with layer thickness and layer spacing.
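
    A back-of-the-envelope counterpart to the FEA described above is the lumped-parameter estimate tau = C*R for one radiatively coupled shield layer in vacuum; the sketch below uses assumed geometry, emissivity and material values purely for illustration.

```python
# Minimal lumped-parameter sketch (not the FEA model): thermal time constant of
# one shield layer radiatively coupled to its surroundings in vacuum,
# tau = C * R with a linearised radiative resistance.  All values are assumed.
sigma = 5.670e-8          # W m^-2 K^-4, Stefan-Boltzmann constant
T     = 293.0             # K, operating temperature
A     = 0.25              # m^2, shield surface area (assumed)
eps   = 0.05              # effective emissivity of a polished shield (assumed)
m, cp = 3.0, 900.0        # kg and J/(kg K) for an aluminium layer (assumed)

h_rad = 4.0 * eps * sigma * T**3          # linearised radiative coefficient, W/(m^2 K)
R_th  = 1.0 / (h_rad * A)                 # K/W
C_th  = m * cp                            # J/K
tau   = R_th * C_th
print(f"h_rad = {h_rad:.3f} W/m^2K   tau = {tau/3600:.1f} h per layer")
```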

  9. Monte carlo simulation of vesicular release, spatiotemporal distribution of glutamate in synaptic cleft and generation of postsynaptic currents.

    PubMed

    Glavinovíc, M I

    1999-02-01

    The release of vesicular glutamate, spatiotemporal changes in glutamate concentration in the synaptic cleft and the subsequent generation of fast excitatory postsynaptic currents at a hippocampal synapse were modeled using the Monte Carlo method. It is assumed that glutamate is released from a spherical vesicle through a cylindrical fusion pore into the synaptic cleft and that S-alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors are uniformly distributed postsynaptically. The time course of change in vesicular concentration can be described by a single exponential, but a slow tail is also observed though only following the release of most of the glutamate. The time constant of decay increases with vesicular size and a lower diffusion constant, and is independent of the initial concentration, becoming markedly shorter for wider fusion pores. The cleft concentration at the fusion pore mouth is not negligible compared to vesicular concentration, especially for wider fusion pores. Lateral equilibration of glutamate is rapid, and within approximately 50 microseconds all AMPA receptors on average see the same concentration of glutamate. Nevertheless the single-channel current and the number of channels estimated from mean-variance plots are unreliable and different when estimated from rise- and decay-current segments. Greater saturation of AMPA receptor channels provides higher but not more accurate estimates. Two factors contribute to the variability of postsynaptic currents and render the mean-variance nonstationary analysis unreliable, even when all receptors see on average the same glutamate concentration. Firstly, the variability of the instantaneous cleft concentration of glutamate, unlike the mean concentration, first rapidly decreases before slowly increasing; the variability is greater for fewer molecules in the cleft and is spatially nonuniform. Secondly, the efficacy with which glutamate produces a response changes with time. Understanding the factors that determine the time course of vesicular content release as well as the spatiotemporal changes of glutamate concentration in the cleft is crucial for understanding the mechanism that generates postsynaptic currents.

  10. [A study of proximal humerus fractures using close reduction and percutaneous minimally invasive fixation].

    PubMed

    Liu, Yin-wen; Kuang, Yong; Gu, Xin-feng; Zheng, Yu-xin; Li, Zhi-qiang; Wei, Xiao-en; Lu, Wei-da; Zhan, Hong-sheng; Shi, Yin-yu

    2011-11-01

    To investigate the clinical effects of closed reduction and percutaneous minimally invasive fixation in the treatment of proximal humerus fractures. From April 2008 to March 2010, 28 patients with proximal humerus fracture were treated with closed reduction and percutaneous minimally invasive fixation. There were 21 males and 7 females, ranging in age from 22 to 78 years, with an average of 42.6 years. The mean time from injury to operation was 1.7 d. Nineteen cases were caused by falls and 9 cases by traffic accidents. The main clinical manifestations were swelling, pain and limited mobility of the shoulder. According to the Neer classification, there were 17 two-part fractures and 11 three-part fractures. A locking proximal humerus plate was used for minimally invasive fixation through the deltoid muscle under the acromion. The operating time, volume of blood loss, length of incision and Constant-Murley assessment were applied to evaluate the therapeutic effects. The mean operating time was 40 min, the mean blood loss was 110 ml, and the mean length of incision was about 5.6 cm. The postoperative X-ray showed excellent reduction, and the plate and screws were successfully placed. Twenty-eight patients were followed up for 6 to 24 months (average 14.2 months). The healing time ranged from 6 to 8 weeks and all incisions healed primarily. There were no cases of humeral head necrosis; 24 cases had no shoulder pain, and 4 cases had occasional shoulder pain. All patients could carry out activities of daily living. The mean score of the Constant-Murley assessment was 91.0 +/- 5.8; 24 cases got an excellent result, 3 good and 1 fair. Closed reduction and percutaneous minimally invasive fixation not only reduces surgical invasiveness but also allows early functional activity. It has the advantages of being less invasive, providing stable fixation and causing less damage to the blood supply.

  11. Very-High-Energy γ-Ray Observations of the Blazar 1ES 2344+514 with VERITAS

    NASA Astrophysics Data System (ADS)

    Allen, C.; Archambault, S.; Archer, A.; Benbow, W.; Bird, R.; Bourbeau, E.; Brose, R.; Buchovecky, M.; Buckley, J. H.; Bugaev, V.; Cardenzana, J. V.; Cerruti, M.; Chen, X.; Christiansen, J. L.; Connolly, M. P.; Cui, W.; Daniel, M. K.; Eisch, J. D.; Falcone, A.; Feng, Q.; Fernandez-Alonso, M.; Finley, J. P.; Fleischhack, H.; Flinders, A.; Fortson, L.; Furniss, A.; Gillanders, G. H.; Griffin, S.; Grube, J.; Hütten, M.; Håkansson, N.; Hanna, D.; Hervet, O.; Holder, J.; Hughes, G.; Humensky, T. B.; Johnson, C. A.; Kaaret, P.; Kar, P.; Kelley-Hoskins, N.; Kertzman, M.; Kieda, D.; Krause, M.; Krennrich, F.; Kumar, S.; Lang, M. J.; Maier, G.; McArthur, S.; McCann, A.; Meagher, K.; Moriarty, P.; Mukherjee, R.; Nguyen, T.; Nieto, D.; O'Brien, S.; de Bhróithe, A. O'Faoláin; Ong, R. A.; Otte, A. N.; Park, N.; Petrashyk, A.; Pichel, A.; Pohl, M.; Popkow, A.; Pueschel, E.; Quinn, J.; Ragan, K.; Reynolds, P. T.; Richards, G. T.; Roache, E.; Rovero, A. C.; Rulten, C.; Sadeh, I.; Santander, M.; Sembroski, G. H.; Shahinyan, K.; Telezhinsky, I.; Tucci, J. V.; Tyler, J.; Wakely, S. P.; Weinstein, A.; Wilhelm, A.; Williams, D. A.

    2017-10-01

    We present very-high-energy γ-ray observations of the BL Lac object 1ES 2344+514 taken by the Very Energetic Radiation Imaging Telescope Array System between 2007 and 2015. 1ES 2344+514 is detected with a statistical significance above the background of 20.8σ in 47.2 h (livetime) of observations, making this the most comprehensive very-high-energy study of 1ES 2344+514 to date. Using these observations, the temporal properties of 1ES 2344+514 are studied on short and long time-scales. We fit a constant-flux model to nightly and seasonally binned light curves and apply a fractional variability test to determine the stability of the source on different time-scales. We reject the constant-flux model for the 2007-2008 and 2014-2015 nightly binned light curves and for the long-term seasonally binned light curve at the >3σ level. The spectra of the time-averaged emission before and after correction for attenuation by the extragalactic background light are obtained. The observed time-averaged spectrum above 200 GeV is satisfactorily fitted (χ2/NDF = 7.89/6) by a power-law function with an index Γ = 2.46 ± 0.06stat ± 0.20sys and extends to at least 8 TeV. The extragalactic-background-light-deabsorbed spectrum is adequately fit (χ2/NDF = 6.73/6) by a power-law function with an index Γ = 2.15 ± 0.06stat ± 0.20sys while an F-test indicates that the power law with an exponential cut-off function provides a marginally better fit (χ2/NDF = 2.56/5) at the 2.1σ level. The source location is found to be consistent with the published radio location and its spatial extent is consistent with a point source.
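
    The constant-flux test mentioned above amounts to fitting a weighted-mean flux to a binned light curve and quoting chi^2/NDF; the sketch below shows that computation on invented flux points, not VERITAS data.

```python
# Minimal sketch of a constant-flux test: fit a constant to a binned light
# curve by its weighted mean and evaluate chi^2/NDF.  Flux points and errors
# below are invented placeholders.
import numpy as np

flux = np.array([2.1, 1.8, 3.5, 2.0, 5.2, 1.9])      # arbitrary flux units
err  = np.array([0.4, 0.5, 0.6, 0.4, 0.7, 0.5])

w         = 1.0 / err**2
mean_flux = np.sum(w * flux) / np.sum(w)             # best-fit constant
chi2      = np.sum(((flux - mean_flux) / err) ** 2)
ndf       = len(flux) - 1
print(f"constant = {mean_flux:.2f}, chi2/NDF = {chi2:.1f}/{ndf}")
```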

  12. Response of the human vestibulo-ocular reflex system to constant angular acceleration. I. Theoretical study.

    PubMed

    Boumans, L J; Rodenburg, M; Maas, A J

    1983-01-01

    The response of the human vestibulo-ocular reflex system to a constant angular acceleration is calculated using a second order model with an adaptation term. After first reaching a maximum, the peracceleratory response declines. When the stimulus duration is long, the decay is mainly governed by the adaptation time constant Ta, which makes it possible to estimate this time constant reliably. In the postacceleratory period of constant velocity there is a reversal in the response. The magnitude and the time course of the per- and postacceleratory responses are calculated for various values of the cupular time constant T1, the adaptation time constant Ta, and the stimulus duration, thus enabling their influence to be assessed.
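
    To make the described behaviour concrete, the sketch below passes a constant-acceleration head-velocity ramp through two first-order high-pass stages (cupular and adaptation), a common simplified reading of such second-order-with-adaptation models. The time constants, acceleration and durations are illustrative assumptions, not the paper's values.

```python
# Minimal sketch (illustrative parameters, not the paper's model fit): slow-
# phase response to a constant angular acceleration, modelled as head velocity
# (a ramp) passed through two first-order high-pass stages with cupular time
# constant T1 and adaptation time constant Ta.
import numpy as np

T1, Ta   = 15.0, 80.0        # s, cupular and adaptation time constants (assumed)
alpha    = 2.0               # deg/s^2, constant acceleration (assumed)
t_accel  = 120.0             # s, stimulus duration (assumed)
dt       = 0.05
t        = np.arange(0.0, 300.0, dt)
head_vel = np.where(t < t_accel, alpha * t, alpha * t_accel)   # then constant velocity

def high_pass(x, tau):
    """Discrete first-order high-pass filter y/x = tau*s / (1 + tau*s)."""
    y, state = np.zeros_like(x), 0.0
    for i in range(1, len(x)):
        state += dt * (x[i] - state) / tau       # low-pass internal state
        y[i] = x[i] - state                      # high-pass = input - low-pass
    return y

response = high_pass(high_pass(head_vel, T1), Ta)
for ti in (10, 60, 119, 125, 180, 299):
    print(f"t = {ti:3d} s   slow-phase velocity ~ {response[int(ti/dt)]:+7.2f} deg/s")
```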

  13. The effect of age on the racing speed of Thoroughbred racehorses

    PubMed Central

    TAKAHASHI, Toshiyuki

    2015-01-01

    The running performance of Thoroughbred racehorses has been reported to peak when they are between 4 and 5 years old. However, changes in their racing speed by month or season have not been reported. The purposes of this study were to reveal the average racing speed of Thoroughbreds and to observe changes in their average speed with age. The surveyed races were flat races on turf and dirt tracks with firm or standard track conditions held by the Japan Racing Association from January 1st, 2002 to December 31st, 2010. The racing speed of each horse was calculated by dividing the race distance (m) by the horse’s final time (sec). Average speeds per month for each age and distance condition were calculated for each gender group when there were 30 or more starters per month for that age, distance and gender group. The common characteristic change for all conditions was an increase in average speed until the first half of the 4-year-old year. The effect of increased carried weight on average speed was small, and average speed increased with the growth of the horse. After the latter half of the 4-year-old year, the horses’ average speed remained almost constant, with little variation. It is speculated that decreases in the weight carried and the retirement of less well-performing horses are responsible for the maintenance of average speed. PMID:26170760

  14. 40 CFR 57.302 - Performance level of interim constant controls.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... exceed the following: (i) For sulfuric acid plants on copper smelters, 12-hour running average; (ii) For sulfuric acid plants on lead smelters, 6-hour running average; (iii) For sulfuric acid plants on zinc... limitation shall take into account unavoidable catalyst deterioration in sulfuric acid plants, but may...

  15. Design and performance of limestone drains to increase pH and remove metals from acidic mine drainage, Chapter 2

    USGS Publications Warehouse

    Cravotta,, Charles A.; Watzlaf, George R.

    2002-01-01

    Data on the construction characteristics and the composition of influent and effluent at 13 underground, limestone-filled drains in Pennsylvania and Maryland are reported to evaluate the design and performance of limestone drains for the attenuation of acidity and dissolved metals in acidic mine drainage. On the basis of the initial mass of limestone, dimensions of the drains, and average flow rates, the initial porosity and average detention time for each drain were computed. Calculated porosity ranged from 0.12 to 0.50 with corresponding detention times at average flow from 1.3 to 33 h. The effectiveness of treatment was dependent on influent chemistry, detention time, and limestone purity. At two sites where influent contained elevated dissolved Al (>5 mg/liter), drain performance declined rapidly; elsewhere the drains consistently produced near-neutral effluent, even when influent contained small concentrations of dissolved Fe^+ (<5 mg/liter). Rates of limestone dissolution computed on the basis of average long-term Ca ion flux normalized by initial mass and purity of limestone at each of the drains ranged from 0.008 to 0.079 year-1. Data for alkalinity concentration and flux during 11-day closed-container tests using an initial mass of 4 kg crushed limestone and a solution volume of 2.3 liter yielded dissolution rate constants that were comparable to these long-term field rates. An analytical method is proposed using closed-container test data to evaluate long-term performance (longevity) or to estimate the mass of limestone needed for limestone treatment. This method considers flow rate, influent alkalinity, steady-state alkalinity of effluent, and desired effluent alkalinity or detention time at a future time(s), and applies first-order rate laws for limestone dissolution (continuous) and production of alkalinity (bounded).
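
    The proposed sizing logic can be illustrated with a small sketch: a first-order (bounded) approach of alkalinity to its steady-state value over the detention time, and the limestone mass implied by a target detention time at the design flow. The rate constant, alkalinities, flow and porosity below are assumptions, not the chapter's measured values.

```python
# Minimal sketch of limestone-drain sizing logic (illustrative numbers only):
# bounded first-order alkalinity production with detention time, and the
# limestone mass giving a target detention time at the design flow.
import math

k        = 0.04      # 1/h, alkalinity production rate constant (assumed)
C_in     = 20.0      # mg/L as CaCO3, influent alkalinity (assumed)
C_steady = 300.0     # mg/L as CaCO3, closed-container steady-state (assumed)

def effluent_alkalinity(detention_h):
    """Bounded first-order approach of alkalinity to its steady-state value."""
    return C_steady - (C_steady - C_in) * math.exp(-k * detention_h)

Q        = 40.0      # L/min design flow (assumed)
phi      = 0.40      # bed porosity (assumed)
rho_rock = 2650.0    # kg/m^3 limestone particle density
t_d      = 15.0      # h, target detention time (assumed)

void_volume_L  = Q * 60.0 * t_d                          # water held in the bed
rock_volume_m3 = (void_volume_L / 1000.0) * (1 - phi) / phi
mass_tonnes    = rock_volume_m3 * rho_rock / 1000.0
print(f"alkalinity at {t_d} h ~ {effluent_alkalinity(t_d):.0f} mg/L, "
      f"limestone ~ {mass_tonnes:.0f} t")
```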

  16. Distance measurements from supernovae and dark energy constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Yun

    2009-12-15

    Constraints on dark energy from current observational data are sensitive to how distances are measured from Type Ia supernova (SN Ia) data. We find that flux averaging of SNe Ia can be used to test the presence of unknown systematic uncertainties, and yield more robust distance measurements from SNe Ia. We have applied this approach to the nearby+SDSS+ESSENCE+SNLS+HST set of 288 SNe Ia, and the 'Constitution' set of 397 SNe Ia. Combining the SN Ia data with cosmic microwave background anisotropy data from Wilkinson Microwave Anisotropy Probe 5 yr observations, the Sloan Digital Sky Survey baryon acoustic oscillation measurements, the data of 69 gamma-ray bursts (GRBs), and the Hubble constant measurement from the Hubble Space Telescope project SHOES, we measure the dark energy density function X(z) ≡ ρ_X(z)/ρ_X(0) as a free function of redshift (assumed to be a constant at z>1 or z>1.5). Without the flux averaging of SNe Ia, the combined data using the Constitution set of SNe Ia seem to indicate a deviation from a cosmological constant at ~95% confidence level at 0 ≲ z ≲ 0.8; they are consistent with a cosmological constant at ~68% confidence level when SNe Ia are flux averaged. The combined data using the nearby+SDSS+ESSENCE+SNLS+HST data set of SNe Ia are consistent with a cosmological constant at 68% confidence level with or without flux averaging of SNe Ia, and give dark energy constraints that are significantly more stringent than those using the Constitution set of SNe Ia. Assuming a flat Universe, dark energy is detected at >98% confidence level for z ≤ 0.75 using the combined data with 288 SNe Ia from nearby+SDSS+ESSENCE+SNLS+HST, independent of the assumptions about X(z ≥ 1). We quantify dark energy constraints without assuming a flat Universe using the dark energy figure of merit for both X(z) and a dark energy equation of state linear in the cosmic scale factor.

  17. Combustion of Interacting Droplet Arrays in a Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Dietrich, D. L.; Struk, P. M.; Kitano, K.; Ikegami, M.

    1999-01-01

    Investigations into droplet interactions date back to Rex et al. Recently, Annamalai and Ryan and Annamalai published extensive reviews of droplet array and cloud combustion studies. The authors studied the change in the burning rate constant, k (relative to that of the single droplet), that results from interactions. Under certain conditions, there exists a separation distance where the droplet lifetime reaches a minimum, or the average burning rate constant is a maximum. Additionally, since the inter-droplet separation distance, L, increases relative to the droplet size, D, as the burning proceeds, the burning rate is not constant throughout the burn, but changes continuously with time. Only Law and co-workers and Mikami et al. studied interactions under conditions where buoyant forces were negligible. Comparing their results with existing theory, Law and co-workers found that theory overpredicted the persistency and intensity of droplet interactions. The droplet interactions also depended on the initial array configuration as well as the instantaneous array configuration. They also concluded that droplet heating was retarded due to interactions and that the burning process did not follow the "D-squared" law. Mikami et al. studied the combustion of a two-droplet array of heptane burning in air at one atm pressure in microgravity. They showed that the instantaneous burning rate constant increases throughout the droplet lifetime, even for a single droplet. Also, the burn time of the array reached a minimum at a critical inter-droplet spacing. In this article, we examine droplet interactions in normal and microgravity environments. The microgravity experiments were conducted in the NASA GRC 2.2 and 5.2 second drop towers, and the JAMIC (Japan Microgravity Center) 10 second drop tower. Special emphasis is directed to combustion under conditions that yield finite extinction diameters, and to determining how droplet interactions affect the extinction process.
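
    For reference, the "D-squared" law mentioned above is sketched below for a single isolated droplet: D(t)^2 = D0^2 - k*t, with lifetime D0^2/k. The diameter and burning-rate constant are only typical-order values, and the interaction and time-dependence effects discussed in the abstract are deliberately ignored.

```python
# Minimal sketch of the classical D-squared law for a single, isolated droplet:
# D(t)^2 = D0^2 - k*t, so the droplet lifetime is t_b = D0^2 / k.
# D0 and k are typical-order values, not measurements from these experiments.
D0 = 1.0e-3            # m, initial droplet diameter (assumed)
k  = 0.8e-6            # m^2/s, burning-rate constant (typical order for heptane)

t_burn = D0**2 / k
for frac in (0.0, 0.25, 0.5, 0.75, 0.99):
    t = frac * t_burn
    D = (D0**2 - k * t) ** 0.5
    print(f"t = {t:5.2f} s   D = {D*1e3:.3f} mm")
print(f"droplet lifetime ~ {t_burn:.2f} s")
```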

  18. Constant speed control of four-stroke micro internal combustion swing engine

    NASA Astrophysics Data System (ADS)

    Gao, Dedong; Lei, Yong; Zhu, Honghai; Ni, Jun

    2015-09-01

    The increasing demands on safety, emissions and fuel consumption require more accurate control models of the micro internal combustion swing engine (MICSE). The objective of this paper is to investigate constant speed control models of the four-stroke MICSE. The operating principle of the four-stroke MICSE is presented based on a description of the MICSE prototype. A two-level Petri net based hybrid model is proposed to model the four-stroke MICSE engine cycle. The Petri net subsystem at the upper level controls and synchronizes the four Petri net subsystems at the lower level. The continuous sub-models, including the breathing dynamics of the intake manifold, the thermodynamics of the chamber and the dynamics of torque generation, are investigated and integrated with the discrete model in MATLAB Simulink. Through comparison of experimental data and the simulated DC voltage output, it is demonstrated that the hybrid model is valid for the four-stroke MICSE system. A nonlinear model is obtained from the cycle-averaged data via regression, and it is linearized around a given nominal equilibrium point for the controller design. A feedback controller for the spark timing and valve duration timing is designed with a sequential loop-closing approach. The simulation of the sequential loop closure control design applied to the hybrid model is implemented in MATLAB. The simulation results show that the system is able to reach its desired operating point within 0.2 s, and the designed controller gives good MICSE engine performance at constant speed. This paper presents constant speed control models of the four-stroke MICSE and carries out simulation tests; the models and simulation results can be used for further study of the precision control of the four-stroke MICSE.

  19. Absence of disparities in anthropometric measures among Chilean indigenous and non-indigenous newborns.

    PubMed

    Amigo, Hugo; Bustos, Patricia; Kaufman, Jay S

    2010-07-03

    Studies throughout North America and Europe have documented adverse perinatal outcomes for racial/ethnic minorities. Nonetheless, the contrast in newborn characteristics between indigenous and non-indigenous populations in Latin America has been poorly characterized. This is due to many challenges, including a lack of vital registration information on ethnicity. The objective of this study was to analyze trends in anthropometric measures at birth in Chilean indigenous (Mapuche) and non-indigenous children over a 5-year period. We examined weight and length at birth using information available through a national data base of all birth records for the years 2000 through 2004 (n = 1,166,513). Newborns were classified ethnically according to the origins of the parents' last names. The average birthweight was stable over the 5 year period with variations of less than 20 g in each group, and with mean values trivially higher in indigenous newborns. The proportion weighing less than 2500 g at birth increased modestly from 5.2% to 5.6% in non-indigenous newborns whereas the indigenous births remained constant at 5.2%. In multiple regression analyses, adjusting flexibly for gestational age and maternal characteristics, the occurrence of an indigenous surname added only 14 g to an average infant's birthweight while holding other factors constant. Results for length at birth were similar, and adjusted time trend variation in both outcomes was trivially small after adjustment. Anthropometric indexes at birth in Chile are quite favorable by international standards. There is only a trivial degree of ethnic disparity in these values, in contrast to conditions for ethnic minorities in other countries. Moreover, these values remained roughly constant over the 5 years of observation in this study.

  20. Absence of disparities in anthropometric measures among Chilean indigenous and non-indigenous newborns

    PubMed Central

    2010-01-01

    Background Studies throughout North America and Europe have documented adverse perinatal outcomes for racial/ethnic minorities. Nonetheless, the contrast in newborn characteristics between indigenous and non-indigenous populations in Latin America has been poorly characterized. This is due to many challenges, including a lack of vital registration information on ethnicity. The objective of this study was to analyze trends in anthropometric measures at birth in Chilean indigenous (Mapuche) and non-indigenous children over a 5-year period. Methods We examined weight and length at birth using information available through a national data base of all birth records for the years 2000 through 2004 (n = 1,166,513). Newborns were classified ethnically according to the origins of the parents' last names. Results The average birthweight was stable over the 5 year period with variations of less than 20 g in each group, and with mean values trivially higher in indigenous newborns. The proportion weighing less than 2500 g at birth increased modestly from 5.2% to 5.6% in non-indigenous newborns whereas the indigenous births remained constant at 5.2%. In multiple regression analyses, adjusting flexibly for gestational age and maternal characteristics, the occurrence of an indigenous surname added only 14 g to an average infant's birthweight while holding other factors constant. Results for length at birth were similar, and adjusted time trend variation in both outcomes was trivially small after adjustment. Anthropometric indexes at birth in Chile are quite favorable by international standards. Conclusion There is only a trivial degree of ethnic disparity in these values, in contrast to conditions for ethnic minorities in other countries. Moreover, these values remained roughly constant over the 5 years of observation in this study. PMID:20598150

  1. Modelling and analysis of creep deformation and fracture in a 1 Cr 1/2 Mo ferritic steel

    NASA Astrophysics Data System (ADS)

    Dyson, B. F.; Osgerby, D.

    A quantitative model, based upon a proposed new mechanism of creep deformation in particle-hardened alloys, has been validated by analysis of creep data from a 13CrMo 4 4 (1Cr 1/2 Mo) material tested under a range of stresses and temperatures. The methodology that has been used to extract the model parameters quantifies, as a first approximation, only the main degradation (damage) processes - in the case of the 1Cr 1/2 Mo steel, these are considered to be the parallel operation of particle-coarsening and a progressively increasing stress due to a constant-load boundary condition. These 'global' model parameters can then be modified (only slightly) as required to obtain a detailed description and 'fit' to the rupture lifetime and strain/time trajectory of any individual test. The global model parameter approach may be thought of as predicting average behavior and the detailed fits as taking account of uncertainties (scatter) due to variability in the material. Using the global parameter dataset, predictions have also been made of behavior under biaxial stressing; constant straining rate; constant total strain (stress relaxation) and the likely success or otherwise of metallographic and mechanical remanent lifetime procedures.

  2. Electroluminescence pulse shape and electron diffusion in liquid argon measured in a dual-phase TPC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agnes, P.; et al.

    We report the measurement of the longitudinal diffusion constant in liquid argon with the DarkSide-50 dual-phase time projection chamber. The measurement is performed at drift electric fields of 100 V/cm, 150 V/cm, and 200 V/cm using high-statistics 39Ar decays from atmospheric argon. We derive an expression to describe the pulse shape of the electroluminescence signal (S2) in dual-phase TPCs. The derived S2 pulse shape is fit to events from the uppermost portion of the TPC in order to characterize the radial dependence of the signal. The results are provided as inputs to the measurement of the longitudinal diffusion constant DL, which we find to be (4.12 ± 0.04) cm^2/s for a selection of 140 keV electron recoil events in a 200 V/cm drift field and 2.8 kV/cm extraction field. To study the systematics of our measurement we examine datasets of varying event energy, field strength, and detector volume, yielding a weighted average value for the diffusion constant of (4.09 ± 0.09) cm^2/s. The measured longitudinal diffusion constant is observed to have an energy dependence, and within the studied energy range the result is systematically lower than other results in the literature.
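
    The role of the diffusion constant in the S2 pulse width can be illustrated with the standard relation sigma_z^2 = 2*D_L*t_drift; the sketch below converts that to a time width using an assumed drift velocity, which is a round number rather than the DarkSide-50 calibration.

```python
# Minimal sketch relating a longitudinal diffusion constant to the growth of
# the S2 pulse width with drift time: sigma_z^2 = 2*D_L*t_drift, converted to
# a time width through the drift velocity (assumed value, not a calibration).
import math

D_L = 4.09          # cm^2/s, weighted-average value quoted above
v_d = 0.093         # cm/us, assumed electron drift velocity at 200 V/cm in LAr

for t_drift_us in (50.0, 150.0, 300.0):
    sigma_z = math.sqrt(2.0 * D_L * t_drift_us * 1e-6)    # cm
    sigma_t = sigma_z / v_d                               # us
    print(f"t_drift = {t_drift_us:5.0f} us   sigma_z = {sigma_z*1e4:6.1f} um   "
          f"sigma_t = {sigma_t:4.2f} us")
```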

  3. Radiofrequency tissue ablation of the inferior turbinates using a thermocouple feedback electrode.

    PubMed

    Smith, T L; Correa, A J; Kuo, T; Reinisch, L

    1999-11-01

    The objective of this clinical trial was to assess the safety and efficacy of radiofrequency (RF) tissue ablation of the inferior turbinates in the treatment of nasal obstruction using an RF energy delivery system with a thermocouple feedback electrode. A prospective, nonrandomized study of 11 patients (mean age, 47+/-12 y) with chronic nasal obstruction was conducted. Using patient-based visual analogue scales (VAS), symptom parameters were assessed. These included degree of nasal obstruction, frequency of nasal obstruction, and pain. Physician assessment of nasal obstruction was also collected by the principal investigator. Follow-up was conducted at 24 hours, 1 week, 4 weeks, 8 weeks, and 1 year. ANOVA was carried out to determine statistically significant differences in the data. Data were fit to a regression model, and confidence intervals were determined from a 95% confidence level. In patient-assessed degree of nasal obstruction, statistical significance was seen between baseline and 4 weeks, 8 weeks, and 1 year (P<.001, P<.0001, and P<.0008, respectively). There was no difference between 8 weeks and 1 year (P<.15). The data appeared to follow an exponential decay to a constant value. The pretreatment baseline average degree of obstruction was 7.5+/-0.5 on a scale of 0 to 10. The degree of obstruction after 8 weeks was 2.7+/-0.6. The time constant for this change was 21 days to reach 90% of the final value. At 1 year, degree of obstruction was 3.3+/-0.7. For frequency of nasal obstruction, statistical significance was seen between baseline and 4 weeks, 8 weeks, and 1 year (P<.0001, P<.0001, and P<.0001, respectively). There was no difference between 8 weeks and 1 year (P<.15). The pretreatment baseline average frequency of obstruction was 7.8+/-0.5. The remaining frequency of obstruction after 8 weeks was 2.9+/-0.6. The time constant was 18 days. At 1 year, frequency of obstruction was 3.3+/-0.6. Physician assessment of nasal obstruction revealed statistical significance between baseline and 4 weeks, and between baseline and 8 weeks (P<.0055 and P<.0056, respectively). There was no difference between 4 weeks and 8 weeks (P<.24). The average initial obstruction was 83%+/-4%. The remaining obstruction after 8 weeks was 58%+/-5%. The time constant was 14 days. Mild pain was reported by 55% of patients during the procedure; the remaining 45% reported no pain. Only one patient required pain medication consisting of acetaminophen after the procedure. There were no significant complications. Degree and frequency of nasal obstruction, as reported by patients, decreased following RF tissue ablation of the inferior turbinates. This improvement in symptoms was still evident after 1 year (P<.001). Physician assessment of obstruction also correlated with patient reports for the initial 8-week study period. The procedure was safe and well tolerated. Thermocouples within the active electrode provided additional feedback to the operating surgeon, allowing the use of relatively lower tissue temperatures, power, and energy as compared with traditional techniques. These results support the need for continued research to evaluate this modality as a treatment for chronic nasal obstruction.
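    For readers who want to reproduce this kind of analysis, the sketch below fits an exponential decay to a constant value, y(t) = y_inf + (y0 - y_inf)·exp(-t/τ), and reports the time needed to cover 90% of the total change (t90 = τ·ln 10, the quantity the abstract calls the time constant). It is not the authors' code; the follow-up times and scores are placeholders only.

```python
# Minimal sketch: fit an exponential decay toward a plateau and report t90.
import numpy as np
from scipy.optimize import curve_fit

def decay_to_constant(t, y0, y_inf, tau):
    """Exponential approach from y0 toward the plateau y_inf."""
    return y_inf + (y0 - y_inf) * np.exp(-t / tau)

# Hypothetical follow-up times (days) and mean VAS scores, roughly shaped like
# the values quoted in the abstract; they are placeholders only.
t_days = np.array([0.0, 1.0, 7.0, 28.0, 56.0, 365.0])
vas = np.array([7.5, 7.0, 5.5, 3.4, 2.7, 3.3])

popt, _ = curve_fit(decay_to_constant, t_days, vas, p0=(7.5, 3.0, 10.0))
y0, y_inf, tau = popt
t90 = tau * np.log(10.0)  # time to reach 90% of the final change
print(f"fit: y0={y0:.2f}, plateau={y_inf:.2f}, tau={tau:.1f} d, t90={t90:.1f} d")
```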

  4. Regional Landslide Mapping Aided by Automated Classification of SqueeSAR™ Time Series (Northern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Iannacone, J.; Berti, M.; Allievi, J.; Del Conte, S.; Corsini, A.

    2013-12-01

    Spaceborne InSAR has proven to be very valuable for landslide detection. In particular, extremely slow landslides (Cruden and Varnes, 1996) can now be clearly identified, thanks to the millimetric precision reached by recent multi-interferometric algorithms. The typical approach in radar interpretation for landslide mapping is based on the average annual velocity of the deformation, which is calculated over the entire time series. The Hotspot and Cluster Analysis (Lu et al., 2012) and the PSI-based matrix approach (Cigna et al., 2013) are examples of landslide mapping techniques based on average annual velocities. However, slope movements can be affected by non-linear deformation trends (e.g., reactivation of dormant landslides, deceleration due to natural or man-made slope stabilization, seasonal activity, etc.). Therefore, analyzing deformation time series is crucial in order to fully characterize slope dynamics. While this is relatively simple to carry out manually when dealing with a small dataset, time series analysis over a regional-scale dataset requires automated classification procedures. Berti et al. (2013) developed an automatic procedure for the analysis of InSAR time series based on a sequence of statistical tests. The analysis classifies the time series into six distinctive target trends (0=uncorrelated; 1=linear; 2=quadratic; 3=bilinear; 4=discontinuous without constant velocity; 5=discontinuous with change in velocity) which are likely to represent different slope processes. The analysis also provides a series of descriptive parameters which can be used to characterize the temporal changes of ground motion. All the classification algorithms were integrated into a Graphical User Interface called PSTime. We investigated an area of about 2000 km² in the Northern Apennines of Italy by using the SqueeSAR™ algorithm (Ferretti et al., 2011). Two Radarsat-1 data stacks, comprising 112 scenes in descending orbit and 124 scenes in ascending orbit, were processed. The time coverage spans April 2003 to November 2012, with an average temporal frequency of 1 scene/month. Radar interpretation has been carried out by considering average annual velocities as well as acceleration/deceleration trends evidenced by PSTime. Altogether, from ascending and descending geometries respectively, this approach allowed the detection of 115 and 112 potential landslides on the basis of average displacement rate and 77 and 79 landslides on the basis of acceleration trends. In conclusion, time series analysis proved to be very valuable for landslide mapping. In particular, it highlighted areas with marked acceleration in a specific period of time while still being affected by low average annual velocity over the entire analysis period. On the other hand, even in areas with high average annual velocity, time series analysis was of primary importance to characterize the slope dynamics in terms of acceleration events.

  5. Resonant acoustic measurement of vapor phase transport phenomenon in porous media

    NASA Astrophysics Data System (ADS)

    Schuhmann, Richard; Garrett, Steven

    2002-05-01

    Diffusion of gases through porous media is commonly described using Fick's law and is characterized by a gas diffusion coefficient modified by a media-specific tortuosity parameter. A phase-locked-loop resonance frequency tracker [J. Acoust. Soc. Am. 108, 2520 (2000)] has been upgraded with an insulated copper resonator and a bellows-sealed piston instrumented with an accelerometer. Average system stability (temperature divided by frequency squared) is about 180 ppm. Glass-bead-filled cores of different lengths are fitted into an o-ring sealed opening at the top of the resonator. The rate at which the tracer gas is replaced by air within the resonator is controlled by the core's diffusion constant. Mean molecular weight of the gas mixture in the resonator is determined in real time from the ratio of the absolute temperature to the square of the fundamental acoustic resonance frequency. Molecular weight of the gas mixture is determined approximately six times per minute. Changes in the gas mixture concentration are exponential in time (within 0.1%) over nearly two decades in concentration. We will report diffusion constants for two different sizes of glass beads, in samples of five different lengths, using two different tracer gases, to establish the validity of this approach. [Work supported by ONR.]
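    As an illustration of the stated relation between molecular weight, temperature, and resonance frequency, the sketch below assumes a fixed resonator geometry and a roughly constant ratio of specific heats, so that M = K·T/f² with K set by one calibration gas. This is not the authors' analysis code, and all numbers are placeholders.

```python
# Minimal sketch: for fixed geometry, f is proportional to the sound speed
# c = sqrt(gamma * R * T / M), so M = K * T / f**2 with K from a calibration gas.
def calibration_constant(M_cal, T_cal, f_cal):
    """K such that M = K * T / f**2 reproduces the calibration gas."""
    return M_cal * f_cal**2 / T_cal

def molar_mass(T, f, K):
    """Mean molar mass of the gas mixture from temperature and resonance frequency."""
    return K * T / f**2

# Hypothetical numbers: calibrate with dry air (28.97 g/mol) at 293 K and 500 Hz,
# then track a later (T, f) reading as the tracer gas diffuses out of the resonator.
K = calibration_constant(28.97, 293.0, 500.0)
print(molar_mass(295.0, 492.0, K), "g/mol")
```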

  6. Natural history and breeding biology of the Rusty-breasted Antpitta (Grallaricula ferrugineipectus)

    USGS Publications Warehouse

    Niklison, Alina M.; Areta, J.I.; Ruggera, R.A.; Decker, Karie L.; Bosque, C.; Martin, T.E.

    2008-01-01

    We provide substantial new information on the breeding biology of the Rusty-breasted Antpitta (Grallaricula ferrugineipectus ferrugineipectus) from 40 nests during four consecutive breeding seasons at Yacambu National Park in Venezuela. Vocalizations are quite variable in G. ferrugineipectus. Nesting activity peaked in April when laying began for half of all nests monitored. The date of nest initiation pattern suggests this species is single-brooded. Both parents incubate and the percent of time they incubate is high (87-99%) throughout the incubation period. The incubation period averaged (± SE) 17.0 ± 0.12 days, while the nestling period averaged 13.37 ± 0.37 days. G. f. ferrugineipectus has the shortest developmental time described for its genus. Time spent brooding nestlings decreased as nestlings grew, but was still greater at pin feather break day than observed in north temperate species. The growth rate constant based on mass (k = 0.41) and tarsus length (k = 0.24) was lower than the k for north temperate species of similar adult mass. All nesting mortality was caused by predation and overall daily survival rate (± SE) was relatively low (0.94 ± 0.01) yielding an estimated 15% nest success.

  7. Estimation of shelf life of natural rubber latex exam-gloves based on creep behavior.

    PubMed

    Das, Srilekha Sarkar; Schroeder, Leroy W

    2008-05-01

    Samples of full-length glove-fingers cut from chlorinated and nonchlorinated latex medical examination gloves were aged for various times at several fixed temperatures and 25% relative humidity. Creep testing was performed using an applied stress of 50 kPa on rectangular specimens (10 mm x 8 mm) of aged and unaged glove fingers as an assessment of glove loosening during usage. Variations in creep curves obtained were compared to determine the threshold aging time when the amount of creep became larger than the initial value. These times were then used in various models to estimate shelf lives at lower temperatures. Several different methods of extrapolation were used for shelf-life estimation and comparison. Neither Q-factor nor Arrhenius activation energies, as calculated from 10 degrees C interval shift factors, were constant over the temperature range; in fact, both decreased at lower temperatures. Values of Q-factor and activation energies predicted up to 5 years of shelf life. Predictions are more sensitive to values of activation energy as the storage temperature departs from the experimental aging data. Averaging techniques that predict an average activation energy gave the longest shelf-life estimates, because the curvature is reduced. Copyright 2007 Wiley Periodicals, Inc.
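    A hedged sketch of the kind of extrapolation the abstract discusses: a single-activation-energy Arrhenius shift from an accelerated-aging temperature to a storage temperature. The abstract's own point is that the activation energy was not constant, so a one-Ea extrapolation like this is only a first approximation; the parameter values below are hypothetical.

```python
# Minimal sketch (assumed standard Arrhenius extrapolation, not the authors'
# exact procedure): if aging at T_hi reaches the failure threshold after t_hi,
# the shelf life at T_lo is t_lo = t_hi * exp((Ea / R) * (1/T_lo - 1/T_hi)).
import math

R = 8.314  # J/(mol K)

def arrhenius_shelf_life(t_hi_days, T_hi_C, T_lo_C, Ea_J_per_mol):
    T_hi = T_hi_C + 273.15
    T_lo = T_lo_C + 273.15
    return t_hi_days * math.exp((Ea_J_per_mol / R) * (1.0 / T_lo - 1.0 / T_hi))

# Hypothetical numbers: threshold reached after 60 days at 60 C, Ea = 80 kJ/mol,
# storage at 25 C.
print(arrhenius_shelf_life(60.0, 60.0, 25.0, 80e3) / 365.0, "years")
```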

  8. Generic emergence of power law distributions and Lévy-Stable intermittent fluctuations in discrete logistic systems

    NASA Astrophysics Data System (ADS)

    Biham, Ofer; Malcai, Ofer; Levy, Moshe; Solomon, Sorin

    1998-08-01

    The dynamics of generic stochastic Lotka-Volterra (discrete logistic) systems of the form w_i(t+1) = λ(t) w_i(t) + a w̄(t) − b w_i(t) w̄(t) is studied by computer simulations. The variables w_i, i = 1, ..., N, are the individual system components and w̄(t) = (1/N) Σ_i w_i(t) is their average. The parameters a and b are constants, while λ(t) is randomly chosen at each time step from a given distribution. Models of this type describe the temporal evolution of a large variety of systems such as stock markets and city populations. These systems are characterized by a large number of interacting objects and the dynamics is dominated by multiplicative processes. The instantaneous probability distribution P(w,t) of the system components w_i turns out to fulfill a Pareto power law P(w,t) ~ w^(−1−α). The time evolution of w̄(t) presents intermittent fluctuations parametrized by a Lévy-stable distribution with the same index α, showing an intricate relation between the distribution of the w_i's at a given time and the temporal fluctuations of their average.
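    A minimal simulation sketch of the quoted update rule follows. One modelling assumption is made here that the abstract leaves implicit: an independent multiplicative factor λ is drawn for each component at every step, which is what lets the w_i spread into a broad distribution. Parameter values are illustrative only.

```python
# Minimal sketch of the discrete stochastic Lotka-Volterra update,
# w_i(t+1) = lambda_i(t) * w_i(t) + a * wbar(t) - b * w_i(t) * wbar(t).
import numpy as np

rng = np.random.default_rng(0)
N, steps, a, b = 1000, 20000, 1e-4, 1e-4
w = np.ones(N)

for _ in range(steps):
    wbar = w.mean()
    lam = rng.lognormal(mean=0.0, sigma=0.1, size=N)  # random multiplicative factors
    w = lam * w + a * wbar - b * w * wbar

# After a transient, the w_i histogram should develop a Pareto-like tail,
# P(w) ~ w**(-1 - alpha), as described in the abstract.
print(np.percentile(w, [50, 90, 99, 99.9]))
```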

  9. Reliability, return periods, and risk under nonstationarity

    NASA Astrophysics Data System (ADS)

    Read, Laura K.; Vogel, Richard M.

    2015-08-01

    Water resources design has widely used the average return period as a concept to inform management and communication of the risk of experiencing an exceedance event within a planning horizon. Even though nonstationarity is often apparent, in practice hydrologic design often mistakenly assumes that the probability of exceedance, p, is constant from year to year, which leads to an average return period T_o equal to 1/p; this expression is far more complex under nonstationarity. Even for stationary processes, the common application of an average return period is problematic: it does not account for planning horizon, is an average value that may not be representative of the time to the next flood, and is generally not applied in other areas of water planning. We combine existing theoretical and empirical results from the literature to provide the first general, comprehensive description of the probabilistic behavior of the return period and reliability under nonstationarity. We show that under nonstationarity, the underlying distribution of the return period exhibits a more complex shape than the exponential distribution under stationary conditions. Using a nonstationary lognormal model, we document the increased complexity and challenges associated with planning for future flood events over a planning horizon. We compare application of the average return period with the more common concept of reliability and recommend replacing the average return period with reliability as a more practical way to communicate event likelihood in both stationary and nonstationary contexts.
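    The sketch below spells out the standard nonstationary generalization implied above: with a year-dependent exceedance probability p_t, the waiting time X to the first exceedance has P(X = x) = p_x·∏_{t<x}(1 − p_t), the average return period is E[X], and the reliability over an n-year horizon is ∏_{t≤n}(1 − p_t). The p_t series used is illustrative, not from the paper.

```python
# Minimal sketch of return period and reliability under a time-varying p_t.
import numpy as np

def waiting_time_pmf(p):
    """P(X = x) for x = 1..len(p), given exceedance probabilities p[0], p[1], ..."""
    survive = np.concatenate(([1.0], np.cumprod(1.0 - p[:-1])))
    return p * survive

p = np.linspace(0.01, 0.05, 200)        # nonstationary: p grows from 1% to 5%
pmf = waiting_time_pmf(p)
x = np.arange(1, len(p) + 1)

avg_return_period = np.sum(x * pmf) / np.sum(pmf)   # truncated-horizon estimate
reliability_30yr = np.prod(1.0 - p[:30])
print(avg_return_period, reliability_30yr)

# Stationary check: with constant p the same formulas recover E[X] ~ 1/p.
p_const = np.full(2000, 0.01)
pmf_c = waiting_time_pmf(p_const)
print(np.sum(np.arange(1, 2001) * pmf_c) / np.sum(pmf_c))  # ~100 = 1/p
```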

  10. MC3: Multi-core Markov-chain Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan

    2016-10-01

    MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.

  11. Interhemispheric comparison of atmospheric circulation features as evaluated from Nimbus satellite data

    NASA Technical Reports Server (NTRS)

    Reiter, E. R.; Vonderhaar, T. H.; Adler, R. F.; Srivatsangam, S.; Fields, A.

    1973-01-01

    A relationship is established between relative geostrophic vorticity on an isobaric surface and the Laplacian of the underlying layer-mean temperature. This relationship is used to investigate the distribution of vorticity and baroclinicity in a jet-stream model which is constantly recurrent in the winter troposphere. The investigation shows that the baroclinic and vorticity fields of the extratropical troposphere must be bifurcated with two extrema in the middle and subpolar latitudes. This pattern is present in daily tropospheric meridional cross-sections. The reasons for the disappearance of bifurcation in the time-and-longitude averaged distributions are discussed.

  12. An atomic clock with 10⁻¹⁸ instability.

    PubMed

    Hinkley, N; Sherman, J A; Phillips, N B; Schioppo, M; Lemke, N D; Beloy, K; Pizzocaro, M; Oates, C W; Ludlow, A D

    2013-09-13

    Atomic clocks have been instrumental in science and technology, leading to innovations such as global positioning, advanced communications, and tests of fundamental constant variation. Timekeeping precision at 1 part in 10¹⁸ enables new timing applications in relativistic geodesy, enhanced Earth- and space-based navigation and telescopy, and new tests of physics beyond the standard model. Here, we describe the development and operation of two optical lattice clocks, both using spin-polarized, ultracold atomic ytterbium. A measurement comparing these systems demonstrates an unprecedented atomic clock instability of 1.6 × 10⁻¹⁸ after only 7 hours of averaging.

  13. Influence of Progressive Central Hypovolemia on Multifractal Dimension of Cardiac Interbeat Intervals

    DTIC Science & Technology

    2010-05-07

    by an exponent that he called H in honor of Hurst [4]. Consequently, if X(t) is a fractal process with Hurst exponent H and c is a constant, then X(t ... f^(−1−2H) ≈ f^(−1−2h₀),  (1) where f is the frequency, H is the Hurst exponent, and h₀ is the average of the Hölder exponent distribution among the ... infinitely long monofractal time series. Figure 2 shows a computer-generated realization of fGn with Hurst exponent H = 1 or Hölder exponent h₀ ≈ 0 ...

  14. A drift line bias estimator: ARMA-based filter or calibration method, and its application in BDS/GPS-based attitude determination

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Yanqing, Hou; Jie, Wu

    2016-12-01

    The multi-antenna synchronized receiver (using a common clock) is widely applied in GNSS-based attitude determination (AD), terrain deformation monitoring, and many other applications, since the high-accuracy single-differenced carrier phase can be used to improve the positioning or AD accuracy. Thus, the line bias (LB) parameter (fractional bias isolating) should be calibrated in the single-differenced phase equations. In the past decades, all researchers estimated the LB as a constant parameter in advance and compensated it in real time. However, the constant LB assumption is inappropriate in practical applications because of the physical length and permittivity changes of the cables, caused by the environmental temperature variation and the instability of the receiver's internal circuit transmission delay. Considering the LB drift (or colored LB) in practical circumstances, this paper introduces a real-time estimator using an autoregressive moving average-based (ARMA) prediction/whitening filter model or a moving average-based (MA) constant calibration model. In the ARMA-based filter model, four cases, namely AR(1), ARMA(1, 1), AR(2) and ARMA(2, 1), are applied for the LB prediction. The real-time relative positioning model using the ARMA-based predicted LB is derived and it is theoretically proved that the positioning accuracy is better than the traditional double difference carrier phase (DDCP) model. The drifting LB is defined with a phase temperature changing rate integral function, which is a random walk process if the phase temperature changing rate is white noise, and is validated by the analysis of the AR model coefficient. The autocovariance function shows that the LB indeed varies in time and that estimating it as a constant is not safe, which is also demonstrated by the analysis of the LB variation of each visible satellite during a zero and short baseline BDS/GPS experiment. Compared to the DDCP approach, in the zero-baseline experiment, the LB constant calibration (LBCC) and MA approaches improved the positioning accuracy of the vertical component, while slightly degrading the accuracy of the horizontal components. The ARMA(1, 0) model, however, improved the positioning accuracy of all three components, with 40 and 50 % improvement of the vertical component for BDS and GPS, respectively. In the short baseline experiment, compared to the DDCP approach, the LBCC approach yielded poor positioning solutions and degraded the AD accuracy; both MA and ARMA-based filter approaches improved the AD accuracy. Moreover, the ARMA(1, 0) and ARMA(1, 1) models perform relatively better, with the elevation-angle accuracy for GPS improving by 55 % and 48 % in the ARMA(1, 1) and MA models, respectively. Furthermore, the drifting LB variation is found to be continuous and slowly cumulative; the variation magnitudes in the unit of length are almost identical on different frequency carrier phases, so the LB variation does not show obvious correlation between different frequencies. Consequently, the wide-lane LB in the unit of cycle is very stable, while the narrow-lane LB varies considerably in time. This reasoning probably also explains the phenomenon that the wide-lane LB originating in the satellites is stable, while the narrow-lane LB varies. The results of the ARMA-based filters are better than those of the MA model, which probably implies that the modeling of the drifting LB can further improve the precise point positioning accuracy.

  15. Physician Assistants Improve Efficiency and Decrease Costs in Outpatient Oral and Maxillofacial Surgery.

    PubMed

    Resnick, Cory M; Daniels, Kimberly M; Flath-Sporn, Susan J; Doyle, Michael; Heald, Ronald; Padwa, Bonnie L

    2016-11-01

    To determine the effects on time, cost, and complication rates of integrating physician assistants (PAs) into the procedural components of an outpatient oral and maxillofacial surgery practice. This is a prospective cohort study of patients from the Department of Plastic and Oral Surgery at Boston Children's Hospital who underwent removal of 4 impacted third molars with intravenous sedation in our outpatient facility. Patients were separated into the "no PA group" and PA group. Process maps were created to capture all activities from room preparation to patient discharge, and all activities were timed for each case. A time-driven activity-based costing method was used to calculate the average times and costs from the provider's perspective for each group. Complication rates were calculated during the periods for both groups. Descriptive statistics were calculated, and significance was set at P < .05. The total process time did not differ significantly between groups, but the average total procedure cost decreased by $75.08 after the introduction of PAs (P < .001). The time that the oral and maxillofacial surgeon was directly involved in the procedure decreased by an average of 19.2 minutes after the introduction of PAs (P < .001). No significant differences in postoperative complications were found. The addition of PAs into the procedural components of an outpatient oral and maxillofacial surgery practice resulted in decreased costs whereas complication rates remained constant. The increased availability of the oral and maxillofacial surgeon after the incorporation of PAs allows for more patients to be seen during a clinic session, which has the potential to further increase efficiency and revenue. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  16. Ex-Vivo Lymphatic Perfusion System for Independently Controlling Pressure Gradient and Transmural Pressure in Isolated Vessels

    PubMed Central

    Kornuta, Jeffrey A.; Dixon, J. Brandon

    2015-01-01

    In addition to external forces, collecting lymphatic vessels intrinsically contract to transport lymph from the extremities to the venous circulation. As a result, the lymphatic endothelium is routinely exposed to a wide range of dynamic mechanical forces, primarily fluid shear stress and circumferential stress, which have both been shown to affect lymphatic pumping activity. Although various ex-vivo perfusion systems exist to study this innate pumping activity in response to mechanical stimuli, none are capable of independently controlling the two primary mechanical forces affecting lymphatic contractility: transaxial pressure gradient, ΔP, which governs fluid shear stress; and average transmural pressure, Pavg, which governs circumferential stress. Hence, the authors describe a novel ex-vivo lymphatic perfusion system (ELPS) capable of independently controlling these two outputs using a linear, explicit model predictive control (MPC) algorithm. The ELPS is capable of reproducing arbitrary waveforms within the frequency range observed in the lymphatics in vivo, including a time-varying ΔP with a constant Pavg, time-varying ΔP and Pavg, and a constant ΔP with a time-varying Pavg. In addition, due to its implementation of syringes to actuate the working fluid, a post-hoc method of estimating both the flow rate through the vessel and fluid wall shear stress over multiple, long (5 sec) time windows is also described. PMID:24809724

  17. Zero-point corrections and temperature dependence of HD spin-spin coupling constants of heavy metal hydride and dihydrogen complexes calculated by vibrational averaging.

    PubMed

    Mort, Brendan C; Autschbach, Jochen

    2006-08-09

    Vibrational corrections (zero-point and temperature dependent) of the H-D spin-spin coupling constant J(HD) for six transition metal hydride and dihydrogen complexes have been computed from a vibrational average of J(HD) as a function of temperature. Effective (vibrationally averaged) H-D distances have also been determined. The very strong temperature dependence of J(HD) for one of the complexes, [Ir(dmpm)Cp*H2]2+ (dmpm = bis(dimethylphosphino)methane) can be modeled simply by the Boltzmann average of the zero-point vibrationally averaged J(HD) of two isomers. For this complex and four others, the vibrational corrections to J(HD) are shown to be highly significant and lead to improved agreement between theory and experiment in most cases. The zero-point vibrational correction is important for all complexes. Depending on the shape of the potential energy and J-coupling surfaces, for some of the complexes higher vibrationally excited states can also contribute to the vibrational corrections at temperatures above 0 K and lead to a temperature dependence. We identify different classes of complexes where a significant temperature dependence of J(HD) may or may not occur for different reasons. A method is outlined by which the temperature dependence of the HD spin-spin coupling constant can be determined with standard quantum chemistry software. Comparisons are made with experimental data and previously calculated values where applicable. We also discuss an example where a low-order expansion around the minimum of a complicated potential energy surface appears not to be sufficient for reproducing the experimentally observed temperature dependence.
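    A minimal sketch of the two-isomer Boltzmann average mentioned above: J(T) = Σ_i J_i·exp(−E_i/kT) / Σ_i exp(−E_i/kT), with J_i the zero-point vibrationally averaged couplings and E_i the relative energies of the isomers. The numerical values below are placeholders, not results from the paper.

```python
# Minimal sketch of a two-isomer Boltzmann average of a spin-spin coupling.
import numpy as np

k_B = 1.380649e-23          # J/K
hartree_to_J = 4.3597447e-18

def boltzmann_average(J_vals, E_rel_hartree, T):
    """Population-weighted average of J over isomers at temperature T."""
    E = np.asarray(E_rel_hartree) * hartree_to_J
    w = np.exp(-E / (k_B * T))
    return float(np.sum(np.asarray(J_vals) * w) / np.sum(w))

J_isomers = [10.0, 25.0]     # Hz, hypothetical vibrationally averaged J(HD) values
E_isomers = [0.0, 0.0015]    # hartree, hypothetical relative energies (~4 kJ/mol)
for T in (100.0, 200.0, 300.0):
    print(T, boltzmann_average(J_isomers, E_isomers, T))
```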

  18. Atrial rhythm influences catheter tissue contact during radiofrequency catheter ablation of atrial fibrillation: comparison of contact force between sinus rhythm and atrial fibrillation.

    PubMed

    Matsuda, Hisao; Parwani, Abdul Shokor; Attanasio, Philipp; Huemer, Martin; Wutzler, Alexander; Blaschke, Florian; Haverkamp, Wilhelm; Boldt, Leif-Hendrik

    2016-09-01

    Catheter tissue contact force (CF) is an important factor for durable lesion formation during radiofrequency catheter ablation (RFCA) of atrial fibrillation (AF). Since CF varies in the beating heart, atrial rhythm during RFCA may influence CF. A high-density map and RFCA points were obtained in 25 patients undergoing RFCA of AF using a CF-sensing catheter (Tacticath, St. Jude Medical). The operators were blinded to the CF information. Contact type was classified into three categories: constant, variable, and intermittent contact. Average CF and contact type were analyzed according to atrial rhythm (SR vs. AF) and anatomical location. A total of 1364 points (891 points during SR and 473 points during AF) were analyzed. Average CFs showed no significant difference between SR (17.2 ± 11.3 g) and AF (17.2 ± 13.3 g; p = 0.99). The distribution of points with an average CF of ≥20 and <10 g also showed no significant difference. However, the distribution of excessive CF (CF ≥40 g) was significantly higher during AF (7.4 %) in comparison with SR (4.2 %; p < 0.05). At the anterior area of the right inferior pulmonary vein (RIPV), the average CF during AF was significantly higher than during SR (p < 0.05). Constant contact was significantly higher during AF (32.2 %) when compared to SR (9.9 %; p < 0.01). Although the average CF was not different between atrial rhythms, constant contact was more often achievable during AF than it was during SR. However, excessive CF also seems to occur more frequently during AF especially at the anterior part of RIPV.

  19. Exospheric hydrogen above St-Santin /France/

    NASA Technical Reports Server (NTRS)

    Derieux, A.; Lejeune, G.; Bauer, P.

    1975-01-01

    The temperature and hydrogen concentration of the exosphere were determined using incoherent scatter measurements performed above St. Santin from 1969 to 1972. The hydrogen concentration was deduced from measurements of the number density of positive hydrogen and oxygen ions. A statistical analysis is given of the hydrogen concentration as a function of the exospheric temperature and the diurnal variation of the hydrogen concentration is investigated for a few selected days of good quality observation. The data averaged with respect to the exospheric temperature without consideration of the local time exhibit a distribution consistent with a constant effective Jeans escape flux of about 9 × 10⁷ cm⁻² s⁻¹. The local time variation exhibits a maximum to minimum concentration ratio of at least 3.5.

  20. Efficient Thread Labeling for Monitoring Programs with Nested Parallelism

    NASA Astrophysics Data System (ADS)

    Ha, Ok-Kyoon; Kim, Sun-Sook; Jun, Yong-Kee

    It is difficult and cumbersome to detect data races that occur in an execution of parallel programs. Any on-the-fly race detection technique using Lamport's happened-before relation needs a thread labeling scheme for generating unique identifiers that maintain logical concurrency information for the parallel threads. NR labeling is an efficient thread labeling scheme for the fork-join program model with nested parallelism, because its efficiency depends only on the nesting depth for every fork and join operation. This paper presents an improved NR labeling, called e-NR labeling, in which every thread generates its label by inheriting the pointer to its ancestor list from the parent threads or by updating the pointer in a constant amount of time and space. This labeling is more efficient than the NR labeling, because its efficiency does not depend on the nesting depth for every fork and join operation. Some experiments were performed with OpenMP programs having nesting depths of three or four and maximum parallelisms varying from 10,000 to 1,000,000. The results show that e-NR is 5 times faster than NR labeling and 4.3 times faster than OS labeling in the average time for creating and maintaining the thread labels. In the average space required for labeling, it is 3.5 times smaller than NR labeling and 3 times smaller than OS labeling.

  1. Comparing Vibrationally Averaged Nuclear Shielding Constants by Quantum Diffusion Monte Carlo and Second-Order Perturbation Theory.

    PubMed

    Ng, Yee-Hong; Bettens, Ryan P A

    2016-03-03

    Using the method of modified Shepard's interpolation to construct potential energy surfaces of the H2O, O3, and HCOOH molecules, we compute vibrationally averaged isotropic nuclear shielding constants ⟨σ⟩ of the three molecules via quantum diffusion Monte Carlo (QDMC). The QDMC results are compared to that of second-order perturbation theory (PT), to see if second-order PT is adequate for obtaining accurate values of nuclear shielding constants of molecules with large amplitude motions. ⟨σ⟩ computed by the two approaches differ for the hydrogens and carbonyl oxygen of HCOOH, suggesting that for certain molecules such as HCOOH where big displacements away from equilibrium happen (internal OH rotation), ⟨σ⟩ of experimental quality may only be obtainable with the use of more sophisticated and accurate methods, such as quantum diffusion Monte Carlo. The approach of modified Shepard's interpolation is also extended to construct shielding constants σ surfaces of the three molecules. By using a σ surface with the equilibrium geometry as a single data point to compute isotropic nuclear shielding constants for each descendant in the QDMC ensemble representing the ground state wave function, we reproduce the results obtained through ab initio computed σ to within statistical noise. Development of such an approach could thereby alleviate the need for any future costly ab initio σ calculations.

  2. Elastic properties of uniaxial-fiber reinforced composites - General features

    NASA Astrophysics Data System (ADS)

    Datta, Subhendu; Ledbetter, Hassel; Lei, Ming

    The salient features of the elastic properties of uniaxial-fiber-reinforced composites are examined by considering the complete set of elastic constants of composites comprising isotropic uniaxial fibers in an isotropic matrix. Such materials exhibit transverse-isotropic symmetry and five independent elastic constants in Voigt notation: C(11), C(33), C(44), C(66), and C(13). These C(ij) constants are calculated over the entire fiber-volume-fraction range 0.0-1.0, using a scattered-plane-wave ensemble-average model. Some practical elastic constants such as the principal Young moduli and the principal Poisson ratios are considered, and the behavior of these constants is discussed. Also presented are the results for the four principal sound velocities used to study uniaxial-fiber-reinforced composites: v(11), v(33), v(12), and v(13).

  3. Bayesian Maximum Entropy Integration of Ozone Observations and Model Predictions: A National Application.

    PubMed

    Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William

    2016-04-19

    To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and for the first time accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside of a validation radius r_v was performed and the R² between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, were obtained with the OBS scenario using ozone observations only in contrast with the RAMP and a Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach is able to extract more information in terms of R² increase percentage, with over 12 times for the DM8A and over 3.5 times for the D24A ozone concentrations, from CTM predictions than the CAMP approach assuming that model performance does not change across space and time.

  4. Effects of packaging and heat transfer kinetics on drug-product stability during storage under uncontrolled temperature conditions.

    PubMed

    Nakamura, Toru; Yamaji, Takayuki; Takayama, Kozo

    2013-05-01

    To predict the stability of pharmaceutical preparations under uncontrolled temperature conditions accurately, a method to compute the average reaction rate constant taking into account the heat transfer from the atmosphere to the product was developed. The average reaction rate constants computed taking heat transfer into consideration (κ_re) were then compared with those computed without (κ_in). The apparent thermal diffusivity (κ_a) exerted some influence on the average reaction rate constant ratio (R = κ_re/κ_in). In the regions where κ_a was large (above 1 h⁻¹) or very small, the value of R was close to 1. On the contrary, in the middle region (0.001-1 h⁻¹), the value of R was less than 1. The κ_a of the central part of a large-size container and that of the central part of a paper case of 10 bottles of liquid medicine (100 mL) fell within this middle region. On the basis of the above-mentioned considerations, heat transfer may need to be taken into consideration to enable a more accurate prediction of the stability of actual pharmaceutical preparations under nonisothermal atmospheres. Copyright © 2013 Wiley Periodicals, Inc.
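    A toy version of the comparison described above, under an assumed first-order heat-transfer model (not necessarily the authors' exact formulation): the product temperature relaxes toward the ambient temperature at the apparent rate κ_a, and the "with heat transfer" average rate constant is the time average of the Arrhenius rate evaluated at the product temperature rather than at the ambient temperature. The Arrhenius parameters and ambient profile are hypothetical, so the printed ratio is only illustrative.

```python
# Toy sketch: dT_p/dt = kappa_a * (T_amb - T_p), then average k(T_p) vs k(T_amb).
import numpy as np

R_GAS = 8.314
A, Ea = 1.0e9, 9.0e4                     # hypothetical Arrhenius parameters

def k_arr(T):
    return A * np.exp(-Ea / (R_GAS * T))

def average_rate_constants(T_amb, dt_h, kappa_a):
    """Return (kappa_re, kappa_in): time averages of k(T_p) and k(T_amb)."""
    T_p = np.empty_like(T_amb)
    T_p[0] = T_amb[0]
    decay = np.exp(-kappa_a * dt_h)       # exact step for piecewise-constant ambient
    for i in range(1, len(T_amb)):
        T_p[i] = T_amb[i - 1] + (T_p[i - 1] - T_amb[i - 1]) * decay
    return k_arr(T_p).mean(), k_arr(T_amb).mean()

# Hypothetical diurnal profile: 25 C mean, 10 C swing, hourly samples for 30 days.
hours = np.arange(0.0, 24 * 30, 1.0)
T_amb = 298.15 + 10.0 * np.sin(2.0 * np.pi * hours / 24.0)

k_re, k_in = average_rate_constants(T_amb, 1.0, kappa_a=0.1)
print("R = kappa_re / kappa_in =", k_re / k_in)
```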

  5. The effect of Argon pressure dependent V thin film on the phase transition process of (020) VO2 thin film

    NASA Astrophysics Data System (ADS)

    Meng, Yifan; Huang, Kang; Tang, Zhou; Xu, Xiaofeng; Tan, Zhiyong; Liu, Qian; Wang, Chunrui; Wu, Binhe; Wang, Chang; Cao, Juncheng

    2018-01-01

    Fabricating VO2 thin films with a single crystal orientation by a simple method has proven challenging. Based on chemical reaction thermodynamics and crystallization analysis theory, combined with our experimental results, we find that when the stoichiometric amount of metallic V in the chemical equation is the same, the ratio of the metallic V thin film surface average roughness Ra to the thin film average particle diameter d decreases with decreasing sputtering argon pressure. Meanwhile, the oxidation reaction equilibrium constant K also decreases, which leads to an increase in oxidation time, and thus the crystal orientation of the VO2 thin film becomes more uniform. Using a sputtering-oxidation coupling method, a metallic V thin film was deposited on a c-sapphire substrate at 1 × 10⁻¹ Pa and then oxidized in air with a maximum oxidation time of 65 s; a highly oriented (020) VO2 thin film was fabricated successfully, exhibiting a ∼4.6 order-of-magnitude sheet resistance change across the metal-insulator transition.

  6. A new method for determining the acid number of biodiesel based on coulometric titration.

    PubMed

    Barbieri Gonzaga, Fabiano; Pereira Sobral, Sidney

    2012-08-15

    A new method is proposed for determining the acid number (AN) of biodiesel using coulometric titration with potentiometric detection, basically employing a potentiostat/galvanostat and an electrochemical cell containing a platinum electrode, a silver electrode, and a combination pH electrode. The method involves a sequential application of a constant current between the platinum (cathode) and silver (anode) electrodes, followed by measuring the potential of the combination pH electrode, using an isopropanol/water mixture as solvent and LiCl as the supporting electrolyte. A preliminary evaluation of the new method, using acetic acid for doping a biodiesel sample, showed an average recovery of 100.1%. Compared to a volumetric titration-based method for determining the AN of several biodiesel samples (ranging from about 0.18 to 0.95 mg g(-1)), the new method produced statistically similar results with better repeatability. Compared to other works reported in the literature, the new method presented an average repeatability up to 3.2 times better and employed a sample size up to 20 times smaller. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Rogue waves of the Kundu-Eckhaus equation in a chaotic wave field.

    PubMed

    Bayindir, Cihan

    2016-03-01

    In this paper we study the properties of the chaotic wave fields generated in the frame of the Kundu-Eckhaus equation (KEE). Modulation instability results in a chaotic wave field which exhibits small-scale filaments with a free propagation constant, k. The average velocity of the filaments is approximately given by the average group velocity calculated from the dispersion relation for the plane-wave solution; however, the direction of propagation is controlled by the β parameter, the constant in front of the Raman-effect term. We have also calculated the probabilities of rogue wave occurrence for various values of the propagation constant k and showed that the probability of rogue wave occurrence depends on k. Additionally, we have shown that the probability of rogue wave occurrence significantly depends on the quintic and the Raman-effect nonlinear terms of the KEE. Statistical comparisons between the KEE and the cubic nonlinear Schrödinger equation have also been presented.

  8. [Drying characteristics and apparent change of sludge granules during drying].

    PubMed

    Ma, Xue-Wen; Weng, Huan-Xin; Zhang, Jin-Jun

    2011-08-01

    Three different weight grades of sludge granules (2.5, 5, and 10 g) were dried at constant temperatures of 100, 200, 300, 400 and 500 degrees C. The characteristics of weight loss and the change of apparent form during sludge drying were then analyzed. Results showed that there were three stages during sludge drying at 100-200 degrees C: an acceleration phase, a constant-rate phase, and a falling-rate phase. At 300-500 degrees C, there was no constant-rate phase, but due to the many cracks generated at the sludge surface, average drying rates were still high. There was a quadratic nonlinear relationship between average drying rate and drying temperature. At 100-200 degrees C, the drying processes of sludge granules of different weight grades were similar. At 300-500 degrees C, the drying processes of sludge granules of the same weight grade were similar. Little organic matter decomposed until the sludge began to burn at 100-300 degrees C, while some organic matter began to decompose at the beginning of sludge drying at 400-500 degrees C.

  9. Heuristic approach to capillary pressures averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coca, B.P.

    1980-10-01

    Several methods are available to average capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation on porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems theoretically sound because its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.
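    For concreteness, the sketch below applies the Leverett J-function scaling that underlies the J-curve method, J(Sw) = (Pc/(σ·cosθ))·√(k/φ); averaging J(Sw) curves from several samples and rescaling with a target k/φ yields an averaged capillary pressure curve. The abstract does not give this formula explicitly, and the input values are illustrative only.

```python
# Minimal sketch of J-curve (Leverett J-function) scaling and rescaling.
import numpy as np

def leverett_j(pc_pa, sigma_n_per_m, theta_rad, k_m2, phi):
    """Dimensionless J from capillary pressure, interfacial tension, k and phi."""
    return pc_pa / (sigma_n_per_m * np.cos(theta_rad)) * np.sqrt(k_m2 / phi)

def pc_from_j(j, sigma_n_per_m, theta_rad, k_m2, phi):
    """Invert the scaling to get a capillary pressure curve for another rock."""
    return j * sigma_n_per_m * np.cos(theta_rad) / np.sqrt(k_m2 / phi)

# One hypothetical core: Pc (Pa) at a few wetting-phase saturations.
pc = np.array([5e3, 8e3, 15e3, 40e3])
k, phi = 9.87e-14, 0.20                      # ~100 mD expressed in m^2, porosity
j = leverett_j(pc, sigma_n_per_m=0.025, theta_rad=0.0, k_m2=k, phi=phi)
print(j)
print(pc_from_j(j, 0.025, 0.0, 5.0e-14, 0.18))   # rescaled to another rock
```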

  10. Dynamic shear-stress-enhanced rates of nutrient consumption in gas-liquid semi-continuous-flow suspensions

    NASA Astrophysics Data System (ADS)

    Belfiore, Laurence A.; Volpato, Fabio Z.; Paulino, Alexandre T.; Belfiore, Carol J.

    2011-12-01

    The primary objective of this investigation is to establish guidelines for generating significant mammalian cell density in suspension bioreactors when stress-sensitive kinetics enhance the rate of nutrient consumption. Ultra-low-frequency dynamic modulations of the impeller (i.e., 35104 Hz) introduce time-dependent oscillatory shear into this transient analysis of cell proliferation under semi-continuous creeping flow conditions. Greater nutrient consumption is predicted when the amplitude A of modulated impeller rotation increases, and stress-kinetic contributions to nutrient consumption rates increase linearly at higher modulation frequency via an application of fluctuation-dissipation response. Interphase mass transfer is required to replace dissolved oxygen as it is consumed by aerobic nutrient consumption in the liquid phase. The theory and predictions described herein could be important at small length scales in the creeping flow regime where viscous shear is significant at the interface between the nutrient medium and isolated cells in suspension. Two-dimensional flow around spherically shaped mammalian cells, suspended in a Newtonian culture medium, is analyzed to calculate the surface-averaged magnitude of the velocity gradient tensor and modify homogeneous rates of nutrient consumption that are stimulated by viscous shear, via the formalism of stress-kinetic reciprocal relations that obey Curie's theorem in non-equilibrium thermodynamics. Time constants for stress-free (τ_free) and stress-sensitive (τ_stress) nutrient consumption are defined and quantified to identify the threshold (i.e., τ_stress,threshold) below which the effect of stress cannot be neglected in accurate predictions of bioreactor performance. Parametric studies reveal that the threshold time constant for stress-sensitive nutrient consumption, τ_stress,threshold, decreases when the time constant for stress-free nutrient consumption, τ_free, is shorter. Hence, τ_stress,threshold depends directly on τ_free. In other words, the threshold rate of stress-sensitive nutrient consumption is higher when the stress-free rate of nutrient consumption increases. Modulated rotation of the impeller, superimposed on steady shear, increases τ_stress,threshold when τ_free is constant, and τ_stress,threshold depends directly on the amplitude A of these angular velocity modulations.

  11. Monitoring the affordability of healthy eating: a case study of 10 years of the Illawarra Healthy Food Basket.

    PubMed

    Williams, Peter

    2010-11-01

    Healthy food baskets have been used around the world for a variety of purposes, including: examining the difference in cost between healthy and unhealthy food; mapping the availability of healthy foods in different locations; calculating the minimum cost of an adequate diet for social policy planning; developing educational material on low cost eating and examining trends on food costs over time. In Australia, the Illawarra Healthy Food Basket was developed in 2000 to monitor trends in the affordability of healthy food compared to average weekly wages and social welfare benefits for the unemployed. It consists of 57 items selected to meet the nutritional requirements of a reference family of five. Bi-annual costing from 2000-2009 has shown that the basket costs have increased by 38.4% in the 10-year period, but that affordability has remained relatively constant at around 30% of average household incomes.

  12. Electronic, elastic and optical properties of divalent (R⁺²X) and trivalent (R⁺³X) rare earth monochalcogenides

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Chandra, S.; Singh, J. K.

    2017-08-01

    Based on the plasma oscillation theory of solids, simple relations have been proposed for the calculation of bond length, specific gravity, homopolar energy gap, heteropolar energy gap, average energy gap, crystal ionicity, bulk modulus, electronic polarizability and dielectric constant of rare earth divalent R⁺²X and trivalent R⁺³X monochalcogenides. The specific gravities of nine R⁺²X and twenty R⁺³X monochalcogenides, and the bulk moduli of twenty R⁺³X monochalcogenides, have been calculated for the first time. The calculated values of all parameters are compared with the available experimental and reported values. A fairly good agreement has been obtained between them. The average percentage deviations of two parameters, bulk modulus and electronic polarizability, for which experimental data are known, have also been calculated and found to be better than those of earlier correlations.

  13. Stratospheric aerosol geoengineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robock, Alan

    2015-03-30

    The Geoengineering Model Intercomparison Project, conducting climate model experiments with standard stratospheric aerosol injection scenarios, has found that insolation reduction could keep the global average temperature constant, but global average precipitation would reduce, particularly in summer monsoon regions around the world. Temperature changes would also not be uniform; the tropics would cool, but high latitudes would warm, with continuing, but reduced sea ice and ice sheet melting. Temperature extremes would still increase, but not as much as without geoengineering. If geoengineering were halted all at once, there would be rapid temperature and precipitation increases at 5–10 times the rates from gradual global warming. The prospect of geoengineering working may reduce the current drive toward reducing greenhouse gas emissions, and there are concerns about commercial or military control. Because geoengineering cannot safely address climate change, global efforts to reduce greenhouse gas emissions and to adapt are crucial to address anthropogenic global warming.

  14. Simulation of 90° ply fatigue crack growth along the width of cross-ply carbon-epoxy coupons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henaff-Gardin, C.; Urwald, E.; Lafarie-Frenot, M.C.

    1994-07-01

    We study the mechanism of fatigue cracking of the matrix of cross-ply carbon-epoxy laminates. Primary attention is given to the study of the influence of the specimen width on the evolution of damage. On the basis of shear lag analysis, we determine the strain energy release rate in the processes of initiation and growth of transverse fatigue cracks. We also present results of experimental research on the evolution of the edge crack density per ply, the average length of the cracks, and the crack propagation rate under transverse fatigue cracking. It is shown that these characteristics are independent of the specimen width. At the same time, as soon as the edge crack density reaches its saturation value, the average crack growth rate becomes constant. All the experimental results are in good agreement with results obtained by using the theoretical model.

  15. Monitoring the Affordability of Healthy Eating: A Case Study of 10 Years of the Illawarra Healthy Food Basket

    PubMed Central

    Williams, Peter

    2010-01-01

    Healthy food baskets have been used around the world for a variety of purposes, including: examining the difference in cost between healthy and unhealthy food; mapping the availability of healthy foods in different locations; calculating the minimum cost of an adequate diet for social policy planning; developing educational material on low cost eating and examining trends on food costs over time. In Australia, the Illawarra Healthy Food Basket was developed in 2000 to monitor trends in the affordability of healthy food compared to average weekly wages and social welfare benefits for the unemployed. It consists of 57 items selected to meet the nutritional requirements of a reference family of five. Bi-annual costing from 2000–2009 has shown that the basket costs have increased by 38.4% in the 10-year period, but that affordability has remained relatively constant at around 30% of average household incomes. PMID:22254001

  16. High-order noise filtering in nontrivial quantum logic gates.

    PubMed

    Green, Todd; Uys, Hermann; Biercuk, Michael J

    2012-07-13

    Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.

  17. Mean first passage time of active Brownian particle in one dimension

    NASA Astrophysics Data System (ADS)

    Scacchi, A.; Sharma, A.

    2018-02-01

    We investigate the mean first passage time of an active Brownian particle in one dimension using numerical simulations. The activity in one dimension is modelled as a two-state process; the particle moves with a constant propulsion strength but its orientation switches from one state to the other as in a random telegraph process. We study the influence of a finite resetting rate r on the mean first passage time to a fixed target of a single free active Brownian particle and map this result using an effective diffusion process. As in the case of a passive Brownian particle, we can find an optimal resetting rate r* for an active Brownian particle for which the target is found with the minimum average time. In the presence of an external potential, we find good agreement between the theory and numerical simulations using an effective potential approach.
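    The sketch below evaluates the passive baseline that the effective-diffusion mapping reduces to: for one-dimensional diffusion with diffusivity D, a target a distance L away, and Poissonian resetting at rate r, the standard stochastic-resetting result gives T(r) = (exp(L·√(r/D)) − 1)/r, which is minimized at a finite r*. For the active particle, D would be replaced by the effective diffusion constant; the numbers used here are illustrative.

```python
# Minimal sketch: MFPT under resetting for passive 1D diffusion, and its optimum.
import numpy as np
from scipy.optimize import minimize_scalar

def mfpt_reset(r, D, L):
    """Mean first passage time to a target at distance L with resetting rate r."""
    return (np.exp(L * np.sqrt(r / D)) - 1.0) / r

D, L = 1.0, 1.0                       # illustrative diffusivity and target distance
res = minimize_scalar(lambda r: mfpt_reset(r, D, L),
                      bounds=(1e-6, 50.0), method="bounded")
print(f"optimal resetting rate r* ~ {res.x:.3f}, T(r*) ~ {res.fun:.3f}")
# For the active case, D would be replaced by an effective diffusion constant D_eff.
```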

  18. The effect of gamma irradiation on chemical, morphology and optical properties of polystyrene nanosphere at various exposure time

    NASA Astrophysics Data System (ADS)

    Alhaji Yabagi, Jibrin; Isah Kimpa, Mohammed; Nmayaya Muhammad, Muhammad; Rashid, Saiful Bin; Zaidi, Embong; Arif Agam, Mohd

    2018-01-01

    Irradiation of polymers causes changes in structural, chemical, and optical properties. Polystyrene nanospheres were drop-coated onto substrates, and gamma irradiation was carried out in a cesium-137 (Cs-137) source chamber for different times (1-5 hours) at a constant dose of 30 kGy. Fourier transform infrared spectroscopy (FTIR) and Raman spectroscopy were employed to characterize the chemical properties of the irradiated polystyrene, while scanning electron microscopy (SEM) and atomic force microscopy (AFM) were used to study the surface morphological changes of the samples. The optical energy band gaps of the thin films were investigated using transmittance and absorbance measurements. The results reveal that as the irradiation time increases, the optical properties change and the polystyrene gradually transforms from its amorphous state toward crystalline and then carbonaceous forms. The average particle diameter and roughness of the samples decrease with increasing irradiation time.

  19. Specific Impulse Definition for Ablative Laser Propulsion

    NASA Technical Reports Server (NTRS)

    Herren, Kenneth A.; Gregory, Don A.

    2004-01-01

    The term "specific impulse" is so ingrained in the field of rocket propulsion that it is unlikely that any fundamental argument would be taken seriously for its removal. It is not an ideal measure but it does give an indication of the amount of mass flow (mass loss/time), as in fuel rate, required to produce a measured thrust over some time period This investigation explores the implications of being able to accurately measure the ablation rate and how the language used to describe the specific impulse results may have to change slightly, and recasts the specific impulse as something that is not a time average. It is not currently possible to measure the ablation rate accurately in real time so it is generally just assumed that a constant amount of material will be removed for each laser pulse delivered The specific impulse dependence on the ablation rate is determined here as a correction to the classical textbook definition.

  20. The constant displacement scheme for tracking particles in heterogeneous aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, X.H.; Gomez-Hernandez, J.J.

    1996-01-01

    Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be inefficient from a computational point of view if the traditional constant time step scheme is used. A new scheme, which automatically adjusts the time step for each particle according to the local pore velocity so that each particle always travels a constant distance, is shown to be computationally faster than the constant time step method for the same degree of accuracy. Using the constant displacement scheme, transport calculations in a 2-D aquifer model, with a natural-log transmissivity variance of 4, can be 8.6 times faster than using the constant time step scheme.
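    A minimal sketch of the constant-displacement idea (not the authors' code): each particle takes a step Δt = Δs/|v(x)| so that it always advances the same distance Δs. The dispersive random-walk increment is omitted for brevity, and the velocity field is a hypothetical stand-in for interpolated pore velocities from a flow solution.

```python
# Minimal sketch: advect one particle with a constant displacement per step.
import numpy as np

def track_constant_displacement(x0, velocity, ds, t_end):
    """Advect one particle through a velocity field until time t_end."""
    x, t, path = np.asarray(x0, float), 0.0, []
    while t < t_end:
        v = velocity(x)
        speed = np.linalg.norm(v)
        if speed == 0.0:
            break                              # stagnation point: stop tracking
        dt = min(ds / speed, t_end - t)        # constant displacement, capped at t_end
        x = x + v * dt
        t += dt
        path.append((t, x.copy()))
    return path

# Hypothetical heterogeneous 2-D velocity field (placeholder only).
def velocity(x):
    return np.array([1.0 + 0.5 * np.sin(x[1]), 0.2 * np.cos(x[0])])

print(len(track_constant_displacement([0.0, 0.0], velocity, ds=0.05, t_end=10.0)))
```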

  1. Time-weighted average sampling of airborne propylene glycol ethers by a solid-phase microextraction device.

    PubMed

    Shih, H C; Tsai, S W; Kuo, C H

    2012-01-01

    A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm(2), respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10(-2), 1.23 × 10(-2) and 1.14 × 10(-2) cm(3) min(-1), respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10(-1), (4.72 ± 0.03) × 10(-1), and (3.29 ± 0.20) × 10(-1) cm(3) min(-1) for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effects on the sampler. However, the effects of temperature and humidity have been observed. Therefore, adjustments of experimental sampling constants at different environmental conditions will be necessary.
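    The sketch below writes out the diffusive-sampler relations implied by the quoted geometry: from Fick's first law the theoretical sampling constant is SR = D·A/L, and the time-weighted average concentration follows from the collected mass as C = m/(SR·t). The diffusion coefficient used is a placeholder, not a value from the paper.

```python
# Minimal sketch of diffusive-sampler arithmetic for the fiber-in-tube geometry.
def sampling_constant(D_cm2_per_min, area_cm2=0.00086, path_cm=0.3):
    """Theoretical sampling rate in cm^3/min: SR = D * A / L (Fick's first law)."""
    return D_cm2_per_min * area_cm2 / path_cm

def twa_concentration(mass_ng, sr_cm3_per_min, minutes):
    """Time-weighted average concentration in ng/cm^3 from the collected mass."""
    return mass_ng / (sr_cm3_per_min * minutes)

D_pgme = 5.2                     # cm^2/min, hypothetical gas-phase diffusivity
sr = sampling_constant(D_pgme)   # ~1.5e-2 cm^3/min, same order as quoted above
print(sr, twa_concentration(mass_ng=12.0, sr_cm3_per_min=sr, minutes=240))
```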

  2. Viscoelastic deformation of lipid bilayer vesicles.

    PubMed

    Wu, Shao-Hua; Sankhagowit, Shalene; Biswas, Roshni; Wu, Shuyang; Povinelli, Michelle L; Malmstadt, Noah

    2015-10-07

    Lipid bilayers form the boundaries of the cell and its organelles. Many physiological processes, such as cell movement and division, involve bending and folding of the bilayer at high curvatures. Currently, bending of the bilayer is treated as an elastic deformation, such that its stress-strain response is independent of the rate at which bending strain is applied. We present here the first direct measurement of viscoelastic response in a lipid bilayer vesicle. We used a dual-beam optical trap (DBOT) to stretch 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) giant unilamellar vesicles (GUVs). Upon application of a step optical force, the vesicle membrane deforms in two regimes: a fast, instantaneous area increase, followed by a much slower stretching to an eventual plateau deformation. From measurements of dozens of GUVs, the average time constant of the slower stretching response was 0.225 ± 0.033 s (standard deviation, SD). Increasing the fluid viscosity did not affect the observed time constant. We performed a set of experiments to rule out heating by laser absorption as a cause of the transient behavior. Thus, we demonstrate here that the bending deformation of lipid bilayer membranes should be treated as viscoelastic.

  3. Viscoelastic deformation of lipid bilayer vesicles†

    PubMed Central

    Wu, Shao-Hua; Sankhagowit, Shalene; Biswas, Roshni; Wu, Shuyang; Povinelli, Michelle L.

    2015-01-01

    Lipid bilayers form the boundaries of the cell and its organelles. Many physiological processes, such as cell movement and division, involve bending and folding of the bilayer at high curvatures. Currently, bending of the bilayer is treated as an elastic deformation, such that its stress-strain response is independent of the rate at which bending strain is applied. We present here the first direct measurement of viscoelastic response in a lipid bilayer vesicle. We used a dual-beam optical trap (DBOT) to stretch 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) giant unilamellar vesicles (GUVs). Upon application of a step optical force, the vesicle membrane deforms in two regimes: a fast, instantaneous area increase, followed by a much slower stretching to an eventual plateau deformation. From measurements of dozens of GUVs, the average time constant of the slower stretching response was 0.225 ± 0.033 s (standard deviation, SD). Increasing the fluid viscosity did not affect the observed time constant. We performed a set of experiments to rule out heating by laser absorption as a cause of the transient behavior. Thus, we demonstrate here that the bending deformation of lipid bilayer membranes should be treated as viscoelastic. PMID:26268612
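
    One common way to extract the slow time constant from such step-stress data is to fit the area strain to a fast elastic jump plus a saturating exponential (a Kelvin-Voigt-like creep term). A sketch of that fit on synthetic data is shown below; the amplitudes and noise level are illustrative assumptions, not the authors' measured traces.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, a_fast, a_slow, tau):
    """Fast elastic jump plus slow viscoelastic creep toward a plateau."""
    return a_fast + a_slow * (1.0 - np.exp(-t / tau))

# synthetic "measured" area strain after a step optical force (assumed numbers)
t = np.linspace(0.0, 2.0, 400)                  # s
true = step_response(t, 0.010, 0.006, 0.225)    # tau = 0.225 s, as reported
rng = np.random.default_rng(0)
strain = true + rng.normal(0.0, 2e-4, t.size)

popt, _ = curve_fit(step_response, t, strain, p0=(0.01, 0.005, 0.1))
print(f"fitted time constant tau ≈ {popt[2]:.3f} s")
```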

  4. Resistance to antitumor chemotherapy due to bounded-noise-induced transitions

    NASA Astrophysics Data System (ADS)

    D'Onofrio, Alberto; Gandolfi, Alberto

    2010-12-01

    Tumor angiogenesis is a landmark of solid tumor development, but it is also directly relevant to chemotherapy. Indeed, the density and quality of neovessels may influence the effectiveness of therapies based on blood-borne agents. In this paper, we first define a deterministic model of antiproliferative chemotherapy in which the drug efficacy is a unimodal function of vessel density, and then show that under constant continuous-infusion therapy the tumor-vessel system may be multistable. However, the actual drug concentration profiles are affected by bounded, though possibly large, fluctuations. Through numerical simulations, we show that the tumor volume may undergo transitions to the higher equilibrium value induced by the bounded noise. In the case of periodically delivered bolus-based chemotherapy, we model the fluctuations due to time variability of both the drug clearance rate and the distribution volume, as well as those due to irregularities in drug delivery. We observed noise-induced transitions in the case of periodic delivery as well. By applying a time-dense schedule with a constant average delivered dose (metronomic scheduling), we observed that the transitions were more easily suppressed. Finally, we propose to interpret the above phenomena as an unexpected non-genetic kind of resistance to chemotherapy.

  5. Testing the Relation between the Local and Cosmic Star Formation Histories

    NASA Astrophysics Data System (ADS)

    Fields, Brian D.

    1999-04-01

    Recently, there has been great progress toward observationally determining the mean star formation history of the universe. When accurately known, the cosmic star formation rate could provide much information about Galactic evolution, if the Milky Way's star formation rate is representative of the average cosmic star formation history. A simple hypothesis is that our local star formation rate is proportional to the cosmic mean. In addition, to specify a star formation history, one must also adopt an initial mass function (IMF); typically it is assumed that the IMF is a smooth function that is constant in time. We show how to test directly the compatibility of all these assumptions by making use of the local (solar neighborhood) star formation record encoded in the present-day stellar mass function. Present data suggest that at least one of the following is false: (1) the local IMF is constant in time; (2) the local IMF is a smooth (unimodal) function; and/or (3) star formation in the Galactic disk was representative of the cosmic mean. We briefly discuss how to determine which of these assumptions fail, as well as improvements in observations that will sharpen this test.

  6. Measuring hearing in the harbor seal (Phoca vitulina): Comparison of behavioral and auditory brainstem response techniques

    NASA Astrophysics Data System (ADS)

    Wolski, Lawrence F.; Anderson, Rindy C.; Bowles, Ann E.; Yochem, Pamela K.

    2003-01-01

    Auditory brainstem response (ABR) and standard behavioral methods were compared by measuring in-air audiograms for an adult female harbor seal (Phoca vitulina). Behavioral audiograms were obtained using two techniques: the method of constant stimuli and the staircase method. Sensitivity was tested from 0.250 to 30 kHz. The seal showed good sensitivity from 6 to 12 kHz [best sensitivity 8.1 dB (re 20 μPa2.s) RMS at 8 kHz]. The staircase method yielded thresholds that were lower by 10 dB on average than the method of constant stimuli. ABRs were recorded at 2, 4, 8, 16, and 22 kHz and showed a similar best range (8-16 kHz). ABR thresholds averaged 5.7 dB higher than behavioral thresholds at 2, 4, and 8 kHz. ABRs were at least 7 dB lower at 16 kHz, and approximately 3 dB higher at 22 kHz. The better sensitivity of ABRs at higher frequencies could have reflected differences in the seal's behavior during ABR testing and/or bandwidth characteristics of test stimuli. These results agree with comparisons of ABR and behavioral methods performed in other recent studies and indicate that ABR methods represent a good alternative for estimating hearing range and sensitivity in pinnipeds, particularly when time is a critical factor and animals are untrained.

  7. Projections of Temperature-Attributable Premature Deaths in 209 U.S. Cities Using a Cluster-Based Poisson Approach

    NASA Technical Reports Server (NTRS)

    Schwartz, Joel D.; Lee, Mihye; Kinney, Patrick L.; Yang, Suijia; Mills, David; Sarofim, Marcus C.; Jones, Russell; Streeter, Richard; St. Juliana, Alexis; Peers, Jennifer; hide

    2015-01-01

    Background: A warming climate will affect future temperature-attributable premature deaths. This analysis is the first to project these deaths at a near national scale for the United States using city and month-specific temperature-mortality relationships. Methods: We used Poisson regressions to model temperature-attributable premature mortality as a function of daily average temperature in 209 U.S. cities by month. We used climate data to group cities into clusters and applied an Empirical Bayes adjustment to improve model stability and calculate cluster-based month-specific temperature-mortality functions. Using data from two climate models, we calculated future daily average temperatures in each city under Representative Concentration Pathway 6.0. Holding population constant at 2010 levels, we combined the temperature data and cluster-based temperature-mortality functions to project city-specific temperature-attributable premature deaths for multiple future years which correspond to a single reporting year. Results within the reporting periods are then averaged to account for potential climate variability and reported as a change from a 1990 baseline in the future reporting years of 2030, 2050 and 2100. Results: We found temperature-mortality relationships that vary by location and time of year. In general, the largest mortality response during hotter months (April - September) was in July in cities with cooler average conditions. The largest mortality response during colder months (October-March) was at the beginning (October) and end (March) of the period. Using data from two global climate models, we projected a net increase in premature deaths, aggregated across all 209 cities, in all future periods compared to 1990. However, the magnitude and sign of the change varied by cluster and city. Conclusions: We found increasing future premature deaths across the 209 modeled U.S. cities using two climate model projections, based on constant temperature-mortality relationships from 1997 to 2006 without any future adaptation. However, results varied by location, with some locations showing net reductions in premature temperature-attributable deaths with climate change.
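
    In a Poisson framework the expected daily deaths scale as exp(β·T) around a baseline, so a projected temperature change maps to a multiplicative change in temperature-attributable deaths. The sketch below shows that projection step for a single hypothetical city-month; the baseline rate, coefficient, reference temperature, and warming increment are all assumed values, not the study's cluster-specific estimates.

```python
import numpy as np

# hypothetical city/month-specific Poisson temperature-mortality relationship
beta = 0.04            # log relative risk per °C above the reference temperature
baseline_deaths = 5.0  # expected daily deaths at the reference temperature
t_ref = 22.0           # °C, reference temperature

def expected_deaths(daily_temp_c):
    """Expected daily deaths under a log-linear (Poisson) exposure-response."""
    return baseline_deaths * np.exp(beta * (np.asarray(daily_temp_c) - t_ref))

temps_baseline = np.array([24.0, 26.5, 28.0, 25.0])   # sample daily mean temperatures
temps_future = temps_baseline + 2.1                   # assumed climate-model warming

change = expected_deaths(temps_future).sum() - expected_deaths(temps_baseline).sum()
print(f"additional temperature-attributable deaths over these days ≈ {change:.2f}")
```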

  8. Estimation of uptake rate constants for PCB congeners accumulated by semipermeable membrane devices and brown trout (Salmo trutta)

    USGS Publications Warehouse

    Meadows, J.C.; Echols, K.R.; Huckins, J.N.; Borsuk, F.A.; Carline, R.F.; Tillitt, D.E.

    1998-01-01

    The triolein-filled semipermeable membrane device (SPMD) is a simple and effective method of assessing the presence of waterborne hydrophobic chemicals. Uptake rate constants for individual chemicals are needed to accurately relate the amounts of chemicals accumulated by the SPMD to dissolved water concentrations. Brown trout and SPMDs were exposed to PCB-contaminated groundwater in a spring for 28 days to calculate and compare uptake rates of specific PCB congeners by the two matrixes. Total PCB congener concentrations in water samples from the spring were assessed and corrected for estimated total organic carbon (TOC) sorption to estimate total dissolved concentrations. Whole and dissolved concentrations averaged 4.9 and 3.7 µg/L, respectively, during the exposure. Total concentrations of PCBs in fish rose from 0.06 to 118.3 µg/g during the 28-day exposure, while concentrations in the SPMD rose from 0.03 to 203.4 µg/g. Uptake rate constants (k1) estimated for SPMDs and brown trout were very similar, with k1 values for SPMDs ranging from one to two times those of the fish. The pattern of congener uptake by the fish and SPMDs was also similar. The rates of uptake generally increased or decreased with increasing Kow, depending on the assumption of presence or absence of TOC.
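
    During the linear (integrative) uptake phase, the amount accumulated by an SPMD or a fish grows roughly as C_sampler(t) ≈ C_0 + k1·C_w·t, so an uptake rate constant can be estimated directly from the start and end concentrations. The sketch below applies that relation to the bulk totals quoted in the abstract purely for illustration; it is not the congener-specific calculation performed in the study.

```python
# Linear-uptake estimate of the rate constant k1: C_sampler(t) ≈ C_0 + k1 * C_w * t
t_days = 28.0   # exposure duration
c_w = 3.7       # µg/L, average dissolved PCB concentration (from the abstract)

k1_spmd = (203.4 - 0.03) / (c_w * t_days)   # L g^-1 d^-1 (µg/g per µg/L per day)
k1_fish = (118.3 - 0.06) / (c_w * t_days)
print(f"k1(SPMD) ≈ {k1_spmd:.2f}, k1(fish) ≈ {k1_fish:.2f} L g^-1 d^-1, "
      f"ratio ≈ {k1_spmd / k1_fish:.1f}")
# The ratio falls between one and two, consistent with the abstract's statement.
```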

  9. Frequency distributions and correlations of solar X-ray flare parameters

    NASA Technical Reports Server (NTRS)

    Crosby, Norma B.; Aschwanden, Markus J.; Dennis, Brian R.

    1993-01-01

    Frequency distributions of flare parameters are determined from over 12,000 solar flares. The flare duration, the peak counting rate, the peak hard X-ray flux, the total energy in electrons, and the peak energy flux in electrons are among the parameters studied. Linear regression fits, as well as the slopes of the frequency distributions, are used to determine the correlations between these parameters. The relationship between the variations of the frequency distributions and the solar activity cycle is also investigated. Theoretical models for the frequency distribution of flare parameters are dependent on the probability of flaring and the temporal evolution of the flare energy build-up. The results of this study are consistent with stochastic flaring and exponential energy build-up. The average build-up time constant is found to be 0.5 times the mean time between flares.

  10. Anticipating the effects of gravity when intercepting moving objects: differentiating up and down based on nonvisual cues.

    PubMed

    Senot, Patrice; Zago, Myrka; Lacquaniti, Francesco; McIntyre, Joseph

    2005-12-01

    Intercepting an object requires a precise estimate of its time of arrival at the interception point (time to contact or "TTC"). It has been proposed that knowledge about gravitational acceleration can be combined with first-order, visual-field information to provide a better estimate of TTC when catching falling objects. In this experiment, we investigated the relative role of visual and nonvisual information on motor-response timing in an interceptive task. Subjects were immersed in a stereoscopic virtual environment and asked to intercept with a virtual racket a ball falling from above or rising from below. The ball moved with different initial velocities and could accelerate, decelerate, or move at a constant speed. Depending on the direction of motion, the acceleration or deceleration of the ball could therefore be congruent or not with the acceleration that would be expected due to the force of gravity acting on the ball. Although the best success rate was observed for balls moving at a constant velocity, we systematically found a cross-effect of ball direction and acceleration on success rate and response timing. Racket motion was triggered on average 25 ms earlier when the ball fell from above than when it rose from below, whatever the ball's true acceleration. As visual-flow information was the same in both cases, this shift indicates an influence of the ball's direction relative to gravity on response timing, consistent with the anticipation of the effects of gravity on the flight of the ball.

  11. High-throughput sequencing of complete human mtDNA genomes from the Caucasus and West Asia: high diversity and demographic inferences.

    PubMed

    Schönberg, Anna; Theunert, Christoph; Li, Mingkun; Stoneking, Mark; Nasidze, Ivan

    2011-09-01

    To investigate the demographic history of human populations from the Caucasus and surrounding regions, we used high-throughput sequencing to generate 147 complete mtDNA genome sequences from random samples of individuals from three groups from the Caucasus (Armenians, Azeri and Georgians), and one group each from Iran and Turkey. Overall diversity is very high, with 144 different sequences that fall into 97 different haplogroups found among the 147 individuals. Bayesian skyline plots (BSPs) of population size change through time show a population expansion around 40-50 kya, followed by a constant population size, and then another expansion around 15-18 kya for the groups from the Caucasus and Iran. The BSP for Turkey differs the most from the others, with an increase from 35 to 50 kya followed by a prolonged period of constant population size, and no indication of a second period of growth. An approximate Bayesian computation approach was used to estimate divergence times between each pair of populations; the oldest divergence times were between Turkey and the other four groups from the South Caucasus and Iran (~400-600 generations), while the divergence time of the three Caucasus groups from each other was comparable to their divergence time from Iran (average of ~360 generations). These results illustrate the value of random sampling of complete mtDNA genome sequences that can be obtained with high-throughput sequencing platforms.

  12. Time-dependent cell disintegration kinetics in lung tumors after irradiation

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alexei V.; Palta, Jatinder J.; Nagata, Yasushi

    2008-05-01

    We study the time-dependent disintegration kinetics of tumor cells that did not survive radiotherapy treatment. To evaluate the cell disintegration rate after irradiation, we studied the volume changes of solitary lung tumors after stereotactic radiotherapy. The analysis is performed using two approximations: (1) tumor volume is a linear function of the total cell number in the tumor and (2) the cell disintegration rate is governed by exponential decay with constant risk, which is defined by the initial cell number and a half-life T1/2. The half-life T1/2 is determined using a least-squares fit to the clinical data on lung tumor size variation with time after stereotactic radiotherapy. We show that the tumor volume variation after stereotactic radiotherapy of solitary lung tumors can be approximated by an exponential function. A small constant component in the volume variation does not change with time; however, this component may be the residual irregular density due to radiation fibrosis and was, therefore, subtracted from the total volume variation in our computations. Using computerized fitting of the exponential function to the clinical data for selected patients, we have determined that the average half-life T1/2 of cell disintegration is 28.2 days for squamous cell carcinoma and 72.4 days for adenocarcinoma. This model is needed for simulating the tumor volume variation during radiotherapy, which may be important for time-dependent treatment planning of proton therapy that is sensitive to density variations.
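
    The fitting model described here is an exponential decay of the disintegrating cell population on top of a constant residual component, V(t) = V_res + V_0·exp(−ln2·t/T1/2). A sketch of that least-squares fit on synthetic follow-up data is shown below; the volumes and scan times are assumptions for illustration, not patient data.

```python
import numpy as np
from scipy.optimize import curve_fit

def tumor_volume(t_days, v_res, v0, t_half):
    """Constant residual component plus exponentially disintegrating cell volume."""
    return v_res + v0 * np.exp(-np.log(2.0) * t_days / t_half)

# hypothetical follow-up volume measurements (cm^3) after stereotactic radiotherapy
t = np.array([0, 30, 60, 90, 150, 210, 300], dtype=float)
v = np.array([12.0, 6.3, 3.6, 2.4, 1.5, 1.2, 1.1])

popt, _ = curve_fit(tumor_volume, t, v, p0=(1.0, 10.0, 40.0))
print(f"residual volume ≈ {popt[0]:.2f} cm^3, half-life T1/2 ≈ {popt[2]:.1f} days")
```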

  13. Distribution Development for STORM Ingestion Input Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, John

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences of a release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Average Crop Yield changed from a constant value of 3.783 kg edible/m² to a normal distribution with a mean of 3.23 kg edible/m² and a standard deviation of 0.442 kg edible/m². The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq crop/kg)/(Bq soil/kg) to a lognormal distribution with a geometric mean of 3.38e-4 (Bq crop/kg)/(Bq soil/kg) and a standard deviation of 3.33.
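
    Replacing a constant input with a sampled one amounts to drawing each realization's value from the fitted distribution instead of reusing the point estimate. The sketch below draws the four converted parameters using the distribution parameters listed above; treating the lognormal "standard deviation of 3.33" as a geometric standard deviation is my assumption, not a statement from the report.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000   # number of STORM-style realizations

consumption_rate = rng.normal(102.96, 2.65, n)      # kg/yr
avg_crop_yield   = rng.normal(3.23, 0.442, n)       # kg edible / m^2
cropland_ratio   = rng.normal(0.0312, 0.00292, n)   # fraction of land use
# numpy's lognormal takes the mean/sigma of the underlying normal: ln(GM) and ln(GSD)
crop_uptake = rng.lognormal(np.log(3.38e-4), np.log(3.33), n)  # (Bq_crop/kg)/(Bq_soil/kg)

# arithmetic vs geometric mean of the sampled uptake factor
print(crop_uptake.mean(), np.exp(np.log(crop_uptake).mean()))
```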

  14. Investigation of dynamic characteristics of a turbine-propeller engine

    NASA Technical Reports Server (NTRS)

    Oppenheimer, Frank L; Jacques, James R

    1951-01-01

    Time constants that characterize engine speed response of a turbine-propeller engine over the cruising speed range for various values of constant fuel flow and constant blade angle were obtained both from steady-state characteristics and from transient operation. Magnitude of speed response to changes in fuel flow and blade angle was investigated and is presented in the form of gain factors. Results indicate that at any given value of speed in the engine cruising speed range, time constants obtained both from steady-state characteristics and from transient operation agree satisfactorily for any given constant fuel flow, whereas time constants obtained from transient operation exceed time constants obtained from steady-state characteristics by approximately 14 percent for any given blade angle.

  15. A Discrete-Time Average Model Based Predictive Control for Quasi-Z-Source Inverter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yushan; Abu-Rub, Haitham; Xue, Yaosuo

    A discrete-time average model-based predictive control (DTA-MPC) is proposed for a quasi-Z-source inverter (qZSI). As a single-stage inverter topology, the qZSI regulates the dc-link voltage and the ac output voltage through the shoot-through (ST) duty cycle and the modulation index. Several feedback strategies have been dedicated to producing these two control variables, among which the most popular are proportional–integral (PI)-based control and conventional model-predictive control (MPC). However, in the former there are tradeoffs between fast response and stability; the latter is robust, but at the cost of a high calculation burden and a variable switching frequency. Moreover, they require an elaborate design or fine tuning of controller parameters. The proposed DTA-MPC predicts future behaviors of the ST duty cycle and modulation signals based on the established discrete-time average model of the quasi-Z-source (qZS) inductor current, the qZS capacitor voltage, and the load currents. The prediction actions are applied to the qZSI modulator in the next sampling instant, without the need to design other controller parameters. A constant switching frequency and significantly reduced computations are achieved with high performance. Transient responses and steady-state accuracy of the qZSI system under the proposed DTA-MPC are investigated and compared with PI-based control and conventional MPC. Simulation and experimental results verify the effectiveness of the proposed approach for the qZSI.

  16. A Discrete-Time Average Model Based Predictive Control for Quasi-Z-Source Inverter

    DOE PAGES

    Liu, Yushan; Abu-Rub, Haitham; Xue, Yaosuo; ...

    2017-12-25

    A discrete-time average model-based predictive control (DTA-MPC) is proposed for a quasi-Z-source inverter (qZSI). As a single-stage inverter topology, the qZSI regulates the dc-link voltage and the ac output voltage through the shoot-through (ST) duty cycle and the modulation index. Several feedback strategies have been dedicated to producing these two control variables, among which the most popular are proportional–integral (PI)-based control and conventional model-predictive control (MPC). However, in the former there are tradeoffs between fast response and stability; the latter is robust, but at the cost of a high calculation burden and a variable switching frequency. Moreover, they require an elaborate design or fine tuning of controller parameters. The proposed DTA-MPC predicts future behaviors of the ST duty cycle and modulation signals based on the established discrete-time average model of the quasi-Z-source (qZS) inductor current, the qZS capacitor voltage, and the load currents. The prediction actions are applied to the qZSI modulator in the next sampling instant, without the need to design other controller parameters. A constant switching frequency and significantly reduced computations are achieved with high performance. Transient responses and steady-state accuracy of the qZSI system under the proposed DTA-MPC are investigated and compared with PI-based control and conventional MPC. Simulation and experimental results verify the effectiveness of the proposed approach for the qZSI.
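
    The controller in this work is specific to the qZS network equations; the sketch below only illustrates the generic one-step-ahead idea behind a discrete-time average-model predictive controller applied through a fixed-frequency modulator. The linear state model (matrices A, B), the candidate duty-cycle grid, and the cost are placeholders I have assumed, not the qZSI model or the paper's analytical control law.

```python
import numpy as np

# placeholder discrete-time average model x[k+1] = A x[k] + B u[k] (not the qZSI equations)
A = np.array([[0.95, 0.05], [-0.05, 0.95]])
B = np.array([[0.10], [0.02]])

def dta_mpc_step(x, x_ref, candidates):
    """Pick the control (e.g. an ST duty cycle) whose one-step-ahead average-model
    prediction lands closest to the reference, then hand it to the modulator."""
    best_u, best_cost = None, np.inf
    for u in candidates:
        x_pred = A @ x + B @ np.array([u])      # predict next-sample average state
        cost = np.sum((x_pred - x_ref) ** 2)    # simple quadratic tracking cost
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

x = np.array([0.0, 0.0])
x_ref = np.array([1.0, 0.2])
duty = dta_mpc_step(x, x_ref, candidates=np.linspace(0.0, 0.45, 46))
print(f"selected duty cycle ≈ {duty:.2f}")
```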

  17. Very high pressure liquid chromatography using core-shell particles: quantitative analysis of fast gradient separations without post-run times.

    PubMed

    Stankovich, Joseph J; Gritti, Fabrice; Stevenson, Paul G; Beaver, Lois A; Guiochon, Georges

    2014-01-17

    Five methods for controlling the mobile phase flow rate for gradient elution analyses using very high pressure liquid chromatography (VHPLC) were tested to determine thermal stability of the column during rapid gradient separations. To obtain rapid separations, instruments are operated at high flow rates and high inlet pressure leading to uneven thermal effects across columns and additional time needed to restore thermal equilibrium between successive analyses. The purpose of this study is to investigate means to minimize thermal instability and obtain reliable results by measuring the reproducibility of the results of six replicate gradient separations of a nine component RPLC standard mixture under various experimental conditions with no post-run times. Gradient separations under different conditions were performed: constant flow rates, two sets of constant pressure operation, programmed flow constant pressure operation, and conditions which theoretically should yield a constant net heat loss at the column's wall. The results show that using constant flow rates, programmed flow constant pressures, and constant heat loss at the column's wall all provide reproducible separations. However, performing separations using a high constant pressure with programmed flow reduces the analysis time by 16% compared to constant flow rate methods. For the constant flow rate, programmed flow constant pressure, and constant wall heat experiments no equilibration time (post-run time) was required to obtain highly reproducible data. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Object motion computation for the initiation of smooth pursuit eye movements in humans.

    PubMed

    Wallace, Julian M; Stone, Leland S; Masson, Guillaume S

    2005-04-01

    Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements are extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For type II diamond (where the direction of true object motion is dramatically different from the vector average of the 1-dimensional edge motions, i.e., VA not equal IOC = 2DFT), the ocular tracking is initiated in the vector average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated with the introduction of more 2D information to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
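
    For a translating edge, each 1-D motion detector only constrains the velocity component along the edge normal, so the vector average of those normal components generally differs from the true (IOC) object velocity unless the edge orientations are symmetric about the motion direction. The sketch below contrasts the two estimates for a type II-like configuration; the edge normals and the object velocity are illustrative assumptions, not the stimuli used in the study.

```python
import numpy as np

def normal_component(v_obj, normal_angle_deg):
    """1-D measurement: projection of the object velocity onto an edge normal."""
    n = np.array([np.cos(np.radians(normal_angle_deg)), np.sin(np.radians(normal_angle_deg))])
    return np.dot(v_obj, n) * n

def intersection_of_constraints(speeds, normals):
    """Solve for the 2-D velocity consistent with all 1-D normal-speed measurements."""
    A = np.vstack(normals)
    return np.linalg.lstsq(A, np.array(speeds), rcond=None)[0]

v_true = np.array([1.0, 0.0])        # rightward object motion (assumed)
normal_angles = [60.0, 80.0]         # both edge normals on the same side: type II-like

comps = [normal_component(v_true, a) for a in normal_angles]
va = np.mean(comps, axis=0)                               # vector average estimate (biased)
unit_normals = [c / np.linalg.norm(c) for c in comps]
ioc = intersection_of_constraints([np.linalg.norm(c) for c in comps], unit_normals)

print("vector average:", va, " IOC:", ioc)   # VA points well above horizontal; IOC recovers v_true
```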

  19. [Double Endobutto reconstituting coracoclavicular ligament combined with repairing acromioclavicular ligament at stage I for the treatment of acromioclavicular dislocation with Rockwood type III - V].

    PubMed

    Hu, Wen-yue; Yu, Chong; Huang, Zhong-ming; Han, Lei

    2015-06-01

    To explore clinical efficacy of double Endobutto reconstituting coracoclavicular ligament combined with repairing acromioclavicular ligament in stage I in treating acromioclavicular dislocation with Rockwood type III - V . From January 2010 to September 2013, 56 patients with Rockwood type III - V acromioclavicular dislocation were treated by operation, including 20 males and 36 femlaes, aged from 32 to 52 years old with an average of 38.5 years old. Twenty-five patients were on the left side and 31 cases on the right side. The time from injury to operation was from 3 to 14 days, averaged 7 days. All patients were diagnosed as acromioclavicular dislocation with Rockwood type III - V, and double Endobutto were used to reconstituting coracoclavicular ligament, line metal anchors were applied for repairing acromioclavicular ligament. Postoperative complications were observed, Karlsson and Constant-Murley evaluation standard were used to evaluate clinical effects. All patients were followed up from 8 to 24 months with average of 11 months. According to Karlsson evaluation standard at 6 months after operation,42 cases were grade A, 13 were grade B and 1 was grade C. Constant-Murley score were improved from (42.80±5.43) before operation to (91.75±4.27) at 6 months after operation. All items at 6 months after operation were better than that of preoperative items. Forty-eight patients got excellent results, 7 were moderate and only 1 with bad result. No shoulder joint adhesion, screw loosening or breakage were occurred during following up. Double Endobutto reconstituting coracoclavicular ligament combined with repairing acromioclavicular ligament in stage I for the treatment of acromioclavicular dislocation with Rockwood type III - V could obtain early staisfied clinical effects, and benefit for early recovery of shoulder joint function.

  20. Ultrasonic Characterization of Superhard Material: Osmium Diboride

    NASA Astrophysics Data System (ADS)

    Yadawa, P. K.

    2012-12-01

    Higher order elastic constants have been calculated in hexagonal structured superhard material OsB2 at room temperature following the interaction potential model. The temperature variation of the ultrasonic velocities is evaluated along different angles with unique axis of the crystal using the second order elastic constants. The ultrasonic velocity decreases with the temperature along particular orientation with the unique axis. Temperature variation of the thermal relaxation time and Debye average velocities are also calculated along the same orientation. The temperature dependency of the ultrasonic properties is discussed in correlation with elastic, thermal and electrical properties. It has been found that the thermal conductivity is the main contributor to the behaviour of ultrasonic attenuation as a function of temperature and the responsible cause of attenuation is phonon-phonon interaction. The mechanical properties of OsB2 at low temperature are better than at high temperature, because at low temperature it has low ultrasonic velocity and ultrasonic attenuation. Superhard material OsB2 has many industrial applications, such as abrasives, cutting tools and hard coatings.

  1. The vanishing limit of the square-well fluid: The adhesive hard-sphere model as a reference system

    NASA Astrophysics Data System (ADS)

    Largo, J.; Miller, M. A.; Sciortino, F.

    2008-04-01

    We report a simulation study of the gas-liquid critical point for the square-well potential, for values of well width δ as small as 0.005 times the particle diameter σ. For small δ, the reduced second virial coefficient at the critical point B2*c is found to depend linearly on δ. The observed weak linear dependence is not sufficient to produce any significant observable effect if the critical temperature Tc is estimated via a constant B2*c assumption, due to the highly nonlinear transformation between B2*c and Tc. This explains the previously observed validity of the law of corresponding states. The critical density ρc is also found to be constant when measured in units of the cube of the average distance between two bonded particles (1+0.5δ)σ. The possibility of describing the δ →0 dependence with precise functional forms provides improved accurate estimates of the critical parameters of the adhesive hard-sphere model.

  2. The vanishing limit of the square-well fluid: the adhesive hard-sphere model as a reference system.

    PubMed

    Largo, J; Miller, M A; Sciortino, F

    2008-04-07

    We report a simulation study of the gas-liquid critical point for the square-well potential, for values of well width delta as small as 0.005 times the particle diameter sigma. For small delta, the reduced second virial coefficient at the critical point B2*c is found to depend linearly on delta. The observed weak linear dependence is not sufficient to produce any significant observable effect if the critical temperature Tc is estimated via a constant B2*c assumption, due to the highly nonlinear transformation between B2*c and Tc. This explains the previously observed validity of the law of corresponding states. The critical density rho c is also found to be constant when measured in units of the cube of the average distance between two bonded particles (1+0.5 delta)sigma. The possibility of describing the delta-->0 dependence with precise functional forms provides improved accurate estimates of the critical parameters of the adhesive hard-sphere model.
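
    For the square-well fluid the reduced second virial coefficient has a closed form, B2* = 1 − (λ³ − 1)(e^{1/T*} − 1) with λ = 1 + δ, so a constant-B2*c assumption maps directly onto an estimate of the critical temperature for each well width. A sketch of that mapping is below; the B2*c value used is the Vliegenthart–Lekkerkerker-style round number −1.5, an assumption rather than the coefficient fitted in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def b2_star(t_star, delta):
    """Reduced second virial coefficient of the square-well fluid (B2 / b0, b0 = 2*pi*sigma^3/3)."""
    lam3 = (1.0 + delta) ** 3
    return 1.0 - (lam3 - 1.0) * (np.exp(1.0 / t_star) - 1.0)

def tc_from_constant_b2(delta, b2c=-1.5):
    """Estimate Tc* by requiring B2*(Tc*) = B2*c (constant-B2*c assumption)."""
    return brentq(lambda t: b2_star(t, delta) - b2c, 0.05, 10.0)

for delta in (0.5, 0.1, 0.05, 0.01, 0.005):
    print(f"delta = {delta:6.3f}  ->  Tc* ≈ {tc_from_constant_b2(delta):.4f}")
```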

  3. Site-specific acid-base properties of pholcodine and related compounds.

    PubMed

    Kovács, Z; Hosztafi, S; Noszál, B

    2006-11-01

    The acid-base properties of pholcodine, a cough-depressant agent, and related compounds including its metabolites were studied by 1H NMR-pH titrations and are characterised in terms of macroscopic and microscopic protonation constants. New N-methylated derivatives were also synthesized in order to quantitate site- and nucleus-specific protonation shifts and to unravel the microscopic acid-base equilibria. The piperidine nitrogen was found to be 38 and 400 times more basic than its morpholine counterpart in pholcodine and norpholcodine, respectively. The protonation data show that the pholcodine molecule bears an average positive charge of 1.07 at physiological pH, preventing it from entering the central nervous system, a plausible reason for its lack of analgesic or addictive properties. The protonation constants of pholcodine and its derivatives are interpreted by comparison with related molecules of pharmaceutical interest. The pH-dependent relative concentrations of the variously protonated forms of pholcodine and morphine are depicted in distribution diagrams.
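
    The average positive charge at a given pH follows from the macroscopic protonation constants through the usual speciation (Bjerrum) formula. The sketch below evaluates it for a generic diprotic base; the two log K values are hypothetical stand-ins, not the constants reported for pholcodine.

```python
def average_charge(ph, log_k1, log_k2):
    """Average number of bound protons (= average positive charge for a neutral diprotic base).

    log_k1, log_k2 are stepwise macroscopic protonation constants (log10 values).
    """
    h = 10.0 ** (-ph)
    k1, k2 = 10.0 ** log_k1, 10.0 ** log_k2
    # relative populations of the 0-, 1- and 2-proton forms
    p0, p1, p2 = 1.0, k1 * h, k1 * k2 * h * h
    return (p1 + 2.0 * p2) / (p0 + p1 + p2)

# hypothetical protonation constants for a dibasic molecule with two amine sites
print(f"average charge at pH 7.4 ≈ {average_charge(7.4, log_k1=9.3, log_k2=6.3):.2f}")
```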

  4. The binding of the primary water of hydration to nucleosides, CsDNA and potassium hyaluronate

    NASA Astrophysics Data System (ADS)

    Lukan, A. M.; Cavanaugh, D.; Whitson, K. B.; Marlowe, R. L.; Lee, S. A.; Anthony, L.; Rupprecht, A.; Mohan, V.

    1998-03-01

    Differential scanning calorimetry (DSC) has been used to study the eight nucleosides, CsDNA and KHA hydrated at 59% relative humidity. Thermograms were measured between 25 and 180 °C for scan rates of 1, 2, 5, 10 and 20 K/min. A broad endothermic transition (due to the desorption of the water) near 80 °C was observed for all runs. The average enthalpy of desorption per water molecule was evaluated from the area under the peak. A Kissinger analysis of these data yielded the net activation energy for desorption. Both parameters were very similar for the two biopolymers. Rayleigh scattering of Mössbauer radiation (RSMR) data [G. Albanese et al., Hyperfine Interact. 95, 97 (1995)] were analyzed via a simple harmonic oscillator model to evaluate the effective force constant of the water bound to the biopolymer. This analysis suggests that the effective force constant of water bound to HA is much larger (about 5 times) than for water bound to DNA.
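
    A Kissinger analysis extracts the activation energy from how the desorption peak temperature shifts with heating rate, via ln(β/Tp²) = const − Ea/(R·Tp), i.e. a linear fit of ln(β/Tp²) against 1/Tp. The sketch below performs that fit; the peak temperatures assigned to the five scan rates are hypothetical, not the measured thermograms.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# heating rates used in the abstract and hypothetical desorption peak temperatures
beta = np.array([1.0, 2.0, 5.0, 10.0, 20.0])             # K/min
t_peak = np.array([345.0, 351.0, 360.0, 368.0, 377.0])   # K (assumed)

# Kissinger plot: ln(beta / Tp^2) versus 1/Tp has slope -Ea/R
x = 1.0 / t_peak
y = np.log(beta / t_peak**2)
slope, intercept = np.polyfit(x, y, 1)
print(f"apparent activation energy Ea ≈ {-slope * R / 1000.0:.0f} kJ/mol")
```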

  5. The evolving Planck mass in classically scale-invariant theories

    NASA Astrophysics Data System (ADS)

    Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.

    2017-04-01

    We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.

  6. Mathematical analysis of running performance and world running records.

    PubMed

    Péronnet, F; Thibault, G

    1989-07-01

    The objective of this study was to develop an empirical model relating human running performance to some characteristics of metabolic energy-yielding processes using A, the capacity of anaerobic metabolism (J/kg); MAP, the maximal aerobic power (W/kg); and E, the reduction in peak aerobic power with the natural logarithm of race duration T, when T > T_MAP = 420 s. Accordingly, the model developed describes the average power output P_T (W/kg) sustained over any T as P_T = (S/T)(1 - e^(-T/k2)) + (1/T) ∫_0^T [BMR + B(1 - e^(-t/k1))] dt, where S = A and B = MAP - BMR (basal metabolic rate) when T < T_MAP; and S = A + [A f ln(T/T_MAP)] and B = (MAP - BMR) + [E ln(T/T_MAP)] when T > T_MAP; k1 = 30 s and k2 = 20 s are time constants describing the kinetics of aerobic and anaerobic metabolism, respectively, at the beginning of exercise; f is a constant describing the reduction in the amount of energy provided from anaerobic metabolism with increasing T; and t is the time from the onset of the race. This model accurately estimates actual power outputs sustained over a wide range of events, e.g., average absolute error between actual and estimated T for men's 1987 world records from 60 m to the marathon = 0.73%. In addition, satisfactory estimations of the metabolic characteristics of world-class male runners were made as follows: A = 1,658 J/kg; MAP = 83.5 ml O2·kg⁻¹·min⁻¹; 83.5% MAP sustained over the marathon distance. Application of the model to analysis of the evolution of A, MAP, and E, and of the progression of men's and women's world records over the years, is presented.
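
    Because the integral of the aerobic term is elementary, the average power can be evaluated in closed form: P_T = (S/T)(1 − e^(−T/k2)) + BMR + B[1 − (k1/T)(1 − e^(−T/k1))]. A sketch of both branches (T below and above T_MAP) is below; A, MAP, k1, k2 and T_MAP are taken from the abstract, while BMR, E, the anaerobic-decline constant, and the ~20.9 J per ml O2 conversion of MAP to W/kg are my assumptions.

```python
import numpy as np

# model constants from the abstract
K1, K2, T_MAP = 30.0, 20.0, 420.0          # s
A = 1658.0                                  # J/kg, anaerobic capacity (world-class estimate)
MAP = 83.5 * 20.9 / 60.0                    # W/kg, from 83.5 ml O2 kg^-1 min^-1 (assumed conversion)
BMR = 1.2                                   # W/kg, assumed basal metabolic rate
E = -1.0                                    # W/kg per ln(T/T_MAP), assumed aerobic decline
AF = -0.01 * A                              # assumed decline of the anaerobic contribution (A*f)

def average_power(T):
    """Average power output (W/kg) sustainable over a race of duration T seconds."""
    if T < T_MAP:
        S, B = A, MAP - BMR
    else:
        S = A + AF * np.log(T / T_MAP)
        B = (MAP - BMR) + E * np.log(T / T_MAP)
    anaerobic = (S / T) * (1.0 - np.exp(-T / K2))
    aerobic = BMR + B * (1.0 - (K1 / T) * (1.0 - np.exp(-T / K1)))
    return anaerobic + aerobic

for T in (60.0, 240.0, 420.0, 3600.0):
    print(f"T = {T:6.0f} s -> P ≈ {average_power(T):5.1f} W/kg")
```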

  7. RECOVERY OF ROD PHOTORESPONSES IN ABCR-DEFICIENT MICE

    PubMed Central

    Pawar, Ambarish S.; Qtaishat, Nasser M.; Little, Deborah M.; Pepperberg, David R.

    2010-01-01

    Purpose ABCR protein in the rod outer segment is thought to facilitate movement of the all-trans retinal photoproduct of rhodopsin bleaching out of the disk lumen. We investigated the extent to which ABCR deficiency affects post-bleach recovery of the rod photoresponse in ABCR-deficient (abcr−/−) mice. Methods Electroretinographic (ERG) a-wave responses were recorded from abcr−/− mice and two control strains. Using a bright probe flash, we examined the course of rod recovery following fractional rhodopsin bleaches of ~10−6, ~3×10−5, ~0.03 and ~0.30–0.40. Results Dark-adapted abcr−/− mice and controls exhibited similar normalized near-peak amplitudes of the paired-flash-ERG-derived, weak-flash response. Response recovery following ~10−6 bleaching exhibited an average exponential time constant of 319, 171 and 213 ms, respectively, in the abcr−/− and the two control strains. Recovery time constants determined for ~3×10−5 bleaching did not differ significantly among strains. However, those determined for the ~0.03 bleach indicated significantly faster recovery in abcr−/− (2.34 ± 0.74 min) than in the controls (5.36 ± 2.20 min, and 5.92 ± 2.44 min). Following ~0.30–0.40 bleaching, the initial recovery in the abcr−/− was on average faster than in controls. Conclusions By comparison with controls, abcr−/− mice exhibit faster rod recovery following a bleach of ~0.03. The data suggest that ABCR in normal rods may directly or indirectly prolong all-trans retinal clearance from the disk lumen over a significant bleaching range, and that the essential function of ABCR may be to promote the clearance of residual amounts of all-trans retinal that remain in the disks long after bleaching. PMID:18263807

  8. Management of acute unstable distal clavicle fracture with a modified coracoclavicular stabilization technique using a bidirectional coracoclavicular loop system.

    PubMed

    Kanchanatawan, Wichan; Wongthongsalee, Ponrachai

    2016-02-01

    Fracture of the distal clavicle is not uncommon. Despite the vast literature available for the management of this fracture, there is no consensus regarding the gold standard treatment for this fracture. To assess the clinical and radiographic outcomes and complications of acute unstable distal clavicle fracture when treated by a modified coracoclavicular stabilization technique using a bidirectional coracoclavicular loop system. Thirty-nine patients (32 males, 7 females) with acute unstable distal clavicle fractures treated by modified coracoclavicular stabilization using the surgical technique of bidirectional coracoclavicular (CC) loops seated behind the coracoacromial (CA) ligament were retrospectively reviewed. Mean follow-up time was 35.7 months (range 24-47 months). The outcomes measured included union rate, union time, CC distances when compared to the patients' uninjured shoulders, and the Constant and ASES shoulder scores, which were evaluated 6 months after surgery. All fractures displayed clinical union within 13 weeks postoperatively. The mean union time was 9.2 weeks (range 7-13 weeks). At the time of union, the CC distances on the affected shoulders were on average 0.9 mm (range 0-1.6 mm) longer than the unaffected shoulders. At 6 months after surgery, the Constant and ASES scores were on average 93.4 (72-100) and 91.5 (75-100), respectively. No complications related to the fixation loops, musculocutaneous nerve injuries, or fractures of coracoid or clavicle were recorded. One case of surgical wound dehiscence was observed due to superficial infection. Enlargement of the clavicle drill hole without migration of the buttons was observed in 9 out of 16 cases at a follow-up time of at least 30 months after the original operation. Modified CC stabilization using bidirectional CC loops seated behind the CA ligament is a simple surgical technique that naturally restores stability to the distal clavicle fracture. It also produces predictable outcomes, a high union rate, good to excellent shoulder function, and a low complication rate. The buttons and suture loops were routinely removed in a second operation in order to prevent late stress fracture of the clavicle.

  9. New Reduced Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2004-01-01

    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed for use in numerical combustion codes, such as the National Combustor Code (NCC) or even the simple FORTRAN codes being developed at Glenn. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch between the two steps is based on a water concentration of 1×10⁻²⁰ moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times for smaller water concentrations. It gives the average chemical kinetic time as a function of the initial overall fuel-air ratio, the initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used at higher water concentrations, gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). These simple correlations are then compared with the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx were obtained for Jet-A fuel and methane, with and without water injection, up to water mass loadings of 2:1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium with Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of the overall equivalence ratio, the water-to-fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), the equivalence ratio, the water-to-fuel mass ratio, and pressure.
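
    The essential control flow of the two-step method is a switch on the water mole concentration: below the 1×10⁻²⁰ mol/cc threshold the time-averaged correlation (step one) is used, above it the instantaneous correlation (step two). A sketch of that dispatch is shown below; both correlation functions are hypothetical placeholders, since the fitted coefficients are not reproduced in the abstract.

```python
WATER_SWITCH = 1.0e-20  # mol/cc, switch between the two correlation steps

def tau_time_averaged(phi0, water_fuel_ratio0, temperature, pressure):
    """Step one: hypothetical time-averaged chemical-kinetic-time correlation
    in terms of the *initial* fuel-air ratio and water loading (placeholder form)."""
    return 1.0e-4 * (1.0 / phi0) * (1.0 + water_fuel_ratio0) * (2000.0 / temperature) ** 2 / pressure

def tau_instantaneous(fuel_conc, water_conc, temperature, pressure):
    """Step two: hypothetical instantaneous correlation in terms of the current
    fuel and water mole concentrations (placeholder form)."""
    return 1.0e-12 * (1.0 + water_conc / 1.0e-8) / (fuel_conc * pressure) * (2500.0 / temperature) ** 2

def chemical_kinetic_time(state):
    """Dispatch between the two steps based on the local water concentration."""
    if state["water_conc"] < WATER_SWITCH:
        return tau_time_averaged(state["phi0"], state["wf0"], state["T"], state["p"])
    return tau_instantaneous(state["fuel_conc"], state["water_conc"], state["T"], state["p"])

state = {"water_conc": 5.0e-9, "fuel_conc": 2.0e-8, "phi0": 0.6, "wf0": 1.0, "T": 1800.0, "p": 10.0}
print(f"chemical kinetic time ≈ {chemical_kinetic_time(state):.3e} s")
```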

  10. School Attendance: Focusing on Engagement and Re-Engagement. Practice Notes

    ERIC Educational Resources Information Center

    Center for Mental Health in Schools at UCLA, 2011

    2011-01-01

    Every student absence jeopardizes the ability of students to succeed at school and of schools to achieve their mission. School attendance is a constant concern in schools. Average daily attendance rates are a common determinant of school funding, so schools funded on the basis of average daily attendance have fewer resources to do the job. Students who…

  11. Stable plume rise in a shear layer.

    PubMed

    Overcamp, Thomas J

    2007-03-01

    Solutions are given for plume rise assuming a power-law wind speed profile in a stably stratified layer for point and finite sources with initial vertical momentum and buoyancy. For a constant wind speed, these solutions simplify to the conventional plume rise equations in a stable atmosphere. In a shear layer, the point of maximum rise occurs further downwind and is slightly lower compared with the plume rise with a constant wind speed equal to the wind speed at the top of the stack. If the predictions with shear are compared with predictions for an equivalent average wind speed over the depth of the plume, the plume rise with shear is higher than plume rise with an equivalent average wind speed.
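
    For the constant-wind-speed limit mentioned here, the conventional final plume rise in a stably stratified atmosphere is Δh = 2.6·(F_b/(u·s))^(1/3), with F_b the buoyancy flux and s the stability parameter. A sketch of that baseline calculation is below; the stack and atmospheric values are illustrative assumptions, and the paper's shear-layer (power-law wind profile) solution is not reproduced.

```python
g = 9.81  # m/s^2

def buoyancy_flux(v_exit, r_stack, t_stack, t_amb):
    """Briggs buoyancy flux parameter F_b (m^4/s^3)."""
    return g * v_exit * r_stack**2 * (t_stack - t_amb) / t_stack

def stable_plume_rise(f_b, u, dtheta_dz, t_amb):
    """Final rise of a buoyant plume in a stable layer with constant wind speed u."""
    s = (g / t_amb) * dtheta_dz            # stability parameter (s^-2)
    return 2.6 * (f_b / (u * s)) ** (1.0 / 3.0)

# illustrative stack and atmosphere (assumed values)
f_b = buoyancy_flux(v_exit=15.0, r_stack=1.5, t_stack=420.0, t_amb=288.0)
rise = stable_plume_rise(f_b, u=5.0, dtheta_dz=0.005, t_amb=288.0)
print(f"F_b ≈ {f_b:.1f} m^4/s^3, final plume rise ≈ {rise:.1f} m")
```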

  12. Changes in daily activity patterns with age in U.S. men and women: National Health and Nutrition Examination Survey 2003-04 and 2005-06.

    PubMed

    Martin, Kathryn R; Koster, Annemarie; Murphy, Rachel A; Van Domelen, Dane R; Hung, Ming-yang; Brychta, Robert J; Chen, Kong Y; Harris, Tamara B

    2014-07-01

    To compare daily and hourly activity patterns according to sex and age. Cross-sectional, observational. Nationally representative community sample: National Health and Nutrition Examination Survey (NHANES) 2003-04 and 2005-06. Individuals (n = 5,788) aged 20 and older with 4 or more valid days of monitor wear-time, no missing data on valid wear-time minutes, and covariates. Activity was examined as average counts per minute (CPM) during wear-time; percentage of time spent in nonsedentary activity; and time (minutes) spent in sedentary (<100 counts), light (100-759), and moderate to vigorous physical activity (MVPA (≥ 760)). Analyses accounted for survey design, adjusted for covariates, and were sex specific. In adjusted models, men spent slightly more time (~1-2%) in nonsedentary activity than women aged 20 to 34, with levels converging at age 35 to 59, although the difference was not significant. Women aged 60 and older spent significantly more time (~3-4%) in nonsedentary activity than men, despite similarly achieved average CPM. With increasing age, all nonsedentary activity decreased in men; light activity remained constant in women (~30%). Older men had fewer CPM at night (~20), more daytime sedentary minutes (~3), fewer daytime light physical activity minutes (~4), and more MVPA minutes (~1) until early evening than older women. Although sex differences in average CPM declined with age, differences in nonsedentary activity time emerged as men increased sedentary behavior and reduced MVPA time. Maintained levels of light-intensity activity suggest that women continue engaging in common daily activities into older age more than men. Findings may help inform the development of behavioral interventions to increase intensity and overall activity levels, particularly in older adults. © 2014, Copyright the Authors Journal compilation © 2014, The American Geriatrics Society.

  13. On the use of hydroxyl radical kinetics to assess the number-average molecular weight of dissolved organic matter.

    PubMed

    Appiani, Elena; Page, Sarah E; McNeill, Kristopher

    2014-10-21

    Dissolved organic matter (DOM) is involved in numerous environmental processes, and its molecular size is important in many of these processes, such as DOM bioavailability, DOM sorptive capacity, and the formation of disinfection byproducts during water treatment. The size and size distribution of the molecules composing DOM remains an open question. In this contribution, an indirect method to assess the average size of DOM is described, which is based on the reaction of hydroxyl radical (HO(•)) quenching by DOM. HO(•) is often assumed to be relatively unselective, reacting with nearly all organic molecules with similar rate constants. Literature values for HO(•) reaction with organic molecules were surveyed to assess the unselectivity of DOM and to determine a representative quenching rate constant (k(rep) = 5.6 × 10(9) M(-1) s(-1)). This value was used to assess the average molecular weight of various humic and fulvic acid isolates as model DOM, using literature HO(•) quenching constants, kC,DOM. The results obtained by this method were compared with previous estimates of average molecular weight. The average molecular weight (Mn) values obtained with this approach are lower than the Mn measured by other techniques such as size exclusion chromatography (SEC), vapor pressure osmometry (VPO), and flow field fractionation (FFF). This suggests that DOM is an especially good quencher for HO(•), reacting at rates close to the diffusion-control limit. It was further observed that humic acids generally react faster than fulvic acids. The high reactivity of humic acids toward HO(•) is in line with the antioxidant properties of DOM. The benefit of this method is that it provides a firm upper bound on the average molecular weight of DOM, based on the kinetic limits of the HO(•) reaction. The results indicate low average molecular weight values, which is most consistent with the recent understanding of DOM. A possible DOM size distribution is discussed to reconcile the small nature of DOM with the large-molecule behavior observed in other studies.
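
    The upper-bound estimate follows from equating the mass-normalized quenching rate constant with the representative molar rate constant: M_n ≤ k_rep / k_C,DOM, corrected for the carbon fraction when k_C,DOM is expressed per mg of carbon. A sketch of that arithmetic is below; the quenching constant and carbon fraction are hypothetical values, not the isolate-specific data used in the paper.

```python
# Upper bound on number-average molecular weight from HO* quenching kinetics
k_rep = 5.6e9     # M^-1 s^-1, representative molar rate constant (from the abstract)
k_c_dom = 2.5e4   # L (mg C)^-1 s^-1, hypothetical DOM quenching rate constant
f_carbon = 0.5    # g C per g DOM, assumed carbon content of the isolate

k_per_g_dom = k_c_dom * 1000.0 * f_carbon   # L g^-1 s^-1, quenching constant per gram of DOM
mn_upper = k_rep / k_per_g_dom              # g mol^-1, since k_rep/Mn must match k_per_g_dom
print(f"upper-bound Mn ≈ {mn_upper:.0f} g/mol")
```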

  14. Theoretical analysis of nonuniform skin effects on drawdown variation

    NASA Astrophysics Data System (ADS)

    Chen, C.-S.; Chang, C. C.; Lee, M. S.

    2003-04-01

    Under field conditions, the skin zone surrounding the well screen is rarely uniformly distributed in the vertical direction. To understand such non-uniform skin effects on drawdown variation, we assume the skin factor to be an arbitrary, continuous or piece-wise continuous function S_k(z), and incorporate it into a well hydraulics model for constant-rate pumping in a homogeneous, vertically anisotropic, confined aquifer. Solutions for the depth-specific drawdown and the vertical average drawdown are determined using the Gram-Schmidt method. The non-uniform effects of S_k(z) on the vertical average drawdown are averaged out and can be represented by a constant skin factor S_k. As a result, drawdown in fully penetrating observation wells can be analyzed with appropriate well hydraulics theories that assume a constant skin factor. S_k is the vertical average value of S_k(z) weighted by the wellbore flux q_w(z). In the depth-specific drawdown, however, the non-uniform effects of S_k(z) vary with radial and vertical distance, under the influence of the vertical profile of S_k(z) and the vertical anisotropy ratio K_r/K_z. Therefore, drawdown in partially penetrating observation wells may reflect the vertical anisotropy as well as the non-uniformity of the skin zone. The method of determining S_k(z) developed herein involves the use of q_w(z), which can be measured with a borehole flowmeter, together with K_r/K_z and S_k, which can be determined from a conventional pumping test.
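
    The equivalent constant skin factor described here is the flux-weighted vertical average, S_k = ∫ S_k(z) q_w(z) dz / ∫ q_w(z) dz. A sketch of that weighting for an arbitrary piece-wise profile is shown below; the skin profile and wellbore-flux profile are illustrative assumptions, not field data.

```python
import numpy as np

def flux_weighted_skin(z, skin, q_w):
    """Vertical average of the skin factor weighted by the wellbore flux q_w(z)."""
    return np.trapz(skin * q_w, z) / np.trapz(q_w, z)

# illustrative profiles over a 20 m screen (assumed)
z = np.linspace(0.0, 20.0, 201)
skin = np.where(z < 8.0, 5.0, 1.0)                 # piece-wise continuous S_k(z)
q_w = 1.0 + 0.5 * np.sin(np.pi * z / 20.0)         # smooth wellbore-flux profile

print(f"equivalent constant skin factor ≈ {flux_weighted_skin(z, skin, q_w):.2f}")
```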

  15. Investigation of Fumed Silica/Aqueous NaCl Superdielectric Material.

    PubMed

    Jenkins, Natalie; Petty, Clayton; Phillips, Jonathan

    2016-02-20

    A constant-current charge/discharge protocol showed that fumed silica filled to the point of incipient wetness with aqueous NaCl solution has dielectric constants >10⁸ over the full range of dielectric thicknesses of 0.38-3.9 mm and discharge times of 0.25 to >100 s, making this material another example of a superdielectric. The dielectric constant was impacted by both frequency and thickness. For times to discharge greater than 10 s, the dielectric constant at all thicknesses was fairly constant, always >10⁸, although trending higher with increasing thickness. At shorter discharge times the dielectric constant consistently decreased with decreasing time to discharge. Hence, it is reasonable to suggest that for times to discharge >10 s the dielectric constant at all thicknesses will be greater than 10⁸. This in turn implies an energy density for a 5 micron thick dielectric layer on the order of 350 J/cm³ for discharge times greater than 10 s.

  16. Investigation of Fumed Silica/Aqueous NaCl Superdielectric Material

    PubMed Central

    Jenkins, Natalie; Petty, Clayton; Phillips, Jonathan

    2016-01-01

    A constant-current charge/discharge protocol showed that fumed silica filled to the point of incipient wetness with aqueous NaCl solution has dielectric constants >10⁸ over the full range of dielectric thicknesses of 0.38–3.9 mm and discharge times of 0.25 to >100 s, making this material another example of a superdielectric. The dielectric constant was impacted by both frequency and thickness. For times to discharge greater than 10 s, the dielectric constant at all thicknesses was fairly constant, always >10⁸, although trending higher with increasing thickness. At shorter discharge times the dielectric constant consistently decreased with decreasing time to discharge. Hence, it is reasonable to suggest that for times to discharge >10 s the dielectric constant at all thicknesses will be greater than 10⁸. This in turn implies an energy density for a 5 micron thick dielectric layer on the order of 350 J/cm³ for discharge times greater than 10 s. PMID:28787918
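
    The quoted energy density follows from the parallel-plate relation U = ½·ε₀·ε_r·E², with the field limited by the breakdown (electrolysis) voltage of the aqueous electrolyte. The order-of-magnitude check below assumes a ~2.3 V operating limit and ε_r = 10⁸; both are assumptions consistent with, but not taken verbatim from, the abstract.

```python
EPS0 = 8.854e-12     # F/m, vacuum permittivity
eps_r = 1.0e8        # relative dielectric constant (lower end reported in the abstract)
v_max = 2.3          # V, assumed aqueous-electrolyte breakdown limit
d = 5.0e-6           # m, dielectric layer thickness considered in the abstract

e_field = v_max / d                              # V/m
u_j_per_m3 = 0.5 * EPS0 * eps_r * e_field**2
print(f"energy density ≈ {u_j_per_m3 / 1e6:.0f} J/cm^3")
# ~1e2 J/cm^3 under these assumptions, the same order of magnitude as the ~350 J/cm^3 quoted
```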

  17. Lopez Island Ocean Bottom Seismometer Intercomparison Experiment.

    DTIC Science & Technology

    1980-10-01

    ... dividing the record into N 4-sec-long records (where N = 30) and averaging ... about 1 m (Sutton et al., 1980). The OBSs were located within tens of meters of each other in deep water. The hydrophones are well correlated, showing ... where D is a damping constant, equal to the actual damping constant divided by the critical damping constant. Lysmer's analog ...

  18. Examination of the formation process of pre-solvated and solvated electron in n-alcohol using femtosecond pulse radiolysis

    NASA Astrophysics Data System (ADS)

    Toigawa, Tomohiro; Gohdo, Masao; Norizawa, Kimihiro; Kondoh, Takafumi; Kan, Koichi; Yang, Jinfeng; Yoshida, Yoichi

    2016-06-01

    The formation processes of pre-solvated and solvated electrons in methanol (MeOH), ethanol (EtOH), n-butanol (BuOH), and n-octanol (OcOH) were investigated using a fs-pulse radiolysis technique by observing the pre-solvated electron at 1400 nm. The formation time constants of the pre-solvated electrons were determined to be 1.2, 2.2, 3.1, and 6.3 ps for MeOH, EtOH, BuOH, and OcOH, respectively. The formation time constants of the solvated electrons were determined to be 6.7, 13.6, 22.2, and 32.9 ps for MeOH, EtOH, BuOH, and OcOH, respectively. The formation dynamics and structure of the pre-solvated and solvated electrons in n-alcohols were discussed based on the relation between the obtained time constants and the dielectric relaxation time constants, from the viewpoint of kinetics. The observed formation time constants of the solvated electrons seemed to be strongly correlated with the second component of the dielectric relaxation time constants, which is related to single-molecule motion. On the other hand, the observed formation time constants of the pre-solvated electrons seemed to be strongly correlated with the third component of the dielectric relaxation time constants, which is related to the dynamics of hydrogen bonds.

  19. Simulation on Melting Process of Water Using Molecular Dynamics Method

    NASA Astrophysics Data System (ADS)

    Okawa, Seiji; Saito, Akio; Kang, Chaedong

    A simulation of the phase change from ice to water was presented using the molecular dynamics method. 576 molecules were placed in a cell in an ice-forming arrangement. The volume of the cell was fixed so that the density of ice was kept at 923 kg/m³. Periodic boundary conditions were used. According to the phase diagram of water, the melting point of ice at a density of 923 kg/m³ is about 400 K. In order to initiate melting from the surface, only the molecules near the boundary were scaled at each time step to keep their average temperature at 420 K, and the average temperature of the other molecules was set to 350 K as the initial condition. By observing the time variation of the molecular arrangement, it was found that the hydrogen-bond network near the boundary surface started to break up and the melting front moved towards the center until no ice-forming configuration remained. This phenomenon was also discussed in terms of temperature and energy variations. The total energy increased and reached a steady state at around 6.5 ps. This increase was due to the energy supplied from the boundary held at a constant temperature. The temperature in the cell remained almost constant at 380 K during the period between 0.6 and 5.5 ps. This period coincides with the melting process observed in the molecular arrangement. Hence, it can be said that 380 K corresponds to the melting point. The total energy stored in the cell consisted of sensible and latent heat. The specific heats of water and ice were calculated to be 5.6 kJ/kg·K and 3.7 kJ/kg·K, respectively, and the latent heat was found to be 316 kJ/kg. These values agreed quite well with the physical properties of water.

  20. Kinetics of thorium and particle cycling along the U.S. GEOTRACES North Atlantic Transect

    NASA Astrophysics Data System (ADS)

    Lerner, Paul; Marchal, Olivier; Lam, Phoebe J.; Buesseler, Ken; Charette, Matthew

    2017-07-01

    The high particle reactivity of thorium has resulted in its widespread use in tracing processes impacting marine particles and their chemical constituents. The use of thorium isotopes as tracers of particle dynamics, however, largely relies on our understanding of how the element scavenges onto particles. Here, we estimate apparent rate constants of Th adsorption (k1), Th desorption (k-1), bulk particle degradation (β-1), and bulk particle sinking speed (w) along the water column at 11 open-ocean stations occupied during the GEOTRACES North Atlantic Section (GA03). First, we provide evidence that the budgets of Th isotopes and particles at these stations appear to be generally dominated by radioactive production and decay, sorption reactions, particle degradation, and particle sinking. Rate parameters are then estimated by fitting a Th and particle cycling model to data of dissolved and particulate 228,230,234Th, 228Ra, particle concentrations, and 234,238U estimates based on salinity, using a nonlinear programming technique. We find that the adsorption rate constant (k1) generally decreases with depth across the section: broadly, the time scale 1/k1 averages 1.0 yr in the upper 1000 m and 1.4-1.5 yr below. A positive relationship between k1 and particle concentration (P) is found, i.e., k1 ∝ P^b, where b ≥ 1, consistent with the notion that k1 increases with the number of surface sites available for adsorption. The rate constant ratio, K = k1/(k-1 + β-1), which measures the collective influence of rate parameters on Th scavenging, averages 0.2 for most stations and most depths. We clarify the conditions under which K/P is equivalent to the distribution coefficient, KD, test that the conditions are met at the stations, and find that K/P decreases with P, in line with a particle concentration effect (dKD/dP < 0). In contrast to the influence of colloids as envisioned by the Brownian pumping hypothesis, we provide evidence that the particle concentration effect arises from the joint effect of P on the rate constants for thorium attachment to, and detachment from, particles.

  1. On determining dose rate constants spectroscopically

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, M.; Rogers, D. W. O.

    2013-01-15

    Purpose: To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of ¹²⁵I and ¹⁰³Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089-6104 (2010)] including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated ¹²⁵I and ¹⁰³Pd sources. Methods: Spectra generated by 14 ¹²⁵I and 6 ¹⁰³Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm³ voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council of Radiation Protection and Measurements) Report 58 for the ¹²⁵I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for ¹⁰³Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤ 0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. Results: The ratio of the intensity of the 31 keV line relative to that of the main peak in ¹²⁵I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with the TG-43U1 initial spectrum. The ¹⁰³Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different investigations are in much better agreement with the calculations using the NCRP Report 58 and NNDC(2000) initial spectra with average discrepancies of 0.9% and 1.7% for the ¹²⁵I and ¹⁰³Pd seeds, respectively. However, there are no differences in the calculated TG-43U1 brachytherapy parameters using either initial spectrum in both cases. Similarly, there were no differences outside the statistical uncertainties of 0.1% or 0.2% in the average energy, air kerma/history, dose rate/history, and dose rate constant when calculated using either the full photon spectrum or the main-peaks-only spectrum. Conclusions: Our calculated dose rate constants based on using the calculated on-axis spectrum and a line or dual-point source model are in excellent agreement (0.5% on average) with the values of Chen and Nath, verifying the accuracy of their more approximate method of going from the spectrum to the dose rate constant. However, the dose rate constants based on full seed models differ by between +4.6% and -1.5% from those based on the line or dual-point source approximations. These results suggest that the main value of spectroscopic measurements is to verify full Monte Carlo models of the seeds by comparison to the calculated spectra.

  2. Stock market context of the Lévy walks with varying velocity

    NASA Astrophysics Data System (ADS)

    Kutner, Ryszard

    2002-11-01

    We developed the most general Lévy walks with varying velocity, called the Weierstrass walks (WW) model for short, by which one can describe both stationary and non-stationary stochastic time series. We considered a non-Brownian random walk where the walker moves, in general, with a velocity that assumes a different constant value between successive turning points, i.e., the velocity is a piecewise constant function. This model is a kind of Lévy walk in which we assume a hierarchical, self-similar (in a stochastic sense) spatio-temporal representation of the main quantities such as the waiting-time distribution and sojourn probability density (which are principal quantities in the continuous-time random walk formalism). The WW model makes it possible to analyze both the structure of the Hurst exponent and the power-law behavior of the kurtosis. This structure results from the hierarchical, spatio-temporal coupling between the walker displacement and the corresponding time of the walks. The analysis uses both the fractional diffusion and the super Burnett coefficients. We constructed the diffusion phase diagram which distinguishes regions occupied by classes of different universality. We study only those classes that are characteristic of stationary situations. We thus have a model ready for describing data presented, e.g., in the form of moving averages; this operation is often applied to stochastic time series, especially financial ones. The model was inspired by properties of financial time series and tested on empirical data extracted from the Warsaw stock exchange, since it offers an opportunity to study in an unbiased way several features of a stock exchange in its early stage.

  3. Subsurface Supergranular Vertical Flows as Measured Using Large Distance Separations in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.; Hanasoge, S. M.

    2012-01-01

    As large-distance rays (say, 10-24 deg) approach the solar surface approximately vertically, travel times measured between pairs of surface points at these large separations are mostly sensitive to vertical flows, at least for shallow flows within a few Mm of the solar surface. All previous analyses of supergranulation have used smaller separations and have been hampered by the difficulty of separating the horizontal and vertical flow components. We find that the large-separation travel times associated with supergranulation cannot be studied using the standard phase-speed filters of time-distance helioseismology. These filters, whose use is based upon a refractive model of the perturbations, reduce the resultant travel time signal by at least an order of magnitude at some distances. More effective filters are derived. Modeling suggests that the center-annulus travel time difference in the separation range 10-24 deg is insensitive to the horizontally diverging flow from the centers of the supergranules and should lead to a constant signal from the vertical flow. Our measurement of this quantity for the average supergranule, 5.1 s, is constant over the distance range. This magnitude of signal cannot be caused by the 10 m/s upflow at cell centers seen at the photosphere, extended in depth. It requires the vertical flow to increase with depth. A simple Gaussian model of the increase with depth implies a peak upward flow of 240 m/s at a depth of 2.3 Mm and a peak horizontal flow of 700 m/s at a depth of 1.6 Mm.

  4. The ratio of NPP to GPP: evidence of change over the course of stand development.

    PubMed

    Mäkelä, A; Valentine, H T

    2001-09-01

    Using Scots pine (Pinus sylvestris L.) in Fenno-Scandia as a case study, we investigate whether net primary production (NPP) and maintenance respiration are constant fractions of gross primary production (GPP) as even-aged mono-specific stands progress from initiation to old age. A model of the ratio of NPP to GPP is developed based on (1) the classical model of respiration, which divides total respiration into construction and maintenance components, and (2) a process-based model, which derives respiration from processes including construction, nitrate uptake and reduction, ion uptake, phloem loading and maintenance. Published estimates of specific respiration and production rates, and some recent measurements of components of dry matter in stands of different ages, are used to quantify the two approaches over the course of stand development in an average environment. Both approaches give similar results, showing a decrease in the NPP/GPP ratio with increasing tree height. In addition, we show that stand-growth models fitted under three different sets of assumptions can lead to nearly identical model projections that agree with empirical observations of NPP and stand-growth variables: (i) annual specific rates of maintenance respiration of sapwood (mW) and photosynthesis (sC) are constant; (ii) mW is constant, but sC decreases with increasing tree height; and (iii) total maintenance respiration is a constant fraction of GPP and sC decreases with increasing tree height. Remeasurements of GPP and respiration over time in chronosequences of stands may be needed to discern which set of assumptions is correct. Total (construction + maintenance) sapwood respiration per unit mass of sapwood (kg C (kg C year)⁻¹) decreased with increasing stand age, sapwood stock, and average tree height under all three assumptions. However, total sapwood respiration (kg C (ha year)⁻¹) increased over the course of stand development under (i) and (ii), contributing to a downward trend in the time course of the NPP/GPP ratio after closure. A moderate decrease in mW with increasing tree height or sapwood cross-sectional area had little effect on the downward trend. On the basis of this evidence, we argue that a significant decline in the NPP/GPP ratio with tree size or age seems highly probable, although the decline may appear insignificant over some segments of stand development. We also argue that, because stand-growth models can give correct answers for the wrong reasons, statistical calibration of such models should be avoided whenever possible; instead, values of physiological parameters should come from measurements of the physiological processes themselves.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S.; Gross, R.; Goble, W

    The safety integrity level (SIL) of equipment used in safety instrumented functions is determined by the average probability of failure on demand (PFDavg) computed at the time of periodic inspection and maintenance, i.e., the time of proof testing. The computation of PFDavg is generally based solely on predictions or estimates of the assumed constant failure rate of the equipment. However, PFDavg is also affected by maintenance actions (or lack thereof) taken by the end user. This paper shows how maintenance actions can affect the PFDavg of spring operated pressure relief valves (SOPRV) and how these maintenance actions may be accounted for in the computation of the PFDavg metric. The method provides a means for quantifying the effects of changes in maintenance practices and shows how these changes impact plant safety.
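
    For context, a minimal sketch of a commonly used simplified PFDavg approximation for a single (1oo1) device with a constant dangerous-undetected failure rate is shown below; the failure rate, proof test interval, coverage, and mission time are illustrative assumptions, and the paper's contribution is to additionally fold end-user maintenance actions into this metric, which is not reproduced here.

    ```python
    lambda_DU = 2.0e-7   # dangerous undetected failure rate, per hour (assumed)
    TI = 8760.0          # proof test interval, hours (1 year)
    MT = 15 * 8760.0     # mission time, hours
    PTC = 0.9            # proof test coverage (fraction of dangerous failures revealed by the test)

    pfd_perfect_test = lambda_DU * TI / 2.0
    pfd_with_coverage = PTC * lambda_DU * TI / 2.0 + (1.0 - PTC) * lambda_DU * MT / 2.0
    print(f"PFDavg, perfect proof test:      {pfd_perfect_test:.2e}")
    print(f"PFDavg, 90% proof test coverage: {pfd_with_coverage:.2e}")
    ```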

  6. Effects of fatiguing constant versus alternating intensity intermittent isometric muscle actions on maximal torque and neuromuscular responses

    PubMed Central

    Smith, C.M.; Housh, T.J.; Hill, E.C.; Cochrane, K.C.; Jenkins, N.D.M.; Schmidt, R.J.; Johnson, G.O.

    2016-01-01

    Objective: To determine the effects of constant versus alternating applications of torque during fatiguing, intermittent isometric muscle actions of the leg extensors on maximal voluntary isometric contraction (MVIC) torque and neuromuscular responses. Methods: Sixteen subjects performed two protocols, each consisting of 50 intermittent isometric muscle actions of the leg extensors with equal average load at either a constant 60% MVIC or an alternating 40% then 80% (40/80%) MVIC, with a work-to-rest ratio of 6-s on and 2-s off. MVIC torque as well as electromyographic signals from the vastus lateralis (VL), vastus medialis (VM), and rectus femoris (RF) and mechanomyographic signals from the VL were recorded pretest, immediately posttest, and 5-min posttest. Results: The results indicated that there were no time-related differences between the 60% MVIC and 40/80% MVIC protocols. The MVIC torque decreased posttest (22 to 26%) and remained depressed 5-min posttest (9%). There were decreases in electromyographic frequency (14 to 19%) and mechanomyographic frequency (23 to 24%) posttest that returned to pretest levels 5-min posttest. There were no changes in electromyographic amplitude or mechanomyographic amplitude. Conclusions: These findings suggested that these neuromuscular parameters did not track the fatigue-induced changes in MVIC torque after 5-min of recovery. PMID:27973384

  7. Simulations of four-dimensional simplicial quantum gravity as dynamical triangulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agishtein, M.E.; Migdal, A.A.

    1992-04-20

    In this paper, Four-Dimensional Simplicial Quantum Gravity is simulated using the dynamical triangulation approach. The authors studied simplicial manifolds of spherical topology and found the critical line for the cosmological constant as a function of the gravitational one, separating the phases of open and closed Universe. When the bare cosmological constant approaches this line from above, the four-volume grows: the authors reached about 5 × 10⁴ simplexes, which proved to be sufficient for the statistical limit of infinite volume. However, for the genuine continuum theory of gravity, the parameters of the lattice model should be further adjusted to reach the second-order phase transition point, where the correlation length grows to infinity. The authors varied the gravitational constant, and they found a first-order phase transition, similar to the one found in the three-dimensional model, except that in 4D the fluctuations are rather large at the transition point, so that it is close to a second-order phase transition. The average curvature in cutoff units is large and positive in one phase (gravity), and small and negative in the other (antigravity). The authors studied the fractal geometry of both phases, using the heavy particle propagator to define the geodesic map, as well as with the old approach using the shortest lattice paths.

  8. Photosensitized singlet oxygen luminescence from the protein matrix of Zn-substituted myoglobin.

    PubMed

    Lepeshkevich, Sergei V; Parkhats, Marina V; Stasheuski, Alexander S; Britikov, Vladimir V; Jarnikova, Ekaterina S; Usanov, Sergey A; Dzhagarov, Boris M

    2014-03-13

    A nanosecond laser near-infrared spectrometer was used to study singlet oxygen (¹O2) emission in a protein matrix. Myoglobin in which the intact heme is substituted by Zn-protoporphyrin IX (ZnPP) was employed. Every collision of ground-state molecular oxygen with ZnPP in the excited triplet state results in ¹O2 generation within the protein matrix. The quantum yield of ¹O2 generation was found to be equal to 0.9 ± 0.1. On average, six out of every ten ¹O2 molecules succeed in escaping from the protein matrix into the solvent. A kinetic model for ¹O2 generation within the protein matrix and for subsequent ¹O2 deactivation was introduced and discussed. Rate constants for radiative and nonradiative ¹O2 deactivation within the protein were determined. The first-order radiative rate constant for ¹O2 deactivation within the protein was found to be 8.1 ± 1.3 times larger than that in aqueous solution, indicating the strong influence of the protein matrix on radiative ¹O2 deactivation.

  9. Effects of practice schedule and task specificity on the adaptive process of motor learning.

    PubMed

    Barros, João Augusto de Camargo; Tani, Go; Corrêa, Umberto Cesar

    2017-10-01

    This study investigated the effects of practice schedule and task specificity from the perspective of the adaptive process of motor learning. For this purpose, tasks with temporal and force control learning requirements were manipulated in experiments 1 and 2, respectively. Specifically, the task consisted of touching three sequential targets with the dominant hand, with a specific movement time or force required for each touch. Participants were children (N = 120), both boys and girls, with an average age of 11.2 years (SD = 1.0). The design in both experiments involved four practice groups (constant, random, constant-random, and random-constant) and two phases (stabilisation and adaptation). The dependent variables included measures related to the task goal (accuracy and variability of error of the overall movement and force patterns) and movement pattern (macro- and microstructures). Results revealed a similar error of the overall patterns for all groups in both experiments, although the groups adapted differently in terms of the macro- and microstructures of the movement patterns. The study concludes that the effects of practice schedules on the adaptive process of motor learning were both general and specific to the task. That is, they were general with respect to task goal performance and specific with respect to the movement pattern. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Low-level laser therapy on skeletal muscle inflammation: evaluation of irradiation parameters

    NASA Astrophysics Data System (ADS)

    Mantineo, Matías; Pinheiro, João P.; Morgado, António M.

    2014-09-01

    We evaluated the effect of different irradiation parameters in low-level laser therapy (LLLT) for treating inflammation induced in the gastrocnemius muscle of rats, assessed through cytokine concentrations in systemic blood and analysis of muscle tissue. We used continuous (830 and 980 nm) and pulsed (830 nm) illumination. Animals were divided into five groups per wavelength (10, 20, 30, 40, and 50 mW) plus a control group. LLLT was applied for 5 days with a constant irradiation time and area. TNF-α, IL-1β, IL-2, and IL-6 cytokines were quantified by ELISA. Inflammatory cells were counted using microscopy. Identical methodology was used with pulsed illumination. Average power (40 mW) and duty cycle (80%) were kept constant at five frequencies (5, 25, 50, 100, and 200 Hz). For continuous irradiation, treatment effects occurred for all doses, with a reduction of TNF-α, IL-1β, and IL-6 cytokines and inflammatory cells. Continuous irradiation at 830 nm was more effective, a result explained by the action spectrum of cytochrome c oxidase (CCO). Best results were obtained for 40 mW, with data suggesting a biphasic dose response. Pulsed-wave irradiation was only effective at higher frequencies, a result that might be related to the rate constants of the CCO internal electron transfer process.

  11. Atmospheric organochlorine pollutants and air-sea exchange of hexachlorocyclohexane in the Bering and Chukchi Seas

    USGS Publications Warehouse

    Hinckley, D.A.; Bidleman, T.F.; Rice, C.P.

    1991-01-01

    Organochlorine pesticides have been found in Arctic fish, marine mammals, birds, and plankton for some time. The lack of local sources and remoteness of the region imply long-range transport and deposition of contaminants into the Arctic from sources to the south. While on the third Soviet-American Joint Ecological Expedition to the Bering and Chukchi Seas (August 1988), high-volume air samples were taken and analyzed for organochlorine pesticides. Hexachlorocyclohexane (HCH), hexachlorobenzene, polychlorinated camphenes, and chlordane (listed in order of abundance, highest to lowest) were quantified. The air-sea gas exchange of HCH was estimated at 18 stations during the cruise. Average alpha-HCH concentrations in concurrent atmosphere and surface water samples were 250 pg m⁻³ and 2.4 ng L⁻¹, respectively, and average gamma-HCH concentrations were 68 pg m⁻³ in the atmosphere and 0.6 ng L⁻¹ in surface water. Calculations based on experimentally derived Henry's law constants showed that the surface water was undersaturated with respect to the atmosphere at most stations (alpha-HCH, average 79% saturation; gamma-HCH, average 28% saturation). The flux for alpha-HCH ranged from -47 ng m⁻² d⁻¹ (sea to air) to 122 ng m⁻² d⁻¹ (air to sea) and averaged 25 ng m⁻² d⁻¹ air to sea. All fluxes of gamma-HCH were from air to sea, ranged from 17 to 54 ng m⁻² d⁻¹, and averaged 31 ng m⁻² d⁻¹.
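
    A minimal two-film (Liss-Slater-type) sketch of such an air-sea exchange estimate is given below, using the average alpha-HCH concentrations quoted above; the dimensionless Henry's law constant and the overall transfer velocity are illustrative assumptions, not the experimentally derived values used in the study.

    ```python
    C_air = 0.25            # ng/m^3 (250 pg/m^3, average alpha-HCH in air)
    C_water = 2.4 * 1000.0  # ng/m^3 (2.4 ng/L, average alpha-HCH in surface water)
    H = 8.0e-5              # dimensionless Henry's law constant (assumed, cold seawater)
    k_ol = 0.03             # overall water-side transfer velocity, m/day (assumed)

    saturation = C_water * H / C_air                 # water concentration relative to air equilibrium
    flux_air_to_sea = k_ol * (C_air / H - C_water)   # ng m^-2 d^-1, positive = net deposition
    print(f"saturation ≈ {saturation:.0%}, net flux ≈ {flux_air_to_sea:.0f} ng m^-2 d^-1 (air to sea)")
    ```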

  12. Quantifying the behavior of price dynamics at opening time in stock market

    NASA Astrophysics Data System (ADS)

    Ochiai, Tomoshiro; Takada, Hideyuki; Nacher, Jose C.

    2014-11-01

    The availability of huge volumes of financial data has offered the possibility of understanding markets as a complex system characterized by several stylized facts. Here we first show that the time evolution of Japan's Nikkei stock average index (Nikkei 225) futures follows the resistance and breaking-acceleration effects when the complete time series data is analyzed. However, in stock markets there are periods where no regular trades occur between the close of the market on one day and the next day's open. To examine these time gaps we decompose the time series data into opening time and intermediate time. Our analysis indicates that for the intermediate time, both the resistance and the breaking-acceleration effects are still observed. However, for the opening time there are almost no resistance and breaking-acceleration effects, and volatility is consistently high. These findings highlight unique dynamic differences between stock markets and the forex market and suggest that current risk management strategies may need to be revised to address the absence of these dynamic effects at the opening time.

  13. Shear-layer structures in near-wall turbulence

    NASA Technical Reports Server (NTRS)

    Johansson, A. V.; Alfredsson, P. H.; Kim, J.

    1987-01-01

    The structure of internal shear layers observed in the near-wall region of turbulent flows is investigated by analyzing flow fields obtained from numerical simulations of channel and boundary-layer flows. It is found that the shear layer is an important contributor to the turbulence production. The conditionally averaged production at the center of the structure was almost twice as large as the long-time mean value. The shear-layer structure is also found to retain its coherence over streamwise distances on the order of a thousand viscous length units, and propagates with a constant velocity of about 10.6 u_τ (friction velocities) throughout the near-wall region.

  14. Population dynamical behavior of Lotka-Volterra system under regime switching

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyue; Jiang, Daqing; Mao, Xuerong

    2009-10-01

    In this paper, we investigate a stochastic Lotka-Volterra system under regime switching, driven by a standard Brownian motion B(t). The aim here is to find out what happens under regime switching. We first obtain sufficient conditions for the existence of global positive solutions, stochastic permanence and extinction. We find that both stochastic permanence and extinction have close relationships with the stationary probability distribution of the Markov chain. The limit of the time average of the sample path of the solution is then estimated by two constants related to the stationary distribution and the coefficients. Finally, the main results are illustrated by several examples.

  15. Concentrations of radioactive elements in lunar materials

    NASA Astrophysics Data System (ADS)

    Korotev, Randy L.

    1998-01-01

    As an aid to interpreting data obtained remotely on the distribution of radioactive elements on the lunar surface, average concentrations of K, U, and Th as well as Al, Fe, and Ti in different types of lunar rocks and soils are tabulated. The U/Th ratio in representative samples of lunar rocks and regolith is constant at 0.27; K/Th ratios are more variable because K and Th are carried by different mineral phases. In nonmare regoliths at the Apollo sites, the main carriers of radioactive elements are mafic (i.e., 6-8 percent Fe) impact-melt breccias created at the time of basin formation and products derived therefrom.

  16. Porous and strong bioactive glass (13–93) scaffolds prepared by unidirectional freezing of camphene-based suspensions

    PubMed Central

    Liu, Xin; Rahaman, Mohamed N.; Fu, Qiang; Tomsia, Antoni P.

    2011-01-01

    Scaffolds of 13–93 bioactive glass (6Na2O, 12K2O, 5MgO, 20CaO, 4P2O5, 53SiO2; wt %) with an oriented pore architecture were formed by unidirectional freezing of camphene-based suspensions, followed by thermal annealing of the frozen constructs to grow the camphene crystals. After sublimation of the camphene, the constructs were sintered (1 h at 700 °C) to produce a dense glass phase with oriented macropores. The objective of this work was to study how constant freezing rates (1–7 °C/min) during the freezing step influenced the pore orientation and mechanical response of the scaffolds. When compared to scaffolds prepared by freezing the suspensions on a substrate kept at a constant temperature of 3 °C (time-dependent freezing rate), higher freezing rates resulted in better pore orientation, a more homogeneous microstructure, and a marked improvement in the mechanical response of the scaffolds in compression. Scaffolds fabricated using a constant freezing rate of 7 °C/min (porosity = 50 ± 4%; average pore diameter = 100 μm) had a compressive strength of 47 ± 5 MPa and an elastic modulus of 11 ± 3 GPa (in the orientation direction). In comparison, scaffolds prepared by freezing on the constant-temperature substrate had strength and modulus values of 35 ± 11 MPa and 8 ± 3 GPa, respectively. These oriented bioactive glass scaffolds prepared by the constant freezing rate route could potentially be used for the repair of defects in load-bearing bones, such as segmental defects in the long bones. PMID:21855661

  17. Calculation of the rate constant for state-selected recombination of H+O2(v) as a function of temperature and pressure

    NASA Astrophysics Data System (ADS)

    Teitelbaum, Heshel; Caridade, Pedro J. S. B.; Varandas, António J. C.

    2004-06-01

    Classical trajectory calculations using the MERCURY/VENUS code have been carried out on the H + O2 reactive system using the DMBE-IV potential energy surface. The vibrational quantum number and the temperature were selected over the ranges v = 0 to 15 and T = 300 to 10 000 K, respectively. All other variables were averaged. Rate constants were determined for the energy transfer process, H + O2(v) → H + O2(v''), for the bimolecular exchange process, H + O2(v) → OH(v') + O, and for the dissociative process, H + O2(v) → H + O + O. The dissociative process appears to be a mere extension of the process of transferring large amounts of energy. State-to-state rate constants are given for the exchange reaction, and they are in reasonable agreement with previous results, while the energy transfer and dissociative rate constants have never been reported previously. The lifetime distributions of the HO2 complex, calculated as a function of v and temperature, were used as a basis for determining the relative contributions of various vibrational states of O2 to the thermal rate coefficients for recombination at various pressures. This novel approach, based on the complex's ability to survive until it collides in a secondary process with an inert gas, is used here for the first time. Complete falloff curves for the recombination of H + O2 are also calculated over a wide range of temperatures and pressures. The combination of the two separate studies results in pressure- and temperature-dependent rate constants for H + O2(v) (+Ar) ⇌ HO2 (+Ar). It is found that, unlike the exchange reaction, vibrational and rotational-translational energy are liabilities in promoting recombination.

  18. Effect of positive pulse charge waveforms on cycle life of nickel-zinc cells

    NASA Technical Reports Server (NTRS)

    Smithrick, J. J.

    1980-01-01

    Five-amp-hour nickel-zinc cells were life-cycled to evaluate four different charge methods. Three of the four waveforms investigated were 120 Hz full wave rectified sinusoidal (FWRS), 120 Hz silicon controlled rectified (SCR), and 1 kHz square wave (SW). The fourth, a constant current method, was used as a baseline of comparison. Three sealed Ni-Zn cells connected in series were cycled. Each series string was charged at an average C/20 rate and discharged at a C/2.5 rate to 75% of rated depth of discharge. Results indicate that the relatively inexpensive 120 Hz FWRS charger appears feasible for charging 5-amp-hour nickel-zinc cells with no significant loss in average cycle life when compared to constant current charging. The 1 kHz SW charger could also be used with no significant loss in average cycle life, and suggests the possibility of utilizing the existing electric vehicle chopper controller circuitry for an on-board charger. There was an apparent difference using the 120 Hz SCR charger compared to the others; however, this difference could be due to an inadvertent severe overcharge that occurred prior to cell failure. The remaining two positive pulse charging waveforms, FWRS and 1 kHz SW, did not improve the cycle life of 5-amp-hour nickel-zinc cells over that of constant current charging.

  19. MIRO Observation of Comet C/2002 T7 (LINEAR) Water Line Spectrum

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Frerking, Margaret; Hofstadter, Mark; Gulkis, Samuel; von Allmen, Paul; Crovisier, Jaques; Biver, Nicholas; Bockelee-Morvan, Dominique

    2011-01-01

    Comet C/2002 T7 (LINEAR) was observed with the Microwave Instrument for Rosetta Orbiter (MIRO) on April 30, 2004, between 5 hr and 16 hr UT. The comet was 0.63 AU from the Sun and 0.68 AU from the MIRO telescope at the time of the observations. The water line involving the two lowest rotational levels at 556.936 GHz is observed at 557.070 GHz due to a large Doppler frequency shift. The detected water line spectrum is interpreted using a non-local thermal equilibrium (non-LTE) molecular excitation and radiative transfer model. Several synthetic spectra are calculated with various coma profiles that are plausible for the comet at the time of the observations. The coma profile is modeled with three characteristic parameters: the outgassing rate, a constant expansion velocity, and a constant gas temperature. The model calculations show that for a distant line observation, where contributions from a large volume of the coma are averaged, the combination of the outgassing rate and the gas expansion velocity determines the line shape while the gas temperature has a negligible effect. The comparison between the calculated spectra and the MIRO measured spectrum suggests that the outgassing rate of the comet is about 2.0 × 10²⁹ molecules per second and its gas expansion velocity about 1.2 km/s at the time of the observations.

  20. Bias of averages in life-cycle footprinting of infrastructure: truck and bus case studies.

    PubMed

    Taptich, Michael N; Horvath, Arpad

    2014-11-18

    The life-cycle output (e.g., level of service) of infrastructure systems heavily influences their normalized environmental footprint. Many studies and tools calculate emission factors based on average productivity; however, the performance of these systems varies over time and space. We evaluate the appropriate use of emission factors based on average levels of service by comparing them to those reflecting a distribution of system outputs. For the provision of truck and bus services where fuel economy is assumed constant over levels of service, emission factor estimation biases, described by Jensen's inequality, always result in larger-than-expected environmental impacts (3%-400%) and depend strongly on the variability and skew of truck payloads and bus ridership. Well-to-wheel greenhouse gas emission factors for diesel trucks in California range from 87 to 1,500 g of CO2 equivalents per ton-km, depending on the size and type of trucks and the services performed. Along a bus route in San Francisco, well-to-wheel emission factors ranged between 53 and 940 g of CO2 equivalents per passenger-km. The use of biased emission factors can have profound effects on various policy decisions. If average emission rates must be used, reflecting a distribution of productivity can reduce emission factor biases.
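
    The Jensen's-inequality effect described above can be reproduced with a small numerical example: if fuel use per km is held constant, per-ton-km emissions scale as 1/payload, and averaging 1/payload over a skewed payload distribution exceeds 1/(average payload). The per-km emission rate and payload distribution below are illustrative, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    g_co2e_per_km = 900.0                            # assumed constant per-km emissions, g CO2e/km
    payload_t = rng.lognormal(mean=np.log(8.0), sigma=0.6, size=100_000)  # tons, skewed distribution

    ef_from_average_payload = g_co2e_per_km / payload_t.mean()
    ef_from_distribution = (g_co2e_per_km / payload_t).mean()
    bias = ef_from_distribution / ef_from_average_payload - 1.0
    print(f"EF using average payload:               {ef_from_average_payload:.0f} g CO2e/ton-km")
    print(f"EF reflecting the payload distribution: {ef_from_distribution:.0f} g CO2e/ton-km")
    print(f"bias from using the average:            {bias:.0%}")
    ```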

  1. Queues with Choice via Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Pender, Jamol; Rand, Richard H.; Wesson, Elizabeth

    Delay or queue length information has the potential to influence the decision of a customer to join a queue. Thus, it is imperative for managers of queueing systems to understand how the information that they provide will affect the performance of the system. To this end, we construct and analyze two two-dimensional deterministic fluid models that incorporate customer choice behavior based on delayed queue length information. In the first fluid model, customers join each queue according to a Multinomial Logit Model; however, the queue length information the customer receives is delayed by a constant Δ. We show that the delay can cause oscillations or asynchronous behavior in the model depending on the value of Δ. In the second model, customers receive information about the queue length through a moving average of the queue length. Although it has been shown empirically that giving patients moving average information causes oscillations and asynchronous behavior to occur in U.S. hospitals, we show analytically for the first time that the moving average fluid model can exhibit oscillations and determine their dependence on the moving average window. Thus, our analysis provides new insight into how operators of service systems should report queue length information to customers and how delayed information can produce unwanted system dynamics.
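
    A minimal numerical sketch of a two-queue fluid model of this type is given below: customers join according to Multinomial Logit probabilities computed from queue lengths delayed by a constant Δ. The arrival rate, service rate, and delay are illustrative parameters, and the equations are a simplified stand-in for the authors' model rather than a reproduction of it.

    ```python
    import numpy as np

    lam, mu, Delta = 10.0, 1.0, 2.0     # arrival rate, service rate, information delay (illustrative)
    dt, T = 0.01, 200.0
    n, d = int(T / dt), int(Delta / dt)

    q = np.zeros((n, 2))
    q[0] = [4.0, 5.0]                   # slightly asymmetric start to excite oscillations
    for i in range(n - 1):
        q_delayed = q[max(i - d, 0)]                  # constant pre-history before t = Delta
        weights = np.exp(-q_delayed)
        p = weights / weights.sum()                   # Multinomial Logit join probabilities
        q[i + 1] = q[i] + dt * (lam * p - mu * q[i])  # fluid dynamics of the two queues
    print("late-time queue lengths:", q[-1])          # oscillatory for sufficiently large Delta
    ```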

  2. Evaluation of three harvest control rules for Bigeye Tuna (Thunnus obesus) fisheries in the Indian Ocean

    NASA Astrophysics Data System (ADS)

    Tong, Yuhe; Chen, Xinjun; Kolody, Dale

    2014-10-01

    The stock of Bigeye tuna (Thunnus obesus) in the Indian Ocean supports an important international fishery and is considered to be fully exploited. The responsible management agency, the Indian Ocean Tuna Commission (IOTC), does not have an explicit management decision-making framework in place to prevent over-fishing. In this study, we evaluated three harvest control rules: i) constant fishing mortality (CF), from 0.2 to 0.6; ii) constant catch (CC), from 60000 to 140000 t; and iii) constant escapement (CE), from 0.3 to 0.7. The population dynamics simulated by the operating model were based on the most recent stock assessment using Stock Synthesis version III (SS3). Three simulation scenarios (low, medium and high productivity) were designed to cover possible uncertainty in the stock assessment and biological parameters. The performances of the three harvest control rules were compared on the basis of three management objectives (over 3, 10 and 25 years): i) the probability of maintaining spawning stock biomass above a level that can sustain maximum sustainable yield (MSY) on average, ii) the probability of achieving average catches between 0.8 MSY and 1.0 MSY, and iii) inter-annual variability in catches. The constant escapement strategy (CE = 0.5), constant fishing mortality strategy (F = 0.4) and constant catch (CC = 80000 t) were the most rational among the respective management scenarios. It is concluded that a short-term annual catch of 80000 t is suggested, and that the potential total allowable catch for a stable yield could be set at 120000 t once the stock has recovered successfully. All the strategies considered in this study to achieve a 'tolerable' balance between resource conservation and utilization have been based around the management objectives of the IOTC.

  3. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
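
    A minimal illustration of fitting a noisy single exponential in a truncated Legendre basis is sketched below; it matches the Legendre coefficients of data and model by least squares and is not the authors' exact algorithm, only a demonstration of the general idea under assumed data.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as leg
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 500)                        # time axis (arbitrary units)
    x = 2.0 * (t - t[0]) / (t[-1] - t[0]) - 1.0           # map time onto [-1, 1]
    data = 3.0 * np.exp(-t / 0.8) + 0.2 * rng.standard_normal(t.size)   # noisy single exponential

    DEG = 12                                              # truncated (low-dimensional) Legendre space
    c_data = leg.legfit(x, data, DEG)                     # Legendre coefficients of the noisy data

    def model_coeffs(params):
        amp, tau = params
        return leg.legfit(x, amp * np.exp(-t / tau), DEG)  # coefficients of the model exponential

    res = least_squares(lambda p: model_coeffs(p) - c_data, x0=[1.0, 1.0],
                        bounds=([0.0, 0.01], [10.0, 10.0]))
    print("estimated amplitude and time constant:", res.x)  # should be close to (3.0, 0.8)
    ```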

  4. On the Time Scale of Nocturnal Boundary Layer Cooling in Valleys and Basins and over Plains

    NASA Astrophysics Data System (ADS)

    de Wekker, Stephan F. J.; Whiteman, C. David

    2006-06-01

    Sequences of vertical temperature soundings over flat plains and in a variety of valleys and basins of different sizes and shapes were used to determine cooling-time-scale characteristics in the nocturnal stable boundary layer under clear, undisturbed weather conditions. An exponential function predicts the cumulative boundary layer cooling well. The fitting parameter or time constant in the exponential function characterizes the cooling of the valley atmosphere and is equal to the time required for the cumulative cooling to attain 63.2% of its total nighttime value. The exponential fit finds time constants varying between 3 and 8 h. Calculated time constants are smallest in basins, are largest over plains, and are intermediate in valleys. Time constants were also calculated from air temperature measurements made at various heights on the sidewalls of a small basin. The variation with height of the time constant exhibited a characteristic parabolic shape in which the smallest time constants occurred near the basin floor and on the upper sidewalls of the basin where cooling was governed by cold-air drainage and radiative heat loss, respectively.
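
    The time-constant estimation described above amounts to fitting an exponential approach to the total nighttime cooling; a minimal sketch on synthetic data is given below, with the total cooling and time constant chosen arbitrarily for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cumulative_cooling(t, c_tot, tau):
        # exponential approach to the total nighttime cooling; tau is the time constant
        return c_tot * (1.0 - np.exp(-t / tau))

    t_hours = np.linspace(0.0, 12.0, 25)                 # hours after the onset of cooling
    rng = np.random.default_rng(1)
    obs = cumulative_cooling(t_hours, 14.0, 5.0) + 0.3 * rng.standard_normal(t_hours.size)

    popt, _ = curve_fit(cumulative_cooling, t_hours, obs, p0=[10.0, 4.0])
    c_tot, tau = popt
    print(f"fitted time constant tau ≈ {tau:.1f} h (cumulative cooling reaches 63.2% of its total at t = tau)")
    ```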

  5. Intrinsic potential for immediate biodegradation of toluene in a pristine, energy-limited aquifer.

    PubMed

    Herzyk, Agnieszka; Maloszewski, Piotr; Qiu, Shiran; Elsner, Martin; Griebler, Christian

    2014-06-01

    Pristine and energy-limited aquifers are considered to have a low resistance and resilience towards organic pollution. An experiment in an indoor aquifer system revealed an unexpectedly high intrinsic potential for the attenuation of a short-term toluene contamination. A 30-h pulse of 486 mg of toluene (used as a model contaminant) and deuterated water (D2O) passed through an initially pristine, oxic, and organic-carbon-poor sandy aquifer revealed an immediate aerobic toluene degradation potential. Based on contaminant and tracer breakthrough curves, as well as mass balance analyses and reactive transport modelling, a contaminant removal of 40% over a transport distance of only 4.2 m in less than one week of travel time was obtained. The mean first-order degradation rate constant was λ = 0.178 d⁻¹, corresponding to a half-life T1/2 of 3.87 days. Toluene-specific stable carbon isotope analysis independently proved that the contaminant mass removal can be attributed to microbial biodegradation. Since average doubling times of indigenous bacterial communities were in the range of months to years, the aerobic biodegradation potential observed is assumed to be present and active in pristine, energy-limited groundwater ecosystems at any time. Follow-up experiments and field studies will help to quantify the immediate natural attenuation potential of aquifers for selected priority contaminants and will try to identify the key degraders within the autochthonous microbial communities.
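
    The reported first-order kinetics can be checked with two lines of arithmetic: the half-life follows from T1/2 = ln 2/λ, and the fraction remaining after a given travel time from exp(-λt). The travel time below is an illustrative value chosen to reproduce roughly 40% removal.

    ```python
    import math

    lam = 0.178                # first-order degradation rate constant, 1/day (from the abstract)
    t_half = math.log(2) / lam
    print(f"half-life ≈ {t_half:.2f} days")   # ≈ 3.9 days, close to the reported 3.87 days

    travel_time = 2.9          # days (illustrative)
    remaining = math.exp(-lam * travel_time)
    print(f"fraction remaining after {travel_time} days ≈ {remaining:.2f}")  # ≈ 0.60, i.e. ~40% removal
    ```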

  6. Electron acceleration by an obliquely propagating electromagnetic wave in the regime of validity of the Fokker-Planck-Kolmogorov approach

    NASA Technical Reports Server (NTRS)

    Hizanidis, Kyriakos; Vlahos, L.; Polymilis, C.

    1989-01-01

    The relativistic motion of an ensemble of electrons in an intense monochromatic electromagnetic wave propagating obliquely in a uniform external magnetic field is studied. The problem is formulated from the viewpoint of Hamiltonian theory and the Fokker-Planck-Kolmogorov approach analyzed by Hizanidis (1989), leading to a one-dimensional diffusive acceleration along paths of constant zeroth-order generalized Hamiltonian. For values of the wave amplitude and the propagation angle inside the analytically predicted stochastic region, the numerical results suggest that the diffusion process proceeds in stages. In the first stage, the electrons are accelerated to relatively high energies by sampling the first few overlapping resonances one by one. During that stage, the ensemble-averaged square deviations of the variables involved scale quadratically with time; during the second stage, they scale linearly with time. For much longer times, deviation from linear scaling slowly sets in.

  7. Fast CT-PRESS-based spiral chemical shift imaging at 3 Tesla.

    PubMed

    Mayer, Dirk; Kim, Dong-Hyun; Adalsteinsson, Elfar; Spielman, Daniel M

    2006-05-01

    A new sequence is presented that combines constant-time point-resolved spectroscopy (CT-PRESS) with fast spiral chemical shift imaging. It allows the acquisition of multivoxel spectra without line splitting with a minimum total measurement time of less than 5 min for a field of view of 24 cm and a nominal 1.5 × 1.5 cm² in-plane resolution. Measurements were performed with 17 CS encoding steps in t1 (Δt1 = 12.8 ms) and an average echo time of 151 ms, which was determined by simulating the CT-PRESS experiment for the spin systems of glutamate (Glu) and myo-inositol (mI). Signals from N-acetyl-aspartate, total creatine, choline-containing compounds (Cho), Glu, and mI were detected in a healthy volunteer with no or only minor baseline distortions within 14 min on a 3 T MR scanner. Copyright (c) 2006 Wiley-Liss, Inc.

  8. Aerodyne Research mobile infrared methane monitor

    NASA Technical Reports Server (NTRS)

    Mcmanus, J. B.; Kebabian, P. L.; Kolb, C. E.

    1991-01-01

    An improved real-time methane monitor based on infrared absorption of the 3.39 micron line of a HeNe laser is described. Real-time in situ measurement of methane has important applications in stratospheric and tropospheric chemistry, especially when high-accuracy measurements can be made rapidly, providing fine spatial-scale information. The methane instrument provides 5 ppb resolution in a 1 s averaging time. A key feature of this instrument is the use of magnetic (Zeeman) broadening to achieve continuous tunability with constant output power over a range of 0.017 cm⁻¹. The instrument's optical absorption path length is 47 m through sampled air held at 50 torr in a multipass cell of the Herriott (off-axis resonator) type. A microprocessor controls laser frequency and amplitude and collects data with minimal operator attention. The instrument recently has been used to measure methane emissions from a variety of natural and artificial terrestrial sources.

  9. The effects of confining pressure and stress difference on static fatigue of granite

    NASA Technical Reports Server (NTRS)

    Kranz, R. L.

    1979-01-01

    Samples of Barre granite were creep tested at room temperature at confining pressures up to 2 kilobars. The time to fracture increased with decreasing stress difference at every pressure, but the rate of change of fracture time with respect to the stress difference increased with pressure. At 87% of the short-term fracture strength, the time to fracture increased from about 4 minutes at atmospheric pressure to longer than one day at 2 kbar of pressure. The inelastic volumetric strain at the onset of tertiary creep, delta, was constant within 25% at any particular pressure but increased with pressure in a manner analogous to the increase of strength with pressure. At the onset of tertiary creep, the number of cracks and their average length increased with pressure. The crack angle and crack length spectra were, however, quite similar at each pressure at the onset of tertiary creep.

  10. Langevin equation with time dependent linear force and periodic load force: stochastic resonance

    NASA Astrophysics Data System (ADS)

    Sau Fa, Kwok

    2017-11-01

    The motion of a particle described by the Langevin equation with a constant diffusion coefficient, a time-dependent linear force ω(1 + α cos(ω₁t))x and a periodic load force A₀cos(Ωt) is investigated. Analytical solutions for the probability density function (PDF) and the n-moments are obtained and analysed. For ω₁ ≫ αω the influence of the periodic term α cos(ω₁t) on the PDF and n-moments is negligible at all times; this result shows that statistical averages such as the n-moments and the PDF have no access to some information about the system. For small and intermediate values of ω₁ the influence of the periodic term α cos(ω₁t) on the system is also analysed; in particular, the system may exhibit multiresonance. The solutions are obtained in a direct and pedagogical manner readily understandable by graduate students.
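
    For illustration, an Euler-Maruyama sketch of an overdamped Langevin equation with this kind of time-dependent linear force and periodic load is given below; the parameter values are arbitrary and the equation is a simplified stand-in for the model analysed in the paper, which is solved there analytically.

    ```python
    import numpy as np

    omega, alpha, omega1 = 1.0, 0.3, 0.05     # linear force parameters (illustrative)
    A0, Omega, D = 0.5, 0.2, 0.1              # load force amplitude, load frequency, diffusion coefficient
    dt, n_steps, n_paths = 0.01, 20_000, 1_000

    rng = np.random.default_rng(2)
    x = np.zeros(n_paths)
    mean_x = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        drift = -omega * (1.0 + alpha * np.cos(omega1 * t)) * x + A0 * np.cos(Omega * t)
        x = x + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)
        mean_x[i] = x.mean()                  # ensemble-averaged first moment <x(t)>
    print("long-time ensemble mean <x>:", mean_x[-1])
    ```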

  11. Repressing the effects of variable speed harmonic orders in operational modal analysis

    NASA Astrophysics Data System (ADS)

    Randall, R. B.; Coats, M. D.; Smith, W. A.

    2016-10-01

    Discrete frequency components such as machine shaft orders can disrupt the operation of normal Operational Modal Analysis (OMA) algorithms. With constant-speed machines, they have been removed using time synchronous averaging (TSA). This paper compares two approaches for varying-speed machines. In one method, signals are transformed into the order domain, and after the removal of shaft-speed-related components by a cepstral notching method, are transformed back to the time domain to allow normal OMA. In the other, simpler approach, an exponential short-pass lifter is applied directly to the time-domain cepstrum to enhance the modal information at the expense of other disturbances. For simulated gear signals with speed variations of both ±5% and ±15%, the simpler approach was found to give better results. The TSA method is shown not to work in either case. The paper compares the results with those obtained using a stationary random excitation.

  12. The optimization of total laboratory automation by simulation of a pull-strategy.

    PubMed

    Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo

    2015-01-01

    Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. Firstly, value stream mapping (VSM) is used to identify the non-value-added time. Subsequently, batch processing control and parallel scheduling rules are devised and a pull mechanism that comprises a constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to optimize the design parameters and to ensure a small inventory and a shorter average cycle time (CT). For empirical illustration, this approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work and leads to a reduction in patient waiting times and increased service level.

  13. Real-Time Observations of Food and Fluid Timing During a 120 km Ultramarathon.

    PubMed

    Wardenaar, Floris C; Hoogervorst, Daan; Versteegen, Joline J; van der Burg, Nancy; Lambrechtse, Karin J; Bongers, Coen C W G

    2018-01-01

    The aim of the present case study was to use real-time observations to investigate ultramarathon runners' timing of food and fluid intake per 15 km and per hour, and total bodyweight loss due to dehydration. The study included 5 male ultramarathon runners observed during a 120 km race. The research team members followed on bicycles and continuously observed their dietary intake using action cameras. Hourly carbohydrate intake ranged between 22.1 and 62.6 g/h, and fluid intake varied between 260 and 603 mL/h. These numbers remained relatively stable over the course of the ultra-endurance marathon. Runners consumed food and fluid on average 3-6 times per 15 km. Runners achieved a higher total carbohydrate consumption in the second half of the race (p = 0.043), but no higher fluid intake (p = 0.08). Energy gels contributed the most to the total average carbohydrate intake (40.2 ± 25.7%). Post-race weight was 3.6 ± 2.3% (range 0.3-5.7%) lower than pre-race weight, revealing a non-significant (p = 0.08) but practically relevant difference. In conclusion, runners were able to maintain a constant timing of food and fluid intake during competition but adjusted their food choices in the second half of the race. The large variation in fluid and carbohydrate intake indicates that recommendations need to be individualized to further optimize personal intakes.

  14. Time since death and decay rate constants of Norway spruce and European larch deadwood in subalpine forests determined using dendrochronology and radiocarbon dating

    NASA Astrophysics Data System (ADS)

    Petrillo, M.; Cherubini, P.; Fravolini, G.; Ascher, J.; Schärer, M.; Synal, H.-A.; Bertoldi, D.; Camin, F.; Larcher, R.; Egli, M.

    2015-09-01

    Due to the large size and highly heterogeneous spatial distribution of deadwood, the time scales involved in the coarse woody debris (CWD) decay of Picea abies (L.) Karst. and Larix decidua Mill. in Alpine forests have been poorly investigated and are largely unknown. We investigated the CWD decay dynamics in an Alpine valley in Italy using the five-decay-class system commonly employed for forest surveys, based on a macromorphological and visual assessment. For decay classes 1 to 3, most of the dendrochronological samples were cross-dated to assess the time that had elapsed since tree death, but for decay classes 4 and 5 (poorly preserved tree rings) and some other samples without enough tree rings, radiocarbon dating was used. In addition, density, cellulose and lignin data were measured for the dated CWD. The decay rate constants for spruce and larch were estimated on the basis of the density loss using a single negative exponential model. In decay classes 1 to 3, the ages of the CWD were similar, varying between 1 and 54 years for spruce and 3 and 40 years for larch, with no significant differences between the classes; classes 1-3 are therefore not indicative of deadwood age. We found, however, distinct tree-species-specific differences in decay classes 4 and 5, with larch CWD reaching an average age of 210 years in class 5 and spruce only 77 years. The mean CWD decay rate constants were 0.012 to 0.018 yr⁻¹ for spruce and 0.005 to 0.012 yr⁻¹ for larch. Time trends and half-lives of cellulose and lignin (using a multiple-exponential model) could be derived on the basis of the CWD ages. The half-lives for cellulose were 21 yr for spruce and 50 yr for larch. The half-life of lignin is considerably longer and may be more than 100 years in larch CWD.

  15. Fabrication and characterization of carbon nanotube turfs

    NASA Astrophysics Data System (ADS)

    Qiu, Anqi

    Carbon nanotube (CNT) turfs are vertically aligned, slightly tortuous and entangled functional nanomaterials that exhibit high thermal and electrical conductivity. CNT turfs exhibit unique combinations of thermal and electrical conductivity, energy-absorbing capability, low density and adhesive behavior. The objective of this study is to fabricate, measure, manipulate and characterize CNT turfs, determine the relationship between a turf's properties and its morphology, and provide guidance for developing links between turf growth conditions and the subsequent turf properties. Nanoindentation was utilized to determine the mechanical and in situ electrical properties of CNT turfs. Elastic properties do not vary significantly laterally within a single turf, quantifying for the first time the ability to treat the turf as a mechanical continuum throughout. The use of average mechanical properties for any given turf should therefore be suitable for design purposes without the need to account for lateral spatial variation in structure. Property variations related to time dependence, rate dependence, adhesive behavior, and energy absorption and dissipation have been investigated for these CNT turfs. Electrical measurements of CNT turfs show that a constant electrical current at a constant penetration depth indicates a constant number of CNTs in contact with the tip; combined with the observation that adhesive load increased with increasing penetration hold time, this leads to the conclusion that during a hold period of nanoindentation, individual tubes increase their attachment to the tip. CNT turfs show decreased adhesion and modulus after exposure to an electron beam due to carbon deposition and subsequent oxidation. To increase the modulus of the turf, axial compression and solvent capillary action were used to increase the density of the turf by up to 15 times. Structure-property relationships were determined from density and tortuosity measurements carried out through in situ electrical measurements and directionality measurements. Increasing density increases the mechanical properties as well as the electrical conductivity. The modulus increased with lower tortuosity, which may be related to compressive buckling positioning.

  16. Effect of work and recovery durations on W' reconstitution during intermittent exercise.

    PubMed

    Skiba, Philip F; Jackman, Sarah; Clarke, David; Vanhatalo, Anni; Jones, Andrew M

    2014-07-01

    We recently presented an integrating model of the curvature constant of the hyperbolic power-time relationship (W') that permits the calculation of the W' balance (W'BAL) remaining at any time during intermittent exercise. Although a relationship between recovery power and the rate of W' recovery was demonstrated, the effect of the length of work or recovery intervals remains unclear. After determining VO2max, critical power, and W', 11 subjects completed six separate exercise tests on a cycle ergometer on different days, and in random order. Tests consisted of a period of intermittent severe-intensity exercise until the subject depleted approximately 50% of their predicted W'BAL, followed by a constant work rate (CWR) exercise bout until exhaustion. Work rates were kept constant between trials; however, either work or recovery durations during intermittent exercise were varied. The actual W' measured during the CWR (W'ACT) was compared with the amount of W' predicted to be available by the W'BAL model. Although some differences between W'BAL and W'ACT were noted, these amounted to only -1.6 ± 1.1 kJ when averaged across all conditions. The W'ACT was linearly correlated with the difference between VO2 at the start of CWR and VO2max (r = 0.79, P < 0.01). The W'BAL model provided a generally robust prediction of CWR W'. There may exist a physiological optimum formulation of work and recovery intervals such that baseline VO2 can be minimized, leading to an enhancement of subsequent exercise tolerance. These results may have important implications for athletic training and racing.
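
    The W'BAL bookkeeping described above can be sketched as a discrete convolution: every joule of W' spent above critical power is assumed to be paid back exponentially whenever power drops below critical power. The snippet below is a minimal illustration of that idea, not the authors' implementation; the recovery time constant tau_w, the power profile and the W' value are invented for the example (in published versions of this model the time constant itself depends on the recovery power).

    ```python
    import numpy as np

    def w_prime_balance(power, dt, cp, w_prime, tau_w):
        """
        Sketch of an integral W' balance model: W' expended above critical power (CP) is
        assumed to be recovered exponentially with time constant tau_w.

        power   : array of power outputs (W), sampled every dt seconds
        cp      : critical power (W)
        w_prime : W' capacity (J)
        tau_w   : recovery time constant (s), supplied here as an assumption
        """
        t = np.arange(len(power)) * dt
        expenditure = np.maximum(power - cp, 0.0) * dt  # joules of W' spent in each time step
        w_bal = np.empty(len(power), dtype=float)
        for i, ti in enumerate(t):
            # Each past expenditure has been recovering for (ti - time of expenditure) seconds.
            w_bal[i] = w_prime - np.sum(expenditure[: i + 1] * np.exp(-(ti - t[: i + 1]) / tau_w))
        return w_bal

    # Usage: ten cycles of 30 s severe work / 30 s recovery (numbers are illustrative only).
    dt = 1.0
    power = np.tile(np.concatenate([np.full(30, 320.0), np.full(30, 100.0)]), 10)
    balance = w_prime_balance(power, dt, cp=250.0, w_prime=20000.0, tau_w=300.0)
    print(f"predicted W' remaining after {len(power) * dt:.0f} s: {balance[-1] / 1000:.1f} kJ")
    ```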

  17. Microwave Heating of Crystals with Gold Nanoparticles and Synovial Fluid under Synthetic Skin Patches

    PubMed Central

    2017-01-01

    Gout is a disease with elusive treatment options. Reduction of the size of l-alanine crystals as a model crystal for gouty tophi with the use of a monomode solid-state microwave was examined as a possible therapeutic aid. The effect of microwave heating on l-alanine crystals in the presence of gold nanoparticles (Au NPs) in solution and synovial fluid (SF) in a plastic pouch through a synthetic skin patch was investigated. In this regard, three experimental paradigms were employed: Paradigm 1 includes the effect of variable microwave power (5–10 W) and variable heating time (5–60 s) and Au NPs in water (20 nm size, volume of 10 μL) in a plastic pouch (1 × 2 cm2 in size). Paradigm 2 includes the effect of a variable volume of 20 nm Au NPs in a variable volume of SF up to 100 μL in a plastic pouch at a constant microwave power (10 W) for 30 s. Paradigm 3 includes the effect of constant microwave power (10 W) and microwave heating time (30 s), constant volume of Au NPs (100 μL), and variable size of Au NPs (20–200 nm) placed in a plastic pouch through a synthetic skin patch. In these experiments, an average of 60–100% reduction in the size of an l-alanine crystal (initial size = 450 μm) without damage to the synthetic skin or increasing the temperature of the samples beyond the physiological range was reported. PMID:28983527

  18. Study on Corrosion-induced Crack Initiation and Propagation of Sustaining Loaded RC Beams

    NASA Astrophysics Data System (ADS)

    Zhong, X. P.; Li, Y.; Yuan, C. B.; Yang, Z.; Chen, Y.

    2018-05-01

    For 13 reinforced concrete beams with HRB500 steel bars under long-term sustained loads, corrosion of the steel bars and the time-varying behavior of corrosion-induced crack width were studied at the time of initial corrosion-induced cracking of the concrete and at corrosion-induced crack widths of 0.3 mm and 1 mm, using the ECWD (Electro-osmosis - Constant Current - Wet and Dry cycles) accelerated corrosion method. The results show that when cover thickness was between 30 and 50 mm, corrosion rates of the steel bars were between 0.8% and 1.7% at the time of corrosion-induced cracking and decreased with increasing concrete cover thickness. When the corrosion-induced crack width was 0.3 mm, the corrosion rate decreased with increasing steel bar diameter and increased with increasing cover thickness, varying between 0.98% and 4.54%. When the corrosion-induced crack width reached 1 mm, the corrosion rate of the steel bars was between 4% and 4.5%. When the corrosion rate of the steel bars was within 5%, the maximum and average corrosion-induced crack widths had a good linear relationship with the corrosion rate of the steel bars. A calculation model predicting the maximum and average width of corrosion-induced cracks is given in this paper.

  19. Henry's Constants of Persistent Organic Pollutants by a Group-Contribution Method Based on Scaled-Particle Theory.

    PubMed

    Razdan, Neil K; Koshy, David M; Prausnitz, John M

    2017-11-07

    A group-contribution method based on scaled-particle theory was developed to predict Henry's constants for six families of persistent organic pollutants: polychlorinated benzenes, polychlorinated biphenyls, polychlorinated dibenzodioxins, polychlorinated dibenzofurans, polychlorinated naphthalenes, and polybrominated diphenyl ethers. The group-contribution model uses limited experimental data to obtain group-interaction parameters for an easy-to-use method to predict Henry's constants for systems where reliable experimental data are scarce. By using group-interaction parameters obtained from data reduction, scaled-particle theory gives the partial molar Gibbs energy of dissolution, Δg̅₂, allowing calculation of Henry's constant, H₂, for more than 700 organic pollutants. The average deviation between predicted values of log H₂ and experiment is 4%. Application of an approximate van't Hoff equation gives the temperature dependence of Henry's constants for polychlorinated biphenyls, polychlorinated naphthalenes, and polybrominated diphenyl ethers in the environmentally relevant range 0-40 °C.
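
    The temperature correction mentioned in the last sentence can be applied with an approximate van't Hoff expression. The sketch below shows the form of that correction; the reference Henry's constant and the enthalpy value are hypothetical placeholders, and the sign convention is stated in the docstring.

    ```python
    import math

    R = 8.314  # J mol^-1 K^-1

    def henrys_constant_at_temperature(h_ref, t_ref_c, t_c, delta_h):
        """
        Approximate van't Hoff temperature dependence of a Henry's constant:
        ln(H(T)/H(T_ref)) = -(delta_h / R) * (1/T - 1/T_ref), with delta_h the enthalpy of
        phase change (J mol^-1); a positive delta_h makes H increase with temperature.
        """
        t, t_ref = t_c + 273.15, t_ref_c + 273.15
        return h_ref * math.exp(-(delta_h / R) * (1.0 / t - 1.0 / t_ref))

    # Placeholder values for illustration only: H at 25 C and an assumed enthalpy.
    h_25 = 50.0         # Pa m^3 mol^-1 (hypothetical)
    delta_h = 40_000.0  # J mol^-1 (hypothetical)
    for t in (0, 10, 25, 40):
        print(t, round(henrys_constant_at_temperature(h_25, 25.0, t, delta_h), 1))
    ```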

  20. Method and Apparatus for the Portable Identification of Material Thickness and Defects Using Spatially Controlled Heat Application

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott (Inventor); Winfree, William P. (Inventor)

    1999-01-01

    A method and a portable apparatus for the nondestructive identification of defects in structures. The apparatus comprises a heat source and a thermal imager that move at a constant speed past a test surface of a structure. The thermal imager is offset at a predetermined distance from the heat source. The heat source induces a constant surface temperature. The imager follows the heat source and produces a video image of the thermal characteristics of the test surface. Material defects produce deviations from the constant surface temperature that move at the inverse of the constant speed. Thermal noise produces deviations that move at random speed. Computer averaging of the digitized thermal image data with respect to the constant speed minimizes noise and improves the signal of valid defects. The motion of thermographic equipment coupled with the high signal to noise ratio renders it suitable for portable, on-site analysis.
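
    The noise-suppression step described above can be sketched as a motion-compensated average: because valid defect signatures translate with the known scan speed while thermal noise does not, shifting each digitized frame back by the accumulated scan displacement before averaging reinforces defects and suppresses noise. The snippet below is only an illustrative sketch of that idea; the frame size, scan speed and use of a wrap-around shift are assumptions, not the patented implementation.

    ```python
    import numpy as np

    def motion_compensated_average(frames, speed_px_per_frame):
        """
        Average a stack of thermal frames after undoing the known scan motion, so features
        that move with the heat source add coherently while random thermal noise averages out.

        frames             : array of shape (n_frames, height, width)
        speed_px_per_frame : integer scan displacement per frame along the width axis
        """
        n_frames = frames.shape[0]
        accum = np.zeros(frames.shape[1:], dtype=float)
        for i, frame in enumerate(frames):
            # np.roll wraps at the edges; a real implementation would crop to the common field of view.
            accum += np.roll(frame, -i * speed_px_per_frame, axis=1)
        return accum / n_frames

    # Synthetic check: pure noise around a constant surface temperature averages toward a flat image.
    rng = np.random.default_rng(0)
    frames = rng.normal(20.0, 0.5, size=(50, 64, 256))
    averaged = motion_compensated_average(frames, speed_px_per_frame=2)
    print(averaged.shape, round(float(averaged.std()), 3))
    ```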

  1. Risk of silicosis in a Colorado mining community.

    PubMed

    Kreiss, K; Zhen, B

    1996-11-01

    We investigated exposure-response relations for silicosis among 134 men over age 40 who had been identified in a previous community-based random sample study in a mining town. Thirty-two percent of the 100 dust-exposed subjects had radiologic profusions of small opacities of 1/0 or greater at a mean time since first silica exposure of 36.1 years. Of miners with cumulative silica exposures of 2 mg/m3-years or less, 20% had silicosis; of miners accumulating > 2 mg/m3-years, 63% had silicosis. Average silica exposure was also strongly associated with silicosis prevalence rates, with 13% silicotics among those with average exposure of 0.025-0.05 mg/m3, 34% among those with exposures of > 0.05-0.1 mg/m3, and 75% among those with average exposures > 0.1 mg/m3. Logistic regression models demonstrated that time since last silica exposure and either cumulative silica exposure or a combination of average silica exposure and duration of exposure predicted silicosis risk. Exposure-response relations were substantially higher using measured silica exposures than using estimated silica exposures based on measured dust exposures assuming a constant silica proportion of dust, consistent with less exposure misclassification. The risk of silicosis found in this study is higher than has been found in workforce studies having no follow-up of those leaving the mining industry and in studies without job title-specific silica measurements, but comparable to several recent studies of dust exposure-response relationships which suggest that a permissible exposure limit of 0.1 mg/m3 for silica does not protect against radiologic silicosis.

  2. Impacts of sampling design and estimation methods on nutrient leaching of intensively monitored forest plots in the Netherlands.

    PubMed

    de Vries, W; Wieggers, H J J; Brus, D J

    2010-08-05

    Element fluxes through forest ecosystems are generally based on measurements of concentrations in soil solution at regular time intervals at plot locations sampled in a regular grid. Here we present spatially averaged annual element leaching fluxes in three Dutch forest monitoring plots using a new sampling strategy in which both sampling locations and sampling times are selected by probability sampling. Locations were selected by stratified random sampling with compact geographical blocks of equal surface area as strata. In each sampling round, six composite soil solution samples were collected, consisting of five aliquots, one per stratum. The plot-mean concentration was estimated by linear regression, so that the bias due to one or more strata not being represented in the composite samples is eliminated. The sampling times were selected in such a way that the cumulative precipitation surplus of the time interval between two consecutive sampling times was constant, using an estimated precipitation surplus averaged over the past 30 years. The spatially averaged annual leaching flux was estimated by using the modeled daily water flux as an ancillary variable. An important advantage of the new method is that the uncertainty in the estimated annual leaching fluxes due to spatial and temporal variation and resulting sampling errors can be quantified. Results of this new method were compared with the reference approach in which daily leaching fluxes were calculated by multiplying daily interpolated element concentrations with daily water fluxes and then aggregated to a year. Results show that the annual fluxes calculated with the reference method for the period 2003-2005, including all plots, elements and depths, lie within the range of the average +/-2 times the standard error of the new method in only 53% of the cases. Despite the differences in results, both methods indicate comparable N retention and strong Al mobilization in all plots, with Al leaching being nearly equal to the leaching of SO(4) and NO(3) with fluxes expressed in mol(c) ha(-1) yr(-1). This illustrates that Al release, which is the clearest signal of soil acidification, is mainly due to the external input of SO(4) and NO(3).

  3. A satellite snow depth multi-year average derived from SSM/I for the high latitude regions

    USGS Publications Warehouse

    Biancamaria, S.; Mognard, N.M.; Boone, A.; Grippa, M.; Josberger, E.G.

    2008-01-01

    The hydrological cycle for high latitude regions is inherently linked with the seasonal snowpack. Thus, accurately monitoring the snow depth and the associated areal coverage are critical issues for monitoring the global climate system. Passive microwave satellite measurements provide an optimal means to monitor the snowpack over the arctic region. While the temporal evolution of snow extent can be observed globally from microwave radiometers, the determination of the corresponding snow depth is more difficult. A dynamic algorithm that accounts for the dependence of the microwave scattering on the snow grain size has been developed to estimate snow depth from Special Sensor Microwave/Imager (SSM/I) brightness temperatures and was validated over the U.S. Great Plains and Western Siberia. The purpose of this study is to assess the dynamic algorithm performance over the entire high latitude (land) region by computing a snow depth multi-year field for the time period 1987-1995. This multi-year average is compared to the Global Soil Wetness Project-Phase2 (GSWP2) snow depth computed from several state-of-the-art land surface schemes and averaged over the same time period. The multi-year average obtained by the dynamic algorithm is in good agreement with the GSWP2 snow depth field (the correlation coefficient for January is 0.55). The static algorithm, which assumes a constant snow grain size in space and time, does not correlate with the GSWP2 snow depth field (the correlation coefficient with GSWP2 data for January is -0.03), but exhibits a very high anti-correlation with the NCEP average January air temperature field (correlation coefficient -0.77), the deepest satellite snowpack being located in the coldest regions, where the snow grain size may be significantly larger than the average value used in the static algorithm. The dynamic algorithm performs better over Eurasia (with a correlation coefficient with GSWP2 snow depth equal to 0.65) than over North America (where the correlation coefficient decreases to 0.29). © 2007 Elsevier Inc. All rights reserved.

  4. Characterization of traffic-related PM concentration distribution and fluctuation patterns in near-highway urban residential street canyons.

    PubMed

    Hahn, Intaek; Brixey, Laurie A; Wiener, Russell W; Henkle, Stacy W; Baldauf, Richard

    2009-12-01

    Analyses of outdoor traffic-related particulate matter (PM) concentration distribution and fluctuation patterns in urban street canyons within a microscale distance of less than 500 m from a highway source are presented as part of the results from the Brooklyn Traffic Real-Time Ambient Pollutant Penetration and Environmental Dispersion (B-TRAPPED) study. Various patterns of spatial and temporal changes in the street canyon PM concentrations were investigated using time-series data of real-time PM concentrations measured during multiple monitoring periods. Concurrent time-series data of local street canyon wind conditions and wind data from the John F. Kennedy (JFK) International Airport National Weather Service (NWS) were used to characterize the effects of various wind conditions on the behavior of street canyon PM concentrations. Our results suggest that wind direction may strongly influence time-averaged mean PM concentration distribution patterns in near-highway urban street canyons. The rooftop-level wind speeds were found to be strongly correlated with the PM concentration fluctuation intensities in the middle sections of the street blocks. The ambient turbulence generated by shifting local wind directions (angles) showed a good correlation with the PM concentration fluctuation intensities along the entire distance of the first and second street blocks only when the wind angle standard deviations were larger than 30 degrees. Within-canyon turbulent shearing, caused by fluctuating local street canyon wind speeds, showed no correlation with PM concentration fluctuation intensities. The time-averaged mean PM concentration distribution along the longitudinal distances of the street blocks when wind direction was mostly constant and parallel to the street was found to be similar to the distribution pattern for the entire monitoring period when wind direction fluctuated wildly. Finally, we showed that two different PM concentration metrics (time-averaged mean concentration and the number of concentration peaks above a certain threshold level) can possibly lead to different assessments of spatial concentration distribution patterns.

  5. Swelling Kinetics of Waxy Maize Starch

    NASA Astrophysics Data System (ADS)

    Desam, Gnana Prasuna Reddy

    Starch pasting behavior greatly influences the texture of a variety of food products such as canned soup, sauces, baby foods, batter mixes, etc. The annual consumption of starch in the U.S. is 3 million metric tons. It is important to characterize the relationship between the structure, composition and architecture of the starch granules and their pasting behavior in order to arrive at a rational methodology to design modified starch of desirable digestion rate and texture. In this research, polymer solution theory was applied to predict the evolution of average granule size of starch at different heating temperatures in terms of its molecular weight, second virial coefficient and extent of cross-linking. Evolution of granule size distribution of waxy native maize starch when subjected to heating at constant temperatures of 65, 70, 75, 80, 85 and 90 °C was characterized using static laser light scattering. As expected, granule swelling was more pronounced at higher temperatures and resulted in a shift of granule size distribution to larger sizes with a corresponding increase in the average size by 100 to 120%, from 13 μm to 25-28 μm. Most of the swelling occurred within the first 10 min of heating. Pasting behavior of waxy maize at different temperatures was also characterized from the measurements of G' and G" for different heating times. G' was found to increase with temperature at a holding time of 2 min, followed by a decrease at longer holding times. This behavior is believed to be due to the predominant effect of swelling at small times. However, G" was insensitive to temperature and holding times. The structure of waxy maize starch was characterized by cryo-scanning electron microscopy. Experimental data of average granule size versus time at different temperatures were compared with model predictions, as were experimental particle size distributions at different times and temperatures.

  6. Monte Carlo Simulations of the Kinetics of Protein Adsorption

    NASA Astrophysics Data System (ADS)

    Zhdanov, V. P.; Kasemo, B.

    The past decade has been characterized by rapid progress in Monte Carlo simulations of protein folding in a solution. This review summarizes the main results obtained in the field, as a background to the major topic, namely corresponding advances in simulations of protein adsorption kinetics at solid-liquid interfaces. The latter occur via diffusion in the liquid towards the interface followed by actual adsorption, and subsequent irreversible conformational changes, resulting in more or less pronounced denaturation of the native protein structure. The conventional kinetic models describing these steps are based on the assumption that the denaturation transitions obey the first-order law with a single value of the denaturation rate constant kr. The validity of this assumption has been studied in recent lattice Monte Carlo simulations of denaturation of model protein-like molecules with different types of the monomer-monomer interactions. The results obtained indicate that, due to trapping in metastable states, (i) the transition of a molecule to the denatured state is usually nonexponential in time, i.e. it does not obey the first-order law, and (ii) the denaturation transitions of an ensemble of different molecules are characterized by different time scales, i.e. the denaturation process cannot be described by a single rate constant kr. One should, rather, introduce a distribution of values of this rate constant (physically, different values of kr reflect the fact that the transitions to the altered state occur via different metastable states). The phenomenological kinetics of irreversible adsorption of proteins with and without a distribution of the denaturation rate constant values have been calculated in the limits where protein diffusion in the solution is, respectively, rapid or slow. In both cases, the adsorption kinetics with a distribution of kr are found to be close to those with a single-valued rate constant kr, provided that the average value of kr in the former case is equal to kr in the latter case. This conclusion holds even for wide distributions of kr. The consequences of this finding for the fitting of global experimental kinetics on the basis of phenomenological equations are briefly discussed.
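
    The distinction between a single denaturation rate constant and a distribution of rate constants can be illustrated in a few lines of code. The sketch below is an illustration, not the simulations reviewed here; the lognormal form and its width are arbitrary choices. It compares the surviving native fraction exp(-kr t) for a single kr with the ensemble average over a fairly wide lognormal distribution of kr having the same mean.

    ```python
    import numpy as np

    def native_fraction_single(t, kr):
        """First-order denaturation with a single rate constant kr."""
        return np.exp(-kr * t)

    def native_fraction_distributed(t, kr_samples):
        """Ensemble of molecules, each denaturing first-order with its own rate constant."""
        return np.mean(np.exp(-np.outer(kr_samples, t)), axis=0)

    rng = np.random.default_rng(1)
    kr_mean, sigma = 0.1, 0.8  # arbitrary mean rate and lognormal width
    kr_samples = rng.lognormal(mean=np.log(kr_mean) - 0.5 * sigma**2, sigma=sigma, size=10_000)

    t = np.linspace(0.0, 50.0, 6)
    for ti, single, distributed in zip(t, native_fraction_single(t, kr_mean),
                                       native_fraction_distributed(t, kr_samples)):
        print(f"t = {ti:4.0f}   single kr: {single:.3f}   distributed kr: {distributed:.3f}")
    ```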

  7. Development of Solid Ceramic Dosimeters for the Time-Integrative Passive Sampling of Volatile Organic Compounds in Waters.

    PubMed

    Bonifacio, Riza Gabriela; Nam, Go-Un; Eom, In-Yong; Hong, Yong-Seok

    2017-11-07

    Time-integrative passive sampling of volatile organic compounds (VOCs) in water can now be accomplished using a solid ceramic dosimeter. A nonporous ceramic, which excludes the permeation of water, allowing only gas-phase diffusion of VOCs into the resin inside the dosimeter, effectively captured the VOCs. The mass accumulation of 11 VOCs linearly increased with time over a wide range of aqueous-phase concentrations (16.9 to 1100 μg L⁻¹), and the linearity was dependent upon the Henry's constant (H). The average diffusivity of the VOCs in the solid ceramic was 1.46 × 10⁻¹⁰ m² s⁻¹ at 25 °C, which was 4 orders of magnitude lower than that in air (8.09 × 10⁻⁶ m² s⁻¹). This value was 60% greater than that in the water-permeable porous ceramic (0.92 × 10⁻¹⁰ m² s⁻¹), suggesting that its mass accumulation could be more effective than that of porous ceramic dosimeters. The mass accumulation of the VOCs in the solid ceramic dosimeter increased in the presence of salt (≥0.1 M) and with increasing temperature (4 to 40 °C) but varied only slightly with dissolved organic matter concentration. The solid ceramic dosimeter was suitable for the field testing and measurement of time-weighted average concentrations of VOC-contaminated waters.
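
    The time-weighted average concentration is recovered from the linear mass accumulation by dividing the collected mass by an effective sampling rate set by the ceramic's geometry, the gas-phase diffusivity and the Henry's constant. The function below is a simplified quasi-steady sketch of that calibration idea; apart from the diffusivity, which is the value reported above, every number (deployment time, area, wall thickness, Henry's constant, collected mass) is an assumed placeholder.

    ```python
    def twa_concentration(mass_ug, t_s, d_ceramic_m2_s, area_m2, thickness_m, h_dimensionless):
        """
        Back-calculate a time-weighted average aqueous VOC concentration (ug L^-1) from the
        mass collected by a diffusion-based dosimeter, assuming quasi-steady Fick's-law
        transport through the ceramic wall and gas/water partitioning (dimensionless Henry's
        constant) at the outer surface.
        """
        sampling_rate_m3_s = d_ceramic_m2_s * area_m2 * h_dimensionless / thickness_m
        c_w_ug_m3 = mass_ug / (sampling_rate_m3_s * t_s)
        return c_w_ug_m3 / 1000.0  # ug m^-3 -> ug L^-1

    print(round(twa_concentration(
        mass_ug=5.0,                 # collected VOC mass (assumed)
        t_s=14 * 24 * 3600.0,        # two-week deployment (assumed)
        d_ceramic_m2_s=1.46e-10,     # diffusivity in the solid ceramic (from the record)
        area_m2=5e-4,                # exposed ceramic area (assumed)
        thickness_m=1.5e-3,          # ceramic wall thickness (assumed)
        h_dimensionless=0.2,         # dimensionless Henry's constant (assumed)
    ), 1))
    ```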

  8. Optimal Server Scheduling to Maintain Constant Customer Waiting Times

    DTIC Science & Technology

    1988-12-01

    Thesis (AFIT/GOR/ENS/88D-7): Optimal Server Scheduling to Maintain Constant Customer Waiting Times, by Thomas J. Frey, Captain, USAF. Presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University.

  9. Sub-nanosecond time-resolved ambient-pressure X-ray photoelectron spectroscopy setup for pulsed and constant wave X-ray light sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shavorskiy, Andrey; Slaughter, Daniel S.; Zegkinoglou, Ioannis

    2014-09-15

    An apparatus for sub-nanosecond time-resolved ambient-pressure X-ray photoelectron spectroscopy studies with pulsed and constant wave X-ray light sources is presented. A differentially pumped hemispherical electron analyzer is equipped with a delay-line detector that simultaneously records the position and arrival time of every single electron at the exit aperture of the hemisphere with ∼0.1 mm spatial resolution and ∼150 ps temporal accuracy. The kinetic energies of the photoelectrons are encoded in the hit positions along the dispersive axis of the two-dimensional detector. Pump-probe time-delays are provided by the electron arrival times relative to the pump pulse timing. An average time-resolution of (780 ± 20) ps (FWHM) is demonstrated for a hemisphere pass energy Ep = 150 eV and an electron kinetic energy range KE = 503–508 eV. The time-resolution of the setup is limited by the electron time-of-flight (TOF) spread related to the electron trajectory distribution within the analyzer hemisphere and within the electrostatic lens system that images the interaction volume onto the hemisphere entrance slit. The TOF spread for electrons with KE = 430 eV varies between ∼9 ns at a pass energy of 50 eV and ∼1 ns at pass energies between 200 eV and 400 eV. The correlation between the retarding ratio and the TOF spread is evaluated by means of both analytical descriptions of the electron trajectories within the analyzer hemisphere and computer simulations of the entire trajectories including the electrostatic lens system. In agreement with previous studies, we find that the by far dominant contribution to the TOF spread is acquired within the hemisphere. However, both experiment and computer simulations show that the lens system indirectly affects the time resolution of the setup to a significant extent by inducing a strong dependence of the angular spread of electron trajectories entering the hemisphere on the retarding ratio. The scaling of the angular spread with the retarding ratio can be well approximated by applying Liouville's theorem of constant emittance to the electron trajectories inside the lens system. The performance of the setup is demonstrated by characterizing the laser fluence-dependent transient surface photovoltage response of a laser-excited Si(100) sample.

  10. Nonadiabatic rate constants for proton transfer and proton-coupled electron transfer reactions in solution: Effects of quadratic term in the vibronic coupling expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soudackov, Alexander; Hammes-Schiffer, Sharon

    2015-11-17

    Rate constant expressions for vibronically nonadiabatic proton transfer and proton-coupled electron transfer reactions are presented and analyzed. The regimes covered include electronically adiabatic and nonadiabatic reactions, as well as high-frequency and low-frequency regimes for the proton donor-acceptor vibrational mode. These rate constants differ from previous rate constants derived with the cumulant expansion approach in that the logarithmic expansion of the vibronic coupling in terms of the proton donor-acceptor distance includes a quadratic as well as a linear term. The analysis illustrates that inclusion of this quadratic term does not significantly impact the rate constants derived using the cumulant expansion approach in any of the regimes studied. The effects of the quadratic term may become significant when using the vibronic coupling expansion in conjunction with a thermal averaging procedure for calculating the rate constant, however, particularly at high temperatures and for proton transfer interfaces with extremely soft proton donor-acceptor modes that are associated with extraordinarily weak hydrogen bonds. Even with the thermal averaging procedure, the effects of the quadratic term for weak hydrogen-bonding systems are less significant for more physically realistic models that prevent the sampling of unphysically short proton donor-acceptor distances, and the expansion of the coupling can be avoided entirely by calculating the couplings explicitly for the range of proton donor-acceptor distances. This analysis identifies the regimes in which each rate constant expression is valid and thus will be important for future applications to proton transfer and proton-coupled electron transfer in chemical and biological processes. We are grateful for support from National Institutes of Health Grant GM056207 (applications to enzymes) and the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences (applications to molecular electrocatalysts).

  11. Molecular dynamics studies of a DNA-binding protein: 2. An evaluation of implicit and explicit solvent models for the molecular dynamics simulation of the Escherichia coli trp repressor.

    PubMed Central

    Guenot, J.; Kollman, P. A.

    1992-01-01

    Although aqueous simulations with periodic boundary conditions more accurately describe protein dynamics than in vacuo simulations, these are computationally intensive for most proteins. Trp repressor dynamic simulations with a small water shell surrounding the starting model yield protein trajectories that are markedly improved over gas phase, yet computationally efficient. Explicit water in molecular dynamics simulations maintains surface exposure of protein hydrophilic atoms and burial of hydrophobic atoms by opposing the otherwise asymmetric protein-protein forces. This properly orients protein surface side chains, reduces protein fluctuations, and lowers the overall root mean square deviation from the crystal structure. For simulations with crystallographic waters only, a linear or sigmoidal distance-dependent dielectric yields a much better trajectory than does a constant dielectric model. As more water is added to the starting model, the differences between using distance-dependent and constant dielectric models become smaller, although the linear distance-dependent dielectric yields an average structure closer to the crystal structure than does a constant dielectric model. Multiplicative constants greater than one, for the linear distance-dependent dielectric simulations, produced trajectories that are progressively worse in describing trp repressor dynamics. Simulations of bovine pancreatic trypsin inhibitor were used to ensure that the trp repressor results were not protein dependent and to explore the effect of the nonbonded cutoff on the distance-dependent and constant dielectric simulation models. The nonbonded cutoff markedly affected the constant but not distance-dependent dielectric bovine pancreatic trypsin inhibitor simulations. As with trp repressor, the distance-dependent dielectric model with a shell of water surrounding the protein produced a trajectory in better agreement with the crystal structure than a constant dielectric model, and the physical properties of the trajectory average structure, both with and without a nonbonded cutoff, were comparable. PMID:1304396
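
    The dielectric models being compared differ only in how the Coulomb term is screened. The toy function below is a minimal sketch, not the simulation code; the charges, distances and prefactor convention are illustrative. It contrasts a constant dielectric with a linear distance-dependent dielectric eps(r) = eps0 * r, which damps long-range interactions more strongly.

    ```python
    def coulomb_energy(q1, q2, r_angstrom, model="linear", eps0=1.0):
        """
        Electrostatic pair energy in kcal/mol with either a constant dielectric or a linear
        distance-dependent dielectric eps(r) = eps0 * r. The factor 332.06 converts
        e^2/Angstrom into kcal/mol.
        """
        if model == "constant":
            eps = eps0
        elif model == "linear":
            eps = eps0 * r_angstrom  # screening grows with separation
        else:
            raise ValueError("model must be 'constant' or 'linear'")
        return 332.06 * q1 * q2 / (eps * r_angstrom)

    # A +1/-1 charge pair at increasing separation: the linear model falls off as 1/r^2.
    for r in (3.0, 6.0, 12.0):
        print(r, round(coulomb_energy(1.0, -1.0, r, "constant"), 1),
              round(coulomb_energy(1.0, -1.0, r, "linear"), 1))
    ```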

  12. Delay of constant light-induced persistent vaginal estrus by 24-hour time cues in rats.

    PubMed

    Weber, A L; Adler, N T

    1979-04-20

    The normal ovarian cycle of female rats is typically replaced by persistent estrus when these animals are housed under constant light. Evidence presented here shows that the maintenance of periodicity in the environment can at least delay (if not prevent) the photic induction of persistent vaginal estrus. Female rats in constant light were exposed to vaginal smearing at random times or at the same time every day. In another experiment, female rats were exposed to either constant bright light, constant dim light, or a 24-hour photic cycle of bright and dim light. The onset of persistent vaginal estrus was delayed in rats exposed to 24-hour time cues even though the light intensities were the same as or greater than those for the aperiodic control groups. The results suggest that the absence of 24-hour time cues in constant light contributes to the induction of persistent estrus.

  13. Constant Flux of Spatial Niche Partitioning through High-Resolution Sampling of Magnetotactic Bacteria.

    PubMed

    He, Kuang; Gilder, Stuart A; Orsi, William D; Zhao, Xiangyu; Petersen, Nikolai

    2017-10-15

    Magnetotactic bacteria (MTB) swim along magnetic field lines in water. They are found in aquatic habitats throughout the world, yet knowledge of their spatial and temporal distribution remains limited. To help remedy this, we took MTB-bearing sediment from a natural pond, mixed the thoroughly homogenized sediment into two replicate aquaria, and then counted three dominant MTB morphotypes (coccus, spirillum, and rod-shaped MTB cells) at a high spatiotemporal sampling resolution: 36 discrete points in replicate aquaria were sampled every ∼30 days over 198 days. Population centers of the MTB coccus and MTB spirillum morphotypes moved in continual flux, yet they consistently inhabited separate locations, displaying significant anticorrelation. Rod-shaped MTB were initially concentrated toward the northern end of the aquaria, but at the end of the experiment, they were most densely populated toward the south. The finding that the total number of MTB cells increased over time during the experiment argues that population reorganization arose from relative changes in cell division and death and not from migration. The maximum net growth rates were 10, 3, and 1 doublings day⁻¹ and average net growth rates were 0.24, 0.11, and 0.02 doublings day⁻¹ for MTB cocci, MTB spirilla, and rod-shaped MTB, respectively; minimum growth rates for all three morphotypes were -0.03 doublings day⁻¹. Our results suggest that MTB cocci and MTB spirilla occupy distinctly different niches: their horizontal positioning in sediment is anticorrelated and under constant flux. IMPORTANCE Little is known about the horizontal distribution of magnetotactic bacteria in sediment or how the distribution changes over time. We therefore measured three dominant magnetotactic bacterium morphotypes at 36 places in two replicate aquaria each month for 7 months. We found that the spatial positioning of population centers changed over time and that the two most abundant morphotypes (MTB cocci and MTB spirilla) occupied distinctly different niches in the aquaria. Maximum and average growth and death rates were quantified for each of the three morphotypes based on 72 sites that were measured six times. The findings provided novel insight into the differential behavior of noncultured magnetotactic bacteria. Copyright © 2017 American Society for Microbiology.
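
    The growth and decline rates quoted above are net rates in doublings per day, which can be computed directly from two successive counts. A minimal helper follows; the counts are illustrative only, not data from the study.

    ```python
    import math

    def net_growth_rate(count_start, count_end, days):
        """Net growth rate in doublings per day between two counts; negative means net decline."""
        return math.log2(count_end / count_start) / days

    # A population doubling 0.24 times per day for 30 days, and one declining over the same period.
    print(round(net_growth_rate(1_000, 1_000 * 2 ** (0.24 * 30), 30), 2))  # -> 0.24
    print(round(net_growth_rate(1_000, 400, 30), 2))                       # -> about -0.04
    ```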

  14. Determination of the rate constant for the NH2(X(2)B1) + NH2(X(2)B1) reaction at low pressure and 293 K.

    PubMed

    Bahng, Mi-Kyung; Macdonald, R Glen

    2008-12-25

    The rate constant for the reaction NH(2)(X(2)B(1)) + NH(2)(X(2)B(1)) --> products was measured in CF(4), N(2) and Ar carrier gases at 293 +/- 2 K over a pressure range from 2 to 10 Torr. The NH(2) radical was produced by the 193 nm photolysis of NH(3) dilute in the carrier gas. Both the loss of NH(3) and its subsequent recovery and the production of NH(2) and subsequent reaction were monitored simultaneously following the photolysis laser pulse. Both species were detected using quantitative time-resolved high-resolution absorption spectroscopy. The NH(3) molecule was monitored in the NIR using a rotation transition of the nu(1) + nu(3) first combination band near 1500 nm, and the NH(2) radical was monitored using the (1)2(21) <-- (1)3(31) rotational transition of the (0,7,0)A(2)A(1) <-- (0,0,0) X(2)B(1) band near 675 nm. The low-pressure rate constant showed a linear dependence on pressure. The slope of the pressure dependence was dominated by a recombination rate constant for NH(2) + NH(2) given by (8.0 +/- 0.5) x 10(-29), (5.7 +/- 0.7) x 10(-29), and (3.9 +/- 0.4) x 10(-29) cm(6) molecule(-2) s(-1) in CF(4), N(2), and Ar bath gases, respectively, where the uncertainties are +/-2sigma in the scatter of the measurements. The average of the three independent measurements of the sum of the disproportionation rate constants (the zero pressure rate constant) was (3.4 +/- 6) x 10(-13) cm(3) molecule(-1) s(-1), where the uncertainty is +/-2sigma in the scatter of the measurements.
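
    The pressure dependence reported above amounts to a straight-line fit of the observed second-order rate constant against bath-gas number density: the intercept is the zero-pressure (disproportionation) contribution and the slope is the termolecular recombination rate constant. The sketch below illustrates that analysis with synthetic, noise-free points built from the CF4 values quoted in the record; the number densities correspond roughly to 2-10 Torr at 293 K.

    ```python
    import numpy as np

    def fit_pressure_dependence(m_density, k_obs):
        """
        Least-squares fit of k_obs = k_0 + k_rec * [M].
        Returns (k_0 in cm^3 molecule^-1 s^-1, k_rec in cm^6 molecule^-2 s^-1).
        """
        m = np.asarray(m_density, dtype=float)
        k = np.asarray(k_obs, dtype=float)
        slope = np.sum((m - m.mean()) * (k - k.mean())) / np.sum((m - m.mean()) ** 2)
        return k.mean() - slope * m.mean(), slope

    # Synthetic points using the CF4 magnitudes reported above.
    k_0_true, k_rec_true = 3.4e-13, 8.0e-29
    m = np.linspace(6.6e16, 3.3e17, 5)  # molecule cm^-3, roughly 2-10 Torr at 293 K
    k_obs = k_0_true + k_rec_true * m
    k_0, k_rec = fit_pressure_dependence(m, k_obs)
    print(f"k_0 = {k_0:.2e} cm^3 molecule^-1 s^-1, k_rec = {k_rec:.2e} cm^6 molecule^-2 s^-1")
    ```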

  15. Investigation into constant envelope orthogonal frequency division multiplexing for polarization-division multiplexing coherent optical communication

    NASA Astrophysics Data System (ADS)

    Li, Yupeng; Ding, Ding

    2017-09-01

    Benefiting from its high spectral efficiency and low peak-to-average power ratio, constant envelope orthogonal frequency division multiplexing (OFDM) is a promising technique in coherent optical communication. Polarization-division multiplexing (PDM) has been employed as an effective way to double the transmission capacity in the commercial 100 Gb/s PDM-QPSK system. We investigated constant envelope OFDM together with PDM. Simulation results show that the acceptable maximum launch power into the fiber improves by 10 and 6 dB for 80- and 320-km transmission, respectively (compared with the conventional PDM OFDM system). The constant envelope OFDM system is able to reach 800 km, and even 1200 km is reachable if an ideal erbium-doped fiber amplifier is employed.

  16. Is liver SUV stable over time in ¹⁸F-FDG PET imaging?

    PubMed

    Laffon, Eric; Adhoute, Xavier; de Clermont, Henri; Marthan, Roger

    2011-12-01

    This work investigated whether (18)F-FDG PET standardized uptake value (SUV) is stable over time in the normal human liver. The SUV-versus-time curve, SUV(t), of (18)F-FDG in the normal human liver was derived from a kinetic model analysis. This derivation involved mean values of (18)F-FDG liver metabolism that were obtained from a patient series (n = 11), and a noninvasive population-based input function was used in each individual. Mean values (±95% reliability limits) of the (18)F-FDG uptake and release rate constant and of the fraction of free tracer in blood and interstitial volume were as follows: K = 0.0119 mL·min(-1)·mL(-1) (±0.0012), k(R) = 0.0065·min(-1) (±0.0009), and F = 0.21 mL·mL(-1) (±0.11), respectively. SUV(t) (corrected for (18)F physical decay) was derived from these mean values, showing that it smoothly peaks at 75-80 min on average after injection and that it is within 5% of the peak value between 50 and 110 min after injection. In the normal human liver, decay-corrected SUV(t) remains nearly constant (with a reasonable ±2.5% relative measurement uncertainty) if the time delay between tracer injection and PET acquisition is in the range of 50-110 min. In current clinical practice, the findings suggest that SUV of the normal liver can be used for comparison with SUV of suspected malignant lesions, if comparison is made within this time range.
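
    The shape of the liver SUV(t) curve can be illustrated with a generic one-tissue-compartment sketch driven by the mean parameters quoted above (K, kR and the free fraction F). This is only an illustration of how a broad plateau around 50-110 min can arise: the compartment structure is a simplification and the biexponential input function below is an assumed placeholder, not the population-based input function used in the study.

    ```python
    import numpy as np

    def liver_signal(t_min, K, kR, F, input_fn):
        """
        One-tissue-compartment sketch: dC_t/dt = K*C_p(t) - kR*C_t(t); the (decay-corrected)
        signal is taken as C_t(t) + F*C_p(t), i.e. tissue tracer plus the free fraction F
        in blood and interstitial volume. Simple Euler integration in minutes.
        """
        dt = t_min[1] - t_min[0]
        c_p = input_fn(t_min)
        c_t = np.zeros_like(t_min)
        for i in range(1, len(t_min)):
            c_t[i] = c_t[i - 1] + dt * (K * c_p[i - 1] - kR * c_t[i - 1])
        return c_t + F * c_p

    # Assumed biexponential input function (arbitrary units and amplitudes).
    input_fn = lambda t: 3.0 * np.exp(-0.3 * t) + 1.0 * np.exp(-0.01 * t)

    t = np.linspace(0.0, 150.0, 1501)  # minutes, 0.1-min steps
    signal = liver_signal(t, K=0.0119, kR=0.0065, F=0.21, input_fn=input_fn)
    for m in (30, 50, 75, 110, 150):
        print(f"t = {m:3d} min   signal relative to 75 min = {signal[m * 10] / signal[750]:.2f}")
    ```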

  17. Interrogating the Escherichia coli cell cycle by cell dimension perturbations

    PubMed Central

    Zheng, Hai; Ho, Po-Yi; Jiang, Meiling; Tang, Bin; Liu, Weirong; Li, Dengjin; Yu, Xuefeng; Kleckner, Nancy E.; Amir, Ariel; Liu, Chenli

    2016-01-01

    Bacteria tightly regulate and coordinate the various events in their cell cycles to duplicate themselves accurately and to control their cell sizes. Growth of Escherichia coli, in particular, follows a relation known as Schaechter’s growth law. This law says that the average cell volume scales exponentially with growth rate, with a scaling exponent equal to the time from initiation of a round of DNA replication to the cell division at which the corresponding sister chromosomes segregate. Here, we sought to test the robustness of the growth law to systematic perturbations in cell dimensions achieved by varying the expression levels of mreB and ftsZ. We found that decreasing the mreB level resulted in increased cell width, with little change in cell length, whereas decreasing the ftsZ level resulted in increased cell length. Furthermore, the time from replication termination to cell division increased with the perturbed dimension in both cases. Moreover, the growth law remained valid over a range of growth conditions and dimension perturbations. The growth law can be quantitatively interpreted as a consequence of a tight coupling of cell division to replication initiation. Thus, its robustness to perturbations in cell dimensions strongly supports models in which the timing of replication initiation governs that of cell division, and cell volume is the key phenomenological variable governing the timing of replication initiation. These conclusions are discussed in the context of our recently proposed “adder-per-origin” model, in which cells add a constant volume per origin between initiations and divide a constant time after initiation. PMID:27956612

  18. Interrogating the Escherichia coli cell cycle by cell dimension perturbations.

    PubMed

    Zheng, Hai; Ho, Po-Yi; Jiang, Meiling; Tang, Bin; Liu, Weirong; Li, Dengjin; Yu, Xuefeng; Kleckner, Nancy E; Amir, Ariel; Liu, Chenli

    2016-12-27

    Bacteria tightly regulate and coordinate the various events in their cell cycles to duplicate themselves accurately and to control their cell sizes. Growth of Escherichia coli, in particular, follows a relation known as Schaechter's growth law. This law says that the average cell volume scales exponentially with growth rate, with a scaling exponent equal to the time from initiation of a round of DNA replication to the cell division at which the corresponding sister chromosomes segregate. Here, we sought to test the robustness of the growth law to systematic perturbations in cell dimensions achieved by varying the expression levels of mreB and ftsZ. We found that decreasing the mreB level resulted in increased cell width, with little change in cell length, whereas decreasing the ftsZ level resulted in increased cell length. Furthermore, the time from replication termination to cell division increased with the perturbed dimension in both cases. Moreover, the growth law remained valid over a range of growth conditions and dimension perturbations. The growth law can be quantitatively interpreted as a consequence of a tight coupling of cell division to replication initiation. Thus, its robustness to perturbations in cell dimensions strongly supports models in which the timing of replication initiation governs that of cell division, and cell volume is the key phenomenological variable governing the timing of replication initiation. These conclusions are discussed in the context of our recently proposed "adder-per-origin" model, in which cells add a constant volume per origin between initiations and divide a constant time after initiation.
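
    A back-of-the-envelope sketch of the picture described above, simplified to a single, non-overlapping round of replication: if initiation fires at a roughly fixed volume per origin (the quantity the adder-per-origin rule converges to) and division follows a fixed time C + D later during exponential growth, the newborn volume scales exponentially with growth rate with exponent C + D, which is Schaechter's growth law as interpreted here. The function and numbers below are illustrative assumptions, not the authors' model code.

    ```python
    import numpy as np

    def newborn_volume(growth_rate, v_init_per_origin, c_plus_d):
        """
        Simplified single-origin, non-overlapping-rounds sketch: replication initiates at a
        fixed volume per origin, the cell grows exponentially for a fixed time C+D, then
        divides symmetrically, so V_birth = v_init_per_origin * exp(growth_rate * (C+D)) / 2.
        """
        return 0.5 * v_init_per_origin * np.exp(growth_rate * c_plus_d)

    # Doubling the growth rate (in 1/h) with C+D = 1 h scales newborn size by a factor of e.
    for mu in (0.5, 1.0, 1.5, 2.0):
        print(mu, round(newborn_volume(mu, v_init_per_origin=1.0, c_plus_d=1.0), 2))
    ```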

  19. Demonstration of SST value as EBVs descriptor in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Valentini, E.; Filipponi, F.; Nguyen Xuan, A.; Taramelli, A.

    2017-12-01

    Sea Surface Temperature (SST) is an Essential Climate and Ocean Variable (ECV - EOV) able to capture critical scales in seascape warming patterns and to highlight the exceeding of thresholds. This presentation addresses the changes in SST over the Mediterranean Sea, a "Large Marine Ecosystem (LME)", in the last three decades, in order to explore the value of this variable as a proxy for assessing ecosystem state in terms of key descriptors of ecosystem structure, function and composition. Time series of daily SST for the period 1982-2016, estimated from multi-sensor satellite data and provided by the Copernicus Marine Environment Monitoring Service (CMEMS-EU), are used to perform different statistical analyses on common fish species. Results highlight the critical conditions and the general trends, as well as the spatial and temporal patterns, in terms of thermal growth, vitality and stress influence on selected fish species. Results confirm a constantly increasing trend in SST, with an average rise of 1.4 °C in the past thirty years. The variance associated with the average trend is not constant across the entire Mediterranean Sea, opening the way to multiple scenarios for fish growth and vitality in the diverse sub-basins. A major effort is oriented toward addressing cross-scale ecological interactions to assess the feasibility of using SST as a descriptor for Essential Biodiversity Variables, able to prioritize areas and to feed operational tools for planning and management in the Mediterranean LME.
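
    The reported warming trend is the kind of quantity obtained by a least-squares fit of the daily SST series against time. The snippet below shows the computation on a synthetic series built to rise by about 1.4 °C over 35 years; the seasonal cycle, noise level and baseline are invented, whereas the real analysis uses the CMEMS daily fields.

    ```python
    import numpy as np

    def trend_per_decade(time_years, sst_celsius):
        """Least-squares linear trend of an SST series, in degrees C per decade."""
        slope_per_year, _ = np.polyfit(time_years, sst_celsius, 1)
        return slope_per_year * 10.0

    rng = np.random.default_rng(0)
    years = 1982 + np.arange(35 * 365) / 365.0
    sst = 19.0 + 0.04 * (years - 1982) + 4.0 * np.sin(2 * np.pi * years) + rng.normal(0.0, 0.3, years.size)
    print(f"estimated trend: {trend_per_decade(years, sst):.2f} degrees C per decade")
    ```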

  20. 120 Years of U.S. Residential Housing Stock and Floor Space.

    PubMed

    Moura, Maria Cecilia P; Smith, Steven J; Belzer, David B

    2015-01-01

    Residential buildings are a key driver of energy consumption and also impact transportation and land-use. Energy consumption in the residential sector accounts for one-fifth of total U.S. energy consumption and energy-related CO2 emissions, with floor space a major driver of building energy demands. In this work a consistent, vintage-disaggregated, annual long-term series of U.S. housing stock and residential floor space for 1891-2010 is presented. An attempt was made to minimize the effects of the incompleteness and inconsistencies present in the national housing survey data. Over the 1891-2010 period, floor space increased almost tenfold, from approximately 24,700 to 235,150 million square feet, corresponding to a doubling of floor space per capita from approximately 400 to 800 square feet. While population increased five times over the period, a 50% decrease in household size contributed towards a tenfold increase in the number of housing units and floor space, while average floor space per unit remains surprisingly constant, as a result of housing retirement dynamics. In the last 30 years, however, these trends appear to be changing, as household size shows signs of leveling off, or even increasing again, while average floor space per unit has been increasing. GDP and total floor space show a remarkably constant growth trend over the period and total residential sector primary energy consumption and floor space show a similar growth trend over the last 60 years, decoupling only within the last decade.

  1. 120 years of U.S. residential housing stock and floor space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moura, Maria Cecilia P.; Smith, Steven J.; Belzer, David B.

    Residential buildings are a key driver of energy consumption and also impact transportation and land-use. Energy consumption in the residential sector accounts for one-fifth of total U.S. energy consumption and energy-related CO₂ emissions, with floor space a major driver of building energy demands. In this work a consistent, vintage-disaggregated, annual long-term series of U.S. housing stock and residential floor space for 1891–2010 is presented. An attempt was made to minimize the effects of the incompleteness and inconsistencies present in the national housing survey data. Over the 1891–2010 period, floor space increased almost tenfold, from approximately 24,700 to 235,150 million square feet, corresponding to a doubling of floor space per capita from approximately 400 to 800 square feet. While population increased five times over the period, a 50% decrease in household size contributed towards a tenfold increase in the number of housing units and floor space, while average floor space per unit remains surprisingly constant, as a result of housing retirement dynamics. In the last 30 years, however, these trends appear to be changing, as household size shows signs of leveling off, or even increasing again, while average floor space per unit has been increasing. GDP and total floor space show a remarkably constant growth trend over the period and total residential sector primary energy consumption and floor space show a similar growth trend over the last 60 years, decoupling only within the last decade.

  2. 120 Years of U.S. Residential Housing Stock and Floor Space

    PubMed Central

    Moura, Maria Cecilia P.; Smith, Steven J.; Belzer, David B.

    2015-01-01

    Residential buildings are a key driver of energy consumption and also impact transportation and land-use. Energy consumption in the residential sector accounts for one-fifth of total U.S. energy consumption and energy-related CO2 emissions, with floor space a major driver of building energy demands. In this work a consistent, vintage-disaggregated, annual long-term series of U.S. housing stock and residential floor space for 1891–2010 is presented. An attempt was made to minimize the effects of the incompleteness and inconsistencies present in the national housing survey data. Over the 1891–2010 period, floor space increased almost tenfold, from approximately 24,700 to 235,150 million square feet, corresponding to a doubling of floor space per capita from approximately 400 to 800 square feet. While population increased five times over the period, a 50% decrease in household size contributed towards a tenfold increase in the number of housing units and floor space, while average floor space per unit remains surprisingly constant, as a result of housing retirement dynamics. In the last 30 years, however, these trends appear to be changing, as household size shows signs of leveling off, or even increasing again, while average floor space per unit has been increasing. GDP and total floor space show a remarkably constant growth trend over the period and total residential sector primary energy consumption and floor space show a similar growth trend over the last 60 years, decoupling only within the last decade. PMID:26263391

  3. The influence of Dean Number on heat transfer to Newtonian fluid through spiral coils with constant wall temperature in laminar flow

    NASA Astrophysics Data System (ADS)

    Patil, Rahul Harishchandra; Nadar, Mariappan Dharmaraj; Ali, Rashed

    2017-05-01

    The influence of Dean number on the heat transfer to petroleum base oils (SN70, SN150 and SN300), flowing through four spiral coils maintained at constant wall temperature and having average curvature ratios of 0.01568, 0.019, 0.02466 and 0.03011, is investigated in the present study. The fluid, with a fully developed velocity profile and an underdeveloped temperature profile (the Graetz problem), flows inside the tube at the entrance. Four correlations are developed which are valid for a range of Dean number from 2 to 1043, Prandtl number from 76 to 298, and Reynolds number from 12 to 6013. These correlations are not available in the literature and are developed for the first time for the given conditions. The correlations are compared with the correlations developed by earlier investigators and it is found that they are in good agreement. The developed correlations are corrected to account for the variable property relations for the viscous fluids used in the experiment. The average deviations between the developed correlations and the experimental readings are found to be less than ±3%. The comparison of the developed correlations with the correlations of other investigators on helical coils showed an increase in heat transfer in spiral coils compared with helical coils. The reason for this is that the magnitude of the secondary flow varied continuously, with an increase in the mixing of the fluid particles occurring throughout the length of the spiral coil.
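
    The correlations are expressed in the dimensionless groups named above. The helper functions below compute them in the conventional way; the Dean number definition using the square root of the curvature ratio is the standard one and is assumed here, and the fluid properties are placeholders, not the SN70/SN150/SN300 data.

    ```python
    import math

    def reynolds(rho, velocity, d_tube, mu):
        """Reynolds number for flow in a tube of inner diameter d_tube."""
        return rho * velocity * d_tube / mu

    def prandtl(cp, mu, k):
        """Prandtl number of the fluid."""
        return cp * mu / k

    def dean(re, curvature_ratio):
        """Dean number: Re * sqrt(d_tube / D_coil), i.e. Re times the square root of the curvature ratio."""
        return re * math.sqrt(curvature_ratio)

    # Placeholder oil-like properties: density (kg/m^3), viscosity (Pa*s), heat capacity (J/(kg*K)), conductivity (W/(m*K)).
    rho, mu, cp, k = 870.0, 0.02, 1900.0, 0.13
    re = reynolds(rho, velocity=0.3, d_tube=0.01, mu=mu)
    print(f"Re = {re:.0f}, Pr = {prandtl(cp, mu, k):.0f}, De = {dean(re, 0.019):.1f}")
    ```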

  4. Effect of modulated ultrasound parameters on ultrasound-induced thrombolysis.

    PubMed

    Soltani, Azita; Volz, Kim R; Hansmann, Doulas R

    2008-12-07

    The potential of ultrasound to enhance enzyme-mediated thrombolysis by application of constant operating parameters (COP) has been widely demonstrated. In this study, the effect of ultrasound with modulated operating parameters (MOP) on enzyme-mediated thrombolysis was investigated. The MOP protocol was applied to an in vitro model of thrombolysis. The results were compared to a COP with the equivalent soft tissue thermal index (TIS) over the duration of ultrasound exposure of 30 min (p < 0.14). To explore potential differences in the mechanism responsible for ultrasound-induced thrombolysis, a perfusion model was used to measure changes in average fibrin pore size of clot before, after and during exposure to MOP and COP protocols and cavitational activity was monitored in real time for both protocols using a passive cavitation detection system. The relative lysis enhancement by each COP and MOP protocol compared to alteplase alone yielded values of 33.69 +/- 12.09% and 63.89 +/- 15.02% in a thrombolysis model, respectively (p < 0.007). Both COP and MOP protocols caused an equivalent significant increase in average clot pore size of 2.09 x 10(-2) +/- 0.01 microm and 1.99 x 10(-2) +/- 0.004 microm, respectively (p < 0.74). No signatures of inertial or stable cavitation were observed for either acoustic protocol. In conclusion, due to mechanisms other than cavitation, application of ultrasound with modulated operating parameters has the potential to significantly enhance the relative lysis enhancement compared to application of ultrasound with constant operating parameters.

  5. On the Concentration Gradient across a Spherical Source Washed by Slow Flow

    PubMed Central

    Jaffe, Lionel

    1965-01-01

    A model has been numerically analyzed to help interpret the orienting effects of flow upon cells. The model is a sphere steadily and uniformly emitting a diffusible stuff into a medium otherwise free of it and moving past with Stokes flow. Its properties depend primarily upon the Peclet number, Pe, equal to a · v∞/D, i.e., the sphere's radius, a, times the free stream speed, v∞, over the stuff's diffusion constant, D. As Pe rises, and washing becomes more effective, the average surface concentration, C̄ₛᵃ, falls (Figs. 2 and 5) and the residual material becomes relatively concentrated on the sphere's lee pole (Figs. 2 and 4). Specifically, as Pe rises from 0.1 to 1, the relative concentration gradient, G, rises from 0.7 to 5.0 per cent and to the point where it is rising at about 8 per cent per decade; by Pe 1000, G = 22.1 per cent. From Pe 1 through 1000, G/(1 - C̄ₛᵃ), or the gradient per concentration deficiency, remains at about 26 per cent, suggesting that G approaches a ceiling of about 26 per cent. Also from Pe 1 through 1000, the average mass transfer coefficient nearly equals that previously calculated for spheres maintaining constant surface concentration instead of flux. The complete differential equation without approximations, the Gauss-Seidel method, and an approximation for the outer boundary condition were used. PMID:14268954

  6. 120 years of U.S. residential housing stock and floor space

    DOE PAGES

    Moura, Maria Cecilia P.; Smith, Steven J.; Belzer, David B.; ...

    2015-08-11

    Residential buildings are a key driver of energy consumption and also impact transportation and land-use. Energy consumption in the residential sector accounts for one-fifth of total U.S. energy consumption and energy-related CO₂ emissions, with floor space a major driver of building energy demands. In this work a consistent, vintage-disaggregated, annual long-term series of U.S. housing stock and residential floor space for 1891–2010 is presented. An attempt was made to minimize the effects of the incompleteness and inconsistencies present in the national housing survey data. Over the 1891–2010 period, floor space increased almost tenfold, from approximately 24,700 to 235,150 million square feet, corresponding to a doubling of floor space per capita from approximately 400 to 800 square feet. While population increased five times over the period, a 50% decrease in household size contributed towards a tenfold increase in the number of housing units and floor space, while average floor space per unit remains surprisingly constant, as a result of housing retirement dynamics. In the last 30 years, however, these trends appear to be changing, as household size shows signs of leveling off, or even increasing again, while average floor space per unit has been increasing. GDP and total floor space show a remarkably constant growth trend over the period and total residential sector primary energy consumption and floor space show a similar growth trend over the last 60 years, decoupling only within the last decade.

  7. Simplified Two-Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydrogen/Oxygen

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first-order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time step kinetic scheme. The first, time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (> 1 × 10⁻²⁰ moles/cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R² are obtained.
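
    The first-order step described above can be illustrated with a short sketch: a correlated chemical kinetic time is compared with the turbulent mixing time, and the slower of the two is used in a first-order expression for conversion. The function names and numerical values are placeholders, not the report's correlations.

    ```python
    import math

    def first_order_conversion(residence_time_s: float, limiting_time_s: float) -> float:
        """Simple first-order expression, X = 1 - exp(-t_res / tau)."""
        return 1.0 - math.exp(-residence_time_s / limiting_time_s)

    def limiting_time(kinetic_time_s: float, mixing_time_s: float) -> float:
        """The slower (larger) of the chemical and mixing times limits the reaction."""
        return max(kinetic_time_s, mixing_time_s)

    # Illustrative numbers only
    tau_chem, tau_mix, t_res = 2.0e-4, 5.0e-4, 2.0e-3   # seconds
    X = first_order_conversion(t_res, limiting_time(tau_chem, tau_mix))
    print(f"estimated conversion: {X:.1%}")
    ```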

  8. Summary of Simplified Two Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydrogen/Oxygen

    NASA Technical Reports Server (NTRS)

    Marek, C. John; Molnar, Melissa

    2005-01-01

    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first-order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time step kinetic scheme. The first, time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (greater than 1 × 10⁻²⁰ moles per cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R² are obtained.

  9. [Travel times of patients to ambulatory care physicians in Germany].

    PubMed

    Schang, Laura; Kopetsch, Thomas; Sundmacher, Leonie

    2017-12-01

    The time needed by patients to get to a doctor's office is an important indicator of realised access to care. In Germany, findings on travel times have so far been available only from surveys or for individual regions. For the first time, this study examines nationwide, physician-group-specific travel times in the ambulatory care sector in Germany and describes demographic, supply-side and spatial determinants of their variation. Using a full census of patient consultations in the statutory health insurance system from 2009/2010 for 14 physician groups (approximately 518 million cases), case-related travel times by car between patients' places of residence and physicians' practices were estimated at the municipal level. Physicians were reached in less than 30 min in 90.8% of cases for primary care physicians and in up to 63% of cases for radiologists. Patients aged 18 to under 30 years travel longer to reach a doctor than other age groups. Average travel time at the county level differs systematically between urban and rural planning areas. For gynecologists, dermatologists and ophthalmologists, average travel time decreases with increasing physician density at the county level, but remains approximately constant beyond a recognisable inflection point. There is no association between primary care physician density and travel time at the county level. Spatial analyses show physician-group-specific patterns of regional concentrations with an increased proportion of cases with very long travel times. Patients' travel times are influenced by supply- and demand-side determinants. Interactions between influential determinants should be analysed in depth to examine the extent to which travel time is an expression of regional under- or over-supply rather than of patient preferences.

  10. Measurement of cardiac output from dynamic pulmonary circulation time CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yee, Seonghwan, E-mail: Seonghwan.Yee@Beaumont.edu; Scalzetti, Ernest M.

    Purpose: To introduce a method of estimating cardiac output from the dynamic pulmonary circulation time CT that is primarily used to determine the optimal time window for CT pulmonary angiography (CTPA). Methods: Dynamic pulmonary circulation time CT series acquired for eight patients were retrospectively analyzed. The dynamic CT series was acquired, prior to the main CTPA, in cine mode (1 frame/s) for a single slice at the level of the main pulmonary artery covering the cross sections of the ascending aorta (AA) and descending aorta (DA) during the infusion of iodinated contrast. The time series of contrast changes obtained for DA, which is downstream of AA, was assumed to be related to the time series for AA by convolution with a delay function. The delay time constant in the delay function, representing the average time interval between the cross sections of AA and DA, was determined by least-squares error fitting between the convolved AA time series and the DA time series. The cardiac output was then calculated by dividing the volume of the aortic arch between the cross sections of AA and DA (estimated from the single-slice CT image) by the average time interval, and multiplying the result by a correction factor. Results: The mean cardiac output for six patients was 5.11 l/min (standard deviation 1.57 l/min), in good agreement with literature values; the data for the other two patients were too noisy for processing. Conclusions: The dynamic single-slice pulmonary circulation time CT series can also be used to estimate cardiac output.
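
    A sketch of the fitting step described above is given below, assuming the delay function can be approximated by a single-exponential transit-time kernel; the kernel shape, the synthetic enhancement curves, the aortic-arch volume and the correction factor are all assumptions for illustration, not the authors' data or exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def delay_kernel(t, tau):
        """Normalized single-exponential transit-time kernel (an assumed
        stand-in for the paper's delay function)."""
        h = np.exp(-t / tau)
        return h / h.sum()

    def fit_mean_transit_time(t, aa, da):
        """Find tau minimizing the squared error between conv(AA, kernel) and DA."""
        def sse(tau):
            pred = np.convolve(aa, delay_kernel(t, tau))[: len(t)]
            return np.sum((pred - da) ** 2)
        return minimize_scalar(sse, bounds=(0.1, 30.0), method="bounded").x

    # Synthetic 1 frame/s enhancement curves (illustrative only)
    t = np.arange(0.0, 40.0, 1.0)
    aa = np.exp(-((t - 12.0) ** 2) / 18.0)                    # ascending aorta
    da = np.convolve(aa, delay_kernel(t, 3.0))[: len(t)]      # descending aorta
    tau_fit = fit_mean_transit_time(t, aa, da)

    arch_volume_ml, correction = 250.0, 1.0                   # assumed values
    cardiac_output_l_min = arch_volume_ml / tau_fit * 60.0 / 1000.0 * correction
    print(f"tau = {tau_fit:.2f} s, CO = {cardiac_output_l_min:.2f} L/min")
    ```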

  11. The System of Chemical Elements Distribution in the Hydrosphere

    NASA Astrophysics Data System (ADS)

    Korzh, Vyacheslav D.

    2013-04-01

    The chemical composition of the hydrosphere is a result of substance migration and transformation at the lithosphere-river, river-sea, and ocean-atmosphere boundaries. The chemical element composition of oceanic water is a fundamental multi-dimensional constant for our planet. Detailed studies revealed three types of chemical element distribution in the ocean: 1) conservative: concentration normalized to salinity is constant in space and time; 2) nutrient-type: element concentration in the surface waters decreases due to consumption by the biosphere; and 3) litho-generative: a complex distribution of elements that enter the ocean with the river runoff and are buried almost entirely in sediments. The correlation between the chemical element compositions of river and oceanic water is high (r = 0.94). We conclude that the biogeochemical features of each chemical element are determined by the relationship between its average concentration in the ocean and the intensity of its migration through hydrosphere boundary zones. In our presentation, we show intensities of global migration and average concentrations in the ocean in the coordinates lg C - lg τ, where C is an average element concentration and τ is its residence time in the ocean. We have derived a relationship between three main geochemical parameters of the dissolved forms of chemical elements in the hydrosphere: 1) average concentration in the ocean, 2) average concentration in the river runoff and 3) the type of distribution in oceanic water. Knowledge of any two of these parameters allows the third to be derived. The System covers all chemical elements over the entire range of observed concentrations. It even allows prediction of the values of the annual river transport of dissolved Be, C, N, Ge, Tl, Re, refinement of such estimates for P, V, Zn, Br, I, and determination of the character of distribution in the ocean for Au and U. Furthermore, the System allows estimation of natural (unaffected by anthropogenic influence) mean concentrations of elements in the river runoff and their use as ecological reference data. Finally, due to the long response time of the ocean, the mean concentrations of elements and the patterns of their distribution in the ocean can be used to determine pre-technogenic concentrations of elements in the river runoff. In our presentation, we show several examples of applying the System to study sediment transport by the rivers of the Arctic slope of Northern Eurasia. References: 1. Korzh V.D. 1974: Some general laws governing the turnover of substance within the ocean-atmosphere-continent-ocean cycle. Journal de Recherches Atmospheriques, 8, 653-660. 2. Korzh V.D. 2008: The general laws in the formation of the element composition of the Hydrosphere and Biosphere. J. Ecologica, 15, 13-21. 3. Korzh V.D. 2012: Determination of general laws of the chemical element composition in Hydrosphere. Water: Chemistry & Ecology, Journal of Water Science and its Practical Application. No. 1, 56-62.
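
    The residence-time coordinate used above follows from the standard definition of oceanic residence time (mean oceanic inventory divided by annual river input). The sketch below illustrates that relation with rough global figures; it is not the System's full formulation, and the element concentrations are invented for the example.

    ```python
    def residence_time_years(c_ocean, ocean_volume_km3, c_river, river_flux_km3_yr):
        """tau = (mean oceanic inventory) / (annual river input), assuming river
        runoff is the dominant input and both concentrations share the same units."""
        return (c_ocean * ocean_volume_km3) / (c_river * river_flux_km3_yr)

    OCEAN_VOLUME_KM3 = 1.37e9        # approximate volume of the world ocean
    RIVER_FLUX_KM3_YR = 3.7e4        # approximate annual global river runoff

    # e.g. an element at 10 units/L in the ocean and 50 units/L in river water
    tau = residence_time_years(10.0, OCEAN_VOLUME_KM3, 50.0, RIVER_FLUX_KM3_YR)
    print(f"residence time ~ {tau:.2e} years")
    ```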

  12. A study of the growth and decay of cigarette smoke NOx in ambient air under controlled conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rickert, W.S.; Robinson, J.C.; Collishaw, N.E.

    The amount of NO₂ and NO produced by the machine smoking of cigarettes was determined for 15 commercial Canadian brands. The average yield of NO was 1.44 µmoles, or about 13% of the average reported for American cigarettes. Levels of NO₂ were all less than 12% of NO and were probably due to the oxidation of NO. In order to assess the contribution of tobacco smoke to levels of NO in ambient air, 5 brands of cigarettes were smoked in a 27 cubic meter controlled environment room. Ventilation conditions were either 2.5 or 5.0 air changes per hour (ACH) and each experiment was replicated 3 times for a total of 30 experiments. Ventilation rates of 0.3 and 1.5 ACH were also selected in a second series of experiments in which only one brand of cigarette was smoked. Least-squares estimates for the effective ventilation rates were obtained in the usual manner after linearizing the decay portion of the NO time curve. In each of the experiments, the regression explained at least 95% of the variation in the levels of NO with time. Loss of NO due to factors other than ventilation appeared to be constant within experimental error and averaged 2.22 ACH. Equilibrium values for NO were grossly underestimated when results from currently accepted procedures for smoke analysis were used in modeling the growth and decay of NO. Goodness-of-fit was improved when equilibrium values were estimated based on observed levels in ambient air.
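
    The decay analysis described above can be sketched as follows: linearize the decay portion of the NO time curve, ln C = ln C0 - k_eff * t, and estimate the effective removal rate by least squares. The synthetic, noise-free curve below is illustrative only; the 2.5 ACH nominal ventilation rate is one of the study's settings.

    ```python
    import numpy as np

    t_h = np.linspace(0.0, 1.0, 13)              # hours after smoking stops
    no_ppm = 0.80 * np.exp(-4.72 * t_h)          # synthetic decay with k_eff ~ 4.72/h

    slope, _ = np.polyfit(t_h, np.log(no_ppm), 1)
    k_eff = -slope                               # effective removal rate (per hour)
    ventilation_ach = 2.5                        # nominal air changes per hour
    other_losses = k_eff - ventilation_ach       # losses not explained by ventilation
    print(f"k_eff = {k_eff:.2f} ACH, non-ventilation losses = {other_losses:.2f} ACH")
    ```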

  13. Relations for estimating unit-hydrograph parameters in New Mexico

    USGS Publications Warehouse

    Waltemeyer, Scott D.

    2001-01-01

    Data collected from 20 U.S. Geological Survey streamflow-gaging stations, most of which were operated in New Mexico between about 1969 and 1977, were used to define hydrograph characteristics for small New Mexico streams. Drainage areas for the gaging stations ranged from 0.23 to 18.2 square miles. Observed values for the hydrograph characteristics were determined for 87 of the most significant rainfall-runoff events at these gaging stations and were used to define regional regression relations with basin characteristics. Regional relations defined lag time (tl), time of concentration (tc), and time to peak (tp) as functions of stream length and basin shape. The regional equation developed for time of concentration for New Mexico agrees well with the Kirpich equation developed for Tennessee. The Kirpich equation is based on stream length and channel slope, whereas the New Mexico equation is based on stream length and basin shape. Both equations, however, underestimate tc when applied to larger basins where tc is greater than about 2 hours. The median ratio between tp and tc for the observed data was 0.66, which equals the value (0.67) recommended by the Natural Resources Conservation Service (formerly the Soil Conservation Service). However, the median ratio between tl and tc was only 0.42, whereas the commonly used ratio is 0.60. A relation also was developed between unit-peak discharge (qu) and time of concentration. The unit-peak discharge relation is similar in slope to the Natural Resources Conservation Service equation, but the equation developed for New Mexico in this study produces estimates of qu that range from two to three times as large as those estimated from the Natural Resources Conservation Service equation. An average value of 833 was determined for the empirical constant Kp. A default value of 484 has been used by the Natural Resources Conservation Service when site-specific data are not available. The use of a lower value of Kp in calculations generally results in a lower peak discharge. A relation between the empirical constant Kp and average channel slope was defined in this study. The predicted Kp values from the equation ranged from 530 to 964 for the 20 flood-hydrograph gaging stations. The standard error of estimate for the equation is 36 percent.
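
    For context, the sketch below evaluates the widely used NRCS (formerly SCS) unit-hydrograph peak relation with the default Kp of 484 and the average Kp of 833 found in this study. The basin values are hypothetical, and the equation shown is the standard NRCS form, not the New Mexico regression equations themselves.

    ```python
    def nrcs_peak_discharge(kp: float, area_mi2: float, runoff_in: float, tp_hr: float) -> float:
        """Peak discharge (cfs) from the NRCS unit-hydrograph relation
        qp = Kp * A * Q / Tp (A in mi^2, Q in inches of runoff, Tp in hours)."""
        return kp * area_mi2 * runoff_in / tp_hr

    # Hypothetical 5 mi^2 basin, 1 inch of runoff, time to peak 1.5 h
    print(nrcs_peak_discharge(484.0, 5.0, 1.0, 1.5))   # default Kp
    print(nrcs_peak_discharge(833.0, 5.0, 1.0, 1.5))   # average Kp from this study
    ```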

  14. Optimal current waveforms for brushless permanent magnet motors

    NASA Astrophysics Data System (ADS)

    Moehle, Nicholas; Boyd, Stephen

    2015-07-01

    In this paper, we give energy-optimal current waveforms for a permanent magnet synchronous motor that result in a desired average torque. Our formulation generalises previous work by including a general back-electromotive force (EMF) wave shape, voltage and current limits, an arbitrary phase winding connection, a simple eddy current loss model, and a trade-off between power loss and torque ripple. Determining the optimal current waveforms requires solving a small convex optimisation problem. We show how to use the alternating direction method of multipliers to find the optimal current in milliseconds or hundreds of microseconds, depending on the processor used, which allows the possibility of generating optimal waveforms in real time. This allows us to adapt in real time to changes in the operating requirements or in the model, such as a change in resistance with winding temperature, or even gross changes like the failure of one winding. Suboptimal waveforms are available in tens or hundreds of microseconds, allowing for quick response after abrupt changes in the desired torque. We demonstrate our approach on a simple numerical example, in which we give the optimal waveforms for a motor with a sinusoidal back-EMF, and for a motor with a more complicated, nonsinusoidal waveform, in both the constant-torque region and constant-power region.
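
    A minimal sketch of the kind of convex program described above, posed here with the cvxpy modelling package rather than the authors' ADMM solver, and minimizing only resistive loss (no eddy-current or torque-ripple terms). The back-EMF shape, winding resistance, current limit and torque target are illustrative assumptions.

    ```python
    import numpy as np
    import cvxpy as cp

    N, n_ph = 72, 3                        # samples per electrical period, phases
    theta = 2 * np.pi * np.arange(N) / N
    # Assumed sinusoidal back-EMF / torque-constant shape per phase (N*m/A)
    K = np.stack([np.cos(theta - 2 * np.pi * p / n_ph) for p in range(n_ph)], axis=1)

    R, tau_des, i_max = 0.5, 2.0, 20.0     # ohm, N*m, A (illustrative)

    I = cp.Variable((N, n_ph))
    loss = R * cp.sum_squares(I) / N                   # average resistive loss
    torque = cp.sum(cp.multiply(K, I), axis=1)         # instantaneous torque
    constraints = [
        cp.sum(torque) / N == tau_des,                 # desired average torque
        cp.abs(I) <= i_max,                            # per-phase current limit
        cp.sum(I, axis=1) == 0,                        # wye (star) connection
    ]
    prob = cp.Problem(cp.Minimize(loss), constraints)
    prob.solve()
    print(f"average copper loss: {prob.value:.3f} W")
    ```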

  15. Electrical properties and dielectric spectroscopy of Ar⁺ implanted polycarbonate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chawla, Mahak, E-mail: mahak.chawla@gmail.com; Shekhawat, Nidhi; Aggarwal, Sanjeev

    2015-05-15

    The aim of the present paper is to study the effect of argon ion implantation on the electrical and dielectric properties of polycarbonate. Specimens were implanted with 130 keV Ar⁺ ions at fluences ranging from 1×10¹⁴ to 1×10¹⁶ ions cm⁻². The beam current used was ∼0.40 µA cm⁻². The electrical conduction behaviour of virgin and Ar⁺ implanted polycarbonate specimens has been studied through current-voltage (I-V) measurements. It has been observed that after implantation the conductivity increases with increasing ion fluence. Dielectric spectroscopy of these specimens was performed in the frequency range of 100 kHz-100 MHz. Relaxation processes were studied by the Cole-Cole plot of complex permittivity (real part, ε′, vs. imaginary part, ε″). The Cole-Cole plots have also been used to determine the static dielectric constant (εs), optical dielectric constant (ε∞), spreading factor (α), average relaxation time (τ₀) and molecular relaxation time (τ). The dielectric behaviour has been found to be significantly affected by Ar⁺ implantation. The possible correlation between this behaviour and the changes induced by the implantation is discussed.
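
    The parameters listed above are typically read from the Cole-Cole relaxation model; a minimal sketch of that model is given below with illustrative parameter values, not the measured values from this work.

    ```python
    import numpy as np

    def cole_cole(freq_hz, eps_s, eps_inf, tau0, alpha):
        """Cole-Cole complex permittivity:
        eps*(w) = eps_inf + (eps_s - eps_inf) / (1 + (j*w*tau0)**(1 - alpha))."""
        w = 2 * np.pi * freq_hz
        return eps_inf + (eps_s - eps_inf) / (1.0 + (1j * w * tau0) ** (1.0 - alpha))

    # Illustrative parameters spanning the 100 kHz-100 MHz measurement window
    f = np.logspace(5, 8, 200)
    eps = cole_cole(f, eps_s=3.2, eps_inf=2.6, tau0=5e-8, alpha=0.15)
    eps_real, eps_imag = eps.real, -eps.imag   # eps'' vs eps' traces the Cole-Cole arc
    print(eps_real[:3], eps_imag[:3])
    ```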

  16. Effect of temperature on development and reproduction of the carmine spider mite, Tetranychus cinnabarinus (Acari: Tetranychidae), fed on cassava leaves.

    PubMed

    Zou, Zhiwen; Xi, Jianfei; Liu, Ge; Song, Shuxian; Xin, Tianrong; Xia, Bin

    2018-04-01

    The effect of five constant temperatures (16, 20, 24, 28 and 32 °C) on the development, survival and reproduction of Tetranychus cinnabarinus (Boisduval) [= Tetranychus urticae Koch (red form)] fed on cassava leaves was examined in the laboratory at 85% relative humidity. Development time of the various immature stages decreased with increasing temperature, with total egg-to-adult development time varying from 27.7 to 6.7 days. The lower thermal threshold for development was 10.8 °C and the thermal constant from egg to adult was 142.4 degree-days. Pre- and post-oviposition periods and female longevity all decreased as temperature increased. The longest oviposition period, 20.4 days, was observed at 20 °C. At the five temperatures, mated females laid on average 1.0, 2.9, 4.7, 4.7 and 4.9 eggs per day, respectively. The maximum fecundity (81.5 eggs per female) occurred at 28 °C and the intrinsic rate of increase (rm) was highest (0.25) at 32 °C. The results of this study indicate that the T. cinnabarinus population can increase rapidly when cassava leaves serve as a food source. At suitable temperatures, T. cinnabarinus could seriously threaten the growth of cassava.
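
    A minimal sketch of the linear degree-day model implied by these figures: it uses the reported egg-to-adult thermal constant and lower threshold and roughly reproduces the observed development times.

    ```python
    def development_time_days(temp_c: float,
                              thermal_constant_dd: float = 142.4,
                              lower_threshold_c: float = 10.8) -> float:
        """Linear degree-day model: development time = K / (T - T0)."""
        if temp_c <= lower_threshold_c:
            raise ValueError("no development below the lower thermal threshold")
        return thermal_constant_dd / (temp_c - lower_threshold_c)

    for t in (16, 20, 24, 28, 32):
        print(t, round(development_time_days(t), 1))
    # Roughly spans the observed 27.7 to 6.7 day range for egg-to-adult development
    ```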

  17. Improved Predictions of Drug-Drug Interactions Mediated by Time-Dependent Inhibition of CYP3A.

    PubMed

    Yadav, Jaydeep; Korzekwa, Ken; Nagar, Swati

    2018-05-07

    Time-dependent inactivation (TDI) of cytochrome P450s (CYPs) is a leading cause of clinical drug-drug interactions (DDIs). Current methods tend to overpredict DDIs. In this study, a numerical approach was used to model complex CYP3A TDI in human liver microsomes. The inhibitors evaluated included troleandomycin (TAO), erythromycin (ERY), verapamil (VER), and diltiazem (DTZ) along with the primary metabolites N-demethyl erythromycin (NDE), norverapamil (NV), and N-desmethyl diltiazem (NDD). The complexities incorporated into the models included multiple-binding kinetics, quasi-irreversible inactivation, sequential metabolism, inhibitor depletion, and membrane partitioning. The resulting inactivation parameters were incorporated into static in vitro-in vivo correlation (IVIVC) models to predict clinical DDIs. For 77 clinically observed DDIs, with a hepatic CYP3A synthesis rate constant of 0.000146 min⁻¹, the average fold difference between the observed and predicted DDIs was 3.17 for the standard replot method and 1.45 for the numerical method. Similar results were obtained using a synthesis rate constant of 0.00032 min⁻¹. These results suggest that numerical methods can successfully model complex in vitro TDI kinetics and that the resulting DDI predictions are more accurate than those obtained with the standard replot approach.
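
    For orientation, the sketch below evaluates one commonly used mechanistic static model for DDIs caused by time-dependent inhibition; it is not necessarily the exact IVIVC formulation used by the authors. The inhibitor parameters (kinact, KI, unbound inhibitor concentration, fm) are placeholders; only the hepatic degradation rate constant is taken from the abstract.

    ```python
    def tdi_auc_ratio(kinact_min, KI_uM, I_uM, kdeg_min, fm_cyp3a):
        """One common static model for time-dependent inhibition:
        kobs = kinact * I / (KI + I); the CYP3A-mediated fraction of clearance
        is scaled by kdeg / (kdeg + kobs), giving
        AUCR = 1 / (fm * kdeg / (kdeg + kobs) + (1 - fm))."""
        kobs = kinact_min * I_uM / (KI_uM + I_uM)
        return 1.0 / (fm_cyp3a * kdeg_min / (kdeg_min + kobs) + (1.0 - fm_cyp3a))

    # Placeholder inhibitor parameters with the hepatic kdeg quoted above
    aucr = tdi_auc_ratio(kinact_min=0.05, KI_uM=5.0, I_uM=1.0,
                         kdeg_min=0.000146, fm_cyp3a=0.9)
    print(f"predicted AUC ratio: {aucr:.1f}")
    ```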

  18. An automated system for performing continuous viscosity versus temperature measurements of fluids using an Ostwald viscometer

    NASA Astrophysics Data System (ADS)

    Beaulieu, L. Y.; Logan, E. R.; Gering, K. L.; Dahn, J. R.

    2017-09-01

    An automated system was developed to measure the viscosity of fluids as a function of temperature using image-analysis tracking software. An Ostwald viscometer was placed in a three-wall dewar in which ethylene glycol was circulated using a thermal bath. The system collected continuous measurements during both heating and cooling cycles, exhibiting no hysteresis. The use of video tracking software greatly reduced the measurement errors associated with timing the passage of the meniscus through the markings on the viscometer. The stability of the system was assessed by performing 38 consecutive measurements of water at 42.50 ± 0.05 °C, giving an average flow time of 87.7 ± 0.3 s. A device was also implemented to repeatedly deliver a constant liquid volume of 11.00 ± 0.03 ml, leading to an average error in the viscosity of 0.04%. As an application, the system was used to measure the viscosity of two Li-ion battery electrolyte solvents from approximately 10 to 40 °C, with results showing excellent agreement with viscosity values calculated using Gering's Advanced Electrolyte Model (AEM).
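
    The conversion from flow time to viscosity in an Ostwald viscometer can be sketched as follows, assuming the usual proportionality between kinematic viscosity and efflux time. The reference viscosity of water used for calibration is an approximate literature value, and the sample flow time is hypothetical.

    ```python
    def viscometer_constant(ref_kinematic_mm2_s: float, ref_flow_time_s: float) -> float:
        """Calibrate the viscometer: nu = C * t, so C = nu_ref / t_ref."""
        return ref_kinematic_mm2_s / ref_flow_time_s

    def kinematic_viscosity(constant_mm2_s2: float, flow_time_s: float) -> float:
        return constant_mm2_s2 * flow_time_s

    # Calibration against water at 42.5 C with the average flow time quoted above;
    # ~0.63 mm^2/s is an approximate reference value for water at that temperature.
    C = viscometer_constant(0.63, 87.7)
    nu_sample = kinematic_viscosity(C, 152.3)      # hypothetical sample flow time
    print(f"C = {C:.5f} mm^2/s^2, sample nu = {nu_sample:.3f} mm^2/s")
    ```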

  19. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    PubMed

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
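
    The rate-based limit described above can be sketched as follows: spike trains are low-pass filtered with a single time constant to obtain running rate estimates, which then drive a classical Hebbian weight update. This is an illustrative model only, not the CAVIAR chip's actual circuit or learning rule.

    ```python
    import numpy as np

    def lowpass_rate(spikes, dt, tau):
        """Exponentially filtered spike train: a running rate estimate governed
        by a single time constant tau."""
        r = np.zeros(len(spikes))
        for t in range(1, len(spikes)):
            r[t] = r[t - 1] + (spikes[t] / dt - r[t - 1]) * dt / tau
        return r

    def hebbian_update(w, r_pre, r_post, eta, dt):
        """Classical rate-based Hebbian change, dw = eta * r_pre * r_post * dt."""
        return w + eta * np.sum(r_pre * r_post) * dt

    rng = np.random.default_rng(0)
    dt, tau, T = 1e-3, 0.05, 2000                    # 1 ms steps, 50 ms time constant
    pre = (rng.random(T) < 20 * dt).astype(float)    # ~20 Hz Poisson-like input
    post = (rng.random(T) < 10 * dt).astype(float)   # ~10 Hz output
    w = hebbian_update(0.5, lowpass_rate(pre, dt, tau),
                       lowpass_rate(post, dt, tau), 1e-4, dt)
    print(f"updated weight: {w:.4f}")
    ```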

  20. Color change of the snapper (Pagrus auratus) and Gurnard (Chelidonichthys kumu) skin and eyes during storage: effect of light polarization and contact with ice.

    PubMed

    Balaban, Murat O; Stewart, Kelsie; Fletcher, Graham C; Alçiçek, Zayde

    2014-12-01

    Ten gurnard and 10 snapper were stored on ice. One side always contacted the ice; the other side was always exposed to air. At different intervals for up to 12 d, the fish were placed in a light box, and the images of both sides were taken using polarized and nonpolarized illumination. Image analysis resulted in average L*, a*, and b* values of skin, and average L* values of the eyes. The skin L* value of gurnard changed significantly over time while that of snapper was substantially constant. The a* and b* values of both fish decreased over time. The L* values of eyes were significantly lower for polarized images, and significantly lower for the side of fish exposed to air only. This may be a concern in quality evaluation methods such as QIM. The difference of colors between the polarized and nonpolarized images was calculated to quantify the reflection off the surface of fish. For accurate measurement of surface color and eye color, use of polarized light is recommended. © 2014 Institute of Food Technologists®
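
    A minimal sketch of the color measurement described above, assuming the scikit-image library: an RGB image (here synthetic, standing in for the light-box photographs) is converted to CIELAB and the average L*, a*, b* values are computed.

    ```python
    import numpy as np
    from skimage import color

    # Synthetic pinkish "skin" image in place of a light-box photograph
    rng = np.random.default_rng(1)
    img = np.clip(rng.normal([0.75, 0.45, 0.40], 0.02, size=(300, 400, 3)), 0.0, 1.0)

    lab = color.rgb2lab(img)                     # convert RGB (0-1 floats) to CIELAB
    L_mean, a_mean, b_mean = lab.reshape(-1, 3).mean(axis=0)
    print(f"L* = {L_mean:.1f}, a* = {a_mean:.1f}, b* = {b_mean:.1f}")
    ```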
