NASA Astrophysics Data System (ADS)
Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan
2017-12-01
Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that calculating the daily temperature as the average of the minimum and maximum daily readings overestimates the daily values by more than 10% when focusing on extremes and on values above (below) high (low) thresholds. Moreover, the effects of the data-processing method on trend estimation are generally small, although the use of the daily minimum and maximum readings reduces the power of trend detection (5-10% fewer trends detected in comparison with the reference data).
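The bias described above can be illustrated with a minimal sketch: a true daily mean computed from hourly readings versus the (Tmin + Tmax)/2 midpoint. The diurnal cycle below is synthetic (a short, sharp afternoon peak over a long cool night), chosen only to show how the midpoint can overestimate the mean; it is not data from the study.

```python
# Sketch: true daily mean from hourly readings vs. the (Tmin + Tmax) / 2
# approximation. The hourly series is an invented, skewed diurnal cycle.
import math

def daily_means(hourly_temps):
    """Return (hourly mean, min/max midpoint) for one day of readings."""
    true_mean = sum(hourly_temps) / len(hourly_temps)
    midpoint = (min(hourly_temps) + max(hourly_temps)) / 2.0
    return true_mean, midpoint

# Skewed cycle: flat 10 degrees at night, a 10-hour sinusoidal daytime bump.
hourly = [10 + 12 * max(0.0, math.sin(math.pi * (h - 6) / 10)) for h in range(24)]

true_mean, midpoint = daily_means(hourly)
bias = midpoint - true_mean  # positive: the midpoint overestimates the mean
```

For an asymmetric cycle like this one, the midpoint sits well above the hourly mean, which is the direction of bias the study reports for extremes.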
Choi, Tayoung; Ganapathy, Sriram; Jung, Jaehak; Savage, David R.; Lakshmanan, Balasubramanian; Vecasey, Pamela M.
2013-04-16
A system and method for detecting a low performing cell in a fuel cell stack using measured cell voltages. The method includes determining that the fuel cell stack is running, the stack coolant temperature is above a certain temperature and the stack current density is within a relatively low power range. The method further includes calculating the average cell voltage, and determining whether the difference between the average cell voltage and the minimum cell voltage is greater than a predetermined threshold. If the difference between the average cell voltage and the minimum cell voltage is greater than the predetermined threshold and the minimum cell voltage is less than another predetermined threshold, then the method increments a low performing cell timer. A ratio of the low performing cell timer and a system run timer is calculated to identify a low performing cell.
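The timer logic described above can be sketched as follows. The voltage thresholds, sample data, and function name are illustrative assumptions, not values from the patent.

```python
# Sketch of the described low-performing-cell check. Thresholds and sample
# voltages are illustrative assumptions, not patent values.

def update_low_cell_timer(cell_voltages, low_cell_timer,
                          delta_threshold=0.10, min_v_threshold=0.55):
    """Increment the timer when the minimum cell sags well below the average."""
    avg_v = sum(cell_voltages) / len(cell_voltages)
    min_v = min(cell_voltages)
    if (avg_v - min_v) > delta_threshold and min_v < min_v_threshold:
        low_cell_timer += 1
    return low_cell_timer

# The ratio of low-cell time to total run time flags a persistent weak cell.
timer, run_timer = 0, 0
for sample in ([0.70, 0.69, 0.50], [0.70, 0.68, 0.68], [0.71, 0.70, 0.48]):
    timer = update_low_cell_timer(sample, timer)
    run_timer += 1
low_cell_ratio = timer / run_timer  # fraction of samples with a weak cell
```

Here two of the three samples satisfy both conditions, so the ratio is 2/3; a real controller would also gate this check on stack state (running, coolant temperature, current density) as the abstract describes.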
Rise in the frequency of cloud cover in LANDSAT data for the period 1973 to 1981 [Brazil]
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Mendonca, F. J.; Neto, G. C.
1983-01-01
Percentages of cloud cover in LANDSAT imagery were used to calculate the monthly average cloud cover statistic for each LANDSAT scene in Brazil during the period 1973 to 1981. The average monthly cloud cover and the monthly minimum cloud cover were also calculated separately for the north, northeast, central-west, southeast, and south regions.
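The per-region monthly statistics described above amount to a group-by over (region, month) keys. A minimal sketch, with made-up region names and cover percentages:

```python
# Sketch: mean and minimum cloud-cover percentage per (region, month).
# Scene records are invented for illustration, not LANDSAT data.
from collections import defaultdict

scenes = [  # (region, month, cloud cover %)
    ("north", 1, 80.0), ("north", 1, 60.0), ("north", 2, 40.0),
    ("south", 1, 20.0), ("south", 1, 30.0),
]

groups = defaultdict(list)
for region, month, cover in scenes:
    groups[(region, month)].append(cover)

# For each group: (monthly average cover, monthly minimum cover).
stats = {key: (sum(v) / len(v), min(v)) for key, v in groups.items()}
```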
An Earth longwave radiation climate model
NASA Technical Reports Server (NTRS)
Yang, S. K.
1984-01-01
An Earth outgoing longwave radiation (OLWR) climate model was constructed for radiation budget study. Required information is provided by an empirical 100 mb water vapor mixing ratio equation and a mixing ratio interpolation scheme. Cloud top temperature is adjusted so that the calculation agrees with NOAA scanning radiometer measurements. Both clear-sky and cloudy-sky cases are calculated and discussed for the global average, zonal average, and worldwide-distributed cases. The results agree well with the satellite observations. The clear-sky case shows that the OLWR field is highly modulated by water vapor, especially in the tropics. The strongest longitudinal variation occurs in the tropics. This variation can mostly be explained by the strong water vapor gradient. Although in the zonal average case the tropics have a minimum in OLWR, the minimum is essentially contributed by a few very low flux regions, such as the Amazon, Indonesia, and the Congo.
1992-05-01
Plate Figure H-1. Temperature Coefficient Test Circuit. [Fragmentary snippet: the forward voltage was measured at three different temperatures and the average temperature coefficient was calculated; the peak temperature, rather than the average temperature, is discussed, with component temperatures nearer the average, particularly those near the minimum and maximum.]
Conklin, Annalijn I; Ponce, Ninez A; Frank, John; Nandi, Arijit; Heymann, Jody
2016-01-01
To describe the relationship between minimum wage and overweight and obesity across countries at different levels of development. A cross-sectional analysis of 27 countries with data on the legislated minimum wage level linked to socio-demographic and anthropometry data of 190,892 non-pregnant adult women (24-49 y) from the Demographic and Health Survey. We used multilevel logistic regression models to condition on country- and individual-level potential confounders, and post-estimation of average marginal effects to calculate the adjusted prevalence difference. We found the association between minimum wage and overweight/obesity was independent of individual-level SES and confounders, and showed a reversed pattern by country development stage. The adjusted overweight/obesity prevalence difference in low-income countries was an average increase of about 0.1 percentage points (PD 0.075 [0.065, 0.084]), and an average decrease of 0.01 percentage points in middle-income countries (PD -0.014 [-0.019, -0.009]). The adjusted obesity prevalence difference in low-income countries was an average increase of 0.03 percentage points (PD 0.032 [0.021, 0.042]) and an average decrease of 0.03 percentage points in middle-income countries (PD -0.032 [-0.036, -0.027]). This is among the first studies to examine the potential impact of improved wages on an important precursor of non-communicable diseases globally. Among countries with a modest level of economic development, higher minimum wage was associated with lower levels of obesity.
Effect of corn stover compositional variability on minimum ethanol selling price (MESP).
Tao, Ling; Templeton, David W; Humbird, David; Aden, Andy
2013-07-01
A techno-economic sensitivity analysis was performed using a National Renewable Energy Laboratory (NREL) 2011 biochemical conversion design model varying feedstock compositions. A total of 496 feedstock near infrared (NIR) compositions from 47 locations in eight US Corn Belt states were used as the inputs to calculate minimum ethanol selling price (MESP), ethanol yield (gallons per dry ton biomass feedstock), ethanol annual production, as well as total installed project cost for each composition. From this study, the calculated MESP is $2.20 ± 0.21 (average ± 3 SD) per gallon ethanol. Copyright © 2013. Published by Elsevier Ltd.
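The "average ± 3 SD" summary used for the MESP figure above can be reproduced as follows. The per-composition values here are invented for illustration, not NREL model outputs.

```python
# Sketch: summarize per-composition MESP values as mean +/- 3 standard
# deviations, the convention used in the abstract. Values are invented.
import statistics

mesp = [2.05, 2.18, 2.22, 2.31, 2.24]  # $/gal ethanol, illustrative only

mean = statistics.mean(mesp)
spread = 3 * statistics.stdev(mesp)  # sample standard deviation, times 3
summary = f"${mean:.2f} +/- {spread:.2f} per gallon"
```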
Percolation flux and transport velocity in the unsaturated zone, Yucca Mountain, Nevada
Yang, I.C.
2002-01-01
The percolation flux for borehole USW UZ-14 was calculated from 14C residence times of pore water and water content of cores measured in the laboratory. Transport velocity is calculated from the depth interval between two points divided by the difference in 14C residence times. Two methods were used to calculate the flux and velocity. The first method uses the 14C data and cumulative water content data directly in the incremental intervals in the Paintbrush nonwelded unit and the Topopah Spring welded unit. The second method uses the regression relation for 14C data and cumulative water content data for the entire Paintbrush nonwelded unit and the Topopah Spring Tuff/Topopah Spring welded unit. Using the first method, for the Paintbrush nonwelded unit in borehole USW UZ-14, percolation flux ranges from 2.3 to 41.0 mm/a. Transport velocity ranges from 1.2 to 40.6 cm/a. For the Topopah Spring welded unit, percolation flux ranges from 0.9 to 5.8 mm/a in the 8 incremental intervals calculated. Transport velocity ranges from 1.4 to 7.3 cm/a in the 8 incremental intervals. Using the second method, average percolation flux in the Paintbrush nonwelded unit for 6 boreholes ranges from 0.9 to 4.0 mm/a at the 95% confidence level. Average transport velocity ranges from 0.6 to 2.6 cm/a. For the Topopah Spring welded unit and Topopah Spring Tuff, average percolation flux in 5 boreholes ranges from 1.3 to 3.2 mm/a. Average transport velocity ranges from 1.6 to 4.0 cm/a. Both the average percolation flux and average transport velocity in the PTn are smaller than in the TS/TSw. However, the average minimum and average maximum values for the percolation flux in the TS/TSw are within the PTn average range. Therefore, differences in the percolation flux in the two units are not significant.
On the other hand, average, average minimum, and average maximum transport velocities in the TS/TSw unit are all larger than the PTn values, implying a larger transport velocity for the TS/TSw although there is a small overlap.
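The transport-velocity calculation described above (depth interval between two points divided by the difference in their 14C residence times) can be sketched as follows. The depths and ages are illustrative, not borehole measurements.

```python
# Sketch: transport velocity between two sample points in the unsaturated
# zone, in cm/a. Depths and 14C residence times below are invented.

def transport_velocity(depth_upper_m, depth_lower_m, age_upper_a, age_lower_a):
    """Depth interval (converted to cm) divided by the 14C age difference."""
    depth_interval_cm = (depth_lower_m - depth_upper_m) * 100.0
    delta_age_a = age_lower_a - age_upper_a
    return depth_interval_cm / delta_age_a

# Example: 12 m of travel over a 600-year residence-time difference.
v = transport_velocity(30.0, 42.0, 2400.0, 3000.0)  # 2.0 cm/a
```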
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harikrishnan, R.; Hareland, G.; Warpinski, N.R.
This paper evaluates the correlation between values of minimum principal in situ stress derived from two different models which use data obtained from triaxial core tests and coefficient-of-earth-at-rest correlations. Both models use triaxial laboratory tests with different confining pressures. The first method uses a verified fit to the Mohr failure envelope as a function of average rock grain size, which was obtained from detailed microscopic analyses. The second method uses the Mohr-Coulomb failure criterion. Both approaches give an angle of internal friction, which is used to calculate the coefficient of earth at rest, which in turn gives the minimum principal in situ stress. The minimum principal in situ stress is then compared to actual field mini-frac test data, which accurately determine the minimum principal in situ stress and are used to verify the accuracy of the correlations. The cores and the mini-frac stress tests were obtained from two wells: the Gas Research Institute's (GRI's) Staged Field Experiment (SFE) no. 1 well through the Travis Peak Formation in the East Texas Basin, and the Department of Energy's (DOE's) Multiwell Experiment (MWX) wells located west-southwest of the town of Rifle, Colorado, near the Rulison gas field. Results from this study indicate that the calculated minimum principal in situ stress values obtained by utilizing the rock failure envelope as a function of average rock grain size are in better agreement with the measured stress values (from mini-frac tests) than those obtained utilizing the Mohr-Coulomb failure criterion.
Use of computer code for dose distribution studies in A 60CO industrial irradiator
NASA Astrophysics Data System (ADS)
Piña-Villalpando, G.; Sloan, D. P.
1995-09-01
This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes, with an apparent density of 0.13 g/cm3; that product was chosen because of its uniform size, large quantity, and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique; build-up factor fitting was done by geometric progression, and combinatorial geometry is used for the system description. The main modifications to the code concerned the source simulation, using point sources instead of pencils, and the inclusion of an energy spectrum and an anisotropic emission spectrum. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was about 3% lower than the experimental average value (14.3 kGy).
NASA Astrophysics Data System (ADS)
Kim, Byung Chan; Park, Seong-Ook
In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated from the measured values obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference between average values over 6 min and over shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.
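The comparison between a full 6-min average and a shorter averaging window can be sketched as follows. The per-minute field values are synthetic; a real survey would use measured field strengths at each point.

```python
# Sketch: averaging per-minute field readings over a full 6-min window
# versus a shorter leading window. Sample values (V/m) are invented.

def window_average(per_minute_readings, minutes):
    """Average of the first `minutes` per-minute readings."""
    window = per_minute_readings[:minutes]
    return sum(window) / len(window)

per_minute = [1.02, 0.98, 1.01, 0.99, 1.00, 1.00]  # one reading per minute

full = window_average(per_minute, 6)   # the 6-min reference average
short = window_average(per_minute, 1)  # a 1-min shortcut
difference = abs(short - full)         # error introduced by the shortcut
```

The study's question is whether this difference stays small compared with the drift uncertainty; here the 1-min shortcut deviates from the 6-min average by 0.02 V/m.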
NASA Astrophysics Data System (ADS)
Imber, S. M.; Milan, S. E.; Lester, M.
2012-04-01
We present a long-term study, from 1995 to 2011, of the latitude of the Heppner-Maynard Boundary (HMB) determined using the northern hemisphere SuperDARN radars. The HMB represents the equatorward extent of ionospheric convection. We find that the average latitude of the HMB at midnight is 61° magnetic latitude during the solar maximum of 2003, but it moves significantly poleward during solar minimum, averaging 64° latitude during 1996, and 68° during 2010. This poleward motion is observed despite the increasing number of low latitude radars built in recent years as part of the StormDARN network, and so is not an artefact of data coverage. We believe that the recent extreme solar minimum led to an average HMB location that was further poleward than in previous solar cycles. We also calculated the open-closed field line boundary (OCB) from auroral images during the years 2000-2002 and find that on average the HMB is located equatorward of the OCB by ~6°. We suggest that the HMB may be a useful proxy for the OCB when global auroral images are not available.
40 CFR 63.7535 - Is there a minimum amount of monitoring data I must obtain?
Code of Federal Regulations, 2014 CFR
2014-07-01
...-control periods, or required monitoring system quality assurance or control activities in data averages... required monitoring system quality assurance or quality control activities (including, as applicable... control activities. You must calculate monitoring results using all other monitoring data collected while...
40 CFR 63.7535 - Is there a minimum amount of monitoring data I must obtain?
Code of Federal Regulations, 2013 CFR
2013-07-01
...-control periods, or required monitoring system quality assurance or control activities in data averages... required monitoring system quality assurance or quality control activities (including, as applicable... control activities. You must calculate monitoring results using all other monitoring data collected while...
Global Precipitation Measurement (GPM) Validation Network
NASA Technical Reports Server (NTRS)
Schwaller, Mathew; Moris, K. Robert
2010-01-01
The method averages the minimum TRMM PR and Ground Radar (GR) sample volumes needed to match up spatially and temporally coincident PR and GR data types. PR and GR averages are calculated at the geometric intersection of the PR rays with the individual GR sweeps. Along-ray PR data are averaged only in the vertical; GR data are averaged only in the horizontal. Differences in PR and GR reflectivity are small high in the atmosphere and relatively larger below. Version 6 TRMM PR underestimates rainfall in the case of convective rain in the lower part of the atmosphere by 30 to 40 percent.
Image dynamic range test and evaluation of Gaofen-2 dual cameras
NASA Astrophysics Data System (ADS)
Zhang, Zhenhua; Gan, Fuping; Wei, Dandan
2015-12-01
In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, application, and the development of subsequent satellites, in this article we evaluated the dynamic range by calculating statistics such as the maximum, minimum, average, and standard deviation of four images obtained at the same time by the Gaofen-2 dual cameras in the Beijing area. The maximum, minimum, average, and standard deviation of each longitudinal overlap of PMS1 and PMS2 were then calculated to evaluate each camera's dynamic range consistency, and the same four statistics of each latitudinal overlap of PMS1 and PMS2 were calculated to evaluate the dynamic range consistency between PMS1 and PMS2. The results suggest that there is a wide dynamic range of DN values in the images obtained by PMS1 and PMS2, which contain rich information on ground objects. In general, the dynamic ranges of images from a single camera are in close agreement, with only small differences, as are those of the dual cameras; the consistency of dynamic range between single-camera images is better than that between the dual cameras.
NASA Technical Reports Server (NTRS)
Randel, D. L.; Campbell, G. G.; Vonder Haar, T. H.; Smith, L.
1986-01-01
Scale factors and assumptions which were applied in calculations of global radiation budget parameters based on ERB data are discussed. The study was performed to examine the relationship between the composite global ERB map that can be generated every six days using all available data and the actual average global ERB. The wide field of view ERB instrument functioned for the first 19 months of the Nimbus-7 life, and furnished sufficient data for calculating actual ERB averages. The composite was most accurate in regions with the least variation in radiation budget.
Impact of cigarette minimum price laws on the retail price of cigarettes in the USA.
Tynan, Michael A; Ribisl, Kurt M; Loomis, Brett R
2013-05-01
Cigarette price increases prevent youth initiation, reduce cigarette consumption and increase the number of smokers who quit. Cigarette minimum price laws (MPLs), which typically require cigarette wholesalers and retailers to charge a minimum percentage mark-up for cigarette sales, have been identified as an intervention that can potentially increase cigarette prices. 24 states and the District of Columbia have cigarette MPLs. Using data extracted from SCANTRACK retail scanner data from the Nielsen company, average cigarette prices were calculated for designated market areas in states with and without MPLs in three retail channels: grocery stores, drug stores and convenience stores. Regression models were estimated using the average cigarette pack price in each designated market area and calendar quarter in 2009 as the outcome variable. The average difference in cigarette pack prices are 46 cents in the grocery channel, 29 cents in the drug channel and 13 cents in the convenience channel, with prices being lower in states with MPLs for all three channels. The findings that MPLs do not raise cigarette prices could be the result of a lack of compliance and enforcement by the state or could be attributed to the minimum state mark-up being lower than the free-market mark-up for cigarettes. Rather than require a minimum mark-up, which can be nullified by promotional incentives and discounts, states and countries could strengthen MPLs by setting a simple 'floor price' that is the true minimum price for all cigarettes or could prohibit discounts to consumers and retailers.
Dissociation of end systole from end ejection in patients with long-term mitral regurgitation.
Brickner, M E; Starling, M R
1990-04-01
To determine whether left ventricular (LV) end systole and end ejection uncouple in patients with long-term mitral regurgitation, 59 patients (22 control patients with atypical chest pain, 21 patients with aortic regurgitation, and 16 patients with mitral regurgitation) were studied with micromanometer LV catheters and radionuclide angiograms. End systole was defined as the time of occurrence (Tmax) of the maximum time-varying elastance (Emax), and end ejection was defined as the time of occurrence of minimum ventricular volume (minV) and zero systolic flow as approximated by the aortic dicrotic notch (Aodi). The temporal relation between end systole and end ejection in the control patients was Tmax (331 +/- 42 [SD] msec), minV (336 +/- 36 msec), and then, zero systolic flow (355 +/- 23 msec). This temporal relation was maintained in the patients with aortic regurgitation. In contrast, in the patients with mitral regurgitation, the temporal relation was Tmax (266 +/- 49 msec), zero systolic flow (310 +/- 37 msec, p less than 0.01 vs. Tmax), and then, minV (355 +/- 37 msec, p less than 0.001 vs. Tmax and p less than 0.01 vs. Aodi). Additionally, the average Tmax occurred earlier in the patients with mitral regurgitation than in the control patients and patients with aortic regurgitation (p less than 0.01, for both), whereas the average time to minimum ventricular volume was similar in all three patient groups. Moreover, the average time to zero systolic flow also occurred earlier in the patients with mitral regurgitation than in the control patients (p less than 0.01) and patients with aortic regurgitation (p less than 0.05). Because of the dissociation of end systole from minimum ventricular volume in the patients with mitral regurgitation, the end-ejection pressure-volume relations calculated at minimum ventricular volume did not correlate (r = -0.09), whereas those calculated at zero systolic flow did correlate (r = 0.88) with the Emax slope values. 
We conclude that end ejection, defined as minimum ventricular volume, dissociates from end systole in patients with mitral regurgitation because of the shortened time to LV end systole in association with preservation of the time to LV end ejection due to the low impedance to ejection presented by the left atrium. Therefore, pressure-volume relations calculated at minimum ventricular volume might not be useful for assessing LV chamber performance in some patients with mitral regurgitation.
This EnviroAtlas dataset contains data on the mean biological nitrogen fixation in natural/semi-natural ecosystems per 12-digit Hydrologic Unit (HUC) in 2006. Biological N fixation (BNF) in natural/semi-natural ecosystems was estimated using a correlation with actual evapotranspiration (AET). This correlation is based on a global meta-analysis of BNF in natural/semi-natural ecosystems (Cleveland et al. 1999). AET estimates for 2006 were calculated using a regression equation describing the correlation of AET with climate (average annual daily temperature, average annual minimum daily temperature, average annual maximum daily temperature, and annual precipitation) and land use/land cover variables in the conterminous US (Sanford and Selnick 2013). Data describing annual average minimum and maximum daily temperatures and total precipitation for 2006 were acquired from the PRISM climate dataset (http://prism.oregonstate.edu). Average annual climate data were then calculated for individual 12-digit USGS Hydrologic Unit Codes (HUC12s; http://water.usgs.gov/GIS/huc.html; 22 March 2011 release) using the Zonal Statistics tool in ArcMap 10.0. AET for individual HUC12s was estimated using equations described in Sanford and Selnick (2013). BNF in natural/semi-natural ecosystems within individual HUC12s was modeled with an equation describing the statistical relationship between BNF (kg N ha-1 yr-1) and actual evapotranspiration (AET; cm yr-1) and scaled to the proportion
N(2)O in small para-hydrogen clusters: Structures and energetics.
Zhu, Hua; Xie, Daiqian
2009-04-30
We present the minimum-energy structures and energetics of clusters of the linear N(2)O molecule with small numbers of para-hydrogen molecules with pairwise additive potentials. Interaction energies of (p-H(2))-N(2)O and (p-H(2))-(p-H(2)) complexes were calculated by averaging the corresponding full-dimensional potentials over the H(2) angular coordinates. The averaged (p-H(2))-N(2)O potential has three minima corresponding to the T-shaped and the linear (p-H(2))-ONN and (p-H(2))-NNO structures. Optimization of the minimum-energy structures was performed using a Genetic Algorithm. It was found that p-H(2) molecules fill three solvation rings around the N(2)O axis, each of them containing up to five p-H(2) molecules, followed by accumulation of two p-H(2) molecules at the oxygen and nitrogen ends. The first solvation shell is completed at N = 17. The calculated chemical potential oscillates with cluster size up to the completed first solvation shell. These results are consistent with the available experimental measurements. (c) 2009 Wiley Periodicals, Inc.
Prinz, P; Ronacher, B
2002-08-01
The temporal resolution of auditory receptors of locusts was investigated by applying noise stimuli with sinusoidal amplitude modulations and by computing temporal modulation transfer functions. These transfer functions showed mostly bandpass characteristics, which are rarely found in other species at the level of receptors. From the upper cut-off frequencies of the modulation transfer functions the minimum integration times were calculated. Minimum integration times showed no significant correlation to the receptor spike rates but depended strongly on the body temperature. At 20 degrees C the average minimum integration time was 1.7 ms, dropping to 0.95 ms at 30 degrees C. The values found in this study correspond well to the range of minimum integration times found in birds and mammals. Gap detection is another standard paradigm to investigate temporal resolution. In locusts and other grasshoppers, application of this paradigm yielded values of the minimum detectable gap widths that are approximately twice as large as the minimum integration times reported here.
NASA Astrophysics Data System (ADS)
Yohana, Eflita; Nugraha, Afif Prasetya; Diana, Ade Eva; Mahawan, Ilham; Nugroho, Sri
2018-02-01
Tea processing is basically distinguished into three types: black tea, green tea, and oolong tea. Green tea is processed by heating and drying the leaves. Green tea factories in Indonesia generally dry the leaves by panning. Using a fluidization process is recommended to speed up drying, as the quality of the tea can be maintained. Bubbling fluidization, in which bubbles form during fluidization, is expected to occur in this research. The effectiveness of the drying process in a fluidized bed dryer needs to be verified using a CFD simulation method, to show that umf < u < ut, where the average velocity is bounded by the minimum and maximum fluidization velocities calculated from the experimental data. The minimum and maximum fluidization velocities are 0.96 m/s and 8.2 m/s. The simulation gives an average velocity in the upper part of the bed of 1.81 m/s. From these results, it can be concluded that the calculation and simulation data are consistent with bubbling fluidization in the fluidized bed dryer.
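The bubbling criterion above (umf < u < ut) is a simple interval check, shown here with the velocities reported in the text; the function name is an illustrative choice.

```python
# Sketch: bubbling fluidization holds when the superficial velocity lies
# between the minimum fluidization velocity u_mf and the terminal velocity
# u_t. Defaults are the values reported in the text (m/s).

def is_bubbling(u, u_mf=0.96, u_t=8.2):
    """True when u supports bubbling fluidization: u_mf < u < u_t."""
    return u_mf < u < u_t

bubbling = is_bubbling(1.81)  # simulated upper-bed velocity from the study
```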
Death from respiratory diseases and temperature in Shiraz, Iran (2006-2011).
Dadbakhsh, Manizhe; Khanjani, Narges; Bahrampour, Abbas; Haghighi, Pegah Shoae
2017-02-01
Some studies have suggested that the number of deaths increases as temperature drops below or rises above the human thermal comfort zone. The present study was conducted to evaluate the relation between respiratory-related mortality and temperature in Shiraz, Iran. In this ecological study, data on the number of respiratory-related deaths, sorted according to age and gender, as well as average, minimum, and maximum ambient air temperatures during 2007-2011 were examined. The relationship between air temperature and respiratory-related deaths was calculated by crude and adjusted negative binomial regression analysis, adjusted for humidity, rainfall, wind speed and direction, and air pollutants including CO, NOx, PM10, SO2, O3, and THC. Spearman and Pearson correlations were also calculated between air temperature and respiratory-related deaths. The analysis was done using MINITAB16 and STATA 11. During this period, 2598 respiratory-related deaths occurred in Shiraz. The minimum number of respiratory-related deaths among all subjects occurred at an average temperature of 25 °C. There was a significant inverse relationship between average temperature and respiratory-related deaths among all subjects and women. There was also a significant inverse relationship between average temperature and respiratory-related deaths among all subjects, men, and women in the next month. The results suggest that cold temperatures can increase the number of respiratory-related deaths, and therefore policies to reduce mortality in cold weather, especially in patients with respiratory diseases, should be implemented.
Noise pollution levels in the pediatric intensive care unit.
Kramer, Bree; Joshi, Prashant; Heard, Christopher
2016-12-01
Patients and staff may experience adverse effects from exposure to noise. This study assessed noise levels in the pediatric intensive care unit and evaluated family and staff opinion of noise. Noise levels were recorded using a NoisePro DLX with the microphone 1 m from the patient's head. The noise level was averaged each minute, and levels above 70 and 80 dBA were recorded. The maximum, minimum, and average decibel levels were calculated, and peak noise levels greater than 100 dBA were also recorded. Parents completed a questionnaire evaluating the noisiness of the bedside, as did the bedside nurse. The average maximum level for all patients was 82.2 dBA, and the average minimum was 50.9 dBA. The average daily bedside noise level was 62.9 dBA. On average, the noise level exceeded 70 dBA 2.2% of the time and 80 dBA 0.1% of the time. Patients experienced an average of 115 min/d during which peak noise was greater than 100 dBA. The parents and staff identified the monitors as the major contributor to noise. Patients experienced noise levels greater than 80 dBA and peak noise levels in excess of 100 dBA during their pediatric intensive care unit stay. Copyright © 2016 Elsevier Inc. All rights reserved.
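As a rough illustration of the metrics reported above, here is a minimal sketch using hypothetical minute-averaged levels (variable names are illustrative; note that a strict acoustic average would sum energies rather than averaging decibel values directly):

```python
# Each entry is a one-minute average sound level in dBA (hypothetical data).
minute_levels = [55.0, 62.5, 71.3, 48.9, 83.1, 60.2, 74.8, 59.5]

max_db = max(minute_levels)
min_db = min(minute_levels)
avg_db = sum(minute_levels) / len(minute_levels)  # simple arithmetic average

# Percentage of minutes spent above the 70 and 80 dBA thresholds.
pct_above_70 = 100.0 * sum(1 for x in minute_levels if x > 70) / len(minute_levels)
pct_above_80 = 100.0 * sum(1 for x in minute_levels if x > 80) / len(minute_levels)
```

A fuller treatment would also track per-minute peak levels to count time above 100 dBA, as the study does.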
12 CFR Appendix A to Subpart A of... - Appendix A to Subpart A of Part 327
Code of Federal Regulations, 2011 CFR
2011-01-01
... one year; • Minimum and maximum downgrade probability cutoff values, based on data from June 30, 2008... rate factor (Ai,T) is calculated by subtracting 0.4 from the four-year cumulative gross asset growth... weighted average of five component ratings excluding the “S” component. Delinquency and non-accrual data on...
Model predictions and visualization of the particle flux on the surface of Mars.
Cucinotta, Francis A; Saganti, Premkumar B; Wilson, John W; Simonsen, Lisa C
2002-12-01
Model calculations of the particle flux on the surface of Mars due to the Galactic Cosmic Rays (GCR) can provide guidance on radiobiological research and shielding design studies in support of Mars exploration science objectives. Particle flux calculations for protons, helium ions, and heavy ions are reported for solar minimum and solar maximum conditions. These flux calculations include a description of the altitude variations on the Martian surface using the data obtained by the Mars Global Surveyor (MGS) mission with its Mars Orbiter Laser Altimeter (MOLA) instrument. These particle flux calculations are then used to estimate the average particle hits per cell at various organ depths of a human body in a conceptual shelter vehicle. The estimated particle hits by protons for an average location at skin depth on the Martian surface are about 10 to 100 particle-hits/cell/year and the particle hits by heavy ions are estimated to be 0.001 to 0.01 particle-hits/cell/year.
Can households earning minimum wage in Nova Scotia afford a nutritious diet?
Williams, Patricia L; Johnson, Christine P; Kratzmann, Meredith L V; Johnson, C Shanthi Jacob; Anderson, Barbara J; Chenhall, Cathy
2006-01-01
To assess the affordability of a nutritious diet for households earning minimum wage in Nova Scotia. Food costing data were collected in 43 randomly selected grocery stores throughout NS in 2002 using the National Nutritious Food Basket (NNFB). To estimate the affordability of a nutritious diet for households earning minimum wage, average monthly costs for essential expenses were subtracted from overall income to see if enough money remained for the cost of the NNFB. This was calculated for three types of household: 1) two parents and two children; 2) lone parent and two children; and 3) single male. Calculations were also made for the proposed 2006 minimum wage increase, with expenses adjusted using the Consumer Price Index (CPI). The monthly cost of the NNFB priced in 2002 for the three types of household was $572.90, $351.68, and $198.73, respectively. Put into the context of basic living, these data showed that Nova Scotians relying on minimum wage could not afford to purchase a nutritious diet and meet their basic needs, placing their health at risk. These basic expenses do not include other routine costs, such as personal hygiene products, household and laundry cleaners, and prescriptions, or costs associated with physical activity, education, or savings for unexpected expenses. People working at minimum wage in Nova Scotia have not had adequate income to meet basic needs, including a nutritious diet. The 2006 increase in the minimum wage to $7.15/hr is inadequate to ensure that Nova Scotians working at minimum wage are able to meet these basic needs. Wage increases and supplements, along with supports for expenses such as childcare and transportation, are indicated to address this public health problem.
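The affordability test described above reduces to simple arithmetic; the sketch below illustrates it with the 2002 NNFB cost for a two-parent household, but the income and expense figures are hypothetical placeholders, not values from the study:

```python
# Hedged sketch of the affordability calculation: subtract average monthly
# essential (non-food) expenses from monthly income and check whether the
# remainder covers the National Nutritious Food Basket (NNFB) cost.
def food_money_remaining(monthly_income, essential_expenses):
    """Money left for food after essential (non-food) expenses."""
    return monthly_income - essential_expenses

nnfb_cost_two_parents = 572.90  # 2002 NNFB monthly cost, two parents + two children

# Hypothetical household budget (illustrative figures only).
remaining = food_money_remaining(monthly_income=2080.00, essential_expenses=1700.00)
can_afford = remaining >= nnfb_cost_two_parents
```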
Assessment of long-term monthly and seasonal trends of warm (cold), wet (dry) spells in Kansas, USA
NASA Astrophysics Data System (ADS)
Dokoohaki, H.; Anandhi, A.
2013-12-01
A few recent studies have focused on trends in rainfall, temperature, and frost indicators at different temporal scales using centennial weather station data in Kansas; our study supplements this work by assessing changes in spell indicators in Kansas. These indicators measure the duration of temperature-based (warm and cold) and precipitation-based (wet and dry) spells. For wet (dry) spell calculations, a wet day is defined as a day with precipitation ≥1 mm, and a dry day as one with precipitation <1 mm. For warm (cold) spell calculations, a warm day is defined as a day with maximum temperature >90th percentile of daily maximum temperature, and a cold day as a day with minimum temperature <10th percentile of daily minimum temperature. The percentiles are calculated for 1971-2000, and four spell indicators are calculated: Average Wet Spell Length (AWSL), Average Dry Spell Length (ADSL), Average Warm Spell Days (AWSD), and Average Cold Spell Days (ACSD). Data were obtained from 23 centennial weather stations across Kansas, and all calculations were done for four time periods (through 1919, 1920-1949, 1950-1979, and 1980-2009). The definitions and software provided by the Expert Team on Climate Change Detection and Indices (ETCCDI) were adapted for application to Kansas. The long- and short-term trends in these indices were analyzed at monthly and seasonal timescales. Monthly results indicate that ADSL is decreasing and AWSL is increasing throughout the state. AWSD and ACSD both showed an overall decreasing trend, but AWSD trends were variable during the beginning of the Industrial Revolution. Results of the seasonal analysis revealed that the fall season recorded the greatest increasing trend for ACSD and the greatest decreasing trend for AWSD across the whole state and during all time periods. Similarly, the greatest increasing and decreasing trends occurred in winter for AWSL and ADSL, respectively. 
These variations can be important indicators of climatic change that may not be represented in mean conditions. Detailed geographical and temporal variations of the spell indices also can be beneficial for updating management decisions and providing adaptation recommendations for local and regional agricultural production.
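The spell definitions above can be sketched as a short routine. This is a minimal illustration with hypothetical precipitation data, not the ETCCDI software itself; it uses the conventional non-overlapping thresholds (wet ≥ 1 mm, dry < 1 mm):

```python
# Compute run lengths of consecutive wet (or dry) days from daily precipitation.
def spell_lengths(daily_precip_mm, wet=True):
    """Return the lengths of consecutive wet (wet=True) or dry runs."""
    lengths, run = [], 0
    for p in daily_precip_mm:
        is_wet = p >= 1.0  # wet day: precipitation >= 1 mm; dry day: < 1 mm
        if is_wet == wet:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return lengths

precip = [0.0, 2.5, 3.1, 0.2, 0.0, 5.0, 1.2, 0.9]  # hypothetical daily data, mm

# Average Wet Spell Length and Average Dry Spell Length.
awsl = sum(spell_lengths(precip)) / len(spell_lengths(precip))
adsl = sum(spell_lengths(precip, wet=False)) / len(spell_lengths(precip, wet=False))
```

Warm and cold spell days follow the same run-counting idea, with day classification based on the 90th/10th percentiles of the 1971-2000 daily temperatures instead of a fixed threshold.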
NASA Astrophysics Data System (ADS)
Olson, L.; Pogue, K. R.; Bader, N.
2012-12-01
The Columbia Basin of Washington and Oregon is one of the most productive grape-growing areas in the United States. Wines produced in this region are influenced by their terroir - the amalgamation of physical and cultural elements that influence grapes grown at a particular vineyard site. Of the physical factors, climate, and in particular air temperature, has been recognized as a primary influence on viticulture. Air temperature directly affects ripening in the grapes. Proper fruit ripening, which requires precise and balanced levels of acid and sugar, and the accumulation of pigment in the grape skin, directly correlates with the quality of wine produced. Many features control air temperature within a particular vineyard. Elevation, latitude, slope, and aspect all converge to form complex relationships with air temperatures; however, the relative degree to which these attributes affect temperatures varies between regions and is not well understood. This study examines the influence of geography and geomorphology on air temperatures within the American Viticultural Areas (AVAs) of the Columbia Basin in eastern Washington and Oregon. The premier vineyards within each AVA, which have been recognized for producing high-quality wine, were equipped with air temperature monitoring stations that collected hourly temperature measurements. A variety of temperature statistics were calculated, including daily average, maximum, and minimum temperatures. From these values, average diurnal variation and growing degree-days (10°C) were calculated. A variety of other statistics were computed, including date of first and last frost and time spent below a minimum temperature threshold. These parameters were compared to the vineyard's elevation, latitude, slope, aspect, and local topography using GPS, ArcCatalog, and GIS in an attempt to determine their relative influences on air temperatures. 
From these statistics, it was possible to delineate two trends of temperature variation controlled by elevation. In some AVAs, such as Walla Walla Valley and Red Mountain, average air temperatures increased with elevation because of the effect of cold air pooling on valley floors. In other AVAs, such as Horse Heaven Hills, Lake Chelan and Columbia Gorge, average temperatures decreased with elevation due to the moderating influences of the Columbia River and Lake Chelan. Other temperature statistics, including average diurnal range and maximum and minimum temperature, were influenced by relative topography, including local topography and slope. Vineyards with flat slopes that had low elevations relative to their surroundings had larger diurnal variations and lower maximum and minimum temperatures than vineyards with steeper slopes that were high relative to their surroundings.
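The growing degree-day and diurnal-range statistics named above follow standard definitions; a minimal sketch with hypothetical daily data (not the vineyard measurements themselves):

```python
# Growing degree-days (GDD) with a 10 degC base, accumulated from daily
# maximum/minimum temperatures, plus the average diurnal temperature range.
def growing_degree_days(tmax, tmin, base=10.0):
    """Sum of daily ((Tmax + Tmin)/2 - base), floored at zero."""
    return sum(max((hi + lo) / 2.0 - base, 0.0) for hi, lo in zip(tmax, tmin))

tmax = [28.0, 31.5, 24.0, 18.0]  # hypothetical daily maxima, degC
tmin = [12.0, 15.5, 10.0, 1.0]   # hypothetical daily minima, degC

gdd = growing_degree_days(tmax, tmin)
avg_diurnal_range = sum(hi - lo for hi, lo in zip(tmax, tmin)) / len(tmax)
```

Note the zero floor: a day whose mean temperature falls below the 10 °C base contributes nothing rather than subtracting from the accumulated total.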
Shin, Hye-Young; Park, Hae-Young Lopilly; Jung, Kyoung-In; Choi, Jin-A; Park, Chan Kee
2014-01-01
To determine whether the ganglion cell-inner plexiform layer (GCIPL) or circumpapillary retinal nerve fiber layer (cpRNFL) is better at distinguishing eyes with early glaucoma from normal eyes on the basis of the initial location of the visual field (VF) damage. Retrospective, observational study. Eighty-four patients with early glaucoma and 43 normal subjects were enrolled. The patients with glaucoma were subdivided into 2 groups according to the location of VF damage: (1) an isolated parafoveal scotoma (PFS, N = 42) within 12 points of a central 10 degrees in 1 hemifield or (2) an isolated peripheral nasal step (PNS, N = 42) within the nasal periphery outside 10 degrees of fixation in 1 hemifield. All patients underwent macular and optic disc scanning using Cirrus high-definition optical coherence tomography (Carl Zeiss Meditec, Dublin, CA). The GCIPL and cpRNFL thicknesses were compared between groups. Areas under the receiver operating characteristic curves (AUCs) were calculated. Comparison of diagnostic ability using AUCs. The average and minimum GCIPL of the PFS group were significantly thinner than those of the PNS group, whereas there was no significant difference in the average retinal nerve fiber layer (RNFL) thickness between the 2 groups. The AUCs of the average (0.962) and minimum GCIPL (0.973) thicknesses did not differ from that of the average RNFL thickness (0.972) for discriminating glaucomatous changes between normal and all glaucoma eyes (P = 0.566 and P = 0.974, respectively). In the PFS group, the AUCs of the average (0.988) and minimum GCIPL (0.999) thicknesses were greater than that of the average RNFL thickness (0.961; P = 0.307 and P = 0.125, respectively). However, the AUCs of the average (0.936) and minimum GCIPL (0.947) thicknesses were lower than that of the average RNFL thickness (0.984) in the PNS group (P = 0.032 and P = 0.069, respectively). 
The GCIPL parameters were more valuable than the cpRNFL parameters for detecting glaucoma in eyes with parafoveal VF loss, and the cpRNFL parameters were better than the GCIPL parameters for detecting glaucoma in eyes with peripheral VF loss. Clinicians should know that the diagnostic capability of macular GCIPL parameters depends largely on the location of the VF loss. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Avillez, Miguel A.; Breitschwerdt, Dieter, E-mail: mavillez@galaxy.lca.uevora.pt
Tracking the thermal evolution of plasmas characterized by an n-distribution using numerical simulations requires the determination of the emission spectra and of the radiative losses due to free–free emission from the corresponding temperature-averaged and total Gaunt factors. Detailed calculations of the latter are presented for n-distributed electrons with the parameter n ranging from 1 (corresponding to the Maxwell–Boltzmann distribution) to 100. With decreasing n, the temperature-averaged and total Gaunt factors tend toward those obtained with the Maxwell–Boltzmann distribution. Radiative losses due to free–free emission in a plasma evolving under collisional ionization equilibrium conditions and composed of H, He, C, N, O, Ne, Mg, Si, S, and Fe ions are presented. These losses decrease with a decrease in the parameter n, reaching a minimum when n = 1, and thus converge to the losses of a thermal plasma. Tables of the thermal-averaged and total Gaunt factors calculated for n-distributions, and a wide range of electron and photon energies, are presented.
Suitable Environmental Ranges for Potential Coral Reef Habitats in the Tropical Ocean
Guan, Yi; Hohn, Sönke; Merico, Agostino
2015-01-01
Coral reefs are found within a limited range of environmental conditions or tolerance limits. Estimating these limits is a critical prerequisite for understanding the impacts of climate change on the biogeography of coral reefs. Here we used the diagnostic model ReefHab to determine the current environmental tolerance limits for coral reefs and the global distribution of potential coral reef habitats as a function of six factors: temperature, salinity, nitrate, phosphate, aragonite saturation state, and light. To determine these tolerance limits, we extracted maximum and minimum values of all environmental variables in corresponding locations where coral reefs are present. We found that the global, annually averaged tolerance limits for coral reefs are 21.7-29.6 °C for temperature, 28.7-40.4 psu for salinity, 4.51 μmol L⁻¹ for nitrate, 0.63 μmol L⁻¹ for phosphate, and 2.82 for aragonite saturation state. The averaged minimum light intensity in coral reefs is 450 μmol photons m⁻² s⁻¹. The global area of potential reef habitats calculated by the model is 330.5 × 10³ km². Compared with previous studies, the tolerance limits for temperature, salinity, and nutrients have not changed much, whereas the minimum value of aragonite saturation in coral reef waters has decreased from 3.28 to 2.82. The potential reef habitat area calculated with ReefHab is about 121 × 10³ km² larger than the area estimated from the charted reefs, suggesting that the growth potential of coral reefs is higher than currently observed. PMID:26030287
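The max/min extraction step described above can be sketched as follows. The values are hypothetical point samples (the actual study works with gridded global fields):

```python
# For each location where a reef is present, collect the environmental value
# and take the min/max across those locations to obtain the tolerance limits.
def tolerance_limits(values, reef_present):
    """(min, max) of an environmental variable over reef locations only."""
    at_reefs = [v for v, present in zip(values, reef_present) if present]
    return min(at_reefs), max(at_reefs)

sst = [18.2, 23.4, 27.1, 29.5, 31.0]     # sea-surface temperature, degC (hypothetical)
reef = [False, True, True, True, False]  # reef presence mask (hypothetical)

lo, hi = tolerance_limits(sst, reef)
```

The same extraction is repeated per variable (salinity, nutrients, aragonite saturation, light) to build the full set of limits.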
An analysis of interplanetary space radiation exposure for various solar cycles
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Cucinotta, F. A.; O'Neill, P. M.; Wilson, J. W. (Principal Investigator)
1994-01-01
The radiation dose received by crew members in interplanetary space is influenced by the stage of the solar cycle. Using the recently developed models of the galactic cosmic radiation (GCR) environment and the energy-dependent radiation transport code, we have calculated the dose at 0 and 5 cm water depth; using a computerized anatomical man (CAM) model, we have calculated the skin, eye and blood-forming organ (BFO) doses as a function of aluminum shielding for various solar minima and maxima between 1954 and 1989. These results show that the equivalent dose is within about 15% of the mean for the various solar minima (maxima). The maximum variation between solar minimum and maximum equivalent dose is about a factor of three. We have extended these calculations for the 1976-1977 solar minimum to five practical shielding geometries: Apollo Command Module, the least and most heavily shielded locations in the U.S. space shuttle mid-deck, center of the proposed Space Station Freedom cluster and sleeping compartment of the Skylab. These calculations, using the quality factor of ICRP 60, show that the average CAM BFO equivalent dose is 0.46 Sv/year. Based on an approach that takes fragmentation into account, we estimate a calculation uncertainty of 15% if the uncertainty in the quality factor is neglected.
NASA Astrophysics Data System (ADS)
Raev, M. D.; Sharkov, E. A.; Tikhonov, V. V.; Repina, I. A.; Komarova, N. Yu.
2015-12-01
The GLOBAL-RT database (DB) is composed of long-term multichannel microwave radiometric observation data received from DMSP F08-F17 satellites; it is permanently supplemented with new Earth-observation data at the Space Research Institute, Russian Academy of Sciences. Arctic ice-cover areas for regions above 60° N latitude were calculated using the DB polar version and the NASA Team 2 algorithm, which is widely used in the scientific literature. Based on the analysis of the variability of Arctic ice cover during 1987-2014, the two months were selected in which the Arctic ice cover was maximal (February) and minimal (September), and the average ice-cover area was calculated for these months. Confidence intervals of the average values are at the 95-98% level. Several approximations are derived for the time dependences of the ice-cover maximum and minimum over the period under study. Regression dependences were calculated for polynomials from the first degree (linear) to the sixth. The minimal root-mean-square error of deviation from the approximating curve decreased sharply up to the biquadratic (fourth-degree) polynomial and then varied insignificantly: from 0.5593 for the third-degree polynomial to 0.4560 for the biquadratic polynomial. Hence, the commonly used strictly linear regression with a negative time gradient for the September Arctic ice-cover minimum over 30 years should be considered incorrect.
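The degree-by-degree regression comparison described above can be sketched as follows. The yearly series here is synthetic, not the actual Arctic ice-cover areas:

```python
import numpy as np

# Synthetic yearly series: a linear trend plus an oscillating component,
# standing in for the 1987-2014 September ice-cover minima.
t = np.arange(28, dtype=float)                   # years since 1987
area = 7.5 - 0.05 * t + 0.3 * np.sin(0.5 * t)    # hypothetical areas, 10^6 km^2

def poly_rmse(x, y, degree):
    """Root-mean-square error of a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return float(np.sqrt(np.mean(resid ** 2)))

# Fit polynomials of degree 1 (linear) through 6 and compare residual errors.
rmse_by_degree = {d: poly_rmse(t, area, d) for d in range(1, 7)}
```

Because the fits are nested least-squares models, the RMSE can only decrease (or stay flat) as the degree grows; the interesting question, as in the abstract, is where the decrease levels off.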
Currens, J.C.
1999-01-01
Analytical data for nitrate and triazines from 566 samples collected over a 3-year period at Pleasant Grove Spring, Logan County, KY, were statistically analyzed to determine the minimum data set needed to calculate meaningful yearly averages for a conduit-flow karst spring. Results indicate that a biweekly sampling schedule augmented with bihourly samples from high-flow events will provide meaningful suspended-constituent and dissolved-constituent statistics. Unless collected over an extensive period of time, daily samples may not be representative and may also be autocorrelated. All high-flow events resulting in a significant deflection of a constituent from base-line concentrations should be sampled. Either the geometric mean or the flow-weighted average of the suspended constituents should be used. If automatic samplers are used, then they may be programmed to collect storm samples as frequently as every few minutes to provide details on the arrival time of constituents of interest. However, only samples collected bihourly should be used to calculate averages. By adopting a biweekly sampling schedule augmented with high-flow samples, the need to continuously monitor discharge, or to search for and analyze existing data to develop a statistically valid monitoring plan, is lessened.
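The two summary statistics recommended above can be sketched directly; the concentration and discharge values below are hypothetical:

```python
import math

conc = [2.1, 3.8, 10.5, 1.9]  # constituent concentration, mg/L (hypothetical)
flow = [0.5, 0.7, 4.0, 0.4]   # discharge at sampling time, m^3/s (hypothetical)

# Flow-weighted average: weight each concentration by its discharge, so
# high-flow samples (which carry most of the load) dominate the average.
flow_weighted_avg = sum(c * q for c, q in zip(conc, flow)) / sum(flow)

# Geometric mean: appropriate for right-skewed concentration data.
geometric_mean = math.exp(sum(math.log(c) for c in conc) / len(conc))
```

With a high-flow, high-concentration event in the record, the flow-weighted average exceeds the plain arithmetic mean, which is exactly why the abstract recommends it for suspended constituents.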
Precision gravity studies at Cerro Prieto: a progress report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grannell, R.B.; Kroll, R.C.; Wyman, R.M.
A third and fourth year of precision gravity data collection and reduction have now been completed at the Cerro Prieto geothermal field. In summary, 66 permanently monumented stations were occupied between December and April of 1979 to 1980 and 1980 to 1981 by a LaCoste and Romberg gravity meter (G300) at least twice, with a minimum of four replicate values obtained each time. Station 20 alternate, a stable base located on Cerro Prieto volcano, was used as the reference base for the third year, and all the stations were tied to this base using four- to five-hour loops. The field data were reduced to observed gravity values by (1) multiplication with the appropriate calibration factor; (2) removal of calculated tidal effects; (3) calculation of average values at each station; and (4) linear removal of accumulated instrumental drift which remained after carrying out the first three reductions. Following the reduction of values and calculation of gravity differences between individual stations and the base stations, standard deviations were calculated for the averaged occupation values (two to three per station). In addition, pooled variance calculations were carried out to estimate precision for the surveys as a whole.
M-AMST: an automatic 3D neuron tracing method based on mean shift and adapted minimum spanning tree.
Wan, Zhijiang; He, Yishan; Hao, Ming; Yang, Jian; Zhong, Ning
2017-03-29
Understanding the working mechanism of the brain is one of the grandest challenges for modern science. Toward this end, the BigNeuron project was launched to gather a worldwide community to establish a big data resource and a set of state-of-the-art single-neuron reconstruction algorithms. Many groups contributed their own algorithms to the project, including our mean shift and minimum spanning tree (M-MST) method. Although M-MST is intuitive and easy to implement, the MST considers only the spatial information of the single neuron and ignores shape information, which can lead to less precise connections between some neuron segments. In this paper, we propose an improved algorithm, M-AMST, in which a rotating sphere model based on coordinate transformation is used to improve the weight calculation method of M-MST. Two experiments were designed to illustrate the effect of the adapted minimum spanning tree algorithm and the applicability of M-AMST to a variety of neuron image datasets. In Experiment 1, taking the reconstruction of APP2 as a reference, we computed four difference scores (entire structure average (ESA), different structure average (DSA), percentage of different structure (PDS), and max distance of neurons' nodes (MDNN)) by comparing the APP2 reconstruction with those of the five competing algorithms. The results show that M-AMST obtains lower difference scores than M-MST in ESA, PDS, and MDNN, and outperforms N-MST in ESA and MDNN. This indicates that the adapted minimum spanning tree algorithm, which takes the shape information of the neuron into account, achieves better neuron reconstructions. In Experiment 2, seven neuron image datasets were reconstructed, and the four difference scores were calculated by comparing the gold-standard reconstruction with the reconstructions produced by the six competing algorithms. 
Comparing the four difference scores of M-AMST and the other five algorithms, we conclude that M-AMST achieves the best difference score in three datasets and the second-best in the other two datasets. We developed a pathway extraction method using a rotating sphere model based on coordinate transformation to improve the weight calculation approach in MST. The experimental results show that M-AMST, which utilizes an adapted minimum spanning tree algorithm taking the shape information of the neuron into account, achieves better neuron reconstructions. Moreover, M-AMST obtains good neuron reconstructions on a variety of image datasets.
Mainstem Clearwater River Study: Assessment for Salmonid Spawning, Incubation, and Rearing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conner, William P.
1989-01-01
Chinook salmon reproduced naturally in the Clearwater River until damming of the lower mainstem in 1927 impeded upstream spawning migrations and decimated the populations. Removal of the Washington Water Power Dam in 1973 reopened upriver passage. This study was initiated to determine the feasibility of re-introducing chinook salmon into the lower mainstem Clearwater River based on the temperature and flow regimes, water quality, substrate, and invertebrate production since the completion of Dworshak Dam in 1972. Temperature data obtained from the United States Geological Survey gaging stations at Peck and Spalding, Idaho, were used to calculate average minimum and maximum water temperature on a daily, monthly and yearly basis. The coldest and warmest (absolute minimum and maximum) temperatures that have occurred in the past 15 years were also identified. Our analysis indicates that average lower mainstem Clearwater River water temperatures are suitable for all life stages of chinook salmon, and also for steelhead trout rearing. In some years absolute maximum water temperatures in late summer may postpone adult staging and spawning. Absolute minimum temperatures have been recorded that could decrease overwinter survival of summer chinook juveniles and fall chinook eggs depending on the quality of winter hiding cover and the prevalence of intra-gravel freezing in the lower mainstem Clearwater River.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2013-01-01
Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadal- and SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.
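The two averaging schemes above differ only in whether the windows overlap; a minimal sketch with a synthetic annual series (not the Armagh data):

```python
# Hypothetical annual mean temperatures with a small linear warming trend.
annual = [8.0 + 0.02 * i for i in range(30)]

def moving_average(series, window=10):
    """Trailing moving average: one value per year once `window` years exist."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def decadal_averages(series, block=10):
    """Non-overlapping block averages (e.g., 1844-1853, 1854-1863, ...)."""
    return [sum(series[i:i + block]) / block
            for i in range(0, len(series) - block + 1, block)]

ma10 = moving_average(annual)      # overlapping 10-year windows
decades = decadal_averages(annual)  # disjoint 10-year blocks
```

An SC-length average is the same block idea, except the block boundaries follow the (variable-length) sunspot cycles rather than fixed decades.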
True, Nancy S
2006-06-15
The Stark modulated low resolution microwave spectrum of ethyl cyanoformate between 21.5 and 24.0 GHz at 210, 300, and 358 K, which shows the J + 1 <-- J = 8 <-- 7 bands of three species, is compared to simulations based on electronic structure calculations at the MP2/6-311++G theory level. Calculations at this theory level reproduce the relative energies of the syn-anti and syn-gauche conformers, obtained in a previous study, and indicate that the barrier to conformer exchange is approximately 360 cm(-1) higher in energy than the syn-anti minimum. Simulated spectra of the eigenstates of the calculated O-ethyl torsional potential function reproduce the relative intensities and shapes of the lower and higher frequency bands which correspond to transitions of the syn-anti and syn-gauche conformers, respectively, but fail to reproduce the intense center band in the experimental spectra. A model incorporating exchange averaging reproduces the intensity of the center band and its temperature dependence. These simulations indicate that a large fraction of the thermal population at all three temperatures undergoes conformational exchange with an average energy specific rate constant,
Code of Federal Regulations, 2012 CFR
2012-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2011 CFR
2011-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Transonic cascade flow calculations using non-periodic C-type grids
NASA Technical Reports Server (NTRS)
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1991-01-01
A new kind of C-type grid is proposed for turbomachinery flow calculations. This grid is nonperiodic on the wake and results in minimum skewness for cascades with high turning and large camber. Euler and Reynolds averaged Navier-Stokes equations are discretized on this type of grid using a finite volume approach. The Baldwin-Lomax eddy-viscosity model is used for turbulence closure. Jameson's explicit Runge-Kutta scheme is adopted for the integration in time, and computational efficiency is achieved through accelerating strategies such as multigriding and residual smoothing. A detailed numerical study was performed for a turbine rotor and for a vane. A grid dependence analysis is presented and the effect of artificial dissipation is also investigated. Comparison of calculations with experiments clearly demonstrates the advantage of the proposed grid.
NASA Astrophysics Data System (ADS)
Lester, M.; Imber, S. M.; Milan, S. E.
2012-12-01
The Super Dual Auroral Radar Network (SuperDARN) provides a long-term data series which enables investigations of the coupled magnetosphere-ionosphere system. The network has been in existence essentially since 1995, when 6 radars were operational in the northern hemisphere and 4 in the southern hemisphere. We have been involved in an analysis of the data over the lifetime of the project and present results here from two key studies. In the first study we calculated the amount of ionospheric scatter observed by the radars and see clear annual and solar cycle variations in both hemispheres. The recent extended solar minimum also produces a significant effect in the scatter occurrence. In the second study, we determined the latitude of the Heppner-Maynard Boundary (HMB), which represents the equatorward extent of ionospheric convection, using the northern hemisphere SuperDARN radars for the interval 1996-2011. We find that the average latitude of the HMB at midnight is 61° magnetic latitude during the solar maximum of 2003, but it moves significantly poleward during solar minimum, averaging 64° latitude during 1996 and 68° during 2010. This poleward motion is observed despite the increasing number of low latitude radars built in recent years as part of the StormDARN network, and so is not an artefact of data coverage. We believe that the recent extreme solar minimum led to an average HMB location that was further poleward than in the previous solar cycle. We have also calculated the Open-Closed field line Boundary (OCB) from auroral images during a subset of the interval (2000-2002) and find that on average the HMB is located equatorward of the OCB by ~7°. We suggest that the HMB may be a useful proxy for the OCB when global images are not available.
The work presented in this paper has been undertaken as part of the European Cluster Assimilation Technology (ECLAT) project which is funded through the EU FP7 programme and involves groups at Leicester, Helsinki, Uppsala, FMI, Graz and St. Petersburg. The aim of the project is to provide additional data sets, primarily ground based data, to the Cluster Active Archive, and its successor the Cluster Final Archive, in order to enhance the scientific productivity of the archives.
The Application of Stress-Relaxation Test to Life Assessment of T911/T22 Weld Metal
NASA Astrophysics Data System (ADS)
Cao, Tieshan; Zhao, Jie; Cheng, Congqian; Li, Huifang
2016-03-01
A dissimilar weld metal was obtained through submerged arc welding of T911 steel to T22 steel, and its creep properties were explored by stress-relaxation tests supplemented by conventional creep tests. The creep rate information from the stress-relaxation test was compared with the minimum and average creep rates of the conventional creep test. A log-log plot showed that the creep rate of the stress-relaxation test was in a linear relationship with the minimum creep rate of the conventional creep test. Thus, the creep rate of the stress-relaxation test could be used in the Monkman-Grant relation to calculate the rupture life. The creep rate of the stress-relaxation test was also similar to the average creep rate, so the rupture life could likewise be evaluated by a "time to rupture strain" method. The results also showed that the rupture life assessed by the Monkman-Grant relation was more accurate than that obtained through the "time to rupture strain" method.
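The Monkman-Grant step described above can be sketched in Python (a minimal illustration; the constants C and m are hypothetical placeholders, not the values fitted for this T911/T22 weld metal):

```python
def monkman_grant_life(creep_rate_per_h, C=0.05, m=1.0):
    """Monkman-Grant relation: t_r * (creep rate)**m = C, so the rupture
    life t_r follows from a measured minimum creep rate.  C and m are
    material constants fitted from creep tests (illustrative values here)."""
    return C / creep_rate_per_h ** m

# A stress-relaxation-derived creep rate of 1e-5 /h maps to a life in hours:
life_h = monkman_grant_life(1e-5)
```

With fitted constants, a creep rate inferred from a short stress-relaxation test yields a rupture-life estimate without running a full creep-to-rupture test.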
NASA Astrophysics Data System (ADS)
Fathy, Ibrahim
2016-07-01
This paper presents a statistical study of different types of large-scale geomagnetic pulsations (Pc3, Pc4, Pc5 and Pi2) detected simultaneously by two MAGDAS stations located at Fayum (Geo. coordinates 29.18 N, 30.50 E) and Aswan (Geo. coordinates 23.59 N, 32.51 E) in Egypt. A second-order Butterworth band-pass filter was used to filter and analyze the horizontal H-component of the geomagnetic field in one-second data. The data were collected during the solar minimum of the current solar cycle 24. We list the most energetic pulsations detected simultaneously by the two stations; in addition, the average amplitude of the pulsation signals was calculated.
An estimation of Canadian population exposure to cosmic rays.
Chen, Jing; Timmins, Rachel; Verdecchia, Kyle; Sato, Tatsuhiko
2009-08-01
The worldwide average exposure to cosmic rays contributes about 16% of the annual effective dose from natural radiation sources. At ground level, doses from cosmic ray exposure depend strongly on altitude, and weakly on geographical location and solar activity. With the analytical model PARMA, developed by the Japan Atomic Energy Agency, annual effective doses due to cosmic ray exposure at ground level were calculated for more than 1,500 communities across Canada, covering more than 85% of the Canadian population. The annual effective doses from cosmic ray exposure in the year 2000, during solar maximum, ranged from 0.27 to 0.72 mSv with a population-weighted national average of 0.30 mSv. For the year 2006, during solar minimum, the doses varied between 0.30 and 0.84 mSv, and the population-weighted national average was 0.33 mSv. Averaged over solar activity, the Canadian population-weighted average annual effective dose due to cosmic ray exposure at ground level is estimated to be 0.31 mSv.
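The population weighting behind the national average can be sketched as follows (hypothetical community values, not the Canadian dataset):

```python
def population_weighted_dose(doses_mSv, populations):
    """Population-weighted average annual effective dose (mSv)."""
    total = sum(populations)
    return sum(d * p for d, p in zip(doses_mSv, populations)) / total

# Three hypothetical communities; the high-altitude, high-dose one barely
# moves the national average because few people live there:
avg = population_weighted_dose([0.30, 0.45, 0.72],
                               [900_000, 90_000, 10_000])
```

This is why the national average (0.31 mSv) sits near the low end of the 0.27-0.84 mSv community range: most of the population lives at low altitude.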
Assessment of corneal epithelial thickness in dry eye patients.
Cui, Xinhan; Hong, Jiaxu; Wang, Fei; Deng, Sophie X; Yang, Yujing; Zhu, Xiaoyu; Wu, Dan; Zhao, Yujin; Xu, Jianjiang
2014-12-01
To investigate the features of corneal epithelial thickness topography with Fourier-domain optical coherence tomography (OCT) in dry eye patients. In this cross-sectional study, 100 symptomatic dry eye patients and 35 normal subjects were enrolled. All participants answered the ocular surface disease index questionnaire and underwent OCT, corneal fluorescein staining, tear breakup time, the Schirmer 1 test without anesthetic (S1t), and meibomian morphology assessment. Several epithelial thickness statistics for each eye, including central, superior, inferior, minimum, maximum, the minimum-maximum difference, and map standard deviation, were averaged. Correlations of epithelial thickness with the symptoms of dry eye were calculated. The mean (±SD) central, superior, and inferior corneal epithelial thickness was 53.57 (±3.31) μm, 52.00 (±3.39) μm, and 53.03 (±3.67) μm in normal eyes and 52.71 (±2.83) μm, 50.58 (±3.44) μm, and 52.53 (±3.36) μm in dry eyes, respectively. The superior corneal epithelium was thinner in dry eye patients compared with normal subjects (p = 0.037), whereas the central and inferior epithelium were not statistically different. In the dry eye group, patients with higher severity grades had thinner superior (p = 0.017) and minimum (p < 0.001) epithelial thickness, a wider range (p = 0.032), and greater deviation (p = 0.003). The average central epithelial thickness had no correlation with tear breakup time, S1t, or the severity of meibomian glands, whereas average superior epithelial thickness positively correlated with S1t (r = 0.238, p = 0.017). Fourier-domain OCT demonstrated that the thickness map of the dry eye corneal epithelium was thinner than that of normal eyes in the superior region. In more severe dry eye disease patients, the superior and minimum epithelium was much thinner, with a greater map standard deviation.
Estimates of in-place oil shale of various grades in federal lands, Piceance Basin, Colorado
Mercier, Tracey J.; Johnson, Ronald C.; Brownfield, Michael E.
2010-01-01
The entire oil shale interval in the Piceance Basin is subdivided into seventeen “rich” and “lean” zones that were assessed separately. These zones are roughly time-stratigraphic units consisting of distinctive, laterally continuous sequences of oil shale beds that can be traced throughout much of the Piceance Basin. Several subtotals of the 1.5 trillion barrels total were calculated: (1) about 920 billion barrels (60 percent) exceed 15 gallons per ton (GPT); (2) about 352 billion barrels (23 percent) exceed 25 GPT; (3) more than one trillion barrels (70 percent) underlie Federally-managed lands; and (4) about 689 billion barrels (75 percent) of the 15 GPT total and about 284 billion barrels (19 percent) of the 25 GPT total are under Federal mineral (subsurface) ownership. These 15 and 25 GPT estimates include only those areas where the weighted average of an entire zone exceeds those minimum cutoffs. In areas where the entire zone does not meet the minimum criteria, some oil shale intervals of significant thicknesses could exist within the zone that exceed these minimum cutoffs. For example, a 30-ft interval within an oil shale zone might exceed 25 GPT but if the entire zone averages less than 25 GPT, these resources are not included in the 15 and 25 GPT subtotals, although they might be exploited in the future.
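The weighted-average cutoff logic described above can be sketched as follows (hypothetical thicknesses and grades; the 30-ft rich interval mirrors the example in the abstract):

```python
def zone_average_gpt(thicknesses_ft, grades_gpt):
    """Thickness-weighted average oil yield (gallons per ton) of a zone."""
    total = sum(thicknesses_ft)
    return sum(t * g for t, g in zip(thicknesses_ft, grades_gpt)) / total

def counts_toward_subtotal(thicknesses_ft, grades_gpt, cutoff_gpt):
    """A zone is included in a subtotal only if its weighted average
    exceeds the cutoff, even when an interval inside it is richer."""
    return zone_average_gpt(thicknesses_ft, grades_gpt) >= cutoff_gpt

# A 30-ft interval at 28 GPT inside a leaner 100-ft zone averages 21 GPT,
# so the zone is excluded from the 25 GPT subtotal:
included = counts_toward_subtotal([30, 70], [28, 18], 25)
```

This shows how resources that locally exceed a cutoff can still fall outside the 15 and 25 GPT subtotals.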
Code of Federal Regulations, 2010 CFR
2010-10-01
... Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to Each Catcher... Allocation and Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to... threshold amount of 13,516 Column H Percent used to calculate IPA minimum participation Vessel name USCG...
Cost effectiveness of the US Geological Survey's stream-gaging program in New York
Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.
1986-01-01
The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost-effective means of obtaining streamflow data. This report describes the stream-gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages currently operated (1983). Those gages, as well as 189 crest-stage, stage-only, and groundwater gages, are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Current operation of the 363-station program requires a budget of $1.068 million/yr. The average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for the 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operate the 363-gage program; a smaller budget would not permit proper servicing and maintenance of the gages and recorders. Under a minimum budget, the average standard error would rise to 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author's abstract)
On the Importance of Cycle Minimum in Sunspot Cycle Prediction
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1996-01-01
The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, inferring that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.
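The amplitude-period effect is, in essence, a negative correlation between a cycle's length and the next cycle's maximum amplitude; a least-squares sketch with made-up cycle data (not the historical sunspot record):

```python
def fit_line(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Made-up pairs of (cycle length in months, next cycle's maximum
# smoothed sunspot number), illustrating the amplitude-period effect:
lengths = [126, 140, 122, 119, 135, 124]
next_max = [152, 106, 158, 164, 119, 151]
a, b = fit_line(lengths, next_max)   # b < 0 encodes the effect
predicted = a + b * 123              # a shorter-than-average 123-month cycle
```

A negative slope reproduces the rule that cycles of shorter-than-average length are usually followed by larger-than-average maxima, as inferred for cycle 23 from cycle 22's ~123-month length.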
Chadsuthi, Sudarat; Iamsirithaworn, Sopon; Triampo, Wannapong; Modchang, Charin
2015-01-01
Influenza is a worldwide respiratory infectious disease that easily spreads from one person to another. Previous research has found that the influenza transmission process is often associated with climate variables. In this study, we used autocorrelation and partial autocorrelation plots to determine the appropriate autoregressive integrated moving average (ARIMA) model for influenza transmission in the central and southern regions of Thailand. The relationships between reported influenza cases and the climate data, such as the amount of rainfall, average temperature, average maximum relative humidity, average minimum relative humidity, and average relative humidity, were evaluated using cross-correlation function. Based on the available data of suspected influenza cases and climate variables, the most appropriate ARIMA(X) model for each region was obtained. We found that the average temperature correlated with influenza cases in both central and southern regions, but average minimum relative humidity played an important role only in the southern region. The ARIMAX model that includes the average temperature with a 4-month lag and the minimum relative humidity with a 2-month lag is the appropriate model for the central region, whereas including the minimum relative humidity with a 4-month lag results in the best model for the southern region.
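The lag search with the cross-correlation function can be sketched as follows (synthetic series standing in for the influenza counts and climate covariates):

```python
def cross_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t - lag] (y leads x)."""
    if lag > 0:
        x, y = x[lag:], y[:-lag]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Synthetic data: cases echo a climate driver 4 steps later,
# so the cross-correlation peaks at lag 4:
driver = [0, 1, 0, 0, 2, 0, 1, 0, 0, 3, 0, 0, 1, 0, 2, 0]
cases = [0, 0, 0, 0] + driver[:-4]
best_lag = max(range(1, 7), key=lambda k: cross_correlation(cases, driver, k))
```

The lag that maximizes the cross-correlation is the one carried into the ARIMAX model as the exogenous-regressor lag (4 months for temperature in the central region of the study).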
Neutron Spectroscopic Factors from Transfer Reactions
NASA Astrophysics Data System (ADS)
Lee, Jenny; Tsang, M. B.
2007-05-01
We have extracted the ground-state to ground-state neutron spectroscopic factors for 80 nuclei ranging in Z from 3 to 24 by analyzing past measurements of the angular distributions from (d,p) and (p,d) reactions. We demonstrate an approach that provides systematic and consistent values with a minimum of assumptions. A three-body model with global optical potentials and a standard geometry for the neutron potential is applied. For the 60 nuclei where modern shell model calculations are available, this analysis reproduces the experimental spectroscopic factors for most nuclei to within 20%. If we constrain the nucleon-target optical potential and the geometry of the bound neutron wave function with modern Hartree-Fock calculations, our deduced neutron spectroscopic factors are reduced by 30% on average.
Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...
2016-10-18
In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] samples run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities.
We must emphasize the importance of the training and good analytical procedures needed to generate these data. As a result, when combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns in India for the period 1981-2015 through a suitable seasonal autoregressive integrated moving average (SARIMA) model. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true of the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model was chosen by examining the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model is selected for both the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are estimated by maximum likelihood, with standard errors computed from the residuals. The adequacy of the selected model is assessed by correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and by normality diagnostics (kernel and normal density curves over the histogram and a Q-Q plot). Finally, forecasts of the monthly maximum and minimum temperature patterns of India for the next 3 years are presented using the selected model.
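The seasonal part of the selected model (D = 1, s = 12) can be illustrated as follows (synthetic monthly series, not the Indian temperature data):

```python
import math

def seasonal_difference(series, period=12):
    """Seasonal differencing (the D = 1, s = 12 component of SARIMA):
    subtract the observation one seasonal period earlier."""
    return [series[i] - series[i - period] for i in range(period, len(series))]

# A pure 12-month cycle differences away to (numerically) zero,
# leaving only the non-seasonal structure for the ARMA terms to model:
monthly = [25 + 10 * math.sin(2 * math.pi * m / 12) for m in range(48)]
flat = seasonal_difference(monthly)
```

After this step, the remaining AR(1) and seasonal MA(1) terms of the (1, 0, 0) × (0, 1, 1)₁₂ model describe what is left of the differenced series.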
Variability of fractal dimension of solar radio flux
NASA Astrophysics Data System (ADS)
Bhatt, Hitaishi; Sharma, Som Kumar; Trivedi, Rupal; Vats, Hari Om
2018-04-01
In the present communication, the variation of the fractal dimension of solar radio flux is reported. Daily solar radio flux observations at 410, 1415, 2695, 4995, and 8800 MHz are used in this study. The data were recorded at Learmonth Solar Observatory, Australia, from 1988 to 2009, covering an epoch of two solar activity cycles (22 yr). The fractal dimension is calculated for the listed frequencies over this period. The fractal dimension, being a measure of randomness, represents the variability of solar radio flux at shorter time-scales. A contour plot of fractal dimension on a grid of years versus radio frequency suggests a high correlation with solar activity. The increase of fractal dimension with frequency suggests that randomness increases towards the inner corona. This study also shows that the low frequencies are more affected by solar activity (at low frequency the fractal dimension difference between solar maximum and solar minimum is 0.42), whereas the higher frequencies are less affected (there the difference is 0.07). A good positive correlation is found between the fractal dimension averaged over all frequencies and the yearly averaged sunspot number (Pearson's coefficient is 0.87).
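One common estimator of the fractal dimension of a daily flux series is Higuchi's method; a minimal sketch follows (the paper does not state which estimator was used, so this choice is an assumption):

```python
import math
import random

def higuchi_fd(x, kmax=8):
    """Higuchi's estimate of the fractal dimension of a time series,
    a standard proxy for short-time-scale randomness (1 = smooth,
    values approaching 2 = noise-like)."""
    n = len(x)
    pts = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            sub = x[m::k]
            if len(sub) < 2:
                continue
            raw = sum(abs(sub[i + 1] - sub[i]) for i in range(len(sub) - 1))
            # Higuchi normalization for the unequal subsequence lengths
            lengths.append(raw * (n - 1) / ((len(sub) - 1) * k) / k)
        pts.append((math.log(1.0 / k), math.log(sum(lengths) / len(lengths))))
    # slope of log L(k) against log(1/k) is the dimension estimate
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    return (sum((p[0] - mx) * (p[1] - my) for p in pts)
            / sum((p[0] - mx) ** 2 for p in pts))

random.seed(1)
d_line = higuchi_fd([0.01 * i for i in range(500)])          # smooth ramp
d_noise = higuchi_fd([random.random() for _ in range(500)])  # white noise
```

A smooth series yields a dimension near 1 and white noise approaches 2, matching the paper's reading of higher fractal dimension as greater randomness.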
The structure of the NO(X (2)Pi)-N(2) complex: A joint experimental-theoretical study.
Wen, B; Meyer, H; Kłos, J
2010-04-21
We report the first measurement of the spectrum of the NO-N(2) complex in the region of the first vibrational NO overtone transition. The origin band of the complex is blueshifted by 0.30 cm(-1) from the corresponding NO monomer frequency. The observed spectrum consists of three bands assigned to the origin band, the excitation of one quantum of z-axis rotation, and one associated hot band. The spacing of the bands and the rotational structure indicate a T-shaped vibrationally averaged structure with the NO molecule forming the top of the T. These findings are confirmed by high level ab initio calculations of the potential energy surfaces in planar symmetry. The deepest minimum is found for a T-shaped geometry on the A″ surface. As a result, the sum potential also has its global minimum for this structure. The different potential surfaces show several additional local minima at slightly higher energies, indicating that the complex most likely performs large amplitude motion even in its ground vibrational state. Nevertheless, as suggested by the measured spectra, the complex must, on average, spend a substantial amount of time near the T-shaped configuration.
Transition path time distributions for Lévy flights
NASA Astrophysics Data System (ADS)
Janakiraman, Deepika
2018-07-01
This paper presents a study of transition path time distributions for Lévy noise-induced barrier crossing. Transition paths are the short segments of reactive trajectories that span the barrier region of the potential without spilling into the reactant/product wells; the time taken to traverse such a segment is the transition path time. Since the transition path is devoid of excursions in the minimum, the corresponding time gives the exclusive barrier crossing time. This work analytically explores the distribution of transition path times for superdiffusive barrier crossing, which is made possible by approximating the barrier by an inverted parabola. Using this approximation, the distributions are evaluated in both the over- and under-damped limits of friction. The short-time behaviour of the distributions provides analytical evidence for single-step transition events, a feature of Lévy barrier crossing observed in prior simulation studies. The average transition path time is calculated as a function of the Lévy index (α), and the optimal value of α leading to the minimum average transition path time is discussed in both limits of friction. Langevin dynamics simulations corroborating the analytical results are also presented.
The influence of climate variables on dengue in Singapore.
Pinto, Edna; Coelho, Micheline; Oliver, Leuda; Massad, Eduardo
2011-12-01
In this work we correlated dengue cases with climatic variables for the city of Singapore. This was done through a Poisson Regression Model (PRM) that treats dengue cases as the dependent variable and the climatic variables (rainfall, maximum and minimum temperature, and relative humidity) as independent variables. We also used Principal Components Analysis (PCA) to choose the variables that influence the increase in the number of dengue cases in Singapore, where PC₁ (principal component 1) is represented by temperature and rainfall and PC₂ (principal component 2) is represented by relative humidity. We calculated the probability of occurrence of new dengue cases and the relative risk of dengue cases under the influence of each climatic variable. Based on an analysis of the time series of maximum and minimum temperature, the months from July to September showed the highest probabilities of occurrence of new cases of the disease throughout the year. An interesting result was that for every 2-10°C of variation in the maximum temperature, there was an average increase of 22.2-184.6% in the number of dengue cases. For the same variation in the minimum temperature, there was an average increase of 26.1-230.3% in the number of dengue cases from April to August. After correlation analysis, precipitation and relative humidity were excluded from the Poisson Regression Model because they did not correlate well with dengue cases. Additionally, the relative risk of occurrence of cases of the disease under the influence of temperature variation ranged from 1.2 to 2.8 for maximum temperature and from 1.3 to 3.3 for minimum temperature. Temperature (maximum and minimum) was therefore the best predictor of the increase in dengue cases in Singapore.
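A minimal one-covariate Poisson Regression Model can be sketched as follows (synthetic counts, not the Singapore data; the Newton-Raphson fit and the centring of the covariate are our implementation choices):

```python
import math

def poisson_regression(x, y, iters=25):
    """Fit log E[y] = b0 + b1 * (x - mean(x)) by Newton-Raphson.
    Centring x and starting from the mean rate keeps the steps stable."""
    n = len(x)
    xbar = sum(x) / n
    xc = [xi - xbar for xi in x]
    b0, b1 = math.log(sum(y) / n), 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in xc]
        # gradient and Hessian of the Poisson log-likelihood
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, xc))
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, xc))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, xc))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Synthetic counts that double for every 2 degrees of warming:
temps = [24, 26, 28, 30, 32]
cases = [10, 20, 40, 80, 160]
_, b1 = poisson_regression(temps, cases)
rr_per_degree = math.exp(b1)   # relative risk per 1-degree increase
```

The exponentiated slope is the relative risk per unit of the covariate, the same quantity the study reports for temperature.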
Entropy-Based Registration of Point Clouds Using Terrestrial Laser Scanning and Smartphone GPS.
Chen, Maolin; Wang, Siying; Wang, Mingwei; Wan, Youchuan; He, Peipei
2017-01-20
Automatic registration of terrestrial laser scanning point clouds is a crucial but unresolved topic of great interest in many domains. This study combines a terrestrial laser scanner with a smartphone for the coarse registration of leveled point clouds with small roll and pitch angles and height differences, a novel sensor combination mode for terrestrial laser scanning. The approximate distance between two neighboring scan positions is first calculated from smartphone GPS coordinates. Then, 2D distribution entropy is used to measure the distribution coherence between the two scans and to search for the optimal initial transformation parameters. To this end, we propose a method called Iterative Minimum Entropy (IME) that corrects the initial transformation parameters based on two criteria: the difference between the average and minimum entropy, and the deviation of the minimum entropy from the expected entropy. Finally, the presented method is evaluated using two data sets that contain tens of millions of points from panoramic and non-panoramic, vegetation-dominated and building-dominated cases, and achieves high accuracy and efficiency.
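The 2D distribution-entropy idea can be sketched as follows (our own minimal grid-binning formulation; the paper's exact entropy definition may differ):

```python
import math

def grid_entropy_2d(points, cell=1.0):
    """Shannon entropy of a point set's (x, y) distribution on a regular
    grid.  Well-aligned scans pile points into the same cells, giving
    fewer occupied cells and lower entropy; misalignment spreads them."""
    counts = {}
    for x, y in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        counts[key] = counts.get(key, 0) + 1
    n = len(points)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

scan = [(0.2, 0.3), (0.4, 0.1), (1.5, 1.6), (1.7, 1.8), (2.2, 0.4)]
e_aligned = grid_entropy_2d(scan + scan)                          # overlap
e_shifted = grid_entropy_2d(scan + [(x + 3, y + 3) for x, y in scan])
```

Searching transformation parameters for the minimum of this entropy is the intuition behind the IME correction step.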
Effect of tank geometry on its average performance
NASA Astrophysics Data System (ADS)
Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.
2018-03-01
The paper presents a mathematical model of the non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride. The average productivity, heat-exchange area, and filling time are calculated for tanks of various volumes with smooth inner walls as a function of their height-to-radius (H/R) ratio, and the average productivity, degree of filling, and filling time of a horizontally ribbed tank of volume 6×10⁻² m³ are calculated as a function of the central hole diameter of the ribs. It is shown that increasing the H/R ratio of tanks with smooth inner walls up to the limiting values significantly increases the average productivity and reduces the filling time. Increasing the H/R ratio of a 1.0 m³ tank to the limiting value (in comparison with a standard tank having H/R = 3.49) raises the average productivity by 23.5% and the heat-exchange area by 20%. We also show that the maximum average productivity and minimum filling time of the 6×10⁻² m³ tank are reached at a central rib hole diameter of 6.4×10⁻² m.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suckling, Tara; Smith, Tony; Reed, Warren
2013-06-15
Optimal arterial opacification is crucial in imaging the pulmonary arteries using computed tomography (CT). This poses the challenge of precisely timing data acquisition to coincide with the transit of the contrast bolus through the pulmonary vasculature. The aim of this quality assurance exercise was to investigate whether a change in CT pulmonary angiography (CTPA) scanning protocol resulted in improved opacification of the pulmonary arteries. Comparison was made between the smart prep protocol (SPP) and the test bolus protocol (TBP) for opacification in the pulmonary trunk. A total of 160 CTPA examinations (80 using each protocol) performed between January 2010 and February 2011 were assessed retrospectively. CT attenuation coefficients were measured in Hounsfield units (HU) using regions of interest at the level of the pulmonary trunk. The average pixel value, standard deviation (SD), maximum, and minimum were recorded. For each of these variables a mean value was then calculated and compared between the two CTPA protocols. Minimum opacification of 200 HU was achieved in 98% of the TBP sample but only 90% of the SPP sample. The average CT attenuation over the pulmonary trunk for the SPP was 329 (SD = ±21) HU, whereas for the TBP it was 396 (SD = ±22) HU (P = 0.0017). The TBP also recorded higher maximum (P = 0.0024) and minimum (P = 0.0039) levels of opacification. This study found that the TBP resulted in significantly better opacification of the pulmonary trunk than the SPP.
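The per-examination ROI summary statistics compared in this audit can be sketched as follows (synthetic HU readings; applying the 200 HU criterion to the ROI mean is our assumption about how "minimum opacification" was judged):

```python
def roi_stats(hu):
    """Mean, SD, maximum, and minimum CT attenuation (HU) in a region
    of interest, as recorded over the pulmonary trunk."""
    n = len(hu)
    mean = sum(hu) / n
    sd = (sum((v - mean) ** 2 for v in hu) / (n - 1)) ** 0.5
    return {"mean": mean, "sd": sd, "max": max(hu), "min": min(hu)}

def adequately_opacified(hu, threshold_hu=200):
    """200 HU criterion; applying it to the ROI mean is an assumption."""
    return roi_stats(hu)["mean"] >= threshold_hu

stats = roi_stats([370, 396, 422])   # hypothetical TBP-like readings
ok = adequately_opacified([370, 396, 422])
```

Averaging such per-examination statistics across each protocol's 80 cases gives the 329 HU vs. 396 HU comparison reported above.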
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample... per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method... appendix A of this part) Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour...
Calculated photonic structures for infrared emittance control
NASA Astrophysics Data System (ADS)
Rung, Andreas; Ribbing, Carl G.
2002-06-01
Using an available program package based on the transfer-matrix method, we calculated the photonic band structure for two different structures: a quasi-three-dimensional crystal of square air rods in a high-index matrix and an opal structure of high-index spheres in a matrix of low index, epsilon = 1.5. The high index used is representative of gallium arsenide in the thermal infrared range. The geometric parameters of the rod dimension, sphere radius, and lattice constants were chosen to give total reflectance for normal incidence, i.e., minimum thermal emittance, in either one of the two infrared atmospheric windows. For these four photonic crystals, the bulk reflectance spectra and the wavelength-averaged thermal emittance as a function of crystal thickness were calculated. The results reveal that potentially useful thermal signature suppression is obtained for crystals as thin as 20-50 μm, i.e., comparable with that of a paint layer.
NASA Astrophysics Data System (ADS)
Mottram, Catherine M.; Parrish, Randall R.; Regis, Daniele; Warren, Clare J.; Argles, Tom W.; Harris, Nigel B. W.; Roberts, Nick M. W.
2015-07-01
Quantitative constraints on the rates of tectonic processes underpin our understanding of the mechanisms that form mountains. In the Sikkim Himalaya, late structural doming has revealed time-transgressive evidence of metamorphism and thrusting that permits calculation of the minimum rate of movement on a major ductile fault zone, the Main Central Thrust (MCT), by a novel methodology. U-Th-Pb monazite ages, compositions, and metamorphic pressure-temperature determinations from rocks directly beneath the MCT reveal that samples from 50 km along the transport direction of the thrust experienced similar prograde, peak, and retrograde metamorphic conditions at different times. In the southern, frontal edge of the thrust zone, the rocks were buried to conditions of 550°C and 0.8 GPa between 21 and 18 Ma along the prograde path. Peak metamorphic conditions of 650°C and 0.8-1.0 GPa were subsequently reached as this footwall material was underplated to the hanging wall at 17-14 Ma. This same process occurred at analogous metamorphic conditions between 18-16 Ma and 14.5-13 Ma in the midsection of the thrust zone and between 13 Ma and 12 Ma in the northern, rear edge of the thrust zone. Northward younging muscovite ⁴⁰Ar/³⁹Ar ages are consistently 4 Ma younger than the youngest monazite ages for equivalent samples. By combining the geochronological data with the >50 km minimum distance separating samples along the transport axis, a minimum average thrusting rate of 10 ± 3 mm yr⁻¹ can be calculated. This provides a minimum constraint on the amount of Miocene India-Asia convergence that was accommodated along the MCT.
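The quoted rate follows from simple arithmetic on the reported separation and ages; the 5 Myr span below is an assumption read off the quoted underplating intervals (taking the widest span, 17 Ma at the front to 12 Ma at the rear, gives the minimum rate).

```python
# Minimum average thrusting rate: samples >= 50 km apart along transport
# were underplated at 17-14 Ma (front) vs. 13-12 Ma (rear); taking the
# widest plausible age span, 17 Ma to 12 Ma = 5 Myr, yields the minimum rate.
distance_km = 50.0        # minimum separation along the transport axis
time_offset_myr = 5.0     # assumed age span between front and rear

rate_mm_per_yr = (distance_km * 1e6) / (time_offset_myr * 1e6)  # 1 km = 1e6 mm
print(rate_mm_per_yr)  # 10.0, consistent with the quoted 10 ± 3 mm/yr
```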
Survey of Occupational Noise Exposure in CF Personnel in Selected High-Risk Trades
2003-11-01
Peak, maximum level, minimum level, average sound level, time-weighted average, dose, projected 8-hour dose, and upper limit time were measured for ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
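A minimal sketch of the minimum-variance (maximum-likelihood under a multivariate normal) combination of correlated estimates of one quantity, in the spirit of the method described; the estimates and covariance values below are illustrative, not from the TRX benchmark calculations.

```python
import numpy as np

def ml_combine(estimates, cov):
    """Minimum-variance (maximum-likelihood under a multivariate normal)
    combination of correlated estimates of a single quantity."""
    cov = np.asarray(cov, dtype=float)
    ones = np.ones(len(estimates))
    ci1 = np.linalg.solve(cov, ones)        # Sigma^-1 @ 1
    weights = ci1 / (ones @ ci1)            # weights sum to 1
    mean = weights @ np.asarray(estimates, dtype=float)
    var = 1.0 / (ones @ ci1)                # variance of the combined estimate
    return mean, var

# Two correlated eigenvalue estimates (illustrative numbers)
est = [1.002, 0.998]
cov = [[4e-6, 1e-6],
       [1e-6, 9e-6]]
mean, var = ml_combine(est, cov)
```

The combined variance is never larger than that of the best individual estimate, which is the variance-reduction effect the abstract reports.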
An Effective Evolutionary Approach for Bicriteria Shortest Path Routing Problems
NASA Astrophysics Data System (ADS)
Lin, Lin; Gen, Mitsuo
The routing problem is one of the important research issues in communication network fields. In this paper, we consider a bicriteria shortest path routing (bSPR) model dedicated to calculating nondominated paths for (1) the minimum total cost and (2) the minimum transmission delay. To solve this bSPR problem, we propose a new multiobjective genetic algorithm (moGA): (1) an efficient chromosome representation using the priority-based encoding method; (2) a new GA parameter auto-tuning operator, which adaptively regulates exploration and exploitation based on the change in the average fitness of parents and offspring at each generation; and (3) an interactive adaptive-weight fitness assignment mechanism that assigns weights to each objective and combines the weighted objectives into a single objective function. Numerical experiments with various scales of network design problems show the effectiveness and efficiency of our approach in comparison with recent studies.
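The notion of nondominated paths for the two objectives can be illustrated with a simple Pareto filter; this is only a sketch of the dominance concept, not the paper's moGA (which adds priority-based encoding and adaptive weighting on top of it), and the candidate values are hypothetical.

```python
def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in one."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_front(paths):
    """Keep only nondominated (total_cost, transmission_delay) pairs."""
    return [p for p in paths if not any(dominates(q, p) for q in paths)]

# Hypothetical candidate paths as (cost, delay) pairs
candidates = [(10, 5), (8, 7), (12, 3), (9, 6), (11, 6)]
front = pareto_front(candidates)   # (11, 6) is dominated and removed
```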
WEAMR — A Weighted Energy Aware Multipath Reliable Routing Mechanism for Hotline-Based WSNs
Tufail, Ali; Qamar, Arslan; Khan, Adil Mehmood; Baig, Waleed Akram; Kim, Ki-Hyung
2013-01-01
Reliable source-to-sink communication is the most important factor for an efficient routing protocol, especially in the domains of military, healthcare, and disaster recovery applications. We present weighted energy aware multipath reliable routing (WEAMR), a novel energy-aware multipath routing protocol which utilizes hotline-assisted routing to meet such requirements for mission-critical applications. The protocol reduces the average number of hops from source to destination and provides unmatched reliability compared to well-known reactive ad hoc protocols, i.e., AODV and AOMDV. Our protocol makes efficient use of network paths based on weighted cost calculation and intelligently selects the best possible paths for data transmissions. The path cost calculation considers the end-to-end number of hops, latency, and the minimum energy node value in the path. In case of path failure, path recalculation is done efficiently with minimum latency and control packet overhead. Our evaluation shows that our proposal provides better end-to-end delivery with less routing overhead and a higher packet delivery success ratio compared to AODV and AOMDV. The use of multipath also increases the overall lifetime of the WSN by using optimum-energy available paths between sender and receiver. PMID:23669714
Deeth, Robert J
2008-08-04
A general molecular mechanics method is presented for modeling the symmetric bidentate, asymmetric bidentate, and bridging modes of metal-carboxylates with a single parameter set by using a double-minimum M-O-C angle-bending potential. The method is implemented within the Molecular Operating Environment (MOE) with parameters based on the Merck molecular force field, although, with suitable modifications, other MM packages and force fields could easily be used. Parameters for high-spin d⁵ manganese(II) bound to carboxylate and water plus amine, pyridyl, imidazolyl, and pyrazolyl donors are developed based on 26 mononuclear and 29 dinuclear crystallographically characterized complexes. The average rmsd for Mn-L distances is 0.08 Å, which is comparable to the experimental uncertainty required to cover multiple binding modes, and the average rmsd in heavy atom positions is around 0.5 Å. In all cases, whatever binding mode is reported is also computed to be a stable local minimum. In addition, the structure-based parametrization implicitly captures the energetics and gives the same relative energies of symmetric and asymmetric coordination modes as density functional theory calculations in model and "real" complexes. Molecular dynamics simulations show that carboxylate rotation is favored over "flipping", while a stochastic search algorithm is described for randomly searching conformational space. The model reproduces Mn-Mn distances in dinuclear systems especially accurately, and this feature is employed to illustrate how MM calculations on models for the dimanganese active site of methionine aminopeptidase can help determine some of the details which may be missing from the experimental structure.
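One way to realize a double-minimum angle-bending term is a quartic double well; this functional form, the well positions, and the force constant below are all illustrative assumptions, not necessarily the potential implemented in MOE.

```python
import math

def double_min_bend(theta, theta1, theta2, k):
    """Quartic double-well M-O-C angle-bending term (in radians) with minima
    at theta1 and theta2. Illustrative functional form only; the actual
    potential used in the paper may differ."""
    return k * (theta - theta1) ** 2 * (theta - theta2) ** 2

# Hypothetical minima for the two carboxylate binding geometries
t1, t2 = math.radians(90.0), math.radians(120.0)
e_barrier = double_min_bend(math.radians(105.0), t1, t2, k=100.0)  # between wells
```

A single term like this lets one parameter set stabilize both binding modes, with a finite barrier between them, which is the qualitative behavior the abstract describes.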
Code of Federal Regulations, 2014 CFR
2014-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
Code of Federal Regulations, 2013 CFR
2013-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
12 CFR Appendix M1 to Part 1026 - Repayment Disclosures
Code of Federal Regulations, 2012 CFR
2012-01-01
... terms of a cardholder's account that will expire in a fixed period of time, as set forth by the card... estimates. (1) Minimum payment formulas. When calculating the minimum payment repayment estimate, card... calculate the minimum payment amount for special purchases, such as a “club plan purchase.” Also, assume...
12 CFR Appendix M1 to Part 1026 - Repayment Disclosures
Code of Federal Regulations, 2013 CFR
2013-01-01
... terms of a cardholder's account that will expire in a fixed period of time, as set forth by the card... estimates. (1) Minimum payment formulas. When calculating the minimum payment repayment estimate, card... calculate the minimum payment amount for special purchases, such as a “club plan purchase.” Also, assume...
Unraveling the relationship between arterial flow and intra-aneurysmal hemodynamics.
Morales, Hernán G; Bonnefous, Odile
2015-02-26
Arterial flow rate affects intra-aneurysmal hemodynamics, but the nature of this relationship is unclear. This uncertainty hinders comparison among studies, including clinical evaluations such as pre- and post-treatment assessments, since arterial flow rates may differ at each acquisition. The purposes of this work are as follows: (1) to study how intra-aneurysmal hemodynamics changes within the full physiological range of arterial flow rates; (2) to provide characteristic curves of intra-aneurysmal velocity, wall shear stress (WSS), and pressure as functions of the arterial flow rate. Fifteen image-based aneurysm models were studied using computational fluid dynamics (CFD) simulations. The full range of physiological arterial flow rates reported in the literature was covered by 11 pulsatile simulations. For each aneurysm, the spatiotemporal-averaged blood flow velocity, WSS, and pressure were calculated. Spatiotemporal-averaged velocity inside the aneurysm increases linearly as a function of the mean arterial flow (minimum R² > 0.963). Spatiotemporal-averaged WSS and pressure at the aneurysm wall can be represented by quadratic functions of the arterial flow rate (minimum R² > 0.996). Quantitative characterizations of spatiotemporal-averaged velocity, WSS, and pressure inside cerebral aneurysms can thus be obtained with respect to the arterial flow rate. These characteristic curves provide more information on the relationship between arterial flow and aneurysm hemodynamics since the full range of arterial flow rates is considered. Having these curves, it is possible to compare experimental studies and clinical evaluations when different flow conditions are used. Copyright © 2015 Elsevier Ltd. All rights reserved.
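A characteristic curve of the kind described can be built in miniature by fitting a quadratic to spatiotemporal-averaged WSS versus flow rate; the data points below are synthetic, not from the fifteen aneurysm models.

```python
import numpy as np

# Hypothetical spatiotemporal-averaged WSS (Pa) at several mean arterial
# flow rates (mL/s); values are synthetic, generated from 0.4*q^2 + 0.1*q + 0.1.
q = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
wss = np.array([0.6, 1.9, 4.0, 6.9, 10.6])

a, b, c = np.polyfit(q, wss, 2)   # characteristic curve: wss ~ a*q^2 + b*q + c

def wss_at(flow):
    """Evaluate the fitted characteristic curve at a given flow rate."""
    return a * flow ** 2 + b * flow + c
```

Once such a curve is fitted per aneurysm, WSS measured under different inflow conditions can be normalized to a common flow rate for comparison, which is the use case the abstract motivates.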
40 CFR 61.356 - Recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... test protocol and the means by which sampling variability and analytical variability were accounted for... also establish the design minimum and average temperature in the combustion zone and the combustion... the design minimum and average temperatures across the catalyst bed inlet and outlet. (C) For a boiler...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendes, Milrian S.; Felinto, Daniel
2011-12-15
We analyze the efficiency and scalability of the Duan-Lukin-Cirac-Zoller (DLCZ) protocol for quantum repeaters focusing on the behavior of the experimentally accessible measures of entanglement for the system, taking into account crucial imperfections of the stored entangled states. We calculate then the degradation of the final state of the quantum-repeater linear chain for increasing sizes of the chain, and characterize it by a lower bound on its concurrence and the ability to violate the Clauser-Horne-Shimony-Holt inequality. The states are calculated up to an arbitrary number of stored excitations, as this number is not fundamentally bound for experiments involving large atomic ensembles. The measurement by avalanche photodetectors is modeled by "ON/OFF" positive operator-valued measure operators. As a result, we are able to consistently test the approximation of the real fields by fields with a finite number of excitations, determining the minimum number of excitations required to achieve a desired precision in the prediction of the various measured quantities. This analysis finally determines the minimum purity of the initial state that is required to succeed in the protocol as the size of the chain increases. We also provide a more accurate estimate for the average time required to succeed in each step of the protocol. The minimum purity analysis and the new time estimates are then combined to trace the perspectives for implementation of the DLCZ protocol in present-day laboratory setups.
NASA Astrophysics Data System (ADS)
Verkhoglyadova, O. P.; Tsurutani, B. T.; Mannucci, A. J.; Mlynczak, M. G.; Hunt, L. A.; Runge, T.
2013-02-01
We study solar wind-ionosphere coupling through the late declining phase/solar minimum and geomagnetic minimum phases during the last solar cycle (SC23) - 2008 and 2009. This interval was characterized by sequences of high-speed solar wind streams (HSSs). The concomitant geomagnetic response was moderate geomagnetic storms and high-intensity, long-duration continuous auroral activity (HILDCAA) events. The JPL Global Ionospheric Map (GIM) software and the GPS total electron content (TEC) database were used to calculate the vertical TEC (VTEC) and estimate daily averaged values in separate latitude and local time ranges. Our results show distinct low- and mid-latitude VTEC responses to HSSs during this interval, with the low-latitude daytime daily averaged values increasing by up to 33 TECU (annual average of ~20 TECU) near local noon (12:00 to 14:00 LT) in 2008. In 2009 during the minimum geomagnetic activity (MGA) interval, the response to HSSs was a maximum of ~30 TECU increases with a slightly lower average value than in 2008. There was a weak nighttime ionospheric response to the HSSs. A well-studied solar cycle declining phase interval, 10-22 October 2003, was analyzed for comparative purposes, with daytime low-latitude VTEC peak values of up to ~58 TECU (event average of ~55 TECU). The ionospheric VTEC changes during 2008-2009 were similar but ~60% less intense on average. There is evidence of correlations of filtered daily averaged VTEC data with the Ap index and solar wind speed. We use the infrared NO and CO2 emission data obtained with SABER on TIMED as a proxy for the radiation balance of the thermosphere. It is shown that infrared emissions increase during HSS events, possibly due to increased energy input into the auroral region associated with HILDCAAs.
The 2008-2009 HSS intervals were ~85% less intense than the 2003 early declining phase event, with annual averages of daily infrared NO emission power of ~3.3 × 10¹⁰ W and ~2.7 × 10¹⁰ W in 2008 and 2009, respectively. The roles of disturbance dynamos caused by high-latitude winds (due to particle precipitation and Joule heating in the auroral zones) and of prompt penetrating electric fields (PPEFs) in the solar wind-ionosphere coupling during these intervals are discussed. A correlation between geoeffective interplanetary electric field components and HSS intervals is shown. Both PPEF and disturbance dynamo mechanisms could play important roles in solar wind-ionosphere coupling during prolonged (up to days) external driving within HILDCAA intervals.
40 CFR 63.1365 - Test methods and initial compliance procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... design minimum and average temperature in the combustion zone and the combustion zone residence time. (B... establish the design minimum and average flame zone temperatures and combustion zone residence time, and... carbon bed temperature after regeneration, design carbon bed regeneration time, and design service life...
Novel mathematical algorithm for pupillometric data analysis.
Canver, Matthew C; Canver, Adam C; Revere, Karen E; Amado, Defne; Bennett, Jean; Chung, Daniel C
2014-01-01
Pupillometry is used clinically to evaluate retinal and optic nerve function by measuring pupillary response to light stimuli. We have developed a mathematical algorithm to automate and expedite the analysis of non-filtered, non-calculated pupillometric data obtained from mouse pupillary light reflex recordings, i.e., dynamic pupillary diameter recordings following exposure to varying light intensities. The non-filtered, non-calculated pupillometric data are filtered through a low-pass finite impulse response (FIR) filter. Thresholding is used to remove data caused by eye blinking, loss of pupil tracking, and/or head movement. Twelve physiologically relevant parameters were extracted from the collected data: (1) baseline diameter, (2) minimum diameter, (3) response amplitude, (4) re-dilation amplitude, (5) percent of baseline diameter, (6) response time, (7) re-dilation time, (8) average constriction velocity, (9) average re-dilation velocity, (10) maximum constriction velocity, (11) maximum re-dilation velocity, and (12) onset latency. No significant differences were noted between parameters derived from algorithm-calculated values and manually derived results (p ≥ 0.05). This mathematical algorithm will expedite endpoint data derivation and eliminate human error in the manual calculation of pupillometric parameters from non-filtered, non-calculated pupillometric values. Subsequently, these values can be used as reference metrics for characterizing the natural history of retinal disease. Furthermore, it will be instrumental in the assessment of functional visual recovery in humans and pre-clinical models of retinal degeneration and optic nerve disease following pharmacological or gene-based therapies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
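A minimal sketch of the described pipeline (low-pass FIR smoothing, then a subset of the twelve parameters); the moving-average taps, window lengths, sampling rate, and synthetic trace below are assumptions, and the blink-thresholding step is omitted.

```python
import numpy as np

def analyze_pupil(diam, fs):
    """Low-pass FIR smoothing of a pupil-diameter trace, then a subset of
    the twelve parameters. Taps and window lengths are illustrative."""
    taps = np.ones(5) / 5.0                       # 5-tap moving-average FIR
    padded = np.pad(diam, 2, mode="edge")         # avoid zero-padding edge dips
    smooth = np.convolve(padded, taps, mode="valid")
    baseline = float(smooth[: int(0.5 * fs)].mean())  # pre-stimulus average
    i_min = int(np.argmin(smooth))
    minimum = float(smooth[i_min])
    return {
        "baseline": baseline,
        "minimum": minimum,
        "response_amplitude": baseline - minimum,
        "response_time_s": i_min / fs,
        "percent_of_baseline": 100.0 * minimum / baseline,
    }

fs = 10.0  # Hz, assumed sampling rate
trace = np.array([4.0] * 10 + [3.0, 2.5, 2.2, 2.2, 2.5, 3.0] + [4.0] * 10)
params = analyze_pupil(trace, fs)
```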
Advancements in dynamic kill calculations for blowout wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouba, G.E.; MacDougall, G.R.; Schumacher, B.W.
1993-09-01
This paper addresses the development, interpretation, and use of dynamic kill equations. To this end, three simple calculation techniques are developed for determining the minimum dynamic kill rate. Two techniques contain only single-phase calculations and are independent of reservoir inflow performance. Despite these limitations, these two methods are useful for bracketing the minimum flow rates necessary to kill a blowing well. For the third technique, a simplified mechanistic multiphase-flow model is used to determine a most-probable minimum kill rate.
Brassey, Charlotte A.; Margetts, Lee; Kitchener, Andrew C.; Withers, Philip J.; Manning, Phillip L.; Sellers, William I.
2013-01-01
Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σmin) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated to shaft curvature (two-tailed p = 0.03, r² = 0.56). Under bending, FEA values of maximum principal stress (σmax) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated to cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r² = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τtorsion were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations. PMID:23173199
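The beam-theory side of the comparison reduces to σ = Mc/I on an idealized cross-section; the hollow-circular geometry and applied moment below are illustrative assumptions, not values from the study.

```python
import math

def beam_bending_stress(moment, c, second_moment):
    """Classic beam theory: sigma = M*c/I at distance c from the neutral axis."""
    return moment * c / second_moment

# Idealized hollow-circular diaphysis; radii and moment are assumed values
r_out, r_in = 0.012, 0.008                       # m, outer/inner cortex radii
I = math.pi / 4.0 * (r_out ** 4 - r_in ** 4)     # second moment of area
sigma = beam_bending_stress(moment=50.0, c=r_out, second_moment=I)  # Pa
```

The study's point is that real diaphyses are neither circular nor straight, so FEA values can deviate substantially from this idealized estimate, especially under compression of curved shafts.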
NASA Astrophysics Data System (ADS)
Lee, Zoe; Baas, Andreas
2013-04-01
It is widely recognised that boundary layer turbulence plays an important role in sediment transport dynamics in aeolian environments. Improvements in the design and affordability of ultrasonic anemometers have provided significant contributions to studies of aeolian turbulence, by facilitating high frequency monitoring of three dimensional wind velocities. Consequently, research has moved beyond studies of mean airflow properties, to investigations into quasi-instantaneous turbulent fluctuations at high spatio-temporal scales. To fully understand how temporal fluctuations in shear stress drive wind erosivity and sediment transport, research into the best practice for calculating shear stress is necessary. This paper builds upon work published by Lee and Baas (2012) on the influence of streamline correction techniques on Reynolds shear stress, by investigating the time-averaging interval used in the calculation. Concerns relating to the selection of appropriate averaging intervals for turbulence research, where the data are typically non-stationary at all timescales, are well documented in the literature (e.g. Treviño and Andreas, 2000). For example, Finnigan et al. (2003) found that underestimating the required averaging interval can lead to a reduction in the calculated momentum flux, as contributions from turbulent eddies longer than the averaging interval are lost. To avoid the risk of underestimating fluxes, researchers have typically used the total measurement duration as a single averaging period. For non-stationary data, however, using the whole measurement run as a single block average is inadequate for defining turbulent fluctuations. The data presented in this paper were collected in a field study of boundary layer turbulence conducted at Tramore beach near Rosapenna, County Donegal, Ireland.
High-frequency (50 Hz) 3D wind velocity measurements were collected using ultrasonic anemometry at thirteen different heights between 0.11 and 1.62 metres above the bed. A technique for determining time-averaging intervals for a series of anemometers stacked in a close vertical array is presented. A minimum timescale is identified using spectral analysis to determine the inertial sub-range, where energy is neither produced nor dissipated but passed down to increasingly smaller scales. An autocorrelation function is then used to derive a scaling pattern between anemometer heights, which defines a series of averaging intervals of increasing length with height above the surface. Results demonstrate the effect of different averaging intervals on the calculation of Reynolds shear stress and highlight the inadequacy of using the total measurement duration as a single block average. Lee, Z. S. & Baas, A. C. W. (2012). Streamline correction for the analysis of boundary layer turbulence. Geomorphology, 171-172, 69-82. Treviño, G. and Andreas, E.L., 2000. Averaging Intervals For Spectral Analysis Of Nonstationary Turbulence. Boundary-Layer Meteorology, 95(2): 231-247. Finnigan, J.J., Clement, R., Malhi, Y., Leuning, R. and Cleugh, H.A., 2003. Re-evaluation of long-term flux measurement techniques. Part I: Averaging and coordinate rotation. Boundary-Layer Meteorology, 107(1): 1-48.
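The dependence of Reynolds shear stress on the averaging interval can be sketched by computing −⟨u′w′⟩ with fluctuations defined against block averages; the block length and the synthetic velocity signals below are assumptions, not the Tramore field data.

```python
import numpy as np

def reynolds_stress(u, w, block):
    """Kinematic Reynolds shear stress -<u'w'>, with fluctuations defined
    against block averages of `block` samples (air density omitted)."""
    n = (len(u) // block) * block            # drop the ragged tail
    ub = np.asarray(u[:n]).reshape(-1, block)
    wb = np.asarray(w[:n]).reshape(-1, block)
    up = ub - ub.mean(axis=1, keepdims=True)  # per-block fluctuations
    wp = wb - wb.mean(axis=1, keepdims=True)
    return -(up * wp).mean()

# Synthetic, negatively correlated velocity components (assumed, not field data)
rng = np.random.default_rng(0)
w_vert = rng.normal(size=5000)
u_strm = -0.3 * w_vert + rng.normal(size=5000)
tau = reynolds_stress(u_strm, w_vert, block=500)   # positive momentum flux
```

Varying `block` reproduces the paper's concern: too short an interval removes real turbulent contributions as if they were mean flow, while one block over the whole non-stationary run conflates trends with turbulence.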
Comparing exposure metrics for classifying ‘dangerous heat’ in heat wave and health warning systems
Zhang, Kai; Rood, Richard B.; Michailidis, George; Oswald, Evan M.; Schwartz, Joel D.; Zanobetti, Antonella; Ebi, Kristie L.; O’Neill, Marie S.
2012-01-01
Heat waves have been linked to excess mortality and morbidity, and are projected to increase in frequency and intensity with a warming climate. This study compares exposure metrics to trigger heat wave and health warning systems (HHWS), and introduces a novel multi-level hybrid clustering method to identify potential dangerously hot days. Two-level and three-level hybrid clustering analysis as well as common indices used to trigger HHWS, including spatial synoptic classification (SSC); and 90th, 95th, and 99th percentiles of minimum and relative minimum temperature (using a 10 day reference period), were calculated using a summertime weather dataset in Detroit from 1976 to 2006. The days classified as ‘hot’ with hybrid clustering analysis, SSC, minimum and relative minimum temperature methods differed by method type. SSC tended to include the days with, on average, 2.6 °C lower daily minimum temperature and 5.3 °C lower dew point than days identified by other methods. These metrics were evaluated by comparing their performance in predicting excess daily mortality. The 99th percentile of minimum temperature was generally the most predictive, followed by the three-level hybrid clustering method, the 95th percentile of minimum temperature, SSC and others. Our proposed clustering framework has more flexibility and requires less substantial meteorological prior information than the synoptic classification methods. Comparison of these metrics in predicting excess daily mortality suggests that metrics thought to better characterize physiological heat stress by considering several weather conditions simultaneously may not be the same metrics that are better at predicting heat-related mortality, which has significant implications in HHWSs. PMID:22673187
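One of the compared trigger metrics, the 99th percentile of minimum temperature, can be sketched directly; the synthetic temperature record below is an assumption, not the Detroit dataset.

```python
import numpy as np

def hot_days(tmin, pct=99.0):
    """Flag days whose minimum temperature exceeds the given percentile of
    the record -- one of the HHWS trigger metrics compared in the study."""
    threshold = np.percentile(tmin, pct)
    return np.flatnonzero(tmin > threshold), threshold

rng = np.random.default_rng(1)
summer_tmin = rng.normal(18.0, 3.0, size=1000)   # synthetic daily minima, deg C
idx, thr = hot_days(summer_tmin, 99.0)
```

By construction roughly 1% of days exceed the threshold; the study's relative-minimum variant would instead compute the percentile over a rolling 10 day reference window.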
NASA Technical Reports Server (NTRS)
Zhang, Ping; Bounoua, Lahouari; Imhoff, Marc L.; Wolfe, Robert E.; Thome, Kurtis
2014-01-01
The National Land Cover Database (NLCD) Impervious Surface Area (ISA) and MODIS Land Surface Temperature (LST) are used in a spatial analysis to assess the surface-temperature-based urban heat island's (UHIS) signature on LST amplitude over the continental USA and to make comparisons to local air temperatures. Air-temperature-based UHIs (UHIA), calculated using the Global Historical Climatology Network (GHCN) daily air temperatures, are compared with UHIS for urban areas in different biomes during different seasons. NLCD ISA is used to define urban and rural temperatures and to stratify the sampling for LST and air temperatures. We find that the MODIS LST agrees well with observed air temperature during the nighttime, but tends to overestimate it during the daytime, especially during summer and in nonforested areas. The minimum air temperature analyses show that UHIs in forests have an average UHIA of 1 °C during the summer. The UHIS, calculated from nighttime LST, has a similar magnitude of 1-2 °C. By contrast, the LSTs show a midday summer UHIS of 3-4 °C for cities in forests, whereas the average summer UHIA calculated from maximum air temperature is close to 0 °C. In addition, the LST and air temperature differences between 2006 and 2011 are in agreement, albeit with different magnitudes.
Dubois, A B; Ogilvy, C S
1978-12-01
1. Pressures on the right and left sides of the tails of swimming bluefish were measured and found to range from +5.9 to -5.9 cm H2O. The pressures were resolved into their forward and lateral vectorial components of force to allow calculation of forward and lateral force and power at speeds ranging from 0.26 to 0.87 m/s. 2. The peak-to-peak changes in force of acceleration of the body, measured with a forward accelerometer, averaged 209 g or 2.05 N at 0.48 m/s, and were compared with the maximum-to-minimum excursions of forward tail force, averaging 201 g or 1.97 N at the same speed. The mean difference was 8 g, S.D. of the mean difference ±29 g, S.E. of the mean difference ±10 g. 3. Mean tail thrust was calculated as the time average of tail force in the forward direction. It averaged 65 g, or 0.64 N, at 0.48 m/s. The mean forward power was 0.34 N m/s at 0.48 m/s. The drag of the gauges and wires accounted for 10% of this figure. 4. The mean lateral power of the tail was 1.28 N m/s at a mean speed of 0.48 m/s. 5. The propulsive efficiency of the tail, calculated as the ratio of forward power to forward plus lateral power, was found to be 0.20 (S.D. ±0.04, S.E. ±0.01) and was not related to speed. This suggests that 80% of the mechanical power of the tail was wasted. Turbulence in the water may have contributed to this large drag and low tail efficiency.
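The efficiency figure can be checked directly from the reported mean powers: efficiency = forward power / (forward + lateral power).

```python
# Worked check of the propulsive-efficiency ratio using the mean powers
# reported at 0.48 m/s in the abstract.

forward_power = 0.34   # N·m/s, mean forward power of the tail
lateral_power = 1.28   # N·m/s, mean lateral power of the tail

efficiency = forward_power / (forward_power + lateral_power)
print(round(efficiency, 2))   # ~0.21, consistent with the reported 0.20 ± 0.04
```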
Enders, Philip; Adler, Werner; Schaub, Friederike; Hermann, Manuel M; Diestelhorst, Michael; Dietlein, Thomas; Cursiefen, Claus; Heindl, Ludwig M
2017-10-24
To compare a simultaneously optimized continuous minimum rim surface parameter between Bruch's membrane opening (BMO) and the internal limiting membrane to the standard sequential minimization used for calculating the BMO minimum rim area in spectral domain optical coherence tomography (SD-OCT). In this case-control, cross-sectional study, 704 eyes of 445 participants underwent SD-OCT of the optic nerve head (ONH), visual field testing, and clinical examination. Globally and clock-hour sector-wise optimized BMO-based minimum rim area was calculated independently. Outcome parameters included BMO-globally optimized minimum rim area (BMO-gMRA) and sector-wise optimized BMO-minimum rim area (BMO-MRA). BMO area was 1.89 ± 0.05 mm². Mean global BMO-MRA was 0.97 ± 0.34 mm², mean global BMO-gMRA was 1.01 ± 0.36 mm². Both parameters correlated with r = 0.995 (P < 0.001); mean difference was 0.04 mm² (P < 0.001). In all sectors, parameters differed by 3.0-4.2%. In receiver operating characteristics, the calculated area under the curve (AUC) to differentiate glaucoma was 0.873 for BMO-MRA, compared to 0.866 for BMO-gMRA (P = 0.004). Among ONH sectors, the temporal inferior location showed the highest AUC. Optimization strategies to calculate BMO-based minimum rim area led to significantly different results. Imposing an additional adjacency constraint within calculation of BMO-MRA does not improve diagnostic power. Global and temporal inferior BMO-MRA performed best in differentiating glaucoma patients.
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
Heart Rate During Sleep: Implications for Monitoring Training Status
Waldeck, Miriam R.; Lambert, Michael I.
2003-01-01
Resting heart rate has sometimes been used as a marker of training status. It is reasonable to assume that the relationship between heart rate and training status should be more evident during sleep when extraneous factors that may influence heart rate are reduced. Therefore the aim of the study was to assess the repeatability of monitoring heart rate during sleep when training status remained unchanged, to determine if this measurement had sufficient precision to be used as a marker of training status. The heart rate of ten female subjects was monitored for 24 hours on three occasions over three weeks whilst training status remained unchanged. Average, minimum and maximum heart rate during sleep was calculated. The average heart rate of the group during sleep was similar on each of the three tests (65 ± 9, 63 ± 6 and 67 ± 7 beats·min⁻¹ respectively). The range in minimum heart rate variation during sleep for all subjects over the three testing sessions was from 0 to 10 beats·min⁻¹ (mean = 5 ± 3 beats·min⁻¹) and for maximum heart rate variation was 2 to 31 beats·min⁻¹ (mean = 13 ± 9 beats·min⁻¹). In summary it was found that on an individual basis the minimum heart rate during sleep varied by about 8 beats·min⁻¹. This amount of intrinsic day-to-day variation needs to be considered when changes in heart rate that may occur with changes in training status are interpreted. PMID:24688273
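The day-to-day variation statistic described above (the range, across sessions, of each subject's minimum or maximum sleep heart rate) can be sketched as follows; the beats-per-minute values are invented, not study data:

```python
# For one subject, take the minimum (or maximum) sleep HR from each
# monitoring session, then report the spread of that statistic across sessions.

def session_variation(sessions, stat=min):
    """sessions: list of per-session HR lists. Range of `stat` across sessions."""
    per_session = [stat(hr) for hr in sessions]
    return max(per_session) - min(per_session)

subject = [
    [52, 48, 55, 60],   # session 1 sleep HR samples
    [50, 47, 58, 61],   # session 2
    [54, 51, 57, 63],   # session 3
]
print(session_variation(subject, min))   # spread of minimum sleep HR
print(session_variation(subject, max))   # spread of maximum sleep HR
```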
Climate Prediction Center - Monitoring and Data - Regional Climate Maps:
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Prado, K
Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves massive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs that were then converted to air-gap factors for SSD 99–110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy; more PDDs and fewer point measurements are generally needed as energy increases. Using less than 50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set: the PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on the knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
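The fill-in of excluded measurements can be sketched with simple interpolation. A hedged illustration: the cutout sizes and factor values below are invented, and the study also used polynomial fits where the dependence warranted them.

```python
# Approximate a "missing" cutout factor from the measured points of the
# minimum data set by piecewise-linear interpolation over cutout size.

def interp(x, pts):
    """Linear interpolation of (x_i, y_i) pairs; clamps outside the range."""
    pts = sorted(pts)
    if x <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return pts[-1][1]

# Hypothetical measured cutout factors at 3, 6 and 10 cm; estimate the
# omitted 4 cm point.
measured = [(3, 0.962), (6, 0.991), (10, 1.000)]
print(round(interp(4, measured), 3))
```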
Validation of the UNC OCT Index for the Diagnosis of Early Glaucoma
Mwanza, Jean-Claude; Lee, Gary; Budenz, Donald L.; Warren, Joshua L.; Wall, Michael; Artes, Paul H.; Callan, Thomas M.; Flanagan, John G.
2018-01-01
Purpose To independently validate the performance of the University of North Carolina Optical Coherence Tomography (UNC OCT) Index in diagnosing and predicting early glaucoma. Methods Data of 118 normal subjects (118 eyes) and 96 subjects (96 eyes) with early glaucoma defined as visual field mean deviation (MD) greater than −4 decibels (dB), aged 40 to 80 years, and who were enrolled in the Full-Threshold Testing Size III, V, VI comparison study were used in this study. CIRRUS OCT average and quadrants' retinal nerve fiber layer (RNFL); optic disc vertical cup-to-disc ratio (VCDR), cup-to-disc area ratio, and rim area; and average, minimum, and six sectoral ganglion cell-inner plexiform layer (GCIPL) measurements were run through the UNC OCT Index algorithm. Area under the receiver operating characteristic curve (AUC) and sensitivities at 95% and 99% specificity were calculated and compared between single parameters and the UNC OCT Index. Results Mean age was 60.1 ± 11.0 years for normal subjects and 66.5 ± 8.1 years for glaucoma patients (P < 0.001). MD was 0.29 ± 1.04 dB and −1.30 ± 1.35 dB in normal and glaucomatous eyes (P < 0.001), respectively. The AUC of the UNC OCT Index was 0.96. The best single metrics when compared to the UNC OCT Index were VCDR (0.93, P = 0.054), average RNFL (0.92, P = 0.014), and minimum GCIPL (0.91, P = 0.009). The sensitivities at 95% and 99% specificity were 85.4% and 76.0% (UNC OCT Index), 71.9% and 62.5% (VCDR, all P < 0.001), 64.6% and 53.1% (average RNFL, all P < 0.001), and 66.7% and 58.3% (minimum GCIPL, all P < 0.001), respectively. Conclusions The findings confirm that the UNC OCT Index may provide improved diagnostic performance over that of single OCT parameters and may be a good tool for detection of early glaucoma. Translational Relevance The UNC OCT Index algorithm may be incorporated easily into routine clinical practice and be useful for detecting early glaucoma. PMID:29629238
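The operating-point metric used above, sensitivity at a fixed specificity, can be sketched as follows. Scores and labels are invented toy values (higher score = more glaucoma-like), and the threshold-selection convention is an assumption, not the study's exact procedure:

```python
# Pick the threshold achieving at least the target specificity on normal
# eyes, then report the fraction of glaucomatous eyes detected at it.

def sensitivity_at_specificity(scores_neg, scores_pos, spec=0.95):
    """scores_neg: normals, scores_pos: cases. Returns sensitivity."""
    cut = sorted(scores_neg)[min(int(len(scores_neg) * spec), len(scores_neg) - 1)]
    return sum(s > cut for s in scores_pos) / len(scores_pos)

normals  = [0.1, 0.2, 0.15, 0.3, 0.25, 0.35, 0.4, 0.05, 0.22, 0.18]
glaucoma = [0.6, 0.8, 0.55, 0.9, 0.3, 0.7, 0.45, 0.85]
print(sensitivity_at_specificity(normals, glaucoma))
```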
Code of Federal Regulations, 2010 CFR
2010-07-01
...-TRANSPORTATION AND TEMPORARY STORAGE OF HOUSEHOLD GOODS AND PROFESSIONAL BOOKS, PAPERS, AND EQUIPMENT (PBP&E... calculated when a carrier charges a minimum weight, but the actual weight of HHG, PBP&E and temporary storage... actual weight of HHG, PBP&E and temporary storage is less than the minimum weight charged? Charges for...
Goh, Jody P; Koh, Victor; Chan, Yiong Huak; Ngo, Cheryl
2017-07-01
To study the distribution of macular ganglion cell-inner plexiform layer (GC-IPL) thickness and peripapillary retinal nerve fiber layer (RNFL) thickness in children with refractive errors. Two hundred forty-three healthy eyes from 139 children with refractive error ranging from -10.00 to +5.00 D were recruited from the National University Hospital Eye Surgery outpatient clinic. After a comprehensive ocular examination, refraction, and axial length (AL) measurement (IOLMaster), macular GC-IPL and RNFL thickness values were obtained with a spectral domain Cirrus high definition optical coherence tomography system (Carl Zeiss Meditec Inc.). Only scans with signal strength of >6/10 were included. Correlation between variables was calculated using the Pearson correlation coefficient. A multivariate analysis using mixed models was done to adjust for confounders. The mean spherical equivalent refraction was -3.20±3.51 D and mean AL was 24.39±1.72 mm. Average, minimum, superior, and inferior GC-IPL were 82.59±6.29, 77.17±9.65, 83.68±6.96, and 81.64±6.70 μm, respectively. Average, superior, and inferior peripapillary RNFL were 99.00±11.45, 123.20±25.81, and 124.24±22.23 μm, respectively. Average, superior, and inferior GC-IPL were correlated with AL (β=-2.056, P-value 0.000; β=-2.383, P-value 0.000; β=-1.721, P-value 0.000), but minimum GC-IPL was not (β=-1.056, P-value 0.115). None of the RNFL parameters were correlated with AL. This study establishes normative macular GC-IPL and RNFL thickness in children with refractive errors. Our results suggest that high definition optical coherence tomography RNFL parameters and minimum GC-IPL are not affected by AL or myopia in children, and therefore warrants further evaluation in pediatric glaucoma patients.
Anticipating Cycle 24 Minimum and its Consequences: An Update
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
This Technical Publication updates estimates for cycle 24 minimum and discusses consequences associated with cycle 23 being a longer than average period cycle and cycle 24 having parametric minimum values smaller (or larger for the case of spotless days) than long-term medians. Through December 2007, cycle 23 has persisted 140 mo from its 12-mo moving average (12-mma) minimum monthly mean sunspot number occurrence date (May 1996). Longer than average period cycles of the modern era (since cycle 12) have minimum-to-minimum periods of about 139.0 ± 6.3 mo (the 90-percent prediction interval), inferring that cycle 24's minimum monthly mean sunspot number should be expected before July 2008. The major consequence of this is that, unless cycle 24 is a statistical outlier (like cycle 21), its maximum amplitude (RM) likely will be smaller than previously forecast. If, however, in the course of its rise cycle 24's 12-mma of the weighted mean latitude (L) of spot groups exceeds 24 deg, then one expects RM >131, and if its 12-mma of highest latitude (H) spot groups exceeds 38 deg, then one expects RM >127. High-latitude new cycle spot groups, while first reported in January 2008, have not, as yet, become the dominant form of spot groups. Minimum values in L and H were observed in mid-2007 and values are now slowly increasing, a precondition for the imminent onset of the new sunspot cycle.
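The 12-mma smoothing used throughout can be sketched as follows. The half-weighted 13-point form is the usual convention for smoothed sunspot numbers; the monthly values here are invented:

```python
# Centered 12-month moving average: a 13-month window with the two end
# months given half weight, divided by 12.

def mma12(series, i):
    """12-mma at index i (requires indices i-6 .. i+6 to exist)."""
    window = series[i - 6:i + 7]
    return (window[0] / 2 + sum(window[1:12]) + window[12] / 2) / 12

monthly = [10, 12, 11, 13, 15, 14, 16, 18, 17, 19, 21, 20, 22]
print(mma12(monthly, 6))
```

The minimum of this smoothed series defines the cycle-minimum occurrence date referred to in the abstract.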
Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo
2017-04-01
Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluation of the performance of gas leak infrared imaging detection systems due to several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of MDTD. We proposed the direct calculation and equivalent calculation method of MDGC based on the MDTD measurement system. We build an experimental MDGC measurement system, which indicates the MDGC model can describe the detection performance of a thermal imaging system to typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) model can effectively describe the performance of "detection" and "spatial detail resolution" of thermal imaging systems to gas leak, respectively, and constitute the main performance indicators of gas leak detection systems.
Mobility based multicast routing in wireless mesh networks
NASA Astrophysics Data System (ADS)
Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan
2013-01-01
There exist two fundamental approaches to multicast routing, namely minimum cost trees (MCTs) and shortest path trees (SPTs). A minimum cost tree connects the sources and receivers with a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees (SPTs), minimum Steiner trees (MSTs), and minimum-number-of-transmissions trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We also evaluate multicast performance in small and large wireless mesh networks. For small networks, we find that when the traffic load is moderate or high the SPTs outperform the MSTs and MNTs in all cases; the SPTs have the lowest end-to-end delay and average jitter in almost all cases. For large networks, the MSTs provide the minimum total edge cost and minimum number of transmissions. One drawback of SPTs is that when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs.
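The SPT construction compared above can be sketched with breadth-first search on an unweighted graph; the mesh topology below is invented for illustration:

```python
# Build a shortest-path tree rooted at a multicast source via BFS.
from collections import deque

def shortest_path_tree(adj, source):
    """adj: dict node -> list of neighbours. Returns dict node -> parent."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u      # first discovery = shortest hop path
                queue.append(v)
    return parent

mesh = {
    "s": ["a", "b"],
    "a": ["s", "c"],
    "b": ["s", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c"],
}
tree = shortest_path_tree(mesh, "s")
print(tree["d"])
```

A minimum Steiner tree, by contrast, minimizes total edge cost over the multicast group and is NP-hard in general, which is why heuristics are used in practice.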
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and... weigh ten pounds or less, or in any container where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and... weigh ten pounds or less, or in any container where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated...
Performance of Velicer's Minimum Average Partial Factor Retention Method with Categorical Variables
ERIC Educational Resources Information Center
Garrido, Luis E.; Abad, Francisco J.; Ponsoda, Vicente
2011-01-01
Despite strong evidence supporting the use of Velicer's minimum average partial (MAP) method to establish the dimensionality of continuous variables, little is known about its performance with categorical data. Seeking to fill this void, the current study takes an in-depth look at the performance of the MAP procedure in the presence of…
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determiningcompliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
Hydrologic and climatic changes in three small watersheds after timber harvest.
W.B. Fowler; J.D. Helvey; E.N. Felix
1987-01-01
No significant increases in annual water yield were shown for three small watersheds in northeastern Oregon after shelterwood cutting (30-percent canopy removal, 50-percent basal area removal) and clearcutting. Average maximum air temperature increased after harvest and average minimum air temperature decreased by up to 2.6 °C. Both maximum and minimum water...
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determiningcompliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
Verification and optimization of the CFETR baseline scenario
NASA Astrophysics Data System (ADS)
Zhao, D.; Lao, L. L.; Meneghini, O.; Staebler, G. M.; Candy, J.; Smith, S. P.; Snyder, P. B.; Prater, R.; Chen, X.; Chan, V. S.; Li, J.; Chen, J.; Shi, N.; Guo, W.; Pan, C.; Jian, X.
2016-10-01
The baseline scenario of China Fusion Engineering Test Reactor (CFETR) was designed starting from 0D calculations. The CFETR baseline scenario satisfies the minimum goal of Fusion Nuclear Science Facility aimed at bridging the gaps between ITER and DEMO. 1.5D calculations are presented to verify the on-going efforts in higher-dimensional modeling of CFETR. Steady-state scenarios are calculated self-consistently by the OMFIT integrated modeling framework that includes EFIT for equilibrium, ONETWO for sources and current, TGYRO for transport. With 68MW of neutral beam power and 8MW of ECH injected to the plasma, the average ion temperature
Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane
2013-01-01
Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-model regression to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383
Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates
Curtis, Caroline A.; Bradley, Bethany A.
2016-01-01
Background Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. Methods We compiled expert knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the ‘plant characteristics’ information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity and Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Results Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation.
Conclusion Our results show that distribution data are consistently broader than USDA PLANTS experts’ knowledge and likely provide more robust estimates of climatic tolerance, especially for widespread forbs and grasses. These findings suggest that widely available expert-based climatic tolerance estimates underrepresent species’ fundamental niche and likely fail to capture the realized niche. PMID:27870859
NASA Technical Reports Server (NTRS)
Hermance, J. F. (Principal Investigator)
1981-01-01
A spherical harmonic analysis program is being tested which takes magnetic data in universal time from a set of arbitrarily spaced observatories and calculates a value for the instantaneous magnetic field at any point on the globe. The calculation is done as a least-mean-squares fit to a set of spherical harmonics up to any desired order. The program accepts as input the orbital position of a satellite and coordinates it with ground-based magnetic data for a given time. The output is a predicted time series for the magnetic field on the Earth's surface at the (r, theta) position directly under the hypothetically orbiting satellite for the duration of the time period of the input data set. By tracking the surface magnetic field beneath the satellite, narrow-band averaged cross-powers between the spatially coordinated satellite and ground-based data sets are computed. These cross-powers are used to calculate field transfer coefficients with minimum noise distortion. The application of this technique to calculating the vector response function W is discussed.
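The least-mean-squares fit can be illustrated on a toy two-coefficient basis; the normal-equations form below generalizes to a full set of spherical harmonics, but the basis functions and data here are invented for illustration:

```python
# Fit y ~ a*f1(x) + b*f2(x) by least squares via the 2x2 normal equations.
import math

def lsq2(xs, ys, f1, f2):
    """Least-squares coefficients (a, b) for y ~ a*f1(x) + b*f2(x)."""
    s11 = sum(f1(x) ** 2 for x in xs)
    s12 = sum(f1(x) * f2(x) for x in xs)
    s22 = sum(f2(x) ** 2 for x in xs)
    t1 = sum(f1(x) * y for x, y in zip(xs, ys))
    t2 = sum(f2(x) * y for x, y in zip(xs, ys))
    det = s11 * s22 - s12 ** 2
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

xs = [i * 0.5 for i in range(12)]
ys = [3.0 * math.cos(x) + 1.5 * math.sin(x) for x in xs]   # noise-free sample
a, b = lsq2(xs, ys, math.cos, math.sin)
print(round(a, 3), round(b, 3))
```

With noisy observatory data the recovered coefficients approximate, rather than reproduce, the generating values; in practice the full fit is solved for many harmonic coefficients at once.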
ERIC Educational Resources Information Center
Yu, Jiang; Williford, William R.
1991-01-01
Used sample from New York State Driver License File to mathematically extend dimension of file so that data purging procedure exerts minimum influence on calculation of drinking-driving recidivism. Examined impact of dimension of data on recidivism rate and mathematically extended file until impact of data dimension was minimum. Calculated New…
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2012 CFR
2012-07-01
... part) Hydrogen chloride 62 parts per million by dry volume 3-run average (1 hour minimum sample time...) Sulfur dioxide 20 parts per million by dry volume 3-run average (1 hour minimum sample time per run...-8) or ASTM D6784-02 (Reapproved 2008).c Opacity 10 percent Three 1-hour blocks consisting of ten 6...
Work extraction from quantum systems with bounded fluctuations in work.
Richens, Jonathan G; Masanes, Lluis
2016-11-25
In the standard framework of thermodynamics, work is a random variable whose average is bounded by the change in free energy of the system. This average work is calculated without regard for the size of its fluctuations. Here we show that for some processes, such as reversible cooling, the fluctuations in work diverge. Realistic thermal machines may be unable to cope with arbitrarily large fluctuations. Hence, it is important to understand how thermodynamic efficiency rates are modified by bounding fluctuations. We quantify the work content and work of formation of arbitrary finite dimensional quantum states when the fluctuations in work are bounded by a given amount c. By varying c we interpolate between the standard and minimum free energies. We derive fundamental trade-offs between the magnitude of work and its fluctuations. As one application of these results, we derive the corrected Carnot efficiency of a qubit heat engine with bounded fluctuations.
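The free-energy bound discussed above can be written, in standard notation, as follows; this is a sketch of the textbook statement, and the symbols and the form of the interpolation are assumptions, not taken from the paper:

```latex
% Average extractable work is bounded by the free-energy change, with
% F(\rho) = \operatorname{Tr}(H\rho) - k_{B} T\, S(\rho):
\langle W \rangle \;\le\; F(\rho) - F(\gamma),
% where \gamma is the thermal (Gibbs) state at temperature T. Bounding the
% fluctuations, |W - \langle W \rangle| \le c, tightens this bound: as
% c \to 0 one approaches the single-shot ("minimum free energy") limit,
% while c \to \infty recovers the standard average-work bound above.
```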
Work extraction from quantum systems with bounded fluctuations in work
Richens, Jonathan G.; Masanes, Lluis
2016-01-01
PMID:27886177
Work extraction from quantum systems with bounded fluctuations in work
NASA Astrophysics Data System (ADS)
Richens, Jonathan G.; Masanes, Lluis
2016-11-01
Williams, Peter
2010-11-01
Healthy food baskets have been used around the world for a variety of purposes, including: examining the difference in cost between healthy and unhealthy food; mapping the availability of healthy foods in different locations; calculating the minimum cost of an adequate diet for social policy planning; developing educational material on low cost eating and examining trends on food costs over time. In Australia, the Illawarra Healthy Food Basket was developed in 2000 to monitor trends in the affordability of healthy food compared to average weekly wages and social welfare benefits for the unemployed. It consists of 57 items selected to meet the nutritional requirements of a reference family of five. Bi-annual costing from 2000-2009 has shown that the basket costs have increased by 38.4% in the 10-year period, but that affordability has remained relatively constant at around 30% of average household incomes.
Variable rates of late Quaternary strike slip on the San Jacinto fault zone, southern California.
Sharp, R.V.
1981-01-01
Three strike-slip displacements of strata with known approximate ages have been measured at two locations on the San Jacinto fault zone. A minimum horizontal offset of between 5.7 and 8.6 km accumulated in no more than 0.73 Myr northeast of Anza indicates an average slip rate of 8-12 mm/yr since late Pleistocene time. Horizontal slip of 1.7 m has been calculated for the youngest sediment of Lake Cahuilla since its deposition 271-510 yr BP; the corresponding slip rate is 2.8-5.0 mm/yr. A right-lateral offset of 10.9 m measured on a buried stream channel older than 5060 yr BP but younger than 6820 yr BP yields average slip rates for the intermediate time period, 400 to 6000 yr BP, of 1-2 mm/yr. These rates suggest a relatively quiescent period from about 4000 BC to about 1600 AD.
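The rates above follow from simple offset-over-time arithmetic; a minimal sketch (figures taken from the abstract, unit handling ours):

```python
# Average slip rate = horizontal offset / elapsed time.
# The Anza figures (5.7-8.6 km in at most 0.73 Myr) bracket the quoted 8-12 mm/yr.
def slip_rate_mm_per_yr(offset_m, age_yr):
    """Average slip rate in mm/yr for an offset in metres over an age in years."""
    return offset_m * 1000.0 / age_yr

low = slip_rate_mm_per_yr(5.7e3, 0.73e6)   # ~7.8 mm/yr
high = slip_rate_mm_per_yr(8.6e3, 0.73e6)  # ~11.8 mm/yr
```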
Judd-Ofelt Analysis of Dy3+-Activated Aluminosilicate Glasses Prepared by Sol-Gel Method
NASA Astrophysics Data System (ADS)
Sengthong, Buonyavong; Van Tuyen, Ho; An, Nguyen Thi Thai; Van Do, Phan; Hai, Nguyen Thi Quy; Chau, Pham Thi Minh; Quang, Vu Xuan
2018-04-01
Aluminosilicate (AS) glasses doped with different Dy3+ concentrations were synthesized via the sol-gel method. The absorption spectra, photoluminescence spectra and lifetimes of this material have been studied. From analysis of the absorption spectra, the Judd-Ofelt (JO) parameters of the prepared samples have been determined. These JO parameters, combined with the photoluminescence spectra, have been used to evaluate the transition probabilities (AR), branching ratios (β) and calculated oscillator strengths of the AS:Dy3+ glasses. The radiative branching ratio βR of the 4F9/2 → 6H13/2 transition has a minimum value of 62.2%, which predicts that this transition in AS:Dy3+ glasses can give rise to lasing action. The JO parameters show that Ω2 increases with increasing Dy3+ ion concentration, due to the increased polarizability of the average coordination medium and decreased average symmetry.
Williams, Peter
2010-01-01
PMID:22254001
NASA Astrophysics Data System (ADS)
Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen
2012-03-01
In order to minimize the average end-to-end delay for data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a combined minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS have been carried out under different types of traffic source.
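The abstract does not give the MSTMCF procedure itself; as a rough illustration of its first stage, a minimum spanning tree over link costs or delays can be built with Kruskal's algorithm and union-find (the example graph below is invented, not from the paper):

```python
# Rough illustration (not the paper's MSTMCF implementation): build a minimum
# spanning tree over weighted links with Kruskal's algorithm and union-find.
def kruskal_mst(n, edges):
    """edges: list of (weight, u, v) tuples; returns (total_weight, tree_edges)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, tree = 0, []
    for w, u, v in sorted(edges):          # cheapest links first
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding this edge creates no cycle
            parent[ru] = rv
            total += w
            tree.append((u, v))
    return total, tree

# Invented 4-node example: MST picks the three cheapest non-cyclic links.
total, tree = kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)])
```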
GVVPT2 energy gradient using a Lagrangian formulation.
Theis, Daniel; Khait, Yuriy G; Hoffmann, Mark R
2011-07-28
A Lagrangian-based approach was used to obtain analytic formulas for GVVPT2 energy nuclear gradients. The formalism can use either complete or incomplete model (or reference) spaces, and is limited, in this regard, only by the capabilities of the MCSCF program. An efficient means of evaluating the gradient equations is described. Demonstrative calculations were performed and compared with finite difference calculations on several molecules, showing that the GVVPT2 gradients are accurate. Of particular interest, the suggested formalism can straightforwardly use state-averaged MCSCF descriptions of the reference space in which the states have arbitrary weights. This capability is demonstrated by some calculations on the ground and first excited singlet states of LiH, including calculations near an avoided crossing. The accuracy and usefulness of the GVVPT2 method and its gradient are highlighted by comparing the geometry of the near-C2v minimum on the conical intersection seam between the 1 ¹A₁ and 2 ¹A₁ surfaces of O₃ with values that were calculated at the multireference configuration interaction, including single and double excitations (MRCISD), level of theory. © 2011 American Institute of Physics.
Assessing exclusionary power of a paternity test involving a pair of alleged grandparents.
Scarpetta, Marco A; Staub, Rick W; Einum, David D
2007-02-01
The power of a genetic test battery to exclude a pair of individuals as grandparents is an important consideration for parentage testing laboratories. However, a reliable method to calculate such a statistic with short-tandem-repeat (STR) genetic markers has not been presented. Two formulae describing the random grandparents not excluded (RGPNE) statistic at a single genetic locus were derived: RGPNE = a(4 − 6a + 4a² − a³) when the paternal obligate allele (POA) is defined, and RGPNE = 2[(a + b)(2 − a − b)][1 − (a + b)(2 − a − b)] + [(a + b)(2 − a − b)]² when the POA is ambiguous. A minimum number of genetic markers required to yield cumulative RGPNE values of not greater than 0.01 was calculated with weighted average allele frequencies of the CODIS STR loci. RGPNE data for actual grandparentage cases are also presented to empirically examine the exclusionary power of routine casework. A comparison of RGPNE and random man not excluded (RMNE) values demonstrates the increased difficulty involved in excluding two individuals as grandparents compared to excluding a single alleged parent. A minimum of 12 STR markers is necessary to achieve RGPNE values of not greater than 0.01 when the mother is tested; more than 25 markers are required without the mother. Cumulative RGPNE values for each of 22 nonexclusionary grandparentage cases were not more than 0.01 but were significantly weaker when calculated without data from the mother. Calculation of the RGPNE provides a simple means to help minimize the potential of false inclusions in grandparentage analyses. This study also underscores the importance of testing the mother when examining the parents of an unavailable alleged father (AF).
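A minimal sketch (not the authors' code) of the per-locus RGPNE formulas and the cumulative product over independent loci. The ambiguous-POA case is implemented as 1 − (1 − s)², with s = (a + b)(2 − a − b) the chance that a random person carries one of the two candidate alleles; under the assumption that the final term of the quoted expression carries a square, this matches the polynomial form.

```python
# Hedged sketch of the single-locus RGPNE statistics; allele frequencies
# a, b below are illustrative inputs, not CODIS values.
def rgpne_defined(a):
    """RGPNE when the paternal obligate allele (frequency a) is defined."""
    return a * (4 - 6 * a + 4 * a ** 2 - a ** 3)

def rgpne_ambiguous(a, b):
    """RGPNE when the POA is ambiguous between alleles with frequencies a, b."""
    s = (a + b) * (2 - a - b)   # P(a random person carries allele a or b)
    return 1.0 - (1.0 - s) ** 2  # P(at least one of two random people does)

def cumulative_rgpne(per_locus):
    """Product of independent per-locus RGPNE values across a marker battery."""
    prod = 1.0
    for v in per_locus:
        prod *= v
    return prod
```

For example, an ambiguous case with a = b = 0.1 gives the same value as a defined POA of frequency 0.2, and multiplying moderate per-locus values over a dozen loci drives the cumulative RGPNE well below 0.01.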
Code of Federal Regulations, 2010 CFR
2010-10-01
... maximum stress thus calculated and the factor 4.25 shall not exceed the minimum ultimate strength of the... foot on hatchways in position 2 and the product of the maximum stress thus calculated and the factor 5... product of the maximum stress thus calculated and the factor 5 shall not exceed the minimum ultimate...
Code of Federal Regulations, 2011 CFR
2011-10-01
... maximum stress thus calculated and the factor 4.25 shall not exceed the minimum ultimate strength of the... foot on hatchways in position 2 and the product of the maximum stress thus calculated and the factor 5... product of the maximum stress thus calculated and the factor 5 shall not exceed the minimum ultimate...
Code of Federal Regulations, 2012 CFR
2012-10-01
... maximum stress thus calculated and the factor 4.25 shall not exceed the minimum ultimate strength of the... foot on hatchways in position 2 and the product of the maximum stress thus calculated and the factor 5... product of the maximum stress thus calculated and the factor 5 shall not exceed the minimum ultimate...
Code of Federal Regulations, 2014 CFR
2014-10-01
... maximum stress thus calculated and the factor 4.25 shall not exceed the minimum ultimate strength of the... foot on hatchways in position 2 and the product of the maximum stress thus calculated and the factor 5... product of the maximum stress thus calculated and the factor 5 shall not exceed the minimum ultimate...
Code of Federal Regulations, 2013 CFR
2013-10-01
... maximum stress thus calculated and the factor 4.25 shall not exceed the minimum ultimate strength of the... foot on hatchways in position 2 and the product of the maximum stress thus calculated and the factor 5... product of the maximum stress thus calculated and the factor 5 shall not exceed the minimum ultimate...
The Effect of the Minimum Compensating Cash Balance on School District Investments.
ERIC Educational Resources Information Center
Dembowski, Frederick L.
Banks are usually reimbursed for their checking account services either by a fixed service charge or by requiring a minimum or minimum-average compensating cash balance. This paper demonstrates how to determine the optimal minimum balance for a school district to maintain in its account. It is assumed that both the bank and the school district use…
Bush, Philip W; Drake, Robert E; Xie, Haiyi; McHugo, Gregory J; Haslett, William R
2009-08-01
Stable employment promotes recovery for persons with severe mental illness by enhancing income and quality of life, but its impact on mental health costs has been unclear. This study examined service cost over ten years among participants in a co-occurring disorders study. Latent-class growth analysis of competitive employment identified trajectory groups. The authors calculated annual costs of outpatient services and institutional stays for 187 participants and examined group differences in ten-year utilization and cost. A steady-work group (N=51) included individuals whose work hours increased rapidly and then stabilized to average 5,060 hours per person over ten years. A late-work group (N=57) and a no-work group (N=79) did not differ significantly in utilization or cost outcomes, so they were combined into a minimum-work group (N=136). More education, a bipolar disorder diagnosis (versus schizophrenia or schizoaffective disorder), work in the past year, and lower scores on the expanded Brief Psychiatric Rating Scale predicted membership in the steady-work group. These variables were controlled for in the outcomes analysis. Use of outpatient services for the steady-work group declined at a significantly greater rate than it did for the minimum-work group, while institutional (hospital, jail, or prison) stays declined for both groups without a significant difference. The average cost per participant for outpatient services and institutional stays for the minimum-work group exceeded that of the steady-work group by $166,350 over ten years. Highly significant reductions in service use were associated with steady employment. Given supported employment's well-established contributions to recovery, evidence of long-term reductions in the cost of mental health services should lead policy makers and insurers to promote wider implementation.
Mort, Brendan C; Autschbach, Jochen
2006-08-09
Vibrational corrections (zero-point and temperature dependent) of the H-D spin-spin coupling constant J(HD) for six transition metal hydride and dihydrogen complexes have been computed from a vibrational average of J(HD) as a function of temperature. Effective (vibrationally averaged) H-D distances have also been determined. The very strong temperature dependence of J(HD) for one of the complexes, [Ir(dmpm)Cp*H2]2+ (dmpm = bis(dimethylphosphino)methane), can be modeled simply by the Boltzmann average of the zero-point vibrationally averaged J(HD) of two isomers. For this complex and four others, the vibrational corrections to J(HD) are shown to be highly significant and lead to improved agreement between theory and experiment in most cases. The zero-point vibrational correction is important for all complexes. Depending on the shape of the potential energy and J-coupling surfaces, for some of the complexes higher vibrationally excited states can also contribute to the vibrational corrections at temperatures above 0 K and lead to a temperature dependence. We identify different classes of complexes where a significant temperature dependence of J(HD) may or may not occur for different reasons. A method is outlined by which the temperature dependence of the H-D spin-spin coupling constant can be determined with standard quantum chemistry software. Comparisons are made with experimental data and previously calculated values where applicable. We also discuss an example where a low-order expansion around the minimum of a complicated potential energy surface appears not to be sufficient for reproducing the experimentally observed temperature dependence.
The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.
ERIC Educational Resources Information Center
Macpherson, David A.; Even, William E.
The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…
Code of Federal Regulations, 2011 CFR
2011-07-01
....011) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part... by volume (ppmv) 20 5.5 11 3-run average (1-hour minimum sample time per run) EPA Reference Method 10... dscf) 16 (7.0) or 0.013 (0.0057) 0.85 (0.37) or 0.020 (0.0087) 9.3 (4.1) or 0.054 (0.024) 3-run average...
Discrete meso-element simulation of chemical reactions in shear bands
NASA Astrophysics Data System (ADS)
Tamura, S.; Horie, Y.
1998-07-01
A meso-dynamic simulation technique is used to investigate chemical reactions in high-speed shearing of reactive porous mixtures. The reaction speed is assumed to be a function of temperature, pressure and mixing of materials. To gain theoretical insight into the experiments reported by Nesterenko et al., a parametric study of material flow and local temperature was carried out using a Nb and Si mixture. In the model calculation, a heterogeneous shear region of 5 μm width, consisting of alternating layers of Nb and Si, was first created in the mixture and then sheared at a rate of 8.0×10⁷ s⁻¹. Results show that the material flow is mostly homogeneous but contains local agglomeration and circulatory flow. This behavior accelerates mass mixing and causes a significant temperature increase. To evaluate the mixing of material, the average minimum distance of material separation was calculated. Void effects were also investigated.
The Revised OB-1 Method for Metal-Water Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westfall, Robert Michael; Wright, Richard Q
The OB-1 method for the calculation of the minimum critical mass (mcm) of fissile actinides in metal/water systems was described in a 2008 Nuclear Science and Engineering (NS&E) article. The purpose of the present work is to update and expand the application of this method with current nuclear data, including data uncertainties. The mcm and the hypothetical fissile metal density (ρF), in grams of metal per liter, are obtained by a fit to values predicted with transport calculations. The input parameters required are thermal values for the fission and absorption cross sections and nubar. A factor of (√π)/2 is used to convert to Maxwellian-averaged values. The uncertainties in the fission and capture cross sections and the estimated nubar uncertainties are used to determine the uncertainties in the mcm, either in percent or in grams.
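A one-line sketch of the spectrum-averaging step named above: for a 1/v absorber, multiplying a thermal (2200 m/s) value by (√π)/2 gives the Maxwellian-averaged value. This snippet is illustrative, not the OB-1 code:

```python
import math

def maxwellian_average(thermal_value):
    """Convert a thermal (2200 m/s) cross section to a Maxwellian-averaged
    value for a 1/v absorber, using the (sqrt(pi))/2 factor."""
    return thermal_value * math.sqrt(math.pi) / 2.0
```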
The SRS-Viewer: A Software Tool for Displaying and Evaluation of Pyroshock Data
NASA Astrophysics Data System (ADS)
Eberl, Stefan
2014-06-01
For the evaluation of the success of a pyroshock, the time domain and the corresponding Shock Response Spectra (SRS) have to be considered. The SRS-Viewer is an IABG-developed software tool [1] that reads data in Universal File format (*.unv) and either displays or plots, for each accelerometer, the time domain, the corresponding SRS and the specified Reference-SRS with tolerances in the background. The software calculates the average (AVG), maximum (MAX) and minimum (MIN) SRS of any selection of accelerometers. A statistical analysis calculates the percentage of measured SRS above the specified Reference-SRS level and the percentage within the tolerance bands, for comparison with the specified success criteria. Overlay plots of single accelerometers from different test runs make it possible to monitor the repeatability of the shock input and the integrity of the specimen. Furthermore, the difference between the shock on a mass dummy and on the real test unit can be examined.
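A minimal sketch (function names ours, not the tool's) of the AVG/MAX/MIN statistics and the above-reference percentage described here; each SRS is assumed to be a list of levels on a shared frequency grid:

```python
# Column-wise AVG/MAX/MIN over a selection of SRS curves, plus the
# percentage of points at or above a specified Reference-SRS.
def srs_statistics(srs_list):
    avg = [sum(col) / len(col) for col in zip(*srs_list)]
    mx = [max(col) for col in zip(*srs_list)]
    mn = [min(col) for col in zip(*srs_list)]
    return avg, mx, mn

def percent_above_reference(srs, reference):
    hits = sum(1 for s, r in zip(srs, reference) if s >= r)
    return 100.0 * hits / len(srs)
```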
All the adiabatic bound states of NO₂
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salzgeber, R.F.; Mandelshtam, V.; Schlier, C.
1998-07-01
We calculated all 2967 even and odd bound states of the adiabatic ground state of NO₂, using a modification of the ab initio potential energy surface of Leonardi et al. [J. Chem. Phys. 105, 9051 (1996)]. The calculation was performed by harmonic inversion of the Chebyshev correlation function generated by a DVR Hamiltonian in Radau coordinates. The relative error of the computed eigenenergies (measured from the potential minimum) is 10⁻⁴ or better, corresponding to an absolute error of less than about 2.5 cm⁻¹. Near the dissociation threshold the average density of states is about 0.2 per cm⁻¹ for each symmetry. Statistical analysis of the states shows some interesting structure in the rigidity parameter Δ₃ as a function of energy. © 1998 American Institute of Physics.
Effects of epidemic threshold definition on disease spread statistics
NASA Astrophysics Data System (ADS)
Lagorio, C.; Migueles, M. V.; Braunstein, L. A.; López, E.; Macri, P. A.
2009-03-01
We study the statistical properties of SIR epidemics in random networks, when an epidemic is defined as only those SIR propagations that reach or exceed a minimum size sc. Using percolation theory to calculate the average fractional size
An Earth Outgoing Longwave Radiation Climate Model
NASA Astrophysics Data System (ADS)
Yang, Shi-Keng
An Earth outgoing longwave radiation (OLWR) climate model has been constructed for radiation budget study. The model consists of the upward radiative transfer parameterization of Thompson and Warren (1982), the cloud cover model of Sherr et al. (1968) and a monthly average climatology defined by the data from Crutcher and Meserve (1971) and Taljaard et al. (1969). Additional required information is provided by the empirical 100 mb water vapor mixing ratio equation of Harries (1976) and the mixing ratio interpolation scheme of Briegleb and Ramanathan (1982). Cloud top temperature is adjusted so that the calculation agrees with NOAA scanning radiometer measurements. Both clear-sky and cloudy-sky cases are calculated and discussed for global average, zonal average and worldwide distributed cases. The results agree well with the satellite observations. The clear-sky case shows that the OLWR field is highly modulated by water vapor, especially in the tropics. The strongest longitudinal variation occurs in the tropics and can mostly be explained by the strong water vapor gradient. Although in the zonal average case the tropics have a minimum in OLWR, that minimum is essentially contributed by a few very low flux regions, such as the Amazon, Indonesia and the Congo; there are regions in the tropics whose OLWR is as large as that of the subtropics. In the high latitudes, where cold air contains less water vapor, OLWR is basically modulated by the surface temperature, so the topographical heat capacity becomes a dominant factor in determining the distribution. Clouds enhance the water vapor modulation of OLWR. Tropical clouds have the coldest cloud top temperatures, which again increases the longitudinal variation in the region. However, in the polar regions, where temperature inversions are prominent, cloud top temperature is warmer than the surface; hence clouds have the effect of increasing OLWR.
The implication of this cloud mechanism is that the latitudinal gradient of net radiation is thus further increased, and the forcing of the general atmospheric circulation is substantially different due to the increased additional available energy. The analysis of the results also suggests that to improve the performance of the Budyko-Sellers type energy balance climate model in the tropical region, the parameterization of the longwave cooling should include a water vapor absorbing term.
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. This paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain), based on the successive application of two models. The first is a stochastic autoregressive integrated moving average (ARIMA) model that forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature during the month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They share the same seasonal behavior (moving average differenced model) and differ in the non-seasonal part: an autoregressive model (Model 1), a moving average differenced model (Model 2) and an autoregressive and moving average model (Model 3). At the same time, the results point out that minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month, with a very similar standard deviation across years. The standard deviation obtained for each station and each month could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate its cost. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
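The second-stage model above can be sketched as follows: if daily minimum temperature in a month is normal with mean mu (forecast, e.g., by the ARIMA model) and station/month standard deviation sigma, the expected number of frost days is the day count times P(T < 0 °C). This is a hedged illustration with invented values, not the paper's code:

```python
import math

# Expected frost days in a month under a normal model for daily minimum
# temperature: days * P(T < 0 degC), via the normal CDF written with erf.
def expected_frost_days(mu, sigma, days_in_month):
    p_frost = 0.5 * (1.0 + math.erf((0.0 - mu) / (sigma * math.sqrt(2.0))))
    return days_in_month * p_frost
```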
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, Faisal; Loma Linda University Medical Center, Department of Radiation Oncology, Loma Linda, CA; Sarkar, Vikren
Purpose: To evaluate radiation dose delivered to pelvic lymph nodes, if daily Image Guided Radiation Therapy (IGRT) was implemented with treatment shifts based on the primary site (primary clinical target volume [CTV]). Our secondary goal was to compare dosimetric coverage with patient outcomes. Materials and methods: A total of 10 female patients with gynecologic malignancies were evaluated retrospectively after completion of definitive intensity-modulated radiation therapy (IMRT) to their pelvic lymph nodes and primary tumor site. IGRT consisted of daily kilovoltage computed tomography (CT)-on-rails imaging fused with initial planning scans for position verification. The initial plan was created using Varian's Eclipse treatment planning software. Patients were treated with a median radiation dose of 45 Gy (range: 37.5 to 50 Gy) to the primary volume and 45 Gy (range: 45 to 64.8 Gy) to nodal structures. One IGRT scan per week was randomly selected from each patient's treatment course and re-planned on the Eclipse treatment planning station. CTVs were recreated by fusion on the IGRT image series, and the patient's treatment plan was applied to the new image set to calculate delivered dose. We evaluated the minimum, maximum, and 95% dose coverage for primary and nodal structures. Reconstructed primary tumor volumes were recreated within 4.7% of initial planning volume (0.9% to 8.6%), and reconstructed nodal volumes were recreated to within 2.9% of initial planning volume (0.01% to 5.5%). Results: Dosimetric parameters averaged less than 10% (range: 1% to 9%) of the original planned dose (45 Gy) for primary and nodal volumes on all patients (n = 10). For all patients, ≥99.3% of the primary tumor volume received ≥ 95% the prescribed dose (V95%) and the average minimum dose was 96.1% of the prescribed dose. In evaluating nodal CTV coverage, ≥ 99.8% of the volume received ≥ 95% the prescribed dose and the average minimum dose was 93%.
In evaluating individual IGRT sessions, we found that 6 patients had an estimated minimal nodal CTV dose less than 90% (range: 78% to 99%) of that planned. With a median follow-up of 42.5 months, 2 patients experienced systemic disease progression at an average of 19.6 months. One patient was found to have a local or regional failure, with an average follow-up of 42 months. Conclusion: Using only 3-dimensional IGRT corrections in gynecological radiation allows excellent coverage of the primary target volume and good average nodal CTV coverage. If IGRT corrections are based on alignment to the primary tumor volume and can only be applied in 3 degrees of freedom, situations can arise in which nodal volumes are underdosed. Utilizing multiple IGRT sessions appears to average out dose discrepancies over the course of treatment. The implication of underdosing in a single IGRT session needs further evaluation in future studies. Based on the concern of minimum dose to a nodal target volume, these findings may signal caution when using IGRT and IMRT in gynecological radiation patients. Possible techniques to overcome this situation may include averaging shifts between tumor and nodal volume, use of a treatment couch with 6 degrees of freedom, deformable registration, or adaptive planning.
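The coverage figures quoted in this study (V95 and minimum dose as a percentage of prescription) reduce to simple counting over target-volume dose samples; a hedged sketch with invented voxel values:

```python
# V95 = percentage of target-volume dose samples receiving >= 95% of the
# prescription dose; the minimum dose is reported as a percentage of
# prescription. The voxel doses below are illustrative only.
def coverage_metrics(voxel_doses_gy, prescription_gy):
    threshold = 0.95 * prescription_gy
    v95 = 100.0 * sum(1 for d in voxel_doses_gy if d >= threshold) / len(voxel_doses_gy)
    min_pct = 100.0 * min(voxel_doses_gy) / prescription_gy
    return v95, min_pct
```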
Biomass of freshwater turtles: a geographic comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Congdon, J.D.; Greene, J.L.; Gibbons, J.W.
1986-01-01
Standing crop biomass of freshwater turtles and minimum annual biomass of egg production were calculated for marsh and farm pond habitats in South Carolina and in Michigan. The species in South Carolina included Chelydra serpentina, Deirochelys reticularia, Kinosternon subrubrum, Pseudemys floridana, P. scripta and Sternotherus odoratus. The species in Michigan were Chelydra serpentina, Chrysemys picta and Emydoidea blandingi. Biomass was also determined for a single-species population of P. scripta on a barrier island near Charleston, South Carolina. Population density and biomass of Pseudemys scripta in Green Pond on Capers Island were higher than the densities and biomass of the entire six-species community studied on the mainland. In both the farm pond and marsh habitats in South Carolina, P. scripta was the numerically dominant species and had the highest biomass. In Michigan, Chrysemys picta was the numerically dominant species; however, the biomass of Chelydra serpentina was higher. The three-species community in Michigan had lower biomasses in two marshes (58 kg ha⁻¹ and 46 kg ha⁻¹) and farm ponds (23 kg ha⁻¹) than did the six-species community in a South Carolina marsh (73 kg ha⁻¹). Minimum annual egg production by all species in South Carolina averaged 1.93 kg ha⁻¹ and in Michigan averaged 2.89 kg ha⁻¹ of marsh.
NASA Astrophysics Data System (ADS)
Tehsin, Sara; Rehman, Saad; Riaz, Farhan; Saeed, Omer; Hassan, Ali; Khan, Muazzam; Alam, Muhammad S.
2017-05-01
A fully invariant system helps in resolving difficulties in object detection when camera or object orientation and position are unknown. In this paper, the proposed correlation-filter-based mechanism provides the capability to suppress noise, clutter and occlusion. The Minimum Average Correlation Energy (MACE) filter yields sharp correlation peaks while keeping the correlation peak value controlled. A Difference of Gaussian (DOG) wavelet has been added at the preprocessing stage of the proposed filter design, which facilitates target detection in an orientation-variant, cluttered environment. A logarithmic transformation is combined with the DOG composite minimum average correlation energy filter (WMACE), which is capable of producing sharp correlation peaks despite geometric distortion of the target object. The proposed filter shows improved performance over several other variant correlation filters, as discussed in the results section.
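The paper's WMACE filter itself is not reproduced here, but the DOG preprocessing stage it relies on is easy to sketch: a difference of two Gaussians acts as a band-pass that suppresses flat clutter and emphasizes edges. A 1-D sketch with illustrative sigmas (not the paper's values):

```python
import math

# 1-D Difference-of-Gaussian kernel: the difference of two normalized
# Gaussians with spreads sigma1 < sigma2. Its taps sum to roughly zero,
# so constant (clutter) backgrounds are suppressed.
def dog_kernel(radius, sigma1, sigma2):
    def gauss(x, s):
        return math.exp(-x * x / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))
    return [gauss(x, sigma1) - gauss(x, sigma2) for x in range(-radius, radius + 1)]
```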
Power limits for microbial life.
LaRowe, Douglas E; Amend, Jan P
2015-01-01
To better understand the origin, evolution, and extent of life, we seek to determine the minimum flux of energy needed for organisms to remain viable. Despite the difficulties associated with direct measurement of the power limits for life, it is possible to use existing data and models to constrain the minimum flux of energy required to sustain microorganisms. Here, we apply a bioenergetic model to a well-characterized marine sedimentary environment in order to quantify the amount of power organisms use in an ultralow-energy setting. In particular, we show a direct link between power consumption in this environment and the amount of biomass (cells cm⁻³) found in it. The power supply resulting from the aerobic degradation of particulate organic carbon (POC) at IODP Site U1370 in the South Pacific Gyre is between ∼10⁻¹² and 10⁻¹⁶ W cm⁻³. The rates of POC degradation are calculated using a continuum model, while Gibbs energies have been computed using geochemical data describing the sediment as a function of depth. Although laboratory-determined values of maintenance power do a poor job of representing the amount of biomass in U1370 sediments, the number of cells per cm³ can be well captured using a maintenance power of 190 zW cell⁻¹, two orders of magnitude lower than the lowest value reported in the literature. In addition, we have combined cell counts and calculated power supplies to determine that, on average, the microorganisms at Site U1370 require 50-3500 zW cell⁻¹, with most values under ∼300 zW cell⁻¹. Furthermore, our analysis indicates that the absolute minimum power requirement for a single cell to remain viable is on the order of 1 zW cell⁻¹.
Wiley, Jeffrey B.
2006-01-01
Five time periods between 1930 and 2002 are identified as having distinct patterns of annual minimum daily mean flows (minimum flows). Average minimum flows increased around 1970 at many streamflow-gaging stations in West Virginia. Before 1930, however, there might have been a period of minimum flows greater than any period identified between 1930 and 2002. The effects of climate variability are probably the principal causes of the differences among the five time periods. Comparisons of selected streamflow statistics are made between values computed for the five identified time periods and values computed for the 1930-2002 interval for 15 streamflow-gaging stations. The average difference between statistics computed for the five time periods and the 1930-2002 interval decreases with increasing magnitude of the low-flow statistic. The greatest individual-station absolute difference was 582.5 percent greater for the 7-day 10-year low flow computed for 1970-1979 compared to the value computed for 1930-2002. The hydrologically based low flows indicate approximately equal or smaller absolute differences than the biologically based low flows. The average 1-day 3-year biologically based low flow (1B3) and 4-day 3-year biologically based low flow (4B3) are less than the average 1-day 10-year hydrologically based low flow (1Q10) and 7-day 10-year hydrologically based low flow (7Q10), respectively, and range between 28.5 percent less and 13.6 percent greater. Seasonally, the average difference between low-flow statistics computed for the five time periods and 1930-2002 is not consistent between magnitudes of low-flow statistics, and the greatest difference is for the summer (July 1-September 30) and fall (October 1-December 31) for the same time period as the greatest difference determined in the annual analysis.
The greatest average difference between 1B3 and 4B3 compared to 1Q10 and 7Q10, respectively, is in the spring (April 1-June 30), ranging between 11.6 and 102.3 percent greater. Statistics computed for an individual station's record period may not represent the statistics computed for the period 1930 to 2002 because (1) station records are available predominantly after about 1970, when minimum flows were greater than the average between 1930 and 2002, and (2) some short-term station records fall mostly during dry periods, whereas others fall mostly during wet periods. Criterion-based sampling of the individual stations' record periods was used to reduce the effect of statistics computed for entire record periods failing to represent the statistics computed for 1930-2002. The criterion used to sample the entire record periods is based on a comparison between the regional minimum flows and the minimum flows at the stations. Criterion-based sampling of the available record periods was superior to record-extension techniques for this study because more stations were selected and the areal distribution of stations was more widespread. Principal component and correlation analyses of the minimum flows at 20 stations in or near West Virginia identify three regions of the State encompassing stations with similar patterns of minimum flows: the Lower Appalachian Plateaus, the Upper Appalachian Plateaus, and the Eastern Panhandle. All record periods of 10 years or greater between 1930 and 2002 where the average of the regional minimum flows is nearly equal to the average for 1930-2002 are determined to be representative of 1930-2002. Selected statistics are presented for the longest representative record period that matches the record period for 77 stations in West Virginia and 40 stations near West Virginia. These statistics can be used to develop equations for estimating flow at ungaged stream locations.
NASA Astrophysics Data System (ADS)
Knapp, Paul A.; Soulé, Peter T.
2005-07-01
In mid-autumn 2002, an exceptional 5-day cold spell affected much of the interior Pacific Northwest, with minimum temperatures averaging 13°C below long-term means (1953-2002). On 31 October, minimum temperature records occurred at 98 of the 106 recording stations, with records lowered in some locations by 9°C. Calculation of recurrence intervals of minimum temperatures shows that 50% of the stations experienced a >500-yr event. The synoptic conditions responsible were the development of a pronounced high pressure ridge over western Canada and an intense low pressure area centered in the Intermountain West that promoted strong northeasterly winds. The cold spell occurred near the end of the growing season for an ecologically critical and dominant tree species of the interior Pacific Northwest—western juniper—and followed an extended period of severe drought. In spring 2003, it became apparent that the cold had caused high rates of tree mortality and canopy dieback in a species that is remarkable for its longevity and resistance to climatic stress. The cold event altered western juniper dominance in some areas, and this alteration may have long-term impacts on water budgets, fire intensities and frequencies, animal species interrelationships, and interspecific competition among plant species.
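The recurrence-interval claim above (50% of stations experiencing a >500-yr event) can be illustrated with the standard Weibull plotting-position estimate T = (n + 1)/m; this is a common textbook method and a hypothetical sketch, not necessarily the exact procedure the authors used.

```python
# Hypothetical sketch: estimating the recurrence interval of an extreme
# minimum temperature from an annual-minimum series using the Weibull
# plotting position T = (n + 1) / m, where m is the rank of the event
# (1 = coldest). Assumed method, not taken from the study itself.

def recurrence_interval(annual_minima, event_value):
    """Estimated recurrence interval (years) of a cold event.

    annual_minima : annual minimum temperatures (degC), one per year
    event_value   : the observed extreme minimum to evaluate
    """
    n = len(annual_minima)
    # count the years at least as cold as the event (its rank m)
    m = sum(1 for t in annual_minima if t <= event_value)
    if m == 0:
        m = 0.5  # colder than anything on record: crude extrapolation
    return (n + 1) / m

# 50 years of synthetic record; an event colder than the entire record
record = [-10 - (i % 7) for i in range(50)]
print(recurrence_interval(record, -30.0))
```

With only a 50-year record, any event colder than every observation extrapolates to a very long recurrence interval, which is why such estimates for a >500-yr event carry wide uncertainty.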
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin
Walker, J.F.; Osen, L.L.; Hughes, P.E.
1987-01-01
A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, R; Ding, C; Jiang, S
Purpose Spine SRS/SAbR treatment plans typically require very steep dose gradients to meet spinal cord constraints, and it is crucial that the dose distribution be accurate. However, these plans are typically calculated on helical free-breathing CT scans, which often contain motion artifacts. While the spine itself doesn’t exhibit much intra-fraction motion, tissues around the spine, particularly the liver, do move with respiration. We investigated the dosimetric effect of liver motion on dose distributions calculated on helical free-breathing CT scans for spine SAbR delivered to the T and L spine. Methods We took 5 spine SAbR plans and used density overrides to simulate an average-reconstruction CT image set, which would more closely represent the patient anatomy during treatment. The value used for the density override was 0.66 g/cc. All patients were planned using our standard beam arrangement, which consists of 13 coplanar step-and-shoot IMRT beams. The original plan was recalculated with the same MU on the “average” scan, and target coverage and spinal cord dose were compared to the original plan. Results The average changes in minimum PTV dose, PTV coverage, max cord dose and volume of cord receiving 10 Gy were 0.6%, 0.8%, 0.3% and 4.4% (0.012 cc), respectively. Conclusion SAbR spine plans are surprisingly robust relative to surrounding organ motion due to respiration. Motion artifacts in helical planning CT scans do not cause clinically significant differences when these plans are re-calculated on pseudo-average CT reconstructions. This is likely due to the beam arrangement used, because only three beams pass through the liver and only one beam passes completely through the density override. The effect of respiratory motion on VMAT plans for spine SAbR is being evaluated.
Sunspot Activity Near Cycle Minimum and What it Might Suggest for Cycle 24, the Next Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2009-01-01
In late 2008, 12-month moving averages of sunspot number, number of spotless days, number of groups, area of sunspots, and area per group were reflective of sunspot cycle minimum conditions for cycle 24, these values being at or near record levels. The first spotless day occurred in January 2004 and the first new-cycle, high-latitude spot was reported in January 2008, although old-cycle, low-latitude spots continued to be seen through April 2009, yielding an overlap of old- and new-cycle spots of at least 16 months. New-cycle spots first became dominant over old-cycle spots in September 2008. The minimum value of the weighted mean latitude of sunspots occurred in May 2007, measuring 6.6 deg, and the minimum value of the highest-latitude spot followed in June 2007, measuring 11.7 deg. A cycle length of at least 150 months is inferred for cycle 23, making it the longest cycle of the modern era. Based on both the maximum-minimum and amplitude-period relationships, cycle 24 is expected to be only of average to below-average size, peaking probably in late 2012 to early 2013, unless it proves to be a statistical outlier.
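The 12-month moving averages used to date cycle minimum are conventionally computed with the 13-point, half-weighted-endpoint smoothing applied to monthly sunspot numbers. A minimal sketch under that assumption (the synthetic data here are illustrative, not from an actual archive):

```python
# Centered 12-month moving average of monthly values using the conventional
# 13-point window with half-weight endpoints, the standard smoothing for
# monthly sunspot numbers. Input data below are synthetic.

def moving_average_12(values):
    """Return the 13-point half-weighted centered moving average."""
    out = []
    for i in range(6, len(values) - 6):
        window = values[i - 6:i + 7]                       # 13 points
        s = 0.5 * (window[0] + window[-1]) + sum(window[1:-1])
        out.append(s / 12.0)
    return out

monthly = [50, 48, 45, 44, 40, 38, 35, 30, 28, 25, 24, 22, 20, 19, 18]
print(moving_average_12(monthly))  # smoothed values for the interior months
```

Because the window is centered, the smoothed series ends six months before the last observation, which is why a cycle minimum can only be confirmed with this statistic well after the fact.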
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.
2011-05-17
The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank to ensure uniformity of the discharge stream. Mixing is accomplished with one to four dual-nozzle slurry pumps located within the tank liquid. For this work, a Tank 48 simulation model with a maximum of four slurry pumps in operation has been developed to estimate flow patterns for efficient solid mixing. The modeling calculations were performed using two approaches. One is a single-phase Computational Fluid Dynamics (CFD) model to evaluate the flow patterns and qualitative mixing behaviors for a range of different modeling conditions, since the model was previously benchmarked against test results. The other is a two-phase CFD model to estimate solid concentrations quantitatively by solving the Eulerian governing equations for the continuous fluid and discrete solid phases over the entire fluid domain of Tank 48. The two-phase results should be considered preliminary scoping calculations, since the model has not yet been validated against test results. A series of sensitivity calculations for different numbers of pumps and operating conditions has been performed to provide operational guidance for solids suspension and mixing in the tank. In the analysis, the pump was assumed to be stationary. Major solid obstructions, including the pump housing, the pump columns, and the 82-inch central support column, were modeled. Steady-state, three-dimensional analyses with a two-equation turbulence model were performed with FLUENT™ for the single-phase approach and CFX for the two-phase approach. Recommended operational guidance was developed assuming that local fluid velocity can be used as a measure of sludge suspension and spatial mixing under the single-phase tank model.
For quantitative analysis, a two-phase fluid-solid model was developed for the same modeling conditions as the single-phase model. The modeling results show that the flow patterns driven by four-pump operation satisfy the solid suspension requirement, and the average solid concentration at the plane of the transfer pump inlet is about 12% higher than the tank average concentration for the 70-inch tank level and about the same as the tank average value for the 29-inch liquid level. When one of the four pumps is not operated, the flow patterns still satisfy the minimum suspension velocity criterion. However, the solid concentration near the tank bottom is increased by about 30%, although the average solid concentrations near the transfer pump inlet have about the same value as the four-pump baseline results. The flow pattern results show that although the two-pump case satisfies the minimum velocity requirement to suspend the sludge particles, it provides only marginal mixing for the heavier or larger insoluble materials such as MST and KTPB particles. The results demonstrated that when more than one jet is aimed at the same position in the mixing-tank domain, inefficient flow patterns result from highly localized momentum dissipation, producing an inactive suspension zone. Thus, after completion of the indexed solids suspension, pump rotations are recommended to avoid producing nonuniform flow patterns. It is noted that when the tank liquid level is reduced from the highest level of 70 inches to the minimum level of 29 inches for a given number of operating pumps, the solid mixing efficiency improves because the ratio of pump power to mixing volume becomes larger. These results are consistent with results in the literature.
Sanz-Mengibar, Jose Manuel; Altschuck, Natalie; Sanchez-de-Muniain, Paloma; Bauer, Christian; Santonja-Medina, Fernando
2017-04-01
To understand whether there is a trunk postural control threshold in the sagittal plane for the transition between Gross Motor Function Classification System (GMFCS) levels, measured with 3-dimensional gait analysis. Spine-angle kinematics from 97 children with spastic bilateral cerebral palsy, computed according to the Plug-In Gait model (Vicon), were plotted relative to their GMFCS level. Only the average and minimum values of the lumbar spine segment correlated with GMFCS levels. Maximal values at loading response correlated independently with age at all functional levels. Average and minimum values were significant when analyzing age in combination with GMFCS level. There are specific postural control patterns in the average and minimum values for the position between trunk and pelvis in the sagittal plane during gait for the transition among GMFCS I-III levels. Higher classifications of gross motor skills correlate with more extended spine angles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miliordos, Evangelos; Xantheas, Sotiris S.
We propose a general procedure for the numerical calculation of harmonic vibrational frequencies that is based on internal coordinates and Wilson’s GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90, and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C1 symmetry the computational savings in the energy calculations amount to 36N – 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium-size molecules in their minimum and transition-state geometries as well as hydrogen-bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm⁻¹ from those obtained from Cartesian coordinates.
Shrinkage Estimators for a Composite Measure of Quality Conceptualized as a Formative Construct
Shwartz, Michael; Peköz, Erol A; Christiansen, Cindy L; Burgess, James F; Berlowitz, Dan
2013-01-01
Objective To demonstrate the value of shrinkage estimators when calculating a composite quality measure as the weighted average of a set of individual quality indicators. Data Sources Rates of 28 quality indicators (QIs) calculated from the Minimum Data Set from residents of 112 Veterans Health Administration nursing homes in fiscal years 2005–2008. Study Design We compared composite scores calculated from the 28 QIs using both observed rates and shrunken rates derived from a Bayesian multivariate normal-binomial model. Principal Findings Shrunken-rate composite scores, because they take into account unreliability of estimates from small samples and the correlation among QIs, have more intuitive appeal than observed-rate composite scores. Facilities can be profiled based on more policy-relevant measures than point estimates of composite scores, and interval estimates can be calculated without assuming the QIs are independent. Usually, shrunken-rate composite scores in 1 year are better able to predict the observed total number of QI events or the observed-rate composite scores in the following year than the initial year observed-rate composite scores. Conclusion Shrinkage estimators can be useful when a composite measure is conceptualized as a formative construct. PMID:22716650
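The shrinkage idea above can be conveyed with a much simpler univariate estimator: a facility's observed rate is pulled toward the grand mean with a weight that grows with sample size. This is an illustrative sketch only; the paper itself uses a Bayesian multivariate normal-binomial model, and the `prior_strength` parameter here is a hypothetical tuning constant.

```python
# Illustrative univariate shrinkage toward the grand mean. Small facilities
# (unreliable observed rates) are shrunk strongly; large facilities barely
# at all. Not the paper's multivariate model, just the core idea.

def shrunken_rate(events, n, grand_rate, prior_strength=50):
    """Weighted average of the observed rate and the overall mean rate.

    prior_strength acts like a pseudo-sample size for the prior.
    """
    w = n / (n + prior_strength)          # reliability weight
    observed = events / n
    return w * observed + (1 - w) * grand_rate

# Same observed rate (50%), very different reliability:
print(shrunken_rate(5, 10, 0.10))      # small facility: pulled near the mean
print(shrunken_rate(500, 1000, 0.10))  # large facility: stays near 0.50
```

The weight w = n/(n + k) is the familiar beta-binomial posterior-mean form; the multivariate model in the paper additionally borrows strength across correlated QIs.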
The Minimum Binding Energy and Size of Doubly Muonic D3 Molecule
NASA Astrophysics Data System (ADS)
Eskandari, M. R.; Faghihi, F.; Mahdavi, M.
The minimum energy and size of the doubly muonic D3 molecule, in which two of the electrons are replaced by the much heavier muons, are calculated by the well-known variational method. The calculations show that the system possesses two minimum positions, one at the typical muonic distance and the second at the atomic distance. It is shown that at the muonic distance, the effective charge zeff is 2.9. We assumed a symmetric planar vibrational model between the two minima, and the oscillation potential energy is approximated in this region.
Cooper, Justin; Marx, Bernd; Buhl, Johannes; Hombach, Volker
2002-09-01
This paper investigates the minimum distance for a human body in the near field of a cellular telephone base station antenna for which there is compliance with the IEEE or ICNIRP threshold values for radio frequency electromagnetic energy absorption in the human body. First, local maximum specific absorption rates (SARs), measured and averaged over volumes equivalent to 1 and to 10 g tissue within the trunk region of a physical, liquid filled shell phantom facing and irradiated by a typical GSM 900 base station antenna, were compared to corresponding calculated SAR values. The calculation used a homogeneous Visible Human body model in front of a simulated base station antenna of the same type. Both real and simulated base station antennas operated at 935 MHz. Antenna-body distances were between 1 and 65 cm. The agreement between measurements and calculations was excellent. This gave confidence in the subsequent calculated SAR values for the heterogeneous Visible Human model, for which each tissue was assigned the currently accepted values for permittivity and conductivity at 935 MHz. Calculated SAR values within the trunk of the body were found to be about double those for the homogeneous case. When the IEEE standard and the ICNIRP guidelines are both to be complied with, the local SAR averaged over 1 g tissue was found to be the determining parameter. Emitted power values from the antenna that produced the maximum SAR value over 1 g specified in the IEEE standard at the base station are less than those needed to reach the ICNIRP threshold specified for the local SAR averaged over 10 g. For the GSM base station antenna investigated here operating at 935 MHz with 40 W emitted power, the model indicates that the human body should not be closer to the antenna than 18 cm for controlled environment exposure, or about 95 cm for uncontrolled environment exposure. These safe distance limits are for SARs averaged over 1 g tissue. 
The corresponding safety distance limits under the ICNIRP guidelines for SAR taken over 10 g tissue are 5 cm for occupational exposure and about 75 cm for general-public exposure. Copyright 2002 Wiley-Liss, Inc.
EnviroAtlas - Potential Evapotranspiration 1950 - 2099 for the Conterminous United States
The EnviroAtlas Climate Scenarios were generated from NASA Earth Exchange (NEX) Downscaled Climate Projections (NEX-DCP30) ensemble averages (the average of over 30 available climate models) for each of the four representative concentration pathways (RCP) for the contiguous U.S. at 30 arc-second (approx. 800 m) spatial resolution. In addition to the three climate variables provided by the NEX-DCP30 dataset (minimum monthly temperature, maximum monthly temperature, and precipitation), a corresponding estimate of potential evapotranspiration (PET) was developed to match the spatial and temporal scales of the input dataset. PET represents the cumulative amount of water returned to the atmosphere due to evaporation from Earth's surface and plant transpiration under ideal circumstances (i.e., a vegetated surface shading the ground and unlimited water supply). PET was calculated using the Hamon PET equation (Hamon, 1961) and the CBM model for daylength (Forsythe et al. 1995) for the 4 RCPs (2.6, 4.5, 6.0, 8.5) and organized by season (Winter, Spring, Summer, and Fall) and annually for the years 2006–2099. Additionally, PET was calculated for the ensemble average of all historic runs and organized similarly for the years 1950–2005. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use mapping application.
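One common formulation of the Hamon (1961) PET equation can be sketched as below. Coefficients for this equation vary across the literature, so this is a hedged illustration and may not match the exact variant used to build the EnviroAtlas layers.

```python
import math

# A common form of the Hamon (1961) potential evapotranspiration equation:
# PET depends only on mean air temperature (via saturated vapor density)
# and daylength. Coefficients follow one frequently cited variant and are
# an assumption, not necessarily the EnviroAtlas implementation.

def hamon_pet(temp_c, daylength_hours):
    """Potential evapotranspiration in mm/day.

    temp_c          : mean daily air temperature (degC)
    daylength_hours : hours of daylight (e.g., from the CBM daylength model)
    """
    # saturation vapor pressure (mb), Magnus-type approximation
    esat = 6.108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    # saturated vapor density (g/m^3)
    rhosat = 216.7 * esat / (temp_c + 273.3)
    ld = daylength_hours / 12.0  # daylength as a multiple of 12 h
    return 0.1651 * ld * rhosat

# A mild 20 degC day with 12 h of daylight yields roughly 2-3 mm/day
print(round(hamon_pet(20.0, 12.0), 2))
```

Because PET here depends only on temperature and daylength, it can be computed directly from the NEX-DCP30 monthly temperature grids without additional input variables, which is what makes it a convenient companion layer.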
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Guangsheng; Tan, Zhenyu, E-mail: tzy@sdu.edu.cn; Pan, Jie
In this work, a comparative study on the frequency effects of the electrical characteristics of the pulsed dielectric barrier discharges in He/O₂ and in Ar/O₂ at atmospheric pressure has been performed by means of numerical simulation based on a 1-D fluid model at frequencies below 100 kHz. The frequency dependences of the characteristic quantities of the discharges in the two gases have been systematically calculated and analyzed at oxygen concentrations below 2%. The characteristic quantities include the discharge current density, the averaged electron density, the electric field, and the averaged electron temperature. Especially, the frequency effects on the averaged particle densities of the reactive species have also been calculated. This work gives the following significant results. For the two gases, there are two bipolar discharges in one period of the applied voltage pulse under the considered frequency range and oxygen concentrations, as occurs in the pure noble gases. The frequency affects both discharges in He/O₂, but in Ar/O₂ it induces a strong effect only on the first discharge. For the first discharge in each gas, there is a characteristic frequency at which the characteristic quantities reach their respective minimum, and this frequency appears earlier for Ar/O₂. For the second discharge in Ar/O₂, the averaged electron density presents a slight variation with the frequency. In addition, the discharge in Ar/O₂ is strong and the averaged electron temperature is low, compared to those in He/O₂. The total averaged particle density of the reactive species in Ar/O₂ is larger than that in He/O₂ by about one order of magnitude.
NASA Astrophysics Data System (ADS)
Hadder, Eric Michael
There are many computer-aided engineering tools and software packages used by aerospace engineers to design and predict specific parameters of an airplane. These tools help a design engineer predict and calculate parameters such as lift, drag, pitching moment, takeoff range, maximum takeoff weight, maximum flight range and much more. However, there are very limited ways to predict and calculate the minimum control speeds of an airplane in engine-inoperative flight. There are simple solutions, as well as complicated solutions, yet there is neither a standard technique nor consistency throughout the aerospace industry. To further complicate this subject, airplane designers have the option of using an Automatic Thrust Control System (ATCS), which directly alters the minimum control speeds of an airplane. This work addresses this issue with a tool used to predict and calculate the Minimum Control Speed on the Ground (VMCG) as well as the Minimum Control Airspeed (VMCA) of any existing or design-stage airplane. With simple line art of an airplane, a program called VORLAX is used to generate an aerodynamic database used to calculate the stability derivatives of an airplane. Using another program called Numerical Propulsion System Simulation (NPSS), a propulsion database is generated to use with the aerodynamic database to calculate both VMCG and VMCA. This tool was tested using two airplanes, the Airbus A320 and the Lockheed Martin C130J-30 Super Hercules. The A320 does not use an Automatic Thrust Control System (ATCS), whereas the C130J-30 does. Because the tool was able to properly calculate and match known values of VMCG and VMCA for both airplanes, it should also be able to predict the VMCG and VMCA of an airplane in the preliminary stages of design.
This would allow design engineers the ability to use an Automatic Thrust Control System (ATCS) as part of the design of an airplane and still have the ability to predict the VMCG and VMCA of the airplane.
Three-dimensional modeling and animation of two carpal bones: a technique.
Green, Jason K; Werner, Frederick W; Wang, Haoyu; Weiner, Marsha M; Sacks, Jonathan M; Short, Walter H
2004-05-01
The objectives of this study were to (a) create 3D reconstructions of two carpal bones from single CT data sets and animate these bones with experimental in vitro motion data collected during dynamic loading of the wrist joint, (b) develop a technique to calculate the minimum interbone distance between the two carpal bones, and (c) validate the interbone distance calculation process. This method utilized commercial software to create the animations and an in-house program to interface with three-dimensional CAD software to calculate the minimum distance between the irregular geometries of the bones. This interbone minimum distance provides quantitative information regarding the motion of the bones studied and may help to understand and quantify the effects of ligamentous injury.
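The minimum interbone distance concept can be sketched with a brute-force search over two vertex clouds. This is only a coarse approximation of the study's surface-to-surface computation (which used CAD software on CT-derived meshes); the bone names and coordinates below are purely illustrative.

```python
# Minimal sketch: smallest Euclidean distance between two bone surfaces
# represented as 3D vertex clouds. Vertex-to-vertex search only; a true
# surface-to-surface distance (as in the study) would also consider
# points on faces and edges, not just vertices.

def min_interbone_distance(bone_a, bone_b):
    """Smallest distance between any vertex of bone_a and any of bone_b."""
    best = float("inf")
    for ax, ay, az in bone_a:
        for bx, by, bz in bone_b:
            d2 = (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
            if d2 < best:
                best = d2
    return best ** 0.5

# Illustrative vertex clouds (coordinates in mm, hypothetical)
scaphoid = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
lunate = [(3.0, 0.0, 0.0), (4.0, 1.0, 0.0)]
print(min_interbone_distance(scaphoid, lunate))  # → 2.0
```

For realistic meshes with thousands of vertices, a spatial index (e.g., a k-d tree) replaces the O(nm) double loop, but the quantity computed is the same.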
Prediction of Cycle 25 based on Polar Fields
NASA Astrophysics Data System (ADS)
Svalgaard, Leif; Sun, Xudong; Bobra, Monica
2016-10-01
WSO: The pole-most aperture measures the line-of-sight field between about 55° and the poles. Every 10 days, the usable daily polar field measurements in a centered 30-day window are averaged. A 20 nHz low-pass filter eliminates yearly geometric projection effects. SDO-HMI: Line-of-sight magnetic observations (Blos above 60° lat.) at 720 s cadence are converted to radial field (Br), under the assumption that the actual field vector is radial. Twice-per-day values are calculated as the mean weighted by de-projected image pixel areas for each latitudinal bin within ±45° longitude. These raw (12-hour) data have been averaged into the same windows as WSO's and reduced to the WSO scale taking saturation (1.8) and projection (cos 72°) into account. We have argued that the 'poloidal' field in the years leading up to solar minimum is a good proxy for the size of the next cycle (SNmax ≈ DM [WSO scale, μT]). The successful prediction of Cycle 24 seems to bear that out, as does the observed corroboration from previous cycles. As a measure of the poloidal field we used the average 'Dipole Moment', i.e. the difference, DM, between the fields at the North pole and the South pole. The 20 nHz filtered WSO DM matches the HMI DM well on the WSO scale using the same 30-day window as WSO, so we can extend WSO with HMI into the future as needed. Preliminarily, the polar fields now are as strong as before the last minimum and may increase further, so Cycle 25 may be at least a repeat of Cycle 24, not any smaller and possibly a bit stronger.
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...
Markoulli, Maria; Duong, Tran Bao; Lin, Margaret; Papas, Eric
2018-02-01
To compare non-invasive break-up time (NIBUT) when measured with the Tearscope-Plus™ and the Oculus® Keratograph 5M, and to compare lipid layer thicknesses (LLT) when measured with the Tearscope-Plus™ and the LipiView®. This study also set out to establish the repeatability of these methods. The following measurements were taken from both eyes of 24 participants on two occasions: non-invasive keratograph break-up time using the Oculus® (NIKBUT-1 and NIKBUT-average), NIBUT using the Tearscope-Plus™, and LLT using the LipiView® (minimum, maximum, and average) and Tearscope-Plus™. The Tearscope-Plus™ grades were converted to nanometers. There were no significant differences between eyes (Tearscope-Plus™ NIBUT: p = 0.52; NIKBUT-1: p = 0.052; NIKBUT-average: p = 0.73; Tearscope-Plus™ LLT: p = 0.13; LipiView® average, maximum, or minimum: p = 0.68, 0.39 and 0.50, respectively) or days (Tearscope-Plus™ NIBUT: p = 0.32; NIKBUT-1: p = 0.65; NIKBUT-average: p = 0.54; Tearscope-Plus™ LLT: p = 0.26; LipiView® average, maximum, or minimum: p = 0.20, 0.09, and 0.10, respectively). LLT was significantly greater with the Tearscope-Plus™ (80.4 ± 34.0 nm) compared with the LipiView® average (56.3 ± 16.1 nm, p = 0.007), minimum (50.1 ± 15.8 nm, p < 0.001) but not maximum (67.2 ± 19.6 nm, p = 0.55). NIBUT was significantly greater with the Tearscope-Plus™ (15.9 ± 10.7 seconds) compared to the NIKBUT-1 (8.2 ± 3.5 seconds, p = 0.006) but not NIKBUT-average (10.9 ± 3.9 seconds, p = 0.08). The Tearscope-Plus™ is not interchangeable with either the Oculus® K5M measurement of tear stability (NIKBUT-1) or the LipiView® maximum and minimum lipid thickness.
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
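The frequency-domain MACE design referred to above minimizes the average correlation-plane energy subject to hard correlation-peak constraints at the training images, via the closed form h = D⁻¹X(X⁺D⁻¹X)⁻¹u. A minimal numpy sketch of that formulation, with 1-D random "images" standing in for real training data:

```python
import numpy as np

def mace_filter(images, peaks):
    """Frequency-domain Minimum Average Correlation Energy (MACE) filter.

    images: (N, d) array, each row a vectorized training image.
    peaks:  (N,) desired correlation-peak values (often all ones).
    Returns the filter in the frequency domain, shape (d,).
    """
    X = np.fft.fft(images, axis=1).T          # d x N matrix of image spectra
    D = np.mean(np.abs(X) ** 2, axis=1)       # average power spectrum (diagonal of D)
    Dinv_X = X / D[:, None]                   # D^-1 X
    A = X.conj().T @ Dinv_X                   # N x N matrix X^+ D^-1 X
    h = Dinv_X @ np.linalg.solve(A, peaks.astype(complex))
    return h

# Illustrative training set: three random 64-sample "images"
rng = np.random.default_rng(0)
imgs = rng.standard_normal((3, 64))
h = mace_filter(imgs, np.ones(3))
# The hard constraints X^+ h = u should hold exactly for the training set:
X = np.fft.fft(imgs, axis=1).T
print(np.allclose(X.conj().T @ h, np.ones(3)))
```

Note the circular-correlation limitation mentioned in the abstract is inherent to this FFT formulation; the space-domain method in the paper is precisely what avoids it.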
Code of Federal Regulations, 2011 CFR
2011-07-01
... time 1 Method for demonstrating compliance 2 Particulate matter mg/dscm (gr/dscf) 197 (0.086) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method 26A or 29 of appendix A-8 of part 60. Carbon monoxide ppmv 40 3-run average (1-hour minimum...
Code of Federal Regulations, 2011 CFR
2011-07-01
... time 1 Method for demonstrating compliance 2 Particulate matter mg/dscm (gr/dscf) 87 (0.038) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method 26A or 29 of appendix A-8 of part 60. Carbon monoxide ppmv 20 3-run average (1-hour minimum...
Code of Federal Regulations, 2013 CFR
2013-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10..., appendix A-3 or appendix A-8). Sulfur dioxide 11 parts per million dry volume 3-run average (1 hour minimum... Apply to Incinerators on and After [Date to be specified in state plan] a 6 Table 6 to Subpart DDDD of...
Code of Federal Regulations, 2014 CFR
2014-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10..., appendix A-3 or appendix A-8). Sulfur dioxide 11 parts per million dry volume 3-run average (1 hour minimum... Apply to Incinerators on and After [Date to be specified in state plan] a 6 Table 6 to Subpart DDDD of...
NASA Astrophysics Data System (ADS)
Loboda, I. P.; Bogachev, S. A.
2015-07-01
We employ an automated detection algorithm to perform a global study of solar prominence characteristics. We process four months of TESIS observations in the He II 304 Å line taken close to the solar minimum of 2008-2009 and mainly focus on quiescent and quiescent-eruptive prominences. We detect a total of 389 individual features ranging from 25×25 to 150×500 Mm² in size and obtain distributions of many of their spatial characteristics, such as latitudinal position, height, size, and shape. To study their dynamics, we classify prominences as either stable or eruptive and calculate their average centroid velocities, which are found to rarely exceed 3 km/s. In addition, we give rough estimates of mass and gravitational energy for every detected prominence and use these values to estimate the total mass and gravitational energy of all simultaneously existing prominences (10^12-10^14 kg and 10^29-10^31 erg). Finally, we investigate the form of the gravitational energy spectrum of prominences and derive it to be a power law of index -1.1 ± 0.2.
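A power-law index like the one derived for the gravitational-energy spectrum can be estimated from a sample of per-prominence energies by a least-squares fit in log-log space to a binned spectrum. A sketch with synthetic energies drawn from a known power law (the index, range, and sample size below are illustrative, not the paper's data):

```python
import numpy as np

def powerlaw_index(values, nbins=20):
    """Estimate the index alpha of dN/dE ~ E^alpha from a sample,
    via a least-squares fit to the log-log binned spectrum."""
    edges = np.logspace(np.log10(values.min()), np.log10(values.max()), nbins + 1)
    counts, _ = np.histogram(values, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    dens = counts / widths                      # differential spectrum dN/dE
    ok = counts > 0
    slope, _ = np.polyfit(np.log10(centers[ok]), np.log10(dens[ok]), 1)
    return slope

# Synthetic sample drawn from dN/dE ~ E^-1.1 on [1e29, 1e31] erg via inverse CDF
rng = np.random.default_rng(1)
alpha = -1.1
a, b = 1e29, 1e31
u = rng.uniform(size=20000)
E = (a**(alpha + 1) + u * (b**(alpha + 1) - a**(alpha + 1))) ** (1 / (alpha + 1))
print(powerlaw_index(E))
```

The fit recovers an index close to the input -1.1; with real data the bin count and energy range would need the same care the paper's error bar (±0.2) implies.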
Determination of the Residence Time of Food Particles During Aseptic Sterilization
NASA Technical Reports Server (NTRS)
Carl, J. R.; Arndt, G. D.; Nguyen, T. X.
1994-01-01
The paper describes a non-invasive method to measure the time an individual particle takes to move through a length of stainless steel pipe. The food product is in two-phase flow (liquids and solids) and passes through a pipe at pressures of approximately 60 psig and temperatures of 270-285 F. The proposed solution is based on the detection of transitory amplitude and/or phase changes in a microwave transmission path caused by the passage of the particles of interest. The particles are enhanced in some way, as will be discussed later, such that they provide transitory changes distinctive enough not to be mistaken for normal variations in the received signal (caused by the non-homogeneous nature of the medium). Two detectors (transmission paths across the pipe) will be required, placed at a known separation. A minimum transit time calculation is made, from which the maximum velocity can be determined. This provides the minimum residence time. Average velocity and statistical variations can also be computed so that the amount of 'over-cooking' can be determined.
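The two-detector logic can be sketched numerically: the shortest detector-to-detector transit time gives the maximum particle velocity, and hence the minimum residence time over the full holding length. All numbers below are illustrative, not measured values from the paper:

```python
def min_residence_time(detector_sep_m, transit_times_s, pipe_length_m):
    """Estimate minimum and average residence time of particles in a holding tube.

    detector_sep_m: distance between the two microwave detection paths.
    transit_times_s: measured detector-to-detector transit times of tagged particles.
    pipe_length_m: total length of the holding section.
    Returns (min_residence_s, avg_residence_s).
    """
    v_max = detector_sep_m / min(transit_times_s)                  # fastest particle
    v_avg = detector_sep_m / (sum(transit_times_s) / len(transit_times_s))
    return pipe_length_m / v_max, pipe_length_m / v_avg

# Hypothetical run: detectors 0.5 m apart, 12 m holding tube
t_min, t_avg = min_residence_time(0.5, [0.21, 0.25, 0.30, 0.27], 12.0)
print(round(t_min, 2), round(t_avg, 2))  # minimum and average residence time, s
```

The minimum residence time is the quantity of regulatory interest (sterilization safety), while the spread of transit times quantifies the 'over-cooking' margin.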
Natural and anthropogenic radioactivity in the environment of Kopaonik mountain, Serbia.
Mitrović, Branislava; Ajtić, Jelena; Lazić, Marko; Andrić, Velibor; Krstić, Nikola; Vranješ, Borjana; Vićentijević, Mihajlo
2016-08-01
To evaluate the state of the environment in Kopaonik, a mountain in Serbia, the activity concentrations of (40)K, (226)Ra, (232)Th and (137)Cs in five different types of environmental samples are determined by gamma ray spectrometry, and the radiological hazard due to terrestrial radionuclides is calculated. The mean activity concentrations of natural radionuclides in the soil are higher than the global average. However, with the exception of two sampling locations, the external radiation hazard index is below one, implying an insignificant radiation hazard. Apart from (40)K, the content of the natural radionuclides is predominantly below the minimum detectable activities in grass and cow milk, but not in mosses. Although (137)Cs is present in the soil, grass, mosses and herbal plants, its specific activity in cow milk is below the minimum detectable activity. Amongst the investigated herbal plants, Vaccinium myrtillus L. shows accumulating properties, as a high content of (137)Cs is detected therein. Therefore, moderation is advised in consuming Vaccinium myrtillus L. tea. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Soohaeng; Apra, Edoardo; Zeng, Xiao Cheng
The lowest-energy structures of water clusters (H2O)16 and (H2O)17 were revisited at the MP2 and CCSD(T) levels of theory. A new global minimum structure for (H2O)16 was found at the MP2 and CCSD(T) levels of theory, and the effect of zero-point energy corrections on the relative stability of the low-lying minimum energy structures was assessed. For (H2O)17, the CCSD(T) calculations confirm the "interior" arrangement (fully coordinated water molecule inside a spherical cluster), previously found at the MP2 level of theory, as the global minimum.
[Embryo volume estimated by three-dimensional ultrasonography at seven to ten weeks of pregnancy].
Filho, João Bortoletti; Nardozza, Luciano Marcondes Machado; Araujo Júnior, Edward; Rôlo, Líliam Cristine; Nowak, Paulo Martin; Moron, Antonio Fernandes
2008-10-01
To evaluate the embryo volume (EV) between the seventh and the tenth gestational week by three-dimensional ultrasonography. A cross-sectional study of 63 normal pregnant women between the seventh and the tenth gestational week. The ultrasonographic exams were performed with a volumetric abdominal transducer. Virtual Organ Computer-aided Analysis (VOCAL) was used to calculate EV, with a rotation angle of 12° and a delimitation of 15 sequential slices. The average, median, standard deviation and maximum and minimum values were calculated for the EV at all gestational ages. A dispersion graphic was drawn to assess the correlation between EV and the craniogluteal length (CGL), the adjustment being done by the determination coefficient (R²). To determine EV reference intervals as a function of the CGL, the following formula was used: percentile = EV ± K × SD, with K = 1.96. CGL varied from 9.0 to 39.7 mm, with an average of 23.9 mm (±7.9 mm), while EV varied from 0.1 to 7.6 cm³, with an average of 2.7 cm³ (±3.2 cm³). EV was highly correlated to CGL, the best adjustment being obtained with quadratic regression (EV = 0.2 - 0.055 × CGL + 0.005 × CGL²; R² = 0.8). The average EV varied from 0.1 (-0.3 to 0.5 cm³) to 6.7 cm³ (3.8 to 9.7 cm³) within the interval of 9 to 40 mm of CGL. EV increased 67 times over this interval, while CGL increased only 4.4 times. EV is a more sensitive parameter than CGL to evaluate embryo growth between the seventh and the tenth week of gestation.
NASA Astrophysics Data System (ADS)
Hirano, Tsuneo; Nagashima, Umpei; Jensen, Per
2018-04-01
For NCS in the X̃ ²Π electronic ground state, three-dimensional potential energy surfaces (3D PESs) have been calculated ab initio at the core-valence, full-valence MR-SDCI+Q/[aug-cc-pCVQZ (N, C, S)] level of theory. The ab initio 3D PESs are employed in second-order-perturbation-theory and DVR3D calculations to obtain various molecular constants and ro-vibrationally averaged structures. The 3D PESs show that the X̃ ²Π NCS has its potential minimum at a linear configuration, and hence it is a "linear molecule." The equilibrium structure has re(N-C) = 1.1778 Å, re(C-S) = 1.6335 Å, and ∠e(N-C-S) = 180°. The ro-vibrationally averaged structure, determined as expectation values over DVR3D wavefunctions, has ⟨r(N-C)⟩₀ = 1.1836 Å, ⟨r(C-S)⟩₀ = 1.6356 Å, and ⟨∠(N-C-S)⟩₀ = 172.5°. Using these expectation values as the initial guess, a bent r₀ structure having an ⟨∠(N-C-S)⟩₀ of 172.2° is deduced from the experimentally reported B₀ values for NC³²S and NC³⁴S. Our previous prediction that a linear molecule, in any ro-vibrational state including the ro-vibrational ground state, is to be "observed" as being bent on ro-vibrational average has been confirmed here theoretically, through the expectation value of the bond-angle deviation from linearity, ⟨ρ̄⟩, and experimentally, through the interpretation of the experimentally derived rotational-constant values.
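The "bent on ro-vibrational average" result has a simple statistical illustration: for the doubly degenerate bending coordinate of a linear molecule, the deviation from linearity ρ = √(x² + y²) has a strictly positive expectation value even when the vibrational distribution peaks at the linear geometry. A Monte Carlo sketch (the Gaussian amplitude below is arbitrary, not NCS's actual bending wavefunction):

```python
import numpy as np

# For a linear molecule the bending displacement has two degenerate
# components (x, y); the bond-angle deviation from linearity is
# rho = sqrt(x^2 + y^2). Even when the distribution peaks at the linear
# configuration (x = y = 0), <rho> > 0: observed as bent on average.
rng = np.random.default_rng(2)
sigma = 0.1                                  # illustrative bending amplitude (rad)
x, y = rng.normal(0, sigma, (2, 200000))
rho_avg = np.mean(np.hypot(x, y))
# For Gaussian components, rho is Rayleigh-distributed: <rho> = sigma*sqrt(pi/2)
print(np.isclose(rho_avg, sigma * np.sqrt(np.pi / 2), rtol=0.01))
```

This is the generic mechanism behind the paper's ⟨∠(N-C-S)⟩₀ ≈ 172.5° despite the 180° equilibrium geometry.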
NASA Astrophysics Data System (ADS)
Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.
2016-11-01
Extreme rainfall events are the main triggers of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data on past events and causal factors, and the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley in the north-eastern Italian Alps, an area prone to debris flows and floods. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability, resulting in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show good performance when compared to the historical damage reports.
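The annual risk figure that closes such an assessment is the area under each risk curve (the expected annual loss implied by the loss-exceedance relation). A sketch with hypothetical minimum/average/maximum curves, integrated by the trapezoidal rule:

```python
import numpy as np

def annual_risk(exceedance_prob, loss):
    """Expected annual loss: area under a loss-exceedance (risk) curve,
    integrated with the trapezoidal rule."""
    p = np.asarray(exceedance_prob, float)
    l = np.asarray(loss, float)
    order = np.argsort(l)
    p, l = p[order], l[order]
    return float(np.sum(np.diff(l) * (p[1:] + p[:-1]) / 2))

# Hypothetical annual exceedance probabilities vs loss (million EUR) --
# three curves expressing the min/average/max uncertainty bounds
loss = [0.0, 1.0, 5.0, 20.0]
curves = {"min": [0.05, 0.02, 0.005, 0.001],
          "avg": [0.10, 0.04, 0.010, 0.002],
          "max": [0.20, 0.08, 0.020, 0.004]}
for name, p in curves.items():
    print(name, round(annual_risk(p, loss), 3))
```

Propagating the minimum, average and maximum inputs through the same integration gives the banded annual risk estimate the study reports.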
Estimating wheat and maize daily evapotranspiration using artificial neural network
NASA Astrophysics Data System (ADS)
Abrishami, Nazanin; Sepaskhah, Ali Reza; Shahrokhnia, Mohammad Hossein
2018-02-01
In this research, an artificial neural network (ANN) is used for estimating wheat and maize daily standard evapotranspiration. Ten ANN models with different structures were designed for each crop. Daily climatic data [maximum temperature (Tmax), minimum temperature (Tmin), average temperature (Tave), maximum relative humidity (RHmax), minimum relative humidity (RHmin), average relative humidity (RHave), wind speed (U2), sunshine hours (n), net radiation (Rn)], leaf area index (LAI), and plant height (h) were used as inputs. For five of the ten structures, the evapotranspiration (ETc) values calculated by the equation ETc = ET0 × Kc (ET0 from the Penman-Monteith equation and Kc from FAO-56; ANNC) were used as outputs, and for the other five structures, the ETc values measured by weighing lysimeter (ANNM) were used as outputs. In all structures, a feed-forward multilayer network with one or two hidden layers, a sigmoid transfer function, and the BR or LM training algorithm was used. The preferred network was selected based on various statistical criteria. The results showed the suitable capability and acceptable accuracy of the ANNs, particularly those having two hidden layers, in estimating daily evapotranspiration. The best model for estimation of maize daily evapotranspiration is «M»ANN1C (8-4-2-1), with Tmax, Tmin, RHmax, RHmin, U2, n, LAI, and h as input data and the LM training rule; its statistical parameters (NRMSE, d, and R²) are 0.178, 0.980, and 0.982, respectively. The best model for estimation of wheat daily evapotranspiration is «W»ANN5C (5-2-3-1), with Tmax, Tmin, Rn, LAI, and h as input data and the LM training rule; its statistical parameters (NRMSE, d, and R²) are 0.108, 0.987, and 0.981, respectively. In addition, if the calculated ETc is used as the output of the network, a more accurate estimate is obtained for both wheat and maize. Therefore, ANN is a suitable method for estimating evapotranspiration of wheat and maize.
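The ranking criteria named in the abstract (NRMSE, Willmott's index of agreement d, and R²) are straightforward to compute once observed and simulated ETc series are available. A sketch with hypothetical values (the sample numbers below are made up for illustration):

```python
import numpy as np

def fit_stats(obs, sim):
    """Statistical criteria used to rank model structures:
    normalized RMSE, Willmott's index of agreement d, and R^2."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    nrmse = rmse / obs.mean()
    d = 1 - np.sum((sim - obs) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return nrmse, d, r2

# Hypothetical daily ETc (mm/day): lysimeter observations vs ANN estimates
obs = [3.1, 4.0, 5.2, 6.1, 4.8, 3.9]
sim = [3.0, 4.2, 5.0, 6.3, 4.7, 4.1]
nrmse, d, r2 = fit_stats(obs, sim)
print(nrmse < 0.1, d > 0.95, r2 > 0.9)
```

Lower NRMSE and d, R² values near 1 indicate a better structure, which is how the (8-4-2-1) and (5-2-3-1) networks were singled out.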
Low-flow characteristics of streams in Ohio through water year 1997
Straub, David E.
2001-01-01
This report presents selected low-flow and flow-duration characteristics for 386 sites throughout Ohio. These sites include 195 long-term continuous-record stations with streamflow data through water year 1997 (October 1 to September 30) and 191 low-flow partial-record stations with measurements extending into water year 1999. The characteristics presented for the long-term continuous-record stations are minimum daily streamflow; average daily streamflow; harmonic mean flow; 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 5-, 10-, 20-, and 50-year recurrence intervals; and 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent daily duration flows. The characteristics presented for the low-flow partial-record stations are minimum observed streamflow; estimated 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 10-, and 20-year recurrence intervals; and estimated 98-, 95-, 90-, 85- and 80-percent daily duration flows. The low-flow frequency and duration analyses were done for three seasonal periods (warm weather, May 1 to November 30; winter, December 1 to February 28/29; and autumn, September 1 to November 30), plus the annual period based on the climatic year (April 1 to March 31).
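The n-day minimum average low flow underlying these statistics is the smallest n-day moving-average flow in the record; frequency analysis then attaches recurrence intervals (e.g. the 7-day, 10-year low flow). A sketch with a short hypothetical daily record:

```python
import numpy as np

def n_day_min_avg(flows, n):
    """Lowest n-day moving-average flow in a record -- the statistic behind
    the 1-, 7-, 30-, and 90-day minimum average low flows."""
    flows = np.asarray(flows, float)
    kernel = np.ones(n) / n
    return np.convolve(flows, kernel, mode="valid").min()

daily = [12, 10, 9, 7, 6, 6, 5, 7, 9, 14, 20, 18]   # hypothetical daily flows (cfs)
print(n_day_min_avg(daily, 1), n_day_min_avg(daily, 7))
```

The 1-day statistic is simply the minimum daily flow; longer windows smooth out single dry days, which is why the 7- and 30-day values are the ones used for permitting and drought analysis.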
NASA Technical Reports Server (NTRS)
Teren, F.
1977-01-01
Minimum-time accelerations of aircraft turbofan engines are presented. These accelerations were calculated using a piecewise-linear engine model and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.
Journal of Chinese Society of Astronautics (Selected Articles),
1983-03-10
Calculation of Minimum Entry Heat Transfer Shape of a Space Vehicle, by Zhou Qicheng. ABSTRACT: This paper dealt with how the minimum entry heat transfer shape under specified fineness ratio and total vehicle weight conditions could be obtained using a variational method.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to Each Mothership Under... Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to Each...-out allocation (2,220) Column G Number of Chinook salmon deducted from the annual threshold amount of...
Stockwell, Tim; Zhao, Jinhui; Sherk, Adam; Callaghan, Russell C; Macdonald, Scott; Gatley, Jodi
2017-07-01
Saskatchewan's introduction in April 2010 of minimum prices graded by alcohol strength led to an average minimum price increase of 9.1% per Canadian standard drink (= 13.45 g ethanol). This increase was shown to be associated with reduced consumption and switching to lower-alcohol-content beverages. Police also informally reported marked reductions in night-time alcohol-related crime. This study aims to assess the impacts of changes to Saskatchewan's minimum alcohol-pricing regulations between 2008 and 2012 on selected crime events often related to alcohol use. Data were obtained from Canada's Uniform Crime Reporting Survey. Auto-regressive integrated moving average time series models were used to test immediate and lagged associations between minimum price increases and rates of night-time and police-identified alcohol-related crimes. Controls were included for simultaneous crime rates in the neighbouring province of Alberta, economic variables, linear trend, seasonality and autoregressive and/or moving-average effects. The introduction of increased minimum alcohol prices was associated with an abrupt decrease in night-time alcohol-related traffic offences for men (-8.0%, P < 0.001), but not for women. No significant immediate changes were observed for non-alcohol-related driving offences, disorderly conduct or violence. Significant monthly lagged effects were observed for violent offences (-19.7% at month 4 to -18.2% at month 6), which broadly corresponded to lagged effects in on-premise alcohol sales. Increased minimum alcohol prices may contribute to reductions in alcohol-related traffic and violent crimes perpetrated by men. Observed lagged effects for violent incidents may be due to a delay in bars passing on increased prices to their customers, perhaps because of inventory stockpiling. [Stockwell T, Zhao J, Sherk A, Callaghan RC, Macdonald S, Gatley J. Assessing the impacts of Saskatchewan's minimum alcohol pricing regulations on alcohol-related crime. 
Drug Alcohol Rev 2017;36:492-501]. © 2016 Australasian Professional Society on Alcohol and other Drugs.
Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location
NASA Astrophysics Data System (ADS)
Zhao, A. H.
2014-12-01
Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method of calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci of satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the focal locus segments are then calculated with the minimum traveltime tree algorithm for tracing rays by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.
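In graph terms, the minimum traveltime tree at the core of the method is a shortest-path tree over the model nodes, which Dijkstra's algorithm builds; following predecessors back from any reference point recovers its ray path. A minimal sketch on a toy graph (the nodes and leg traveltimes are hypothetical):

```python
import heapq

def min_traveltime_tree(neighbors, source):
    """Minimum traveltime tree via Dijkstra's algorithm.

    neighbors: dict node -> list of (adjacent node, leg traveltime).
    Returns (traveltimes, predecessor) dicts; following predecessors
    back from any node gives its ray path to the source.
    """
    times = {source: 0.0}
    pred = {source: None}
    heap = [(0.0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > times.get(u, float("inf")):
            continue                       # stale heap entry
        for v, dt in neighbors[u]:
            if t + dt < times.get(v, float("inf")):
                times[v] = t + dt
                pred[v] = u
                heapq.heappush(heap, (t + dt, v))
    return times, pred

# Toy graph of four model-cell nodes with leg traveltimes in seconds
g = {"A": [("B", 1.0), ("C", 2.5)],
     "B": [("A", 1.0), ("D", 1.2)],
     "C": [("A", 2.5), ("D", 0.4)],
     "D": [("B", 1.2), ("C", 0.4)]}
times, pred = min_traveltime_tree(g, "A")
print(round(times["D"], 2), pred["D"])
```

The improved method in the paper repeatedly reruns exactly this construction, each time seeded at the minimum-residual reference point not yet traced, so that every segment of a multi-segment locus is covered.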
The advantage of calculating emission reduction with local emission factor in South Sumatera region
NASA Astrophysics Data System (ADS)
Buchari, Erika
2017-11-01
Greenhouse gases (GHG), which have different Global Warming Potentials, are usually expressed in CO2 equivalent. Germany succeeded in reducing CO2 emissions in the 1990s, while Japan has increased the load factor of its public transport since 2001. Indonesia's National Medium-Term Development Plan, 2015-2019, set a target of a minimum 26% and maximum 41% national emission reduction by 2019. The Intergovernmental Panel on Climate Change (IPCC) defined three levels of accuracy for calculating GHG emissions: tier 1, tier 2, and tier 3. In tier 1, the calculation is based on fuel used and default (average) emission factors obtained from statistical data. In tier 2, the calculation is based on fuel used and local emission factors. Tier 3 is more accurate than tiers 1 and 2, with the calculation based on fuel used obtained from a modelling method or from direct measurement. This paper aims to evaluate the calculations with tier 2 and tier 3 in the South Sumatera region. In the 2012 Regional Action Plan for Greenhouse Gases of South Sumatera, the emission projected for 2020 without mitigation is about 6,569,000 tons per year, while the tier 3 calculation gives about 6,229,858 tons per year. It was found that the tier 3 calculation is more accurate in terms of the fuel used by the various vehicle types, so that mitigation actions can be planned more realistically.
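The practical difference between the tiers reduces to which emission factors multiply the fuel totals: tier 1 applies one default factor to all fuel use, while tier 2 applies locally measured factors per source class. A sketch with hypothetical fuel totals and factors (not South Sumatera's actual values):

```python
# Hypothetical annual fuel use by vehicle class (litres/year)
fuel_used = {"car": 120000.0, "bus": 40000.0, "truck": 65000.0}

DEFAULT_EF = 2.4                                     # kg CO2/litre, tier-1 default (illustrative)
local_ef = {"car": 2.3, "bus": 2.7, "truck": 2.9}    # tier-2 local factors (illustrative)

# Tier 1: one default factor for everything
tier1 = sum(v * DEFAULT_EF for v in fuel_used.values())
# Tier 2: class-specific local factors
tier2 = sum(v * local_ef[k] for k, v in fuel_used.items())
print(round(tier1), round(tier2))   # kg CO2/year under each tier
```

The gap between the two totals is exactly the kind of discrepancy the paper observes between the default-factor plan figure and the locally calibrated one.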
Quiet-Time Suprathermal (~0.1-1.5 keV) Electrons in the Solar Wind
NASA Astrophysics Data System (ADS)
Wang, L.; Tao, J.; Zong, Q.; Li, G.; Salem, C. S.; Wimmer-Schweingruber, R. F.; He, J.; Tu, C.; Bale, S. D.
2016-12-01
We present a statistical survey of the energy spectrum of solar wind suprathermal (˜0.1-1.5 keV) electrons measured by the WIND/3DP instrument at 1 AU during quiet times at the minimum and maximum of solar cycles 23 and 24. After separating (beaming) strahl electrons from (isotropic) halo electrons according to their different behaviors in the angular distribution, we fit the observed energy spectrum of both strahl and halo electrons at ˜0.1-1.5 keV to a Kappa distribution function with an index κ and effective temperature Teff. We also calculate the number density n and average energy Eavg of strahl and halo electrons by integrating the electron measurements between ˜0.1 and 1.5 keV. We find a strong positive correlation between κ and Teff for both strahl and halo electrons, and a strong positive correlation between the strahl n and halo n, likely reflecting the nature of the generation of these suprathermal electrons. In both solar cycles, κ is larger at solar minimum than at solar maximum for both strahl and halo electrons. The halo κ is generally smaller than the strahl κ (except during the solar minimum of cycle 23). The strahl n is larger at solar maximum, but the halo n shows no difference between solar minimum and maximum. Both the strahl n and halo n have no clear association with the solar wind core population, but the density ratio between the strahl and halo roughly anti-correlates (correlates) with the solar wind density (velocity).
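The derived quantities n and E_avg follow from integrating the fitted Kappa-form spectrum over the measured energy band. A sketch using an illustrative Kappa shape and parameters (the normalization and exact functional form here are assumptions for illustration, not the WIND/3DP calibration):

```python
import numpy as np

def kappa_spectrum(E, kappa, T_eff, A=1.0):
    """Illustrative Kappa-form differential density spectrum dn/dE:
    a power-law tail controlled by kappa with effective temperature T_eff."""
    return A * np.sqrt(E) * (1 + E / (kappa * T_eff)) ** (-kappa - 1)

# Integrate over ~0.1-1.5 keV to get the partial density n and the
# average energy E_avg, as done for the strahl and halo populations.
E = np.linspace(0.1, 1.5, 2001)                      # keV
f = kappa_spectrum(E, kappa=5.0, T_eff=0.1)
n = np.sum((f[1:] + f[:-1]) / 2 * np.diff(E))        # trapezoidal integral of f
Ef = E * f
E_avg = np.sum((Ef[1:] + Ef[:-1]) / 2 * np.diff(E)) / n
print(n > 0, 0.1 < E_avg < 1.5)
```

Smaller kappa gives a harder tail and pushes E_avg upward, which is the sense in which the solar-cycle variation of kappa reported above translates into spectral hardness.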
Code of Federal Regulations, 2011 CFR
2011-07-01
...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...
Code of Federal Regulations, 2010 CFR
2010-07-01
...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...
NASA Technical Reports Server (NTRS)
Epperson, David L.; Davis, Jerry M.; Bloomfield, Peter; Karl, Thomas R.; Mcnab, Alan L.; Gallo, Kevin P.
1995-01-01
A methodology is presented for estimating the urban bias of surface shelter temperatures due to the effect of the urban heat island. Multiple regression techniques were used to predict surface shelter temperatures for the period 1986-89 using upper-air data from the European Centre for Medium-Range Weather Forecasts (ECMWF) to represent the background climate, site-specific data to represent the local landscape, and satellite-derived data -- the normalized difference vegetation index (NDVI) and the Defense Meteorological Satellite Program (DMSP) nighttime brightness data -- to represent the urban and rural landscape. Local NDVI and DMSP values were calculated for each station using the mean NDVI and DMSP values from a 3 km x 3 km area centered over the given station. Regional NDVI and DMSP values were calculated to represent a typical rural value for each station using the mean NDVI and DMSP values from the 1 deg x 1 deg latitude-longitude area in which the given station was located. Models for the United States were then developed for monthly maximum, mean, and minimum temperatures using data from over 1000 stations in the U.S. Cooperative (COOP) Network and for monthly mean temperatures with data from over 1150 stations in the Global Historical Climate Network (GHCN). Local biases, or the differences between the model predictions using the observed NDVI and DMSP values and the predictions using the background regional values, were calculated and compared with the results of other research. The local or urban bias of U.S. temperatures, as derived from all U.S. stations (urban and rural) used in the models, averaged near 0.40 °C for monthly minimum temperatures, near 0.25 °C for monthly mean temperatures, and near 0.10 °C for monthly maximum temperatures. The biases of monthly minimum temperatures for individual stations ranged from near -1.1 °C for rural stations to 2.4 °C for stations in the largest urban areas. 
The results of this study indicate minimal problems for global application once global NDVI and DMSP data become available.
Hazratian, Teimour; Rassi, Yavar; Oshaghi, Mohammad Ali; Yaghoobi-Ershadi, Mohammad Reza; Fallah, Esmael; Shirzadi, Mohammad Reza; Rafizadeh, Sina
2011-08-01
To investigate the species composition, density, accumulated degree-days and diversity of sand flies during April to October 2010 in Azarshahr district, a new focus of visceral leishmaniasis in north-western Iran. Sand flies were collected using sticky traps biweekly and were stored in 96% ethanol. All specimens were mounted in Puri's medium for species identification using valid keys of sand flies. Density was calculated as the number of specimens/m² of sticky traps and the number of specimens/number of traps. Degree-days were calculated as follows: (maximum temperature + minimum temperature)/2 - minimum threshold. Diversity indices of the collected sand flies within different villages were estimated by the Shannon-Weaver formula (H' = -∑_{i=1}^{S} P_i ln P_i). In total, 5 557 specimens comprising 16 species (14 Phlebotomus and 2 Sergentomyia) were identified. The activity of the species extended from April to October. Common sand flies in resting places were Phlebotomus papatasi, Phlebotomus sergenti and Phlebotomus mongolensis. The monthly average density was 37.6, 41.1, 40.23, 30.38 and 30.67 for the Almalodash, Jaragil, Segaiesh, Amirdizaj and Germezgol villages, respectively. The accumulated degree-days from early January to late May amounted to approximately 289 degree-days. The minimum threshold temperature for calculating accumulated degree-days was 17.32 °C. According to the Shannon-Weaver index (H'), the diversity of sand flies within the study area was estimated as 0.917, 1.867, 1.339, 1.673, and 1.562 in the Almalodash, Jaragil, Segaiesh, Amirdizaj and Germezgol villages, respectively. This study is the first detailed research on the species composition, density, accumulated degree-days and diversity of sand flies in an endemic focus of visceral leishmaniasis in Azarshahr district. The population dynamics of sand flies in Azarshahr district were greatly affected by climatic factors. 
According to this study, the highest activity of the collected sand fly species occurs in the third week of August. This could help health authorities to predict the period of maximum risk of visceral leishmaniasis transmission and implement control programs. Copyright © 2011 Hainan Medical College. Published by Elsevier B.V. All rights reserved.
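The two quantities computed in this survey reduce to short formulas: daily degree-days from the temperature midpoint above a threshold, and the Shannon-Weaver index from species proportions. A sketch with hypothetical catch counts (the 17.32 °C threshold is from the abstract; the catches are made up):

```python
import math

def degree_day(t_max, t_min, t_threshold):
    """Daily degree-days: (Tmax + Tmin)/2 - threshold, floored at zero."""
    return max((t_max + t_min) / 2 - t_threshold, 0.0)

def shannon_index(counts):
    """Shannon-Weaver diversity H' = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c > 0)

# Hypothetical catches of four sand fly species at one village
catches = [120, 45, 30, 5]
print(round(shannon_index(catches), 3))
print(round(degree_day(29.0, 14.0, 17.32), 2))
```

Summing the daily degree-days across the season gives the accumulated value (≈289 in the study), and H' per village gives the diversity figures reported above.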
Experimental study on an FBG strain sensor
NASA Astrophysics Data System (ADS)
Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng
2018-01-01
Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. Real-time, early-warning monitoring of landslides is important for reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of FBGs, an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor was modelled as a cantilever beam with one end fixed. Based on the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using elastic mechanics principles. The accuracy of the established formula was verified through laboratory calibration testing and model slope monitoring experiments. The displacement of a landslide could be calculated with the established theoretical formula from the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, with a corresponding variance of 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, with a corresponding variance of 0.50. The maximum error between the theoretical and the measured displacement decreased gradually, and the variance of the error also decreased gradually, indicating that the theoretical results are increasingly reliable. It also shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision early-warning monitoring of slopes.
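The paper's wavelength-to-deflection formula is not reproduced in the abstract, but the generic first step of any FBG strain sensor is the standard Bragg relation Δλ/λ₀ = (1 − p_e)·ε. A minimal sketch, assuming a typical effective photo-elastic coefficient p_e ≈ 0.22 for silica fibre (an assumption, not a value from the paper):

```python
def strain_from_wavelength_shift(lambda0_nm, dlambda_nm, p_e=0.22):
    """Axial strain from an FBG Bragg-wavelength shift:
    dlambda / lambda0 = (1 - p_e) * strain,
    where p_e is the effective photo-elastic coefficient (assumed 0.22)."""
    return dlambda_nm / (lambda0_nm * (1.0 - p_e))

# A 1550 nm grating shifting by 0.1 nm corresponds to roughly 83 microstrain
eps = strain_from_wavelength_shift(1550.0, 0.1)
```

The paper's contribution is the further step from strain to cantilever deflection, which depends on the inclinometer geometry.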
Dosimetric verification of IMRT treatment planning using Monte Carlo simulations for prostate cancer
NASA Astrophysics Data System (ADS)
Yang, J.; Li, J.; Chen, L.; Price, R.; McNeeley, S.; Qin, L.; Wang, L.; Xiong, W.; Ma, C.-M.
2005-03-01
The purpose of this work is to investigate the accuracy of dose calculation of a commercial treatment planning system (Corvus, NOMOS Corp., Sewickley, PA). In this study, 30 prostate intensity-modulated radiotherapy (IMRT) treatment plans from the commercial treatment planning system were recalculated using the Monte Carlo method. Dose-volume histograms and isodose distributions were compared. Other quantities such as minimum dose to the target (Dmin), the dose received by 98% of the target volume (D98), dose at the isocentre (Diso), mean target dose (Dmean) and the maximum critical structure dose (Dmax) were also evaluated based on our clinical criteria. For coplanar plans, the dose differences between Monte Carlo and the commercial treatment planning system with and without heterogeneity correction were not significant. The differences in the isocentre dose between the commercial treatment planning system and Monte Carlo simulations were less than 3% for all coplanar cases. The differences in D98 were less than 2% on average. The differences in the mean dose to the target between the commercial system and Monte Carlo results were within 3%. The differences in the maximum bladder dose were within 3% for most cases. The maximum dose differences for the rectum were less than 4% for all the cases. For non-coplanar plans, the difference in the minimum target dose between the treatment planning system and Monte Carlo calculations was up to 9% if the heterogeneity correction was not applied in Corvus. This was caused by the excessive attenuation of the non-coplanar beams by the femurs. When the heterogeneity correction was applied in Corvus, the differences were reduced significantly. These results suggest that heterogeneity correction should be used in dose calculation for prostate cancer with non-coplanar beam arrangements.
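The comparison metrics in this abstract reduce to percent differences between dose statistics computed from two dose distributions. A minimal sketch, with made-up voxel doses standing in for the TPS and Monte Carlo results:

```python
def dose_metrics(dose):
    """Simple target-dose metrics from a list of voxel doses (Gy)."""
    d = sorted(dose, reverse=True)
    n = len(d)
    return {
        "Dmin": d[-1],
        "Dmean": sum(d) / n,
        # D98: dose received by (at least) 98% of the voxels
        "D98": d[max(int(round(0.98 * n)) - 1, 0)],
    }

def percent_diff(a, b):
    """Percent difference of a relative to b."""
    return 100.0 * (a - b) / b

# Illustrative voxel doses (not from the study): TPS vs. Monte Carlo
tps = [76.0, 75.5, 74.8, 74.0, 73.5]
mc  = [75.0, 74.9, 74.2, 73.1, 72.8]
mean_diff = percent_diff(dose_metrics(tps)["Dmean"], dose_metrics(mc)["Dmean"])
```

In a real evaluation these arrays come from the dose grids of the two calculations on identical geometry.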
Yoshikawa, Hiroto; Roback, Donald M; Larue, Susan M; Nolan, Michael W
2015-01-01
Potential benefits of planning radiation therapy on a contrast-enhanced computed tomography scan (ceCT) should be weighed against the possibility that this practice may be associated with an inadvertent risk of overdosing nearby normal tissues. This study investigated the influence of ceCT on intensity-modulated stereotactic body radiotherapy (IM-SBRT) planning. Dogs with head and neck, pelvic, or appendicular tumors were included in this retrospective cross-sectional study. All IM-SBRT plans were constructed on a pre- or ceCT. Contours for tumor and organs at risk (OAR) were manually constructed and copied onto both CTs; IM-SBRT plans were calculated on each CT in a manner that resulted in equal radiation fluence. The maximum and mean doses for OAR, and minimum, maximum, and mean doses for targets were compared. Data were collected from 40 dogs per anatomic site (head and neck, pelvis, and limbs). The average dose difference between minimum, maximum, and mean doses as calculated on pre- and ceCT plans for the gross tumor volume was less than 1% for all anatomic sites. Similarly, the differences between mean and maximum doses for OAR were less than 1%. The difference in dose distribution between plans made on CTs with and without contrast enhancement was tolerable at all treatment sites. Therefore, although caution would be recommended when planning IM-SBRT for tumors near "reservoirs" for contrast media (such as the heart and urinary bladder), findings supported the use of ceCT with this dose calculation algorithm for both target delineation and IM-SBRT treatment planning. © 2015 American College of Veterinary Radiology.
Fuster-Lluch, Oscar; Zapater-Hernández, Pedro; Gerónimo-Pardo, Manuel
2017-10-01
The pharmacokinetic profile of intravenous acetaminophen administered to critically ill multiple-trauma patients was studied after 4 consecutive doses of 1 g every 6 hours. Eleven blood samples were taken (predose and 15, 30, 45, 60, 90, 120, 180, 240, 300, and 360 minutes postdose), and urine was collected (during 6-hour intervals between doses) to determine serum and urine acetaminophen concentrations. These were used to calculate the following pharmacokinetic parameters: maximum and minimum concentrations, terminal half-life, area under serum concentration-time curve from 0 to 6 hours, mean residence time, volume of distribution, and serum and renal clearance of acetaminophen. Daily doses of acetaminophen required to obtain steady-state minimum (bolus dosing) and average plasma concentrations (continuous infusion) of 10 μg/mL were calculated (10 μg/mL is the presumed lower limit of the analgesic range). Data are expressed as median [interquartile range]. Twenty-two patients were studied, mostly young (age 44 [34-64] years) males (68%), not obese (weight 78 [70-84] kg). Acetaminophen concentrations and pharmacokinetic parameters were these: maximum concentration 33.6 [25.7-38.7] μg/mL and minimum concentration 0.5 [0.2-2.3] μg/mL, all values below 10 μg/mL and 8 below the detection limit; half-life 1.2 [1.0-1.9] hours; area under the curve for 6 hours 34.7 [29.7-52.7] μg·h/mL; mean residence time 1.8 [1.3-2.6] hours; steady-state volume of distribution 50.8 [42.5-66.5] L; and serum and renal clearance 28.8 [18.9-33.7] L/h and 15 [11-19] mL/min, respectively. Theoretically, daily doses for a steady-state minimum concentration of 10 μg/mL would be 12.2 [7.8-16.4] g/day (166 [112-202] mg/[kg·day]); for an average steady-state concentration of 10 μg/mL, they would be 6.9 [4.5-8.1] g/day (91 [59-111] mg/[kg·day]). 
In conclusion, administration of acetaminophen at the recommended dosage of 1 g per 6 hours to critically ill multiple-trauma patients yields serum concentrations below 10 μg/mL due to increased elimination. To reach the 10 μg/mL target, and from a strictly pharmacokinetic point of view, continuous infusion may be more feasible than bolus dosing. Such a change in dosing strategy requires appropriate pharmacokinetic-pharmacodynamic and safety studies. © 2017, The American College of Clinical Pharmacology.
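Two of the pharmacokinetic parameters listed (the area under the concentration-time curve and the terminal half-life) have simple closed forms and can be sketched directly. The sampling times below follow the abstract's schedule, but the concentrations are invented for illustration:

```python
import math

def auc_trapezoid(times_h, conc):
    """AUC by the linear trapezoidal rule over (time, concentration) pairs."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times_h, conc),
                                             zip(times_h[1:], conc[1:])))

def half_life(k_el_per_h):
    """Terminal half-life from the elimination rate constant: t1/2 = ln 2 / k."""
    return math.log(2) / k_el_per_h

# Illustrative serum concentrations (ug/mL) at the abstract's sampling times (h)
t = [0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 6.0]
c = [0.5, 33.6, 25.0, 15.0, 7.0, 2.0, 0.5]
auc_0_6 = auc_trapezoid(t, c)        # ug*h/mL over one dosing interval
t_half = half_life(0.578)            # ~1.2 h, the order reported in the study
```

Clearance would follow as dose/AUC, and the steady-state average concentration for a continuous infusion as infusion rate/clearance.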
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakhtiari, M; Schmitt, J; Sarfaraz, M
2015-06-15
Purpose: To establish the minimum number of patients required to obtain statistically accurate Planning Target Volume (PTV) margins for prostate Intensity Modulated Radiation Therapy (IMRT). Methods: A total of 320 prostate patients, comprising 9311 daily setups, were analyzed. These patients had undergone IMRT treatment. Daily localization was done using skin marks, and the proper shifts were determined by CBCT to match the prostate gland. The Van Herk formalism was used to obtain the margins from the systematic and random setup variations. The total patient population was divided into different grouping sizes, varying from 1 group of 320 patients to 64 groups of 5 patients. Each grouping was used to determine the average PTV margin and its associated standard deviation. Results: Analyzing all 320 patients led to an average Superior-Inferior margin of 1.15 cm. The grouping with 10 patients per group (32 groups) resulted in average PTV margins between 0.6-1.7 cm, with a mean value of 1.09 cm and a standard deviation (STD) of 0.30 cm. As the number of patients per group increases, the mean value of the average margin between groups converges to the true average PTV margin of 1.15 cm and the STD decreases. For groups of 20, 64, and 160 patients, Superior-Inferior margins of 1.12, 1.14, and 1.16 cm with STDs of 0.22, 0.11, and 0.01 cm were found, respectively. A similar tendency was observed for the Left-Right and Anterior-Posterior margins. Conclusion: The estimation of the required PTV margin strongly depends on the number of patients studied. According to this study, at least ∼60 patients are needed to calculate a statistically acceptable PTV margin for a criterion of STD < 0.1 cm. Numbers greater than ∼60 patients do little to increase the accuracy of the PTV margin estimation.
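The Van Herk formalism cited here is commonly written as M = 2.5Σ + 0.7σ, where Σ and σ are the group systematic and random setup standard deviations (combined in quadrature when several sources contribute). A minimal sketch, with illustrative SDs rather than the paper's values:

```python
import math

def van_herk_margin(sys_sds_cm, rand_sds_cm):
    """PTV margin via the Van Herk recipe: M = 2.5*Sigma + 0.7*sigma.
    Sigma and sigma are quadrature sums of systematic / random SDs (cm)."""
    Sigma = math.sqrt(sum(s * s for s in sys_sds_cm))
    sigma = math.sqrt(sum(s * s for s in rand_sds_cm))
    return 2.5 * Sigma + 0.7 * sigma

# Hypothetical one-axis SDs: systematic 0.3 cm, random 0.5 cm -> 1.10 cm margin
m_si = van_herk_margin([0.3], [0.5])
```

The paper's point is that Σ and σ, and hence M, are themselves noisy estimates when computed from small patient groups.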
Zakiullah; Saeed, Muhammad; Muhammad, Naveed; Khan, Saeed Ahmad; Gul, Farah; Khuda, Fazli; Humayun, Muhammad; Khan, Hamayun
2012-07-01
'Naswar' is a smokeless tobacco product (STP) widely used in Pakistan. It has been correlated with oral and oesophageal cancer in recent clinical studies. The toxic effects associated with STPs have been attributed to trace-level contaminants present in these products. The toxin levels of Pakistani naswar are reported for the first time in this study. A total of 30 Pakistani brands of naswar were tested for a variety of toxic constituents and carcinogens, such as cadmium, arsenic, lead and other carcinogenic metals, nitrite and nitrate, and nicotine, as well as pH. The average values of all the toxins studied were well above their allowable limits, making the product a health risk for consumers. The calculated lifetime cancer risk from cadmium and lead was 100,000 (1 lac) to 1,000,000 (10 lac) times higher than the 10^-4 (0.0001) to 10^-6 (0.000001) 'target range' for potentially hazardous substances set by the US Environmental Protection Agency. Similarly, the level of arsenic was in the range of 0.15 to 14.04 μg/g, the average being 1.25 μg/g. The estimated average bioavailable concentration of arsenic is 0.125-0.25 μg/g, which is higher than the allowable standard of 0.01 μg/g. Similarly, the average minimum daily intake of chromium and nickel was 126.97 μg and 122.01 μg, as compared to the allowable 30-35 μg and 35 μg, respectively; a 4-5 times higher exposure. However, beryllium was not detected in any of the brands studied. The pH was highly basic, averaging 8.56, which favours the formation of tobacco-specific nitrosamines, making the product potentially toxic. This study validates clinical studies correlating the incidence of cancer with naswar use in Pakistan. It shows that the production, packaging, sale and consumption of naswar should be regulated so as to protect the public from the health hazards associated with its consumption.
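The EPA-style lifetime cancer risk calculation behind these comparisons is the product of a chronic daily intake and a cancer slope factor, compared against the 10^-6 to 10^-4 target range. A minimal sketch; the intake and slope-factor values are purely illustrative, not the study's:

```python
def lifetime_cancer_risk(cdi_mg_per_kg_day, slope_factor):
    """Incremental lifetime cancer risk: ILCR = CDI * SF (EPA convention).
    CDI in mg/(kg*day); SF in (mg/(kg*day))^-1."""
    return cdi_mg_per_kg_day * slope_factor

def times_above_target(risk, target_upper=1e-4):
    """How many times the risk exceeds the upper bound of the target range."""
    return risk / target_upper

# Hypothetical exposure: CDI = 2.6e-3 mg/(kg*day), slope factor = 6.1
risk = lifetime_cancer_risk(2.6e-3, 6.1)
excess = times_above_target(risk)   # ~159x above the 1e-4 bound
```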
NASA Astrophysics Data System (ADS)
Reiss, D.; Zanetti, M.; Neukum, G.
2011-09-01
Active dust devils were observed in Syria Planum in Mars Observer Camera - Wide Angle (MOC-WA) and High Resolution Stereo Camera (HRSC) imagery acquired on the same day with a time delay of ˜26 min. The unique operating technique of the HRSC allowed the measurement of the traverse velocities and directions of motion. Large dust devils observed in the HRSC image could be retraced to their counterparts in the earlier acquired MOC-WA image. Minimum lifetimes of three large (avg. ˜700 m in diameter) dust devils are ˜26 min, as inferred from retracing. For one of these large dust devils (˜820 m in diameter) it was possible to calculate a minimum lifetime of ˜74 min based on the measured horizontal speed and the length of its associated dust devil track. The comparison of our minimum lifetimes with previously published results for minimum and average lifetimes of small (˜19 m in diameter, avg. min. lifetime of ˜2.83 min) and medium (˜185 m in diameter, avg. min. lifetime of ˜13 min) dust devils implies that larger dust devils on Mars are active for much longer periods of time than smaller ones, as is the case for terrestrial dust devils. Knowledge of martian dust devil lifetimes is an important parameter for the calculation of dust lifting rates. Estimates of the contribution of large dust devils (>300-1000 m in diameter) indicate that they may contribute, at least regionally, to ˜50% of dust entrainment by dust devils into the atmosphere compared to dust devils <300 m in diameter, given that the size-frequency distribution follows a power law. Although large dust devils occur relatively rarely and their sediment fluxes are probably lower compared to smaller dust devils, their contribution to the background dust opacity on Mars could be at least regionally large due to their longer lifetimes and their ability to lift dust into high atmospheric layers.
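The minimum-lifetime estimate used here is just the track length divided by the measured traverse speed. A minimal sketch with illustrative numbers (a ~22 km track at 5 m/s gives the ~74 min order reported; these are not the paper's measured values):

```python
def min_lifetime_minutes(track_length_km, speed_m_per_s):
    """Lower bound on a dust devil's lifetime from its track length
    and measured horizontal traverse speed."""
    return (track_length_km * 1000.0) / speed_m_per_s / 60.0

life = min_lifetime_minutes(22.0, 5.0)   # ~73 min
```

It is a lower bound because the devil may have been active before the track began and after it ended.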
Spatial variability in airborne pollen concentrations.
Raynor, G S; Ogden, E C; Hayes, J V
1975-03-01
Tests were conducted to determine the relationship between airborne pollen concentrations and distance. Simultaneous samples were taken in 171 tests with sets of eight rotoslide samplers spaced from one to 486 m apart in straight lines. Use of all possible pairs gave 28 separation distances. Tests were conducted over a 2-year period in urban and rural locations distant from major pollen sources during both tree and ragweed pollen seasons. Samples were taken at a height of 1.5 m during 5- to 20-minute periods. Tests were grouped by pollen type, location, year, and direction of the wind relative to the line. Data were analyzed to evaluate variability without regard to sampler spacing and variability as a function of separation distance. The mean, standard deviation, coefficient of variation, ratio of maximum to the mean, and ratio of minimum to the mean were calculated for each test, each group of tests, and all cases. The average coefficient of variation is 0.21, the maximum over the mean 1.39, and the minimum over the mean 0.69. No relationship was found with experimental conditions. Samples taken at the minimum separation distance had a mean difference of 18 per cent. Differences between pairs of samples increased with distance in 10 of 13 groups. These results suggest that airborne pollens are not always well mixed in the lower atmosphere and that a sample becomes less representative with increasing distance from the sampling location.
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of...
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of...
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of...
14 CFR 23.1443 - Minimum mass flow of supplemental oxygen.
Code of Federal Regulations, 2010 CFR
2010-01-01
... discretion. (c) If first-aid oxygen equipment is installed, the minimum mass flow of oxygen to each user may... upon an average flow rate of 3 liters per minute per person for whom first-aid oxygen is required. (d...
Code of Federal Regulations, 2010 CFR
2010-10-01
...; (b) The calculated tank plating thickness, including any corrosion allowance, must be the minimum thickness without a negative plate tolerance; and (c) The minimum tank plating thickness must not be less...
VizieR Online Data Catalog: V and R CCD photometry of visual binaries (Abad+, 2004)
NASA Astrophysics Data System (ADS)
Abad, C.; Docobo, J. A.; Lanchares, V.; Lahulla, J. F.; Abelleira, P.; Blanco, J.; Alvarez, C.
2003-11-01
Table 1 gives relevant data for the visual binaries observed. Observations were carried out over a short period of time; therefore we assign the mean epoch (1998.58) to the totality of the data. Data for individual stars are presented as average values with errors, by parameter, when several observations have been calculated, along with the number of observations involved. Errors corresponding to astrometric relative positions between components are always present. For single observations, parameter fitting errors, especially for the dx and dy parameters, have been calculated by analysing the chi2 statistic around the minimum. Following the rules of error propagation, theta and rho errors can be estimated. Accordingly, Table 1 shows single-observation errors with an additional significant digit. When a star does not have known references, we include it in Table 2, where the J2000 position and magnitudes are from the USNO-A2.0 catalogue (Monet et al., 1998, Cat. ). (2 data files).
NASA Astrophysics Data System (ADS)
Farrukh, Muhammad Akhyar; Kauser, Robina; Adnan, Rohana
2013-09-01
The kinetics of the oxidation of vitamin C by ferric chloride hexahydrate have been investigated in aqueous ethanol solutions of the basic surfactant octadecylamine (ODA) under pseudo-first-order conditions. The critical micelle concentration (CMC) of the surfactant was determined by surface tension measurement. The effects of pH (2.5-4.5) and temperature (15-35°C) in the presence and absence of surfactant were investigated. The activation parameters Ea, ΔH‡, ΔS‡ and ΔG‡ for the reaction were calculated using Arrhenius and Eyring plots. The surface excess concentration (Γmax), minimum area per surfactant molecule (Amin), average area occupied by each surfactant molecule (a), surface pressure at the CMC (Πmax), Gibbs energy of micellization (ΔG_M°) and Gibbs energy of adsorption (ΔG_ad°) were calculated. The reaction in the presence of surfactant showed a faster oxidation rate than in aqueous ethanol solution alone. A reaction mechanism has been deduced for both the presence and absence of surfactant.
Effects of the crustal magnetic fields on the Martian atmospheric ion escape rate
NASA Astrophysics Data System (ADS)
Ramstad, Robin; Barabash, Stas; Futaana, Yoshifumi; Nilsson, Hans; Holmström, Mats
2016-10-01
Eight years (2007-2015) of ion flux measurements from Mars Express are used to statistically investigate the influence of the Martian magnetic crustal fields on the atmospheric ion escape rate. We combine all Analyzer of Space Plasmas and Energetic Atoms/Ion Mass Analyzer (ASPERA-3/IMA) measurements taken during nominal upstream solar wind and solar extreme ultraviolet conditions to compute global average ion distribution functions, individually for the north/south hemispheres and for varying solar zenith angles (SZAs) of the strongest crustal magnetic field. Escape rates are subsequently calculated from each of the average distribution functions. The maximum escape rate (4.2 ± 1.2) × 1024s-1 is found for SZA = 60°-80°, while the minimum escape rate (1.7 ± 0.6) × 1024s-1 is found for SZA = 28°-60°, showing that the dayside orientation of the crustal fields significantly affects the global escape rate (p = 97%). However, averaged over time, independent of SZA, we find no statistically significant difference in the escape rates from the two hemispheres (escape from southern hemisphere 46% ± 18% of global rate).
Heat capacities and thermodynamic properties of annite (aluminous iron biotite)
Hemingway, B.S.; Robie, R.A.
1990-01-01
The heat capacities have been measured between 7 and 650 K by quasi-adiabatic calorimetry and differential scanning calorimetry. At 298.15 K and 1 bar, the calorimetric entropy for our sample is 354.9 ± 0.7 J/(mol·K). A minimum configurational entropy of 18.7 J/(mol·K) for full disorder of Al/Si in the tetrahedral sites should be added to the calorimetric entropy for third-law calculations. The heat capacity equation [Cp in units of J/(mol·K)] Cp° = 583.586 + 0.075246T − 3420.60T^(-0.5) − 4.4551 × 10^6 T^(-2) fits the experimental and estimated heat capacities for our sample (valid range 250 to 1000 K) with an average deviation of 0.37%. -from Authors
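The fitted Cp equation can be evaluated directly; this sketch simply transcribes the polynomial from the abstract and checks it over its stated validity range (250 to 1000 K):

```python
def cp_annite(T):
    """Heat capacity of annite, J/(mol*K), from the fitted equation
    Cp = 583.586 + 0.075246*T - 3420.60*T^-0.5 - 4.4551e6*T^-2
    (valid 250-1000 K)."""
    return 583.586 + 0.075246 * T - 3420.60 * T ** -0.5 - 4.4551e6 * T ** -2

cp_298 = cp_annite(298.15)   # ~358 J/(mol*K)
```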
Estimation of cold stress effect on dairy cows
NASA Astrophysics Data System (ADS)
Brouček, J.; Letkovičová, M.; Kovalčuj, K.
1991-03-01
Twelve crossbred heifers (Slovak Spotted x Holstein-Friesian) were housed in an open, uninsulated barn with straw bedding and a concrete-floored yard. Minimum temperatures inside the barn were as low as -19°C. The average milk yield decreased as the temperatures approached these minima. Compared with the temperate conditions, the feed intake and blood levels of glucose and free fatty acids increased. The level of sodium declined significantly during the second cold period. Correlations and regressions between milk yield and biochemical parameters were calculated, and the results indicate that the concentrations of free fatty acids, cholesterol, and triiodothyronine and the haematocrit values may serve to predict milk production during periods of cold stress, or in lactations of 305 days.
Study of percolation behavior depending on molecular structure design
NASA Astrophysics Data System (ADS)
Yu, Ji Woong; Lee, Won Bo
Differently designed anisotropic nanocrystals (ANCs) are studied using Langevin dynamics simulation, and their percolation behaviors are presented. The popular molecular dynamics software LAMMPS was used to build the systems and perform the simulations. We calculated the minimum number density at which percolation occurs (i.e., the percolation threshold), the radial distribution function, and the average number of ANCs per cluster. Electrical conductivity improves as the number of electron transfers between ANCs (the so-called "inter-hopping" process), which contributes considerably to resistance, decreases, and the number of inter-hopping events is directly related to the concentration of ANCs. Therefore, by investigating the relationship between molecular architecture and percolation behavior, an optimal ANC design can be achieved.
Indirect Solar Wind Measurements Using Archival Cometary Tail Observations
NASA Astrophysics Data System (ADS)
Zolotova, Nadezhda; Sizonenko, Yuriy; Vokhmyanin, Mikhail; Veselovsky, Igor
2018-05-01
This paper addresses the problem of solar wind behaviour during the Maunder minimum. Records of the plasma tails of comets can shed light on the physical parameters of the solar wind in the past. We analyse descriptions and drawings of comets between the eleventh and eighteenth centuries. To distinguish between dust and plasma tails, we consider their colour, shape, and orientation. Based on the calculations made by F.A. Bredikhin, we found that the cometary tails deviate from the antisolar direction on average by more than 10°, which is typical of dust tails. We also examined the catalogues of Hevelius and Lubieniecki. The first indication of a plasma tail was revealed only for the great comet C/1769 P1.
Test Duration for Water Intake, Average Daily Gain, and Dry Matter Intake in Beef Cattle.
Ahlberg, C M; Allwardt, K; Broocks, A; Bruno, K; McPhillips, L; Taylor, A; Krehbiel, C R; Calvo-Lorenzo, M; Richards, C J; Place, S E; DeSilva, U; VanOverbeke, D L; Mateescu, R G; Kuehn, L A; Weaber, R L; Bormann, J M; Rolf, M M
2018-05-22
Water is an essential nutrient, but its effect on performance generally receives little attention. There are few systems and guidelines for collection of water intake phenotypes in beef cattle, which makes large-scale research on water intake a challenge. The Beef Improvement Federation has established guidelines for feed intake and average daily gain tests, but no guidelines exist for water intake. The goal of this study was to determine the test duration necessary for collection of accurate water intake phenotypes. To facilitate this goal, individual daily water intake (WI) and feed intake (FI) records were collected on 578 crossbred steers for a total of 70 d using an Insentec system at the Oklahoma State University Willard Sparks Beef Research Unit. Steers were fed in 5 groups and were individually weighed every 14 d. Within each group, steers were blocked by body weight (low and high) and randomly assigned to 1 of 4 pens containing approximately 30 steers per pen. Each pen provided 103.0 m2 of shade and included an Insentec system containing 6 feed bunks and 1 water bunk. Steers were fed a constant diet across groups, and dry matter intake was calculated using the average of weekly percent dry matter within group. Average feed and water intakes for each animal were computed for increasingly long test durations (7, 14, 21, 28, 35, 42, 49, 56, 63 and 70 d), and ADG was calculated using a regression on body weights (BW) taken every 14 d (0, 14, 28, 42, 56, and 70 d). Intervals for all traits were computed starting from both the beginning (d 0) and the end of the testing period (d 70). Pearson and Spearman correlations were computed between phenotypes from each shortened test period and the full 70-d test. Minimum test duration was defined as the shortest test for which the Pearson correlations were greater than 0.95 for each trait. Our results indicated that the minimum test durations for WI, DMI, and ADG were 35, 42, and 70 d, respectively.
No comparable studies exist for WI; however, our results for FI and ADG are consistent with those in the literature. Although further testing in other populations of cattle and areas of the country should take place, our results suggest that WI phenotypes can be collected concurrently with DMI, without extending test duration, even if following procedures for decoupled intake and gain tests.
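The shortened-test analysis reduces to correlating per-animal means over the first d days with the full 70-d means and finding the shortest d whose Pearson correlation exceeds 0.95. A minimal sketch on synthetic intake data (20 hypothetical steers; the real study used 578 animals and real records):

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def min_test_duration(daily_records, durations, threshold=0.95):
    """Shortest test length whose per-animal means correlate > threshold
    (Pearson) with the full-test means."""
    full = [sum(rec) / len(rec) for rec in daily_records]
    for d in sorted(durations):
        short = [sum(rec[:d]) / d for rec in daily_records]
        if pearson(short, full) > threshold:
            return d
    return None

# Synthetic daily water intakes (L/d): animal i averages 30 + i with daily noise
random.seed(1)
records = [[random.gauss(30 + i, 4) for _ in range(70)] for i in range(20)]
d_min = min_test_duration(records, [7, 14, 21, 28, 35, 42, 49, 56, 63, 70])
```

With real data the answer depends on the ratio of between-animal variance to day-to-day noise, which is the study's empirical question.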
ERIC Educational Resources Information Center
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Size (MDES) formulae that can be used to determine sample size requirements and to estimate minimum detectable effect sizes for a range of individual- and…
Daylight Observations of Venus with Naked Eye in the Goryeosa
NASA Astrophysics Data System (ADS)
Lee, Ki-Won
2017-03-01
In this paper, we investigate the observations of Venus in daytime that are recorded in the Goryeosa (History of the Goryeo Dynasty, A.D. 918-1392). There are a total of 167 accounts of such observations in this historical book, spanning a period of 378 yr (from 1014 to 1392). These include six accounts where the days of the observation are not specified and two accounts where the phase angles are outside the calculation range of the equation used in our study. We analyze the number distribution of 164 accounts in 16 yr intervals covering the period from 1023 to 1391. We find that this distribution shows its minimum at around 1232, when the Goryeo dynasty moved the capital to the Ganghwa Island because of the Mongol invasion, and its maximum at around 1390, about the time when the dynasty fell. In addition, we calculate the azimuth, altitude, solar elongation, and apparent magnitude of Venus at sunset for 159 observations, excluding the eight accounts mentioned above, using the DE 406 ephemeris and modern astronomical algorithms. We find that the average elongation and magnitude of Venus on the days of those accounts were and -4.5, respectively, whereas the minimum magnitude was -3.8. The results of this study are useful for estimating the practical conditions for observing Venus in daylight with the naked eye and they also provide additional insight into the corresponding historical accounts contained in the Goryeosa.
Wieczorek, Michael; LaMotte, Andrew E.
2010-01-01
This tabular data set represents the catchment-average for the 30-year (1971-2000) average daily minimum temperature in Celsius multiplied by 100 compiled for every MRB_E2RF1 catchment of selected Major River Basins (MRBs, Crawford and others, 2006). The source data were the United States Average Monthly or Annual Minimum Temperature, 1971 - 2000 raster data set produced by the PRISM Group at Oregon State University. The MRB_E2RF1 catchments are based on a modified version of the Environmental Protection Agency's (USEPA) ERF1_2 and include enhancements to support national and regional-scale surface-water quality modeling (Nolan and others, 2002; Brakebill and others, 2011). Data were compiled for every MRB_E2RF1 catchment for the conterminous United States covering New England and Mid-Atlantic (MRB1), South Atlantic-Gulf and Tennessee (MRB2), the Great Lakes, Ohio, Upper Mississippi, and Souris-Red-Rainy (MRB3), the Missouri (MRB4), the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf (MRB5), the Rio Grande, Colorado, and the Great basin (MRB6), the Pacific Northwest (MRB7) river basins, and California (MRB8).
Undergraduate paramedic students cannot do drug calculations.
Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett
2012-01-01
Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, and no published studies had been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from the information given, arithmetical errors involve an inability to operate a given equation, and computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine whether undergraduate paramedics at a large Australian university could accurately perform the common drug calculations and basic mathematical equations normally required in the workplace. A cross-sectional study using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they 'did not have any drug calculation issues'. On average, those who completed a minimum of Year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5% of errors, arithmetical 31.1% and computational 17.4%. This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practise safely in the unpredictable prehospital environment.
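The kind of calculation at issue is the classic "want over have" dose-to-volume conversion. Formulating this equation from the clinical information is the conceptual step the abstract identifies as the most common failure; operating it is the arithmetical step. A minimal sketch with a hypothetical order:

```python
def volume_to_draw(desired_dose_mg, stock_mg, stock_ml):
    """Volume to administer: desired dose divided by stock concentration,
    i.e. volume = desired_dose / (stock_mg / stock_ml)."""
    return desired_dose_mg / (stock_mg / stock_ml)

# Hypothetical order: give 2.5 mg of a drug supplied as 10 mg in 2 mL
v = volume_to_draw(2.5, 10.0, 2.0)   # 0.5 mL
```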
NASA Technical Reports Server (NTRS)
Chapman, Dean R
1952-01-01
A theoretical investigation is made of the airfoil profile for minimum pressure drag at zero lift in supersonic flow. In the first part of the report a general method is developed for calculating the profile having the least pressure drag for a given auxiliary condition, such as a given structural requirement or a given thickness ratio. The various structural requirements considered include bending strength, bending stiffness, torsional strength, and torsional stiffness. No assumption is made regarding the trailing-edge thickness; the optimum value is determined in the calculations as a function of the base pressure. To illustrate the general method, the optimum airfoil, defined as the airfoil having minimum pressure drag for a given auxiliary condition, is calculated in a second part of the report using the equations of linearized supersonic flow.
41 CFR 302-4.704 - Must we require a minimum driving distance per day?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Federal Travel Regulation System RELOCATION ALLOWANCES PERMANENT CHANGE OF STATION (PCS) ALLOWANCES FOR... driving distance not less than an average of 300 miles per day. However, an exception to the daily minimum... reasons acceptable to you. ...
State cigarette minimum price laws - United States, 2009.
2010-04-09
Cigarette price increases reduce the demand for cigarettes and thereby reduce smoking prevalence, cigarette consumption, and youth initiation of smoking. Excise tax increases are the most effective government intervention to increase the price of cigarettes, but cigarette manufacturers use trade discounts, coupons, and other promotions to counteract the effects of these tax increases and appeal to price-sensitive smokers. State cigarette minimum price laws, initiated by states in the 1940s and 1950s to protect tobacco retailers from predatory business practices, typically require a minimum percentage markup to be added to the wholesale and/or retail price. If a statute prohibits trade discounts from the minimum price calculation, these laws have the potential to counteract discounting by cigarette manufacturers. To assess the status of cigarette minimum price laws in the United States, CDC surveyed state statutes and identified those states with minimum price laws in effect as of December 31, 2009. This report summarizes the results of that survey, which determined that 25 states had minimum price laws for cigarettes (median wholesale markup: 4.00%; median retail markup: 8.00%), and seven of those states also expressly prohibited the use of trade discounts in the minimum retail price calculation. Minimum price laws can help prevent trade discounting from eroding the positive effects of state excise tax increases and higher cigarette prices on public health.
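The markup mechanics described above can be sketched as follows. The pack price and trade discount are invented illustrations; the 4.00% and 8.00% markup rates are borrowed from the survey medians purely as example inputs.

```python
# Sketch of a percentage-markup minimum price calculation. When the statute
# prohibits trade discounts from the calculation, the markups are applied to
# the undiscounted base price, keeping the price floor higher.

def minimum_retail_price(base_price, trade_discount,
                         wholesale_markup, retail_markup,
                         discounts_prohibited):
    """Minimum retail price under a wholesale-plus-retail markup law."""
    if not discounts_prohibited:
        base_price -= trade_discount            # discounting erodes the floor
    wholesale = base_price * (1 + wholesale_markup)
    return wholesale * (1 + retail_markup)

# Hypothetical $6.00 pack with a $0.50 trade discount, median survey markups
floor_with_discount = minimum_retail_price(6.00, 0.50, 0.04, 0.08, False)
floor_no_discount = minimum_retail_price(6.00, 0.50, 0.04, 0.08, True)
```

The comparison shows why a statutory prohibition on counting trade discounts keeps the minimum price from being undercut by manufacturer promotions.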
Achieving cost-neutrality with long-acting reversible contraceptive methods.
Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna
2015-01-01
This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration. A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20-29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were annual average cost per method and minimum duration of LARC method usage to achieve cost-savings compared to SARC methods. The two least expensive methods were the copper IUD ($304 per woman, per year) and LNG-IUS 20 mcg/24 h ($308). The cost of SARC methods ranged between $432 (injection) and $730 (patch) per woman, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy.
This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. Copyright © 2014 Elsevier Inc. All rights reserved.
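The break-even logic behind the 2.1-year figure can be sketched in a deliberately simplified form: ignoring discontinuation and failure costs, a LARC method's upfront cost amortized over its duration of use is compared with a SARC method's annual cost. The $900 insertion cost below is an invented illustration; only the $432/year injection figure comes from the abstract.

```python
# Simplified break-even calculation: years of LARC use needed before its
# amortized annual cost falls below a SARC method's annual cost.

def larc_breakeven_years(larc_upfront_cost, sarc_annual_cost):
    """Duration of LARC use at which cumulative costs equalize."""
    return larc_upfront_cost / sarc_annual_cost

# Hypothetical $900 device-plus-insertion cost vs. a $432/year injection regimen
years = larc_breakeven_years(900.0, 432.0)   # ~2.1 years
```

The full model additionally weights these costs by method-specific failure and discontinuation rates, which is why its 2.1-year estimate is a model output rather than simple division.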
Mapping Atmospheric Moisture Climatologies across the Conterminous United States
Daly, Christopher; Smith, Joseph I.; Olson, Keith V.
2015-01-01
Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously-developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. 
All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026
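The per-grid-cell CAI step described above can be sketched as a weighted linear regression of station values against the predictor grid. Real PRISM weights combine distance, elevation, coastal proximity, atmospheric layer, and topographic position; the weights and station values below are arbitrary stand-ins for illustration.

```python
# Weighted least-squares line through (predictor, observed) station pairs;
# the fitted function is then evaluated at the grid cell's predictor value.

def weighted_linear_fit(x, y, w):
    """Return (slope, intercept) of the weighted least-squares line."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    var = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    slope = cov / var
    return slope, my - slope * mx

# e.g. dew point regressed on minimum temperature at nearby stations
slope, intercept = weighted_linear_fit(
    x=[-2.0, 0.5, 3.0, 5.5],       # predictor-grid values at stations
    y=[-3.1, -0.5, 2.2, 4.9],      # observed dew points at stations
    w=[1.0, 0.8, 0.6, 0.3])        # physiographic-similarity stand-ins
```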
Achieving cost-neutrality with long-acting reversible contraceptive methods⋆
Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna
2014-01-01
Objectives This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration. Study design A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20–29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were annual average cost per method and minimum duration of LARC method usage to achieve cost-savings compared to SARC methods. Results The two least expensive methods were the copper IUD ($304 per woman, per year) and LNG-IUS 20 mcg/24 h ($308). The cost of SARC methods ranged between $432 (injection) and $730 (patch) per woman, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. Conclusions This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Implications Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy.
This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. PMID:25282161
Thomson, R; Kawrakow, I
2012-06-01
Widely-used classical trajectory Monte Carlo simulations of low energy electron transport neglect the quantum nature of electrons; however, at sub-1 keV energies quantum effects have the potential to become significant. This work compares quantum and classical simulations within a simplified model of electron transport in water. Electron transport is modeled in water droplets using quantum mechanical (QM) and classical trajectory Monte Carlo (MC) methods. Water droplets are modeled as collections of point scatterers representing water molecules from which electrons may be isotropically scattered. The role of inelastic scattering is investigated by introducing absorption. QM calculations involve numerically solving a system of coupled equations for the electron wavefield incident on each scatterer. A minimum distance between scatterers is introduced to approximate structured water. The average QM water droplet incoherent cross section is compared with the MC cross section; a relative error (RE) on the MC results is computed. RE varies with electron energy, average and minimum distances between scatterers, and scattering amplitude. The mean free path is generally the relevant length scale for estimating RE. The introduction of a minimum distance between scatterers increases RE substantially (factors of 5 to 10), suggesting that the structure of water must be modeled for accurate simulations. Inelastic scattering does not improve agreement between QM and MC simulations: for the same magnitude of elastic scattering, the introduction of inelastic scattering increases RE. Droplet cross sections are sensitive to droplet size and shape; considerable variations in RE are observed with changing droplet size and shape. At sub-1 keV energies, quantum effects may become non-negligible for electron transport in condensed media. Electron transport is strongly affected by the structure of the medium. 
Inelastic scatter does not improve agreement between QM and MC simulations of low energy electron transport in condensed media. © 2012 American Association of Physicists in Medicine.
Climate influence on Baltic cod, sprat, and herring stock-recruitment relationships
NASA Astrophysics Data System (ADS)
Margonski, Piotr; Hansson, Sture; Tomczak, Maciej T.; Grzebielec, Ryszard
2010-10-01
A wide range of possible recruitment drivers were tested for key exploited fish species in the Baltic Sea Regional Advisory Council (RAC) area: Eastern Baltic Cod, Central Baltic Herring, Gulf of Riga Herring, and sprat. For each of the stocks, two hypotheses were tested: (i) recruitment is significantly related to spawning stock biomass, climatic forcing, and feeding conditions and (ii) by acknowledging these drivers, management decisions can be improved. Climate impact, expressed by climatic indices or changes in water temperature, was included in all the final models. Recruitment of the two herring stocks appeared to be influenced by different factors: the spawning stock biomass, the winter Baltic Sea Index prior to spawning, and potentially the November-December sea surface temperature during the winter after spawning were important to Gulf of Riga Herring, while the final models for Central Baltic Herring included spawning stock biomass and August sea surface temperature. Recruitment of sprat appeared to be influenced by July-August temperature, but was independent of the spawning biomass when SSB > 200,000 tons. Recruitment of Eastern Baltic Cod was significantly related to spawning stock biomass, the winter North Atlantic Oscillation index, and the reproductive volume in the Gotland Basin in May. All the models including extrinsic factors significantly improved prediction ability as compared to traditional models, which account for impacts of the spawning stock biomass alone. Based on the final models, the minimum spawning stock biomass needed to produce the associated minimum recruitment under average environmental conditions was calculated for each stock. Using uncertainty analyses, the spawning stock biomass required to produce the associated minimum recruitment was presented at different probabilities, considering the influence of the extrinsic drivers. 
This tool allows recruitment to be predicted with a required probability, that is, one higher than the average 50% estimated from the models. Further, this approach accounts for unfavorable environmental conditions, under which a higher spawning stock biomass is needed to maintain recruitment at a required level.
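A generic sketch of an environmentally extended stock-recruitment relationship of the kind fitted above; the Ricker functional form is a common choice for such models, but the form and every coefficient below are illustrative assumptions, not the study's fitted models.

```python
import math

def ricker_recruitment(ssb, a, b, c, env):
    """Ricker curve with an environmental covariate:
    R = a * SSB * exp(-b * SSB + c * E)."""
    return a * ssb * math.exp(-b * ssb + c * env)

def minimum_ssb_for(target_r, a, b, c, env, step=100.0):
    """Smallest SSB (searched on the rising limb) giving recruitment
    >= target under the given environmental conditions."""
    ssb = step
    while ricker_recruitment(ssb, a, b, c, env) < target_r:
        ssb += step
        if ssb > 1e7:
            raise ValueError("target unreachable under these conditions")
    return ssb

# Unfavorable conditions (lower env index) push the required SSB upward,
# mirroring the point made in the abstract.
ssb_good = minimum_ssb_for(50_000, a=2.0, b=1e-6, c=0.5, env=1.0)
ssb_bad = minimum_ssb_for(50_000, a=2.0, b=1e-6, c=0.5, env=-1.0)
```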
Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation.
Miao, Yinglong; Sinko, William; Pierce, Levi; Bucher, Denis; Walker, Ross C; McCammon, J Andrew
2014-07-08
Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20 kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2-3 kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting "PyReweighting" is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/.
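The two reweighting schemes compared above can be sketched for boost-potential samples ΔV collected within one histogram bin. This is a minimal illustration, not the PyReweighting toolkit itself; `beta` denotes 1/kBT in units matching ΔV.

```python
import math

def exp_average_weight(dV, beta):
    """Exponential-average factor <exp(beta*dV)>: formally exact, but easily
    dominated by the few frames with the highest boost potential."""
    return sum(math.exp(beta * v) for v in dV) / len(dV)

def cumulant2_weight(dV, beta):
    """Second-order cumulant expansion exp(beta*<dV> + beta^2*Var(dV)/2):
    far less noisy, and accurate when dV is near-Gaussian."""
    n = len(dV)
    mean = sum(dV) / n
    var = sum((v - mean) ** 2 for v in dV) / n
    return math.exp(beta * mean + 0.5 * beta * beta * var)
```

For an exactly Gaussian ΔV the two expressions agree in the large-sample limit, which is why the second-order expansion performs well precisely when the boost-potential distribution shows low anharmonicity.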
Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation
2015-01-01
Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2–3kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting “PyReweighting” is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/. PMID:25061441
NASA Astrophysics Data System (ADS)
Kolb, Ulrich; Baraffe, Isabelle
Using improved, up-to-date stellar input physics tested against observations of low-mass stars and brown dwarfs, we calculate the secular evolution of low-donor-mass CVs, including those which form with a brown dwarf donor star. Our models confirm the mismatch between the calculated minimum period (P_min ≈ 70 min) and the observed short-period cut-off (≈ 80 min) in the CV period histogram. Theoretical period distributions synthesized from our model sequences always show an accumulation of systems at the minimum period, a feature absent in the observed distribution. We suggest that non-magnetic CVs become unobservable as they are effectively trapped in permanent quiescence before they reach P_min, and that small-number statistics may hide the period spike for magnetic CVs. We calculate the minimum period for high mass transfer rate sequences and discuss the relevance of these for explaining the location of CV secondaries in the orbital-period-spectral-type diagram. We also show that a recently suggested revised mass-radius relation for low-mass main-sequence stars cannot explain the CV period gap.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cilla, Savino, E-mail: savinocilla@gmail.com; Deodato, Francesco; Macchia, Gabriella
We reported our initial experience in using Elekta volumetric modulated arc therapy (VMAT) and an anatomy-based treatment planning system (TPS) for single high-dose radiosurgery (SRS-VMAT) of liver metastases. This study included a cohort of 12 patients treated with a 26-Gy single fraction. Single-arc VMAT plans were generated with the Ergo++ TPS. The prescription isodose surface (IDS) was selected to fulfill the 2 following criteria: 95% of the planning target volume (PTV) reached 100% of the prescription dose and 99% of the PTV reached a minimum of 90% of the prescription dose. A 1-mm multileaf collimator (MLC) block margin was added around the PTV. For comparison of dose distributions with literature data, several conformity indexes (conformity index [CI], conformation number [CN], and gradient index [GI]) were calculated. Treatment efficiency and pretreatment dosimetric verification were assessed. Early clinical data were also reported. Target and organ-at-risk objectives were met for all patients. Mean and maximum doses to PTVs were on average 112.9% and 121.5% of the prescribed dose, respectively. A very high degree of dose conformity was obtained, with CI, CN, and GI average values equal to 1.29, 0.80, and 3.63, respectively. The beam-on time was on average 9.3 minutes, i.e., 0.36 min/Gy. The mean number of monitor units was 3162, i.e., 121.6 MU/Gy. Pretreatment verification (3%-3 mm) showed optimal agreement with calculated values; the mean γ value was 0.27 and 98.2% of measured points had γ < 1. With a median follow-up of 16 months, complete response was observed in 12/14 (86%) lesions and partial response in 2/14 (14%) lesions. No radiation-induced liver disease (RILD), duodenal ulceration, esophagitis, or gastric hemorrhage was observed in any patient. 
In conclusion, this analysis demonstrated the feasibility and appropriateness of high-dose single-fraction SRS-VMAT for liver metastases performed with Elekta VMAT and the Ergo++ TPS. Preliminary clinical outcomes showed a high rate of local control and minimal acute toxicity.
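The conformity metrics quoted above follow the standard literature definitions and can be computed directly from plan volumes. The volumes below (in cc) are invented so that the results reproduce the reported average index values; they are not patient data.

```python
# CI: prescription isodose volume over target volume (ideal = 1).
# CN: coverage x selectivity, (TV_PIV/TV) * (TV_PIV/PIV) (ideal = 1).
# GI: half-prescription isodose volume over prescription isodose volume.

def conformity_index(piv, tv):
    """Prescription isodose volume / target volume."""
    return piv / tv

def conformation_number(tv_piv, tv, piv):
    """Fraction of target covered times fraction of isodose inside target."""
    return (tv_piv / tv) * (tv_piv / piv)

def gradient_index(v_half_rx, v_rx):
    """Dose fall-off: 50%-isodose volume / 100%-isodose volume."""
    return v_half_rx / v_rx

ci = conformity_index(25.8, 20.0)            # 1.29
cn = conformation_number(19.0, 20.0, 22.6)   # ~0.80
gi = gradient_index(93.65, 25.8)             # ~3.63
```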
40 CFR 60.2170 - Is there a minimum amount of monitoring data I must obtain?
Code of Federal Regulations, 2012 CFR
2012-07-01
...-control periods, or required monitoring system quality assurance or control activities in calculations... 40 Protection of Environment 7 2012-07-01 2012-07-01 false Is there a minimum amount of monitoring..., 2001 Monitoring § 60.2170 Is there a minimum amount of monitoring data I must obtain? (a) Except for...
40 CFR 60.2170 - Is there a minimum amount of monitoring data I must obtain?
Code of Federal Regulations, 2011 CFR
2011-07-01
...-control periods, or required monitoring system quality assurance or control activities in calculations... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Is there a minimum amount of monitoring..., 2001 Monitoring § 60.2170 Is there a minimum amount of monitoring data I must obtain? (a) Except for...
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
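The core idea above can be sketched generically: approximate the least-squares error and its gradient using only a random subset of rays (rows of the system matrix), rescaled to estimate the full-set values. This is an illustrative reconstruction under that reading, not the patented implementation.

```python
import random

def approximate_error_and_gradient(A, b, x, subset_fraction=0.25, rng=None):
    """Estimate ||Ax - b||^2 and its gradient from a subset of rays (rows)."""
    rng = rng or random.Random(0)
    m, n = len(A), len(x)
    k = max(1, int(m * subset_fraction))
    rows = rng.sample(range(m), k)             # the selected subset of rays
    err, grad = 0.0, [0.0] * n
    for i in rows:
        r = sum(A[i][j] * x[j] for j in range(n)) - b[i]   # ray residual
        err += r * r
        for j in range(n):
            grad[j] += 2.0 * r * A[i][j]
    scale = m / k                              # rescale to a full-set estimate
    return err * scale, [g * scale for g in grad]
```

Within a constrained conjugate-gradient loop, this cheap estimate would stand in for the exact error when computing the minimum along the current conjugate direction.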
Labson, Victor F.; Clark, Roger N.; Swayze, Gregg A.; Hoefen, Todd M.; Kokaly, Raymond F.; Livo, K. Eric; Powers, Michael H.; Plumlee, Geoffrey S.; Meeker, Gregory P.
2010-01-01
All of the calculations and results in this report are preliminary and intended for the purpose, and only for the purpose, of aiding the incident team in assessing the extent of the spilled oil for ongoing response efforts. Other applications of this report are not authorized and are not considered valid. Because of time constraints and limitations of data available to the experts, many of their estimates are approximate, are subject to revision, and certainly should not be used as the Federal Government's final values for assessing volume of the spill or its impact to the environment or to coastal communities. Each expert that contributed to this report reserves the right to alter his conclusions based upon further analysis or additional information. An estimated minimum total oil discharge was determined by calculations of oil volumes measured as of May 17, 2010. This included oil on the ocean surface measured with satellite and airborne images and with spectroscopic data (129,000 barrels to 246,000 barrels using less and more aggressive assumptions, respectively), oil skimmed off the surface (23,500 barrels from U.S. Coast Guard [USCG] estimates), oil burned off the surface (11,500 barrels from USCG estimates), dispersed subsea oil (67,000 to 114,000 barrels), and oil evaporated or dissolved (109,000 to 185,000 barrels). Sedimentation (oil captured from Mississippi River silt and deposited on the ocean bottom), biodegradation, and other processes may indicate significant oil volumes beyond our analyses, as will any subsurface volumes such as suspended tar balls or other emulsions that are not included in our estimates. The lower bounds of total measured volumes are estimated to be within the range of 340,000 to 580,000 barrels as of May 17, 2010, for an estimated average minimum discharge rate of 12,500 to 21,500 barrels per day for 27 days from April 20 to May 17, 2010.
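The reported average minimum discharge rates follow directly from the total-volume bounds and the 27-day window:

```python
# Lower-bound totals (barrels) over April 20 - May 17, 2010 (27 days).
low_total, high_total, days = 340_000, 580_000, 27

low_rate = low_total / days    # ~12,600 barrels/day (quoted as ~12,500)
high_rate = high_total / days  # ~21,500 barrels/day
```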
The Use of the Nelder-Mead Method in Determining Projection Parameters for Globe Photographs
NASA Astrophysics Data System (ADS)
Gede, M.
2009-04-01
A photo of a terrestrial or celestial globe can be handled as a map. The only hard issue is its projection: the so-called Tilted Perspective Projection which, if the optical axis of the photo intersects the globe's centre, is simplified to the Vertical Near-Side Perspective Projection. When georeferencing such a photo, the exact parameters of the projection are also needed. These parameters depend on the position of the viewpoint of the camera. Several hundred globe photos had to be georeferenced during the Virtual Globes Museum project, which made it necessary to automate the calculation of the projection parameters. The author developed a program for this task which uses the Nelder-Mead method in order to find the optimum parameters when a set of control points is given as input. The Nelder-Mead method is a numerical algorithm for minimizing a function in a many-dimensional space. The function in the present application is the average error of the control points calculated from the actual values of the parameters. The parameters are the geographical coordinates of the projection centre, the image coordinates of the same point, the rotation of the projection, the height of the perspective point and the scale of the photo (calculated in pixels/km). The program reads Global Mapper's Ground Control Point (.GCP) file format as input and creates projection description files (.PRJ) for the same software. The initial values of the geographical coordinates of the projection centre are calculated as the average of the control points, while the other parameters are set to experimental values which represent the most common circumstances of taking a globe photograph. The algorithm runs until the change of the parameters falls below a pre-defined limit. The minimum search can be refined by using the previous result parameter set as new initial values. This paper introduces the calculation mechanism and examples of its usage. 
Other possible uses of the method are also discussed.
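The optimization loop described above can be sketched with SciPy's Nelder-Mead implementation. The projection model here is a deliberately simplified stand-in (a plain rotation plus scale, two parameters), not the Vertical Near-Side Perspective mathematics, and the control points are invented.

```python
# Minimize the average control-point error with the Nelder-Mead simplex
# method; assumes numpy and scipy are available.
import numpy as np
from scipy.optimize import minimize

control_px = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # image coords
control_geo = np.array([[0.0, 0.0], [0.0, 2.0], [-2.0, 0.0]]) # "true" coords

def mean_error(params):
    """Average control-point error for a trial rotation angle and scale."""
    theta, scale = params
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    projected = control_px @ rot.T * scale
    return np.mean(np.linalg.norm(projected - control_geo, axis=1))

result = minimize(mean_error, x0=[0.1, 1.0], method='Nelder-Mead',
                  options={'xatol': 1e-8, 'fatol': 1e-8})
```

As in the paper, the search can be refined by restarting Nelder-Mead from the previous result until the parameter changes fall below a chosen limit.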
21 CFR 177.1680 - Polyurethane resins.
Code of Federal Regulations, 2012 CFR
2012-04-01
...′,α″-1,2,3-Propanetriyltris [omega-hydroxypoly (oxypropylene) (15-18 moles)], average molecular weight 3,000. Propylene glycol. α,α′,α″-[Propylidynetris (methylene)] tris [omega-hydroxypoly (oxypropylene) (minimum 1.5 moles)], minimum molecular weight 400. α-[ρ(1,1,3,3-Tetramethylbutyl) - phenyl]-omega...
21 CFR 177.1680 - Polyurethane resins.
Code of Federal Regulations, 2011 CFR
2011-04-01
...′,α″-1,2,3-Propanetriyltris [omega-hydroxypoly (oxypropylene) (15-18 moles)], average molecular weight 3,000. Propylene glycol. α,α′,α″-[Propylidynetris (methylene)] tris [omega-hydroxypoly (oxypropylene) (minimum 1.5 moles)], minimum molecular weight 400. α-[ρ(1,1,3,3-Tetramethylbutyl) - phenyl]-omega...
21 CFR 177.1680 - Polyurethane resins.
Code of Federal Regulations, 2010 CFR
2010-04-01
...′,α″-1,2,3-Propanetriyltris [omega-hydroxypoly (oxypropylene) (15-18 moles)], average molecular weight 3,000. Propylene glycol. α,α′,α″-[Propylidynetris (methylene)] tris [omega-hydroxypoly (oxypropylene) (minimum 1.5 moles)], minimum molecular weight 400. α-[ρ(1,1,3,3-Tetramethylbutyl) - phenyl]-omega...
21 CFR 177.1680 - Polyurethane resins.
Code of Federal Regulations, 2013 CFR
2013-04-01
...′,α″-1,2,3-Propanetriyltris [omega-hydroxypoly (oxypropylene) (15-18 moles)], average molecular weight 3,000. Propylene glycol. α,α′,α″-[Propylidynetris (methylene)] tris [omega-hydroxypoly (oxypropylene) (minimum 1.5 moles)], minimum molecular weight 400. α-[ρ(1,1,3,3-Tetramethylbutyl) - phenyl]-omega...
Ehelepola, N D B; Ariyaratne, Kusalika; Buddhadasa, W M N P; Ratnayake, Sunil; Wickramasinghe, Malani
2015-09-24
Weather variables affect dengue transmission. This study aimed to identify a dengue-weather correlation pattern in Kandy, Sri Lanka, compare the results with those of similar studies, and establish ways for better control and prevention of dengue. We collected data on reported dengue cases in Kandy and mid-year population data from 2003 to 2012, and calculated weekly incidences. We obtained daily weather data from two weather stations and converted it into weekly data. We studied correlation patterns between dengue incidence and weather variables using wavelet time series analysis, and then calculated cross-correlation coefficients to find the magnitudes of correlations. We found a positive correlation between dengue incidence and rainfall in millimeters, the number of rainy and wet days, the minimum temperature, and nighttime, daytime, and average humidity, mostly with a five- to seven-week lag. Additionally, we found correlations between dengue incidence and maximum and average temperatures, hours of sunshine, and wind, with longer lag periods. Dengue incidence showed a negative correlation with wind run. Our results showed that rainfall, temperature, humidity, hours of sunshine, and wind are correlated with local dengue incidence. We have suggested ways to improve dengue management routines and to control the disease in these times of global warming. We also noticed that the results of dengue-weather correlation studies can vary depending on the data analysis methods used.
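The lagged-correlation step can be sketched as a Pearson correlation between a weather series and the incidence series shifted by a given number of weeks. The twelve-week series below are synthetic stand-ins constructed so that incidence trails rainfall by about six weeks; they are not the Kandy data.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def lagged_correlation(weather, incidence, lag_weeks):
    """Correlate weather at week t with incidence at week t + lag."""
    return pearson(weather[:-lag_weeks or None], incidence[lag_weeks:])

rain = [10, 12, 30, 80, 60, 20, 15, 14, 35, 90, 70, 25]    # weekly mm
cases = [2, 2, 2, 2, 2, 2, 3, 3, 5, 10, 8, 4]              # weekly incidence
r6 = lagged_correlation(rain, cases, lag_weeks=6)
r0 = lagged_correlation(rain, cases, lag_weeks=0)
```

Scanning the lag from zero upward and picking the maximum correlation is what yields the five- to seven-week lag reported above.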
On Using a Space Telescope to Detect Weak-lensing Shear
NASA Astrophysics Data System (ADS)
Tung, Nathan; Wright, Edward
2017-11-01
Ignoring redshift dependence, the statistical performance of a weak-lensing survey is set by two numbers: the effective shape noise of the sources, which includes the intrinsic ellipticity dispersion and the measurement noise, and the density of sources that are useful for weak-lensing measurements. In this paper, we provide some general guidance for weak-lensing shear measurements from a “generic” space telescope by looking for the optimum wavelength bands to maximize the galaxy flux signal-to-noise ratio (S/N) and minimize ellipticity measurement error. We also calculate an effective galaxy number per square degree across different wavelength bands, taking into account the density of sources that are useful for weak-lensing measurements and the effective shape noise of sources. Galaxy data collected from the ultra-deep UltraVISTA Ks-selected and R-selected photometric catalogs (Muzzin et al. 2013) are fitted to radially symmetric Sérsic galaxy light profiles. The Sérsic galaxy profiles are then stretched to impose an artificial weak-lensing shear, and then convolved with a pure Airy Disk PSF to simulate imaging of weak gravitationally lensed galaxies from a hypothetical diffraction-limited space telescope. For our model calculations and sets of galaxies, our results show that the peak in the average galaxy flux S/N, the minimum average ellipticity measurement error, and the highest effective galaxy number counts all lie around the K-band near 2.2 μm.
Master, Hiral; Thoma, Louise M; Christiansen, Meredith B; Polakowski, Emily; Schmitt, Laura A; White, Daniel K
2018-07-01
Evidence of physical function difficulties, such as difficulty rising from a chair, may limit daily walking for people with knee osteoarthritis (OA). The purpose of this study was to identify minimum performance thresholds on clinical tests of physical function predictive of walking ≥6,000 steps/day. This benchmark is known to discriminate people with knee OA who develop functional limitation over time from those who do not. Using data from the Osteoarthritis Initiative, we quantified daily walking as average steps/day from an accelerometer (Actigraph GT1M) worn for ≥10 hours/day over 1 week. Physical function was quantified using 3 performance-based clinical tests: the 5 times sit-to-stand test, walking speed (tested over 20 meters), and the 400-meter walk test. To identify minimum performance thresholds for daily walking, we calculated physical function values corresponding to high specificity (80-95%) for predicting walking ≥6,000 steps/day. Among 1,925 participants (mean ± SD age 65.1 ± 9.1 years, mean ± SD body mass index 28.4 ± 4.8 kg/m², and 55% female) with valid accelerometer data, 54.9% walked ≥6,000 steps/day. High-specificity thresholds of physical function for walking ≥6,000 steps/day ranged from 11.4 to 14.0 seconds on the 5 times sit-to-stand test, 1.13 to 1.26 meters/second for walking speed, and 315 to 349 seconds on the 400-meter walk test. Not meeting these minimum performance thresholds on clinical tests of physical function may indicate inadequate physical ability to walk ≥6,000 steps/day for people with knee OA. Rehabilitation may be indicated to address underlying impairments limiting physical function. © 2017, American College of Rheumatology.
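The thresholding step described above (finding the test value that attains 80-95% specificity for predicting ≥6,000 steps/day) can be sketched directly from quantiles of the non-walkers' test values. This is an illustrative reconstruction on synthetic data, not the study's exact procedure; the cohort numbers below are invented.

```python
import numpy as np

def specificity_threshold(test_values, walks_6k, spec, lower_is_better=True):
    """Threshold on a clinical test attaining a target specificity for
    predicting walking >= 6,000 steps/day.

    Among people who do NOT walk >= 6,000 steps/day ("negatives"), specificity
    is the fraction correctly classified as negative. For a test where lower
    values are better (e.g. 5x sit-to-stand time), a person is called
    "positive" when their value is <= the threshold, so the threshold is the
    (1 - spec) quantile of the negatives' values.
    """
    values = np.asarray(test_values, dtype=float)
    negatives = values[~np.asarray(walks_6k, dtype=bool)]
    q = (1.0 - spec) if lower_is_better else spec
    return float(np.quantile(negatives, q))

# toy cohort: non-walkers tend to have slower sit-to-stand times
rng = np.random.default_rng(0)
times = np.concatenate([rng.normal(10.0, 1.5, 500),   # walkers
                        rng.normal(14.0, 2.0, 500)])  # non-walkers
walks = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])
t80 = specificity_threshold(times, walks, 0.80)
t95 = specificity_threshold(times, walks, 0.95)
```

Raising the specificity target tightens the threshold (t95 < t80), which is why the study reports a range (e.g. 11.4-14.0 seconds) rather than a single cut point.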
NASA Technical Reports Server (NTRS)
Corbett, Lee B.; Bierman, Paul R.; Graly, Joseph A.; Neumann, Thomas A.; Rood, Dylan H.
2013-01-01
High-latitude landscape evolution processes have the potential to preserve old, relict surfaces through burial by cold-based, nonerosive glacial ice. To investigate landscape history and age in the high Arctic, we analyzed in situ cosmogenic ¹⁰Be and ²⁶Al in 33 rocks from Upernavik, northwest Greenland. We sampled adjacent bedrock-boulder pairs along a 100 km transect at elevations up to 1000 m above sea level. Bedrock samples gave significantly older apparent exposure ages than corresponding boulder samples, and minimum limiting ages increased with elevation. Two-isotope (²⁶Al/¹⁰Be) calculations on 20 of the 33 samples yielded minimum limiting exposure durations up to 112 k.y., minimum limiting burial durations up to 900 k.y., and minimum limiting total histories up to 990 k.y. The prevalence of ¹⁰Be and ²⁶Al inherited from previous periods of exposure, especially in bedrock samples at high elevation, indicates that these areas record long and complex surface exposure histories, including significant periods of burial with little subglacial erosion. The long total histories suggest that these high-elevation surfaces were largely preserved beneath cold-based, nonerosive ice or snowfields for at least the latter half of the Quaternary. Because of high concentrations of inherited nuclides, only the six youngest boulder samples appear to record the timing of ice retreat. These six samples suggest deglaciation of the Upernavik coast at 11.3 +/- 0.5 ka (average +/- 1 standard deviation). There is no difference in deglaciation age along the 100 km sample transect, indicating that the ice-marginal position retreated rapidly, at rates of approximately 120 m yr⁻¹.
Minimum distraction gap: how much ankle joint space is enough in ankle distraction arthroplasty?
Fragomen, Austin T; McCoy, Thomas H; Meyers, Kathleen N; Rozbruch, S Robert
2014-02-01
The success of ankle distraction arthroplasty relies on the separation of the tibiotalar articular surfaces. The purpose of this study was to find the minimum distraction gap needed to ensure that the tibiotalar joint surfaces would not contact each other with full weight-bearing while under distraction. Circular external fixators were mounted to nine cadaver ankle specimens. Each specimen was then placed into a custom-designed load chamber. Loads of 0, 350, and 700N were applied to the specimen. Radiographic joint space was measured and joint contact pressure was monitored under each load. The external fixator was then sequentially distracted, and the radiographic joint space was measured under the three different loads. The experiment was stopped when there was no joint contact under 700N of load. The radiographic joint space was measured and the initial (undistracted) radiographic joint space was subtracted from it, yielding the distraction gap. The minimum distraction gap (mDG) that would provide total unloading was calculated. The average mDG was 2.4 mm (range, 1.6 to 4.0 mm) at 700N of load, 4.4 mm (range, 3.7 to 5.8 mm) at 350N of load, and 4.9 mm (range, 3.7 to 7.0 mm) at 0N of load. These results suggest that if the radiographic joint space on a standing X-ray of an ankle undergoing distraction arthroplasty shows a minimum DG of 5.8 mm, then there will be no contact between joint surfaces during full weight-bearing. Therefore, 5 mm of radiographic joint space, as recommended historically, may not be adequate to prevent contact of the articular surfaces during weight-bearing.
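The two quantities the study computes per specimen can be sketched as follows: the distraction gap (measured joint space minus the undistracted baseline) and the smallest gap at which contact pressure vanishes. The data layout is hypothetical, chosen only to illustrate the arithmetic.

```python
def distraction_gap(joint_space_mm, initial_joint_space_mm):
    """Distraction gap (DG): radiographic joint space under distraction minus
    the initial, undistracted joint space."""
    return joint_space_mm - initial_joint_space_mm

def min_unloading_gap(measurements):
    """Smallest DG at which the measured tibiotalar contact pressure is zero,
    i.e. the minimum distraction that fully unloads the joint under the
    applied load. `measurements` is a list of (gap_mm, contact_pressure)
    pairs from sequential distraction steps (hypothetical layout)."""
    unloaded = [gap for gap, pressure in measurements if pressure == 0.0]
    return min(unloaded) if unloaded else None

# toy series under a fixed load: contact disappears once the gap reaches 2.4 mm
series = [(0.0, 5.1), (1.0, 2.3), (2.0, 0.4), (2.4, 0.0), (3.0, 0.0)]
```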
QUIET-TIME SUPRATHERMAL (∼0.1–1.5 keV) ELECTRONS IN THE SOLAR WIND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Jiawei; Wang, Linghua; Zong, Qiugang
2016-03-20
We present a statistical survey of the energy spectrum of solar wind suprathermal (∼0.1–1.5 keV) electrons measured by the WIND 3DP instrument at 1 AU during quiet times at the minimum and maximum of solar cycles 23 and 24. After separating (beaming) strahl electrons from (isotropic) halo electrons according to their different behaviors in the angular distribution, we fit the observed energy spectrum of both strahl and halo electrons at ∼0.1–1.5 keV to a Kappa distribution function with an index κ and effective temperature T_eff. We also calculate the number density n and average energy E_avg of strahl and halo electrons by integrating the electron measurements between ∼0.1 and 1.5 keV. We find a strong positive correlation between κ and T_eff for both strahl and halo electrons, and a strong positive correlation between the strahl n and halo n, likely reflecting the nature of the generation of these suprathermal electrons. In both solar cycles, κ is larger at solar minimum than at solar maximum for both strahl and halo electrons. The halo κ is generally smaller than the strahl κ (except during the solar minimum of cycle 23). The strahl n is larger at solar maximum, but the halo n shows no difference between solar minimum and maximum. Both the strahl n and halo n have no clear association with the solar wind core population, but the density ratio between the strahl and halo roughly anti-correlates (correlates) with the solar wind density (velocity).
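The fitting and integration steps above can be sketched with a least-squares fit to one common Kappa form. The normalization used here is illustrative (the paper's exact functional form may differ), and the spectrum is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def kappa_spectrum(E, A, kappa, T_eff):
    """One common Kappa-distribution form for an electron spectrum:
    f(E) = A * (1 + E / (kappa * T_eff)) ** (-kappa - 1).
    (Illustrative; the paper's exact normalization may differ.)"""
    return A * (1.0 + E / (kappa * T_eff)) ** (-kappa - 1.0)

# synthetic ~0.1-1.5 keV spectrum generated with kappa = 5, T_eff = 0.05 keV
E = np.linspace(0.1, 1.5, 30)
f = kappa_spectrum(E, 1.0, 5.0, 0.05)

# fit the spectrum for the index kappa and effective temperature T_eff
popt, _ = curve_fit(kappa_spectrum, E, f, p0=[1.0, 4.0, 0.1], maxfev=10000)
A_fit, kappa_fit, T_fit = popt

# density-like and mean-energy-like moments from a simple Riemann sum,
# analogous to integrating the measurements between 0.1 and 1.5 keV
dE = E[1] - E[0]
n = float(np.sum(f) * dE)
E_avg = float(np.sum(E * f) * dE / n)
```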
Code of Federal Regulations, 2011 CFR
2011-07-01
... upon which your application for a modification is based: —BOD5 ___ mg/L —Suspended solids ___ mg/L —pH... dry weather —average wet weather —maximum —annual average BOD5 (mg/L) for the following plant flows: —minimum —average dry weather —average wet weather —maximum —annual average Suspended solids (mg/L) for the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... upon which your application for a modification is based: —BOD5 ___ mg/L —Suspended solids ___ mg/L —pH... dry weather —average wet weather —maximum —annual average BOD5 (mg/L) for the following plant flows: —minimum —average dry weather —average wet weather —maximum —annual average Suspended solids (mg/L) for the...
Inflight fuel tank temperature survey data
NASA Technical Reports Server (NTRS)
Pasion, A. J.
1979-01-01
Statistical summaries of the fuel and air temperature data for twelve different routes and for different aircraft models (B747, B707, DC-10, and DC-8) are given. The minimum fuel, total air, and static air temperatures expected at a 0.3% probability are summarized in table form. Minimum fuel temperature extremes agreed with calculated predictions, and the minimum fuel temperature did not necessarily equal the minimum total air temperature, even for extreme-weather, long-range flights.
40 CFR 180.960 - Polymers; exemptions from the requirement of a tolerance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 9014-92-026401-47-8 1, 2-Ethanediamine, polymer with methyl oxirane and oxirane, minimum number average...(oxyethylene) content averages 30 moles None α-(p-Nonylphenyl)-ω-hydroxypoly(oxyethylene) sulfate, and its...
Proposal for Support of Miami Inner City Marine Summer Intern Program, Dade County.
1987-12-21
employer NUMBER OF POSITIONS ONE MINIMUM AGE 16 SPECIAL REQUIREMENTS * General Science * Basic knowledge of library procedures; an interest in library science is helpful * Minimum Grade Point Average 3.0 DRESS REQUIREMENTS Discuss with employer JOB DESCRIPTION * Catalogs and files new sets of
21 CFR 177.1680 - Polyurethane resins.
Code of Federal Regulations, 2014 CFR
2014-04-01
...′-(Isopropylidenedi-p-phenylene)bis[omega-hydroxypoly (oxy-pro-pylene)(3-4 moles)], average molecular weight 675... propylene oxide). Polypropylene glycol. α,α′,α″-1,2,3-Propanetriyltris [omega-hydroxypoly (oxypropylene) (15...)] tris [omega-hydroxypoly (oxypropylene) (minimum 1.5 moles)], minimum molecular weight 400. α-[ρ(1,1,3,3...
40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.
Code of Federal Regulations, 2010 CFR
2010-07-01
... flows and/or tracer gas concentrations for transient and ramped modal cycles to validate the minimum... mode-average values instead of continuous measurements for discrete mode steady-state duty cycles... molar flow data. This involves determination of at least two of the following three quantities: Raw...
Sigma Routing Metric for RPL Protocol.
Sanmartin, Paul; Rojas, Aldo; Fernandez, Luis; Avila, Karen; Jabba, Daladier; Valle, Sebastian
2018-04-21
This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures a better routing performance in dense sensor networks. The simulations are done through the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms at a high margin in both OF0 and MRHOF, in terms of network latency, packet delivery ratio, lifetime, and power consumption.
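The SIGMA-ETX idea above (score a route by the spread of its per-hop ETX values rather than their average) can be sketched in a few lines. This is an illustration of the metric's intent, not the RPL/MRHOF implementation; the route data are invented.

```python
import statistics

def sigma_etx(route_etx):
    """SIGMA-ETX-style route score: the standard deviation of the per-hop
    ETX values along a route, instead of their mean (a sketch of the idea)."""
    if len(route_etx) < 2:
        return 0.0
    return statistics.pstdev(route_etx)

def best_route(candidate_routes):
    """Prefer the candidate whose per-hop ETX values are most uniform."""
    return min(candidate_routes, key=sigma_etx)

# two routes with the SAME mean ETX (1.25): one balanced, one with a lossy hop
even  = [1.2, 1.3, 1.2, 1.3]
spiky = [1.0, 1.0, 1.0, 2.0]
```

A mean-ETX metric cannot distinguish these two routes (both average 1.25), whereas the standard deviation penalizes the route containing the single long, lossy hop — the bottleneck case the paper targets.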
Lee, Yun-Keun; Ju, Young-Su; Lee, Won Jin; Hwang, Seung Sik; Yim, Sang-Hyuk; Yoo, Sang-Chul; Lee, Jieon; Choi, Kyung-Hwa; Burm, Eunae; Ha, Mina
2015-01-01
We aimed to assess the radiation exposure for epidemiologic investigation in residents exposed to radiation from roads that were accidentally found to be contaminated with radioactive cesium-137 (¹³⁷Cs) in Seoul. Using information regarding the frequency and duration of passing via the ¹³⁷Cs-contaminated roads or residing/working near the roads from the questionnaires that were obtained from 8875 residents and the measured radiation doses reported by the Nuclear Safety and Security Commission, we calculated the total cumulative dose of radiation exposure for each person. Sixty-three percent of the residents who responded to the questionnaire were considered as ever-exposed and 1% of them had a total cumulative dose of more than 10 mSv. The mean (minimum, maximum) duration of radiation exposure was 4.75 years (0.08, 11.98) and the geometric mean (minimum, maximum) of the total cumulative dose was 0.049 mSv (<0.001, 35.35) in the exposed. An individual exposure assessment was performed for an epidemiological study to estimate the health risk among residents living in the vicinity of ¹³⁷Cs-contaminated roads. The average exposure dose in the exposed people was less than 5% of the current guideline.
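The cumulative-dose calculation described above reduces to summing dose rate × duration × frequency over each exposure pattern. A minimal sketch, assuming a simple tuple layout for the questionnaire-derived data (the study's actual dose model may differ, and the dose rate used below is an assumed figure):

```python
def cumulative_dose_mSv(exposures):
    """Total cumulative external dose, summed over exposure patterns.

    `exposures` holds (dose_rate_mSv_per_hour, hours_per_event, n_events)
    tuples, e.g. reconstructed from questionnaire answers about how often
    and for how long a person passed the contaminated road, combined with
    measured dose rates. (Hypothetical layout, for illustration.)"""
    return sum(rate * hours * n for rate, hours, n in exposures)

# a commuter passing the road twice a day, 250 days/year, ~6 minutes per pass,
# plus 20 longer stays nearby, at an assumed 0.5 uSv/h above background
dose = cumulative_dose_mSv([
    (0.0005, 0.1, 2 * 250),
    (0.0005, 1.0, 20),
])
```

The toy result (0.035 mSv) is of the same order as the reported geometric mean of 0.049 mSv among the exposed.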
Payton, Gardner W.; Susong, D.D.; Kip, Solomon D.; Heasler, H.
2010-01-01
Snowmelt hydrograph analysis and groundwater age dates of cool water springs on the Yellowstone volcanic plateau provide evidence of high volumes of groundwater circulation in watersheds comprised of Quaternary Yellowstone volcanics. Ratios of maximum to minimum mean daily discharge and average recession indices are calculated for watersheds within and surrounding the Yellowstone volcanic plateau. A model for snowmelt recession is used to separate groundwater discharge from overland runoff, and compare groundwater systems. Hydrograph signal interpretation is corroborated with chlorofluorocarbon (CFC) and tritium concentrations in cool water springs on the Yellowstone volcanic plateau. Hydrograph parameters show a spatial pattern correlated with watershed geology. Watersheds comprised dominantly of Quaternary Yellowstone volcanics are characterized by slow streamflow recession and low maximum-to-minimum flow ratios. Cool springs sampled within the Park contain CFCs and tritium and have apparent CFC age dates that range from about 50 years to modern. Watersheds comprised of Quaternary Yellowstone volcanics have a large volume of active groundwater circulation. A large, advecting groundwater field would be the dominant mechanism for mass and energy transport in the shallow crust of the Yellowstone volcanic plateau, and thus control the Yellowstone hydrothermal system. © 2009 Elsevier B.V.
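The two screening parameters named above can be computed directly from a daily discharge record. The recession index below is a deliberately simple stand-in (the e-folding time of flow decline on receding days) for the paper's recession analysis, applied to a synthetic record.

```python
import numpy as np

def hydrograph_parameters(daily_q):
    """From a series of mean daily discharge, return (a) the maximum-to-minimum
    flow ratio and (b) a simple recession index: the e-folding time, in days,
    of flow decline averaged over receding days. Illustrative only."""
    q = np.asarray(daily_q, dtype=float)
    ratio = q.max() / q.min()
    dlnq = np.diff(np.log(q))              # daily change in log-discharge
    recessions = dlnq[dlnq < 0]            # keep only receding days
    recession_index = -1.0 / recessions.mean()
    return ratio, recession_index

# toy record: a pure exponential recession with a 45-day timescale
t = np.arange(181)
q = 100.0 * np.exp(-t / 45.0)
ratio, k = hydrograph_parameters(q)
```

In the paper's framing, a low max/min ratio together with a long recession timescale indicates large groundwater storage, the signature of the Quaternary volcanic watersheds.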
NASA Astrophysics Data System (ADS)
Schaperow, J.; Cooper, M. G.; Cooley, S. W.; Alam, S.; Smith, L. C.; Lettenmaier, D. P.
2017-12-01
As climate regimes shift, streamflows and our ability to predict them will change, as well. Elasticity of summer minimum streamflow is estimated for 138 unimpaired headwater river basins across the maritime western US mountains to better understand how climatologic variables and geologic characteristics interact to determine the response of summer low flows to winter precipitation (PPT), spring snow water equivalent (SWE), and summertime potential evapotranspiration (PET). Elasticities are calculated using log-log linear regression, and linear reservoir storage coefficients are used to represent basin geology. Storage coefficients are estimated using baseflow recession analysis. On average, SWE, PET, and PPT explain about 1/3 of the summertime low flow variance. Snow-dominated basins with long timescales of baseflow recession are least sensitive to changes in SWE, PPT, and PET, while rainfall-dominated, faster draining basins are most sensitive. There are also implications for the predictability of summer low flows. The R2 between streamflow and SWE drops from 0.62 to 0.47 from snow-dominated to rain-dominated basins, while there is no corresponding increase in R2 between streamflow and PPT.
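Elasticity via log-log linear regression, as used above, is just the slope of log(flow) on log(predictor). A minimal sketch on a synthetic basin (the power-law exponent 0.6 is invented for illustration):

```python
import numpy as np

def elasticity(predictor, low_flow):
    """Streamflow elasticity as the slope of a log-log linear regression of
    summer minimum flow on a climate predictor (SWE, PPT, or PET) — a sketch
    of the estimation approach, not the paper's full regression model."""
    slope, _intercept = np.polyfit(np.log(predictor), np.log(low_flow), 1)
    return float(slope)

# synthetic basin where summer low flow scales as SWE ** 0.6 (noise omitted)
rng = np.random.default_rng(1)
swe = rng.uniform(100.0, 1000.0, 40)   # spring snow water equivalent, mm
q_min = 0.5 * swe ** 0.6               # summer minimum flow, arbitrary units
```

An elasticity of 0.6 means a 1% increase in SWE yields roughly a 0.6% increase in summer minimum flow.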
NASA Astrophysics Data System (ADS)
Jaiswal, P.; van Westen, C. J.; Jetten, V.
2011-06-01
A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. 
Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the corresponding minimum, average, and maximum run-out distances and vulnerability values, thus obtaining a range of risk values per return period. The results indicate that the total annual minimum, average, and maximum losses are about US$44,000, US$136,000, and US$268,000, respectively. The maximum risk to population varies from 2.1 × 10⁻¹ yr⁻¹ for one or more lives lost to 6.0 × 10⁻² yr⁻¹ for 100 or more lives lost. The obtained results will provide a basis for planning risk reduction strategies in the Nilgiri area.
Implications of Liebig’s law of the minimum for tree-ring reconstructions of climate
NASA Astrophysics Data System (ADS)
Stine, A. R.; Huybers, P.
2017-11-01
A basic principle of ecology, known as Liebig's Law of the Minimum, is that plant growth reflects the strongest limiting environmental factor. This principle implies that a limiting environmental factor can be inferred from historical growth and, in dendrochronology, such reconstruction is generally achieved by averaging collections of standardized tree-ring records. Averaging is optimal if growth reflects a single limiting factor and noise, but not if growth also reflects locally variable stresses that intermittently limit growth. In this study a collection of Arctic tree-ring records is shown to follow scaling relationships that are inconsistent with the signal-plus-noise model of tree growth but consistent with Liebig's Law acting at the local level. Also consistent with law-of-the-minimum behavior is that reconstructions based on the least-stressed trees in a given year better follow variations in temperature than typical approaches where all tree-ring records are averaged. Improvements in reconstruction skill occur across all frequencies, with the greatest increase at the lowest frequencies. More comprehensive statistical-ecological models of tree growth may offer further improvement in reconstruction skill.
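The contrast above (averaging all trees versus using only the least-stressed trees when growth follows a law of the minimum) can be demonstrated on a toy stand. This is a sketch of the idea under an invented stress model, not the authors' estimator.

```python
import numpy as np

def mean_chronology(rings):
    """Conventional reconstruction: average standardized ring widths across trees."""
    return rings.mean(axis=0)

def least_stressed_chronology(rings, frac=0.25):
    """Law-of-the-minimum-motivated variant: in each year, average only the
    fastest-growing (least locally stressed) fraction of trees."""
    k = max(1, int(frac * rings.shape[0]))
    return np.sort(rings, axis=0)[-k:].mean(axis=0)

# toy stand: growth = min(common temperature signal, occasional local stress)
rng = np.random.default_rng(2)
temp = rng.normal(0.0, 1.0, 200)             # shared limiting factor (the signal)
local = rng.normal(-2.0, 1.0, (30, 200))     # local stress levels when present
stressed = rng.random((30, 200)) < 0.3       # stress strikes ~30% of tree-years
growth = np.minimum(temp, np.where(stressed, local, np.inf))

r_mean = np.corrcoef(mean_chronology(growth), temp)[0, 1]
r_top = np.corrcoef(least_stressed_chronology(growth), temp)[0, 1]
```

Because unstressed trees track temperature exactly in this toy model, the least-stressed chronology recovers the signal more faithfully than the all-tree mean, mirroring the paper's finding.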
Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.
Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron
2017-10-21
The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
Distribution of cataract surgical rate and its economic inequality in Iran.
Hashemi, Hassan; Rezvan, Farhad; Fotouhi, Akbar; Khabazkhoob, Mehdi; Gilasi, Hamidreza; Etemad, Koroush; Mahdavi, Alireza; Mehravaran, Shiva; Asgari, Soheila
2015-06-01
To determine the distribution of the cataract surgical rate (CSR; cataract surgeries per million population per year), the CSR in the population older than 50 years (CSR 50+) in the provinces of Iran, and their economic inequality in 2010. As part of the cross-sectional 2011 CSR survey, the provincial CSR and CSR 50+ were calculated as the total number of surgeries in major and minor centers divided by the total population and the population older than 50 years in each province. Economic inequality was determined using the average province income, the average urban and rural household incomes, and the percentage of urban and rural population in each province. Tehran and Ilam provinces had the highest and lowest CSR (12,465 vs. 359), respectively. Fars and Ilam provinces had the highest and lowest CSR 50+ (71,381 vs. 2481), respectively. Low CSR (<3000) was detected in 9 provinces where a 2.4 to 735.7% increase is needed to reach the minimum required. High CSR (>5000) was observed in 14 provinces (45.2%) where rates were 0.6 to 59.9% higher than the global target. Cataract surgical rate increased at higher economic quintiles. Differences between the first, second, and fifth (poorest) quintiles were statistically significant. The CSR concentration index was 0.1964 (95% confidence interval, 0.0964 to 0.2964). In line with the goals of the Vision 2020 initiative to eliminate cataract blindness, more than 70% of geographic areas in Iran have achieved the minimum CSR of 3000 or more. However, a large gap still exists in less than 30% of areas, mainly attributed to the economic status.
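The CSR arithmetic above is straightforward, and the reported 735.7% figure is consistent with the lowest provincial CSR of 359 against the minimum target of 3000. A minimal sketch (the 3,500-surgeries example province is invented):

```python
def csr(surgeries_per_year, population):
    """Cataract surgical rate: operations per million population per year."""
    return surgeries_per_year / population * 1_000_000

def needed_increase_pct(current_csr, target=3000):
    """Percent increase needed to reach the minimum target CSR."""
    return max(0.0, (target / current_csr - 1.0) * 100.0)

# hypothetical province: 3,500 surgeries for 1.1 million people
example_csr = csr(3500, 1_100_000)

# Ilam-style case: CSR of 359 requires ~735.7% growth to reach 3000
ilam_gap = needed_increase_pct(359)
```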
Suicide and meteorological factors in São Paulo, Brazil, 1996-2011: a time series analysis.
Bando, Daniel H; Teng, Chei T; Volpe, Fernando M; Masi, Eduardo de; Pereira, Luiz A; Braga, Alfésio L
2017-01-01
Considering the scarcity of reports from intertropical latitudes and the Southern Hemisphere, we aimed to examine the association between meteorological factors and suicide in São Paulo. Weekly suicide records stratified by sex were gathered. Weekly averages for minimum, mean, and maximum temperature (°C), insolation (hours), irradiation (MJ/m²), relative humidity (%), atmospheric pressure (mmHg), and rainfall (mm) were computed. The time structures of explanatory variables were modeled by polynomial distributed lag applied to the generalized additive model. The model controlled for long-term trends and selected meteorological factors. The total number of suicides was 6,600 (5,073 for men), an average of 6.7 suicides per week (8.7 for men and 2.0 for women). For overall suicides and among men, effects were predominantly acute and statistically significant only at lag 0. Weekly average minimum temperature had the greatest effect on suicide; there was a 2.28% increase (95%CI 0.90-3.69) in total suicides and a 2.37% increase (95%CI 0.82-3.96) among male suicides with each 1 °C increase. This study suggests that an increase in weekly average minimum temperature has a short-term effect on suicide in São Paulo.
On the Relation Between Spotless Days and the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2005-01-01
Spotless days are examined as a predictor for the size and timing of a sunspot cycle. For cycles 16-23, the first spotless day for a new cycle, which occurs during the decline of the old cycle, is found to precede the minimum amplitude for the new cycle by approximately 34 months, with a range of 25-40 months. Reports indicate that the first spotless day for cycle 24 occurred in January 2004, suggesting that minimum amplitude for cycle 24 should be expected before April 2007, probably sometime during the latter half of 2006. If true, then cycle 23 will be classified as a cycle of shorter period, further suggesting that cycle 24 likely will be a cycle of larger than average minimum and maximum amplitudes and faster than average rise, peaking sometime in 2010.
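The prediction above is simple month arithmetic: offsetting the first spotless day by the observed 25-40-month lead. A minimal sketch (the `add_months` helper is ours, with dates clamped to the first of the month since only month resolution matters):

```python
from datetime import date

def add_months(d, months):
    """Shift a date forward by whole months, clamping the day to the 1st."""
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, 1)

def predicted_minimum_window(first_spotless_day, lead_range=(25, 40)):
    """Window for the next cycle minimum, using the cycles 16-23 finding that
    the first spotless day precedes the new-cycle minimum by ~34 months
    (range 25-40 months)."""
    lo, hi = lead_range
    return add_months(first_spotless_day, lo), add_months(first_spotless_day, hi)

# cycle 24: first spotless day reported in January 2004
early, late = predicted_minimum_window(date(2004, 1, 1))
# central estimate (+34 months) lands in November 2006
central = add_months(date(2004, 1, 1), 34)
```

The resulting window (February 2006 to May 2007, centered on late 2006) matches the abstract's "before April 2007, probably during the latter half of 2006."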
Flow convergence caused by a salinity minimum in a tidal channel
Warner, John C.; Schoellhamer, David H.; Burau, Jon R.; Schladow, S. Geoffrey
2006-01-01
Residence times of dissolved substances and sedimentation rates in tidal channels are affected by residual (tidally averaged) circulation patterns. One influence on these circulation patterns is the longitudinal density gradient. In most estuaries the longitudinal density gradient typically maintains a constant direction. However, a junction of tidal channels can create a local reversal (change in sign) of the density gradient. This can occur due to a difference in the phase of tidal currents in each channel. In San Francisco Bay, the phasing of the currents at the junction of Mare Island Strait and Carquinez Strait produces a local salinity minimum in Mare Island Strait. At the location of a local salinity minimum the longitudinal density gradient reverses direction. This paper presents four numerical models that were used to investigate the circulation caused by the salinity minimum: (1) A simple one-dimensional (1D) finite difference model demonstrates that a local salinity minimum is advected into Mare Island Strait from the junction with Carquinez Strait during flood tide. (2) A three-dimensional (3D) hydrodynamic finite element model is used to compute the tidally averaged circulation in a channel that contains a salinity minimum (a change in the sign of the longitudinal density gradient) and compares that to a channel that contains a longitudinal density gradient in a constant direction. The tidally averaged circulation produced by the salinity minimum is characterized by converging flow at the bed and diverging flow at the surface, whereas the circulation produced by the constant direction gradient is characterized by converging flow at the bed and downstream surface currents. These velocity fields are used to drive both a particle tracking and a sediment transport model. 
(3) A particle tracking model demonstrates a 30 percent increase in the residence time of neutrally buoyant particles transported through the salinity minimum, as compared to transport through a constant direction density gradient. (4) A sediment transport model demonstrates increased deposition at the near-bed null point of the salinity minimum, as compared to the constant direction gradient null point. These results are corroborated by historically noted large sedimentation rates and a local maximum of selenium accumulation in clams at the null point in Mare Island Strait.
40 CFR 62.14455 - What if my HMIWI goes outside of a parameter limit?
Code of Federal Regulations, 2010 CFR
2010-07-01
... temperature (3-hour rolling average) simultaneously The PM, CO, and dioxin/furan emission limits. (c) Except..., daily average for batch HMIWI), and below the minimum dioxin/furan sorbent flow rate (3-hour rolling average) simultaneously The dioxin/furan emission limit. (3) Operates above the maximum charge rate (3...
40 CFR Table 2 to Subpart Dddd of... - Operating Requirements
Code of Federal Regulations, 2011 CFR
2011-07-01
... minimum temperature established during the performance test Maintain the 3-hour block average THC... representative sample of the catalyst at least every 12 months Maintain the 3-hour block average THC... established according to § 63.2262(m) Maintain the 24-hour block average THC concentration a in the biofilter...
40 CFR Table 2 to Subpart Dddd of... - Operating Requirements
Code of Federal Regulations, 2010 CFR
2010-07-01
... minimum temperature established during the performance test Maintain the 3-hour block average THC... representative sample of the catalyst at least every 12 months Maintain the 3-hour block average THC... established according to § 63.2262(m) Maintain the 24-hour block average THC concentration a in the biofilter...
Jaciw, Andrew P; Lin, Li; Ma, Boya
2016-10-18
Prior research has investigated design parameters for assessing average program impacts on achievement outcomes with cluster randomized trials (CRTs). Less is known about parameters important for assessing differential impacts. This article develops a statistical framework for designing CRTs to assess differences in impact among student subgroups and presents initial estimates of critical parameters. Effect sizes and minimum detectable effect sizes for average and differential impacts are calculated before and after conditioning on effects of covariates using results from several CRTs. Relative sensitivities to detect average and differential impacts are also examined. Student outcomes from six CRTs are analyzed. Achievement in math, science, reading, and writing. The ratio of between-cluster variation in the slope of the moderator divided by total variance-the "moderator gap variance ratio"-is important for designing studies to detect differences in impact between student subgroups. This quantity is the analogue of the intraclass correlation coefficient. Typical values were .02 for gender and .04 for socioeconomic status. For studies considered, in many cases estimates of differential impact were larger than of average impact, and after conditioning on effects of covariates, similar power was achieved for detecting average and differential impacts of the same size. Measuring differential impacts is important for addressing questions of equity, generalizability, and guiding interpretation of subgroup impact findings. Adequate power for doing this is in some cases reachable with CRTs designed to measure average impacts. Continuing collection of parameters for assessing differential impacts is the next step. © The Author(s) 2016.
Cho, Sung Youn; Chae, Soo-Won; Choi, Kui Won; Seok, Hyun Kwang; Han, Hyung Seop; Yang, Seok Jo; Kim, Young Yul; Kim, Jong Tac; Jung, Jae Young; Assad, Michel
2012-08-01
In this study, a newly developed Mg-Ca-Zn alloy with a low degradation rate and surface erosion properties was evaluated. The compressive, tensile, and fatigue strengths were measured before implantation. The degradation behavior was evaluated by analyzing the microstructure and local hardness of the explanted specimens. Mean and maximum degradation rates were measured using micro-CT equipment from 4-, 8-, and 16-week explants, and the alloy was shown to display surface erosion properties. Based on these characteristics, the average and minimum load-bearing capacities in tension, compression, and bending modes were calculated. According to the degradation rate and references of recommended dietary intakes (RDI), the Mg-Ca-Zn alloy appears to be safe for human use. Copyright © 2012 Wiley Periodicals, Inc.
Variability of higher order wavefront aberrations after blinks.
Hagyó, Krisztina; Csákány, Béla; Lang, Zsolt; Németh, János
2009-01-01
To investigate the rapid alterations in value and fluctuation of ocular wavefront aberrations during the interblink interval. Forty-two volunteers were examined with a WASCA Wavefront Analyzer (Carl Zeiss Meditec AG) using modified software. For each subject, 150 images (about 6 frames/second) were registered during an interblink period. The outcome measures were spherical and cylindrical refraction and root-mean-square (RMS) values for spherical, coma, and total higher order aberrations. Fifth order polynomials were fitted to the data and the fluctuation trends of the parameters were determined. We calculated the prevalence of the trends with an early local minimum (type 1). The tear production status (Schirmer test) and tear film break-up time (BUT) were also measured. Fluctuation trends with an early minimum (type 1) were significantly more frequent than trends with an early local maximum (type 2) for total higher order aberrations RMS (P=.036). The incidence of type 1 fluctuation trends was significantly greater for coma and total higher order aberrations RMS (P=.041 and P=.003, respectively) in subjects with normal results in the BUT or Schirmer test than in those with abnormal results. In the normal subjects, the first minimum of type 1 RMS fluctuation trends occurred, on average, between 3.8 and 5.1 seconds after blink. We suggest that wavefront aberrations can be measured most accurately at the time after blink when they exhibit a decreased degree of dispersion. We recommend that a snapshot of wavefront measurements be made 3 to 5 seconds after blink.
A soil water based index as a suitable agricultural drought indicator
NASA Astrophysics Data System (ADS)
Martínez-Fernández, J.; González-Zamora, A.; Sánchez, N.; Gumuzzio, A.
2015-03-01
Currently, the availability of soil water databases is increasing worldwide. The growing number of long-term soil moisture networks around the world and the impressive progress of remote sensing in recent years have allowed the scientific community and, in the near future, a diverse group of users to obtain precise and frequent soil water measurements. Therefore, it is reasonable to consider soil water observations as a potential approach for monitoring agricultural drought. In the present work, a new approach to defining the soil water deficit index (SWDI) from a soil water series is analyzed for drought monitoring. In addition, simple and accurate methods that use a soil moisture series alone to obtain the soil water parameters (field capacity and wilting point) needed for calculating the index are evaluated. The application of the SWDI in an agricultural area of Spain presented good results at both daily and weekly time scales when compared to two climatic water deficit indicators (average correlation coefficient, R, 0.6) and to agricultural production. The long-term minimum, the growing season minimum and the 5th percentile of the soil moisture series are good estimators (coefficient of determination, R2, 0.81) for the wilting point. The minimum of the maximum values of the growing season is the best estimator (R2, 0.91) for field capacity. The use of these types of tools for drought monitoring can aid the better management of agricultural lands and water resources, mainly under the current scenario of climate uncertainty.
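The parameter estimators the abstract reports can be sketched as follows; the SWDI scaling and the exact estimator definitions are our reading of the published index, so treat the details as assumptions and consult the original paper.

```python
def estimate_wp_fc(theta_series, growing_season_maxima):
    """Soil water parameters from a moisture series alone, per the
    abstract: wilting point ~ 5th percentile of the series, field
    capacity ~ minimum of the growing-season maximum values.
    (Estimator details are hedged; see the original paper.)"""
    s = sorted(theta_series)
    wp = s[int(0.05 * len(s))]       # crude 5th-percentile estimate
    fc = min(growing_season_maxima)  # minimum of per-season maxima
    return wp, fc

def swdi(theta, fc, wp):
    """Soil Water Deficit Index: 0 when soil moisture equals field
    capacity, -10 at the wilting point (the x10 scaling follows the
    published SWDI definition as we understand it)."""
    return 10.0 * (theta - fc) / (fc - wp)
```

Negative SWDI values indicate a water deficit; the more negative, the closer the soil is to the wilting point.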
Droplet squeezing through a narrow constriction: Minimum impulse and critical velocity
NASA Astrophysics Data System (ADS)
Zhang, Zhifeng; Drapaca, Corina; Chen, Xiaolin; Xu, Jie
2017-07-01
Models of a droplet passing through narrow constrictions have wide applications in science and engineering. In this paper, we report our findings on the minimum impulse (momentum change) of pushing a droplet through a narrow circular constriction. The existence of this minimum impulse is mathematically derived and numerically verified. The minimum impulse happens at a critical velocity when the time-averaged Young-Laplace pressure balances the total minor pressure loss in the constriction. Finally, numerical simulations are conducted to verify these concepts. These results could be relevant to problems of energy optimization and studies of chemical and biomedical systems.
Minimum airflow reset of single-duct VAV terminal boxes
NASA Astrophysics Data System (ADS)
Cho, Young-Hum
Single-duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lowering overall energy costs. In principle, this minimum rate should be calculated from the minimum ventilation requirement based on ASHRAE Standard 62.1 and the maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may cause occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in published studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points.
Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and applied to actual systems for performance validation. The results of the theoretical analysis, numeric simulations, and experiments show that the optimal control algorithms can automatically identify the minimum rate of heating airflow under actual working conditions. Improved control helps to stabilize room air temperatures. The vertical difference in the room air temperature was lower than the comfort value. Measurements of room CO2 levels indicate that when the minimum airflow set point was reduced it did not adversely affect the indoor air quality. According to the measured energy results, optimal control algorithms give a lower rate of reheating energy consumption than conventional controls.
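The sizing principle stated above (minimum airflow = the larger of the ventilation requirement and the heating-load airflow) can be sketched as below. The 1.08 constant is the usual sensible-heat factor for standard air in Btu/h per cfm·°F; all numbers are illustrative, not the study's data.

```python
def min_airflow_setpoint_cfm(ventilation_cfm, design_heating_btuh,
                             discharge_air_f, room_air_f):
    """Terminal-box minimum airflow set point: the larger of the ASHRAE
    62.1 ventilation requirement and the airflow needed to deliver the
    zone's design heating load at the given discharge air temperature.
    Q_sensible [Btu/h] = 1.08 * cfm * (T_discharge - T_room)."""
    heating_cfm = design_heating_btuh / (1.08 * (discharge_air_f - room_air_f))
    return max(ventilation_cfm, heating_cfm)
```

For a zone needing 150 cfm of outdoor-air ventilation and 6,480 Btu/h of heating with 90 °F discharge air into a 70 °F room, the heating requirement (300 cfm) governs; with a small heating load the ventilation floor governs instead.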
Definition of hydraulic stability of KVGM-100 hot-water boiler and minimum water flow rate
NASA Astrophysics Data System (ADS)
Belov, A. A.; Ozerov, A. N.; Usikov, N. V.; Shkondin, I. A.
2016-08-01
In domestic power engineering, quantitative and qualitative-quantitative methods of adjusting the load of heat supply systems are widely used; furthermore, during the greater part of the heating period, the actual flow rate of network water is less than the design values when changing to quantitative adjustment. Hence, the hydraulic circuits of hot-water boilers should ensure water velocities that minimize scale formation and exclude the formation of stagnant zones. The results of calculations of the hot-water KVGM-100 boiler and of the minimum water flow rate for the basic and peak modes, under the condition that no surface boiling occurs, are presented in this article. The minimum flow rates of water at its underheating to the saturation state and the thermal flows in the furnace chamber were defined. The boiler hydraulic calculation was performed using the "Hydraulic" program, and the analysis of permissible and actual velocities of water movement in the pipes of the heating surfaces was carried out. Based on the thermal calculations of the furnace chamber and the thermal-hydraulic calculations of the heating surfaces, the following conclusions were drawn: the minimum velocity of water movement (by the condition of no surface boiling) increases from 0.64 to 0.79 m/s for upward movement of the medium and from 1.14 to 1.38 m/s for downward movement; the minimum water flow rate through the boiler in the basic mode (by the same condition) increases from 887 t/h at 20% load up to 1074 t/h at 100% load. The minimum flow rate of 1074 t/h at nominal load is achieved at a boiler-outlet pressure of 1.1 MPa; the minimum water flow rate through the boiler in the peak mode, by the condition of no surface boiling, increases from 1669 t/h at 20% load up to 2021 t/h at 100% load.
Impact of the ozone monitoring instrument row anomaly on the long-term record of aerosol products
NASA Astrophysics Data System (ADS)
Torres, Omar; Bhartia, Pawan K.; Jethva, Hiren; Ahn, Changwoo
2018-05-01
Since about three years after the launch of the Ozone Monitoring Instrument (OMI) on the EOS-Aura satellite, the sensor's viewing capability has been affected by what is believed to be an internal obstruction that has reduced OMI's spatial coverage. It currently affects about half of the instrument's 60 viewing positions. In this work we carry out an analysis to assess the effect of the reduced spatial coverage on the monthly average values of retrieved aerosol optical depth (AOD), single scattering albedo (SSA) and the UV Aerosol Index (UVAI), using the 2005-2007 three-year period prior to the onset of the row anomaly. Regional monthly average values calculated using viewing positions 1 through 30 were compared to similarly obtained values using positions 31 through 60, with the expectation of finding close agreement between the two calculations. As expected, mean monthly values of AOD and SSA obtained with these two scattering-angle-dependent subsets of OMI observations agreed over regions where carbonaceous or sulphate aerosol particles are the predominant aerosol type. However, over arid regions, where desert dust is the main aerosol type, significant differences between the two sets of calculated regional mean values of AOD were observed. The difference in retrieved desert dust AOD between the scattering-angle-dependent observation subsets was due to the incorrect representation of the desert dust scattering phase function. A sensitivity analysis using radiative transfer calculations demonstrated that the source of the observed AOD bias was the spherical shape assumption for desert dust particles. A similar analysis in terms of UVAI yielded large differences in the monthly mean values for the two sets of calculations over cloudy regions. On the contrary, in arid regions with minimal cloud presence, the resulting UVAI monthly average values for the two sets of observations were in very close agreement.
The discrepancy under cloudy conditions was found to be caused by the parameterization of clouds as opaque Lambertian reflectors. When properly accounting for cloud scattering effects using Mie theory, the observed UVAI angular bias was significantly reduced. The analysis discussed here has uncovered important algorithmic deficiencies associated with the model representation of the angular dependence of scattering effects of desert dust aerosols and cloud droplets. The resulting improvements in the handling of desert dust and cloud scattering have been incorporated in an improved version of the OMAERUV algorithm.
Tracking of white-tailed deer migration by Global Positioning System
Nelson, M.E.; Mech, L.D.; Frame, P.F.
2004-01-01
Based on global positioning system (GPS) radiocollars in northeastern Minnesota, deer migrated 23-45 km in spring during 31-356 h, deviating a maximum 1.6-4.0 km perpendicular from a straight line of travel between their seasonal ranges. They migrated a minimum of 2.1-18.6 km/day over 11-56 h during 2-14 periods of travel. Minimum travel during 1-h intervals averaged 1.5 km/h. Deer paused 1-12 times, averaging 24 h/pause. Deer migrated similar distances in autumn with comparable rates and patterns of travel.
The Maximums and Minimums of a Polynomial, or Maximizing Profits and Minimizing Aircraft Losses.
ERIC Educational Resources Information Center
Groves, Brenton R.
1984-01-01
Plotting a polynomial over the range of real numbers when its derivative contains complex roots is discussed. The polynomials are graphed by calculating the minimums, maximums, and zeros of the function. (MNS)
Undergraduate paramedic students cannot do drug calculations
Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett
2012-01-01
BACKGROUND: Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from information given, arithmetical errors involve an inability to operate a given equation, and computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. METHODS: A cross-sectional study using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. RESULTS: The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they 'did not have any drug calculation issues'. On average, those who had completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. CONCLUSIONS: This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment. PMID:25215067
Asher, Anthony L; Kerezoudis, Panagiotis; Mummaneni, Praveen V; Bisson, Erica F; Glassman, Steven D; Foley, Kevin T; Slotkin, Jonathan; Potts, Eric A; Shaffrey, Mark E; Shaffrey, Christopher I; Coric, Domagoj; Knightly, John J; Park, Paul; Fu, Kai-Ming; Devin, Clinton J; Archer, Kristin R; Chotai, Silky; Chan, Andrew K; Virk, Michael S; Bydon, Mohamad
2018-01-01
OBJECTIVE Patient-reported outcomes (PROs) play a pivotal role in defining the value of surgical interventions for spinal disease. The concept of minimum clinically important difference (MCID) is considered the new standard for determining the effectiveness of a given treatment and describing patient satisfaction in response to that treatment. The purpose of this study was to determine the MCID associated with surgical treatment for degenerative lumbar spondylolisthesis. METHODS The authors queried the Quality Outcomes Database registry from July 2014 through December 2015 for patients who underwent posterior lumbar surgery for grade I degenerative spondylolisthesis. Recorded PROs included scores on the Oswestry Disability Index (ODI), EQ-5D, and numeric rating scale (NRS) for leg pain (NRS-LP) and back pain (NRS-BP). Anchor-based (using the North American Spine Society satisfaction scale) and distribution-based (half a standard deviation, small Cohen's effect size, standard error of measurement, and minimum detectable change [MDC]) methods were used to calculate the MCID for each PRO. RESULTS A total of 441 patients (80 who underwent laminectomies alone and 361 who underwent fusion procedures) from 11 participating sites were included in the analysis. The changes in functional outcome scores between baseline and the 1-year postoperative evaluation were as follows: 23.5 ± 17.4 points for ODI, 0.24 ± 0.23 for EQ-5D, 4.1 ± 3.5 for NRS-LP, and 3.7 ± 3.2 for NRS-BP. The different calculation methods generated a range of MCID values for each PRO: 3.3-26.5 points for ODI, 0.04-0.3 points for EQ-5D, 0.6-4.5 points for NRS-LP, and 0.5-4.2 points for NRS-BP. The MDC approach appeared to be the most appropriate for calculating MCID because it provided a threshold greater than the measurement error and was closest to the average change difference between the satisfied and not-satisfied patients. 
On subgroup analysis, the MCID thresholds for laminectomy-alone patients were comparable to those for the patients who underwent arthrodesis as well as for the entire cohort. CONCLUSIONS The MCID for PROs was highly variable depending on the calculation technique. The MDC seems to be a statistically and clinically sound method for defining the appropriate MCID value for patients with grade I degenerative lumbar spondylolisthesis. Based on this method, the MCID values are 14.3 points for ODI, 0.2 points for EQ-5D, 1.7 points for NRS-LP, and 1.6 points for NRS-BP.
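The four distribution-based MCID candidates named in the abstract can be sketched as follows. The reliability coefficient is an input the abstract does not report, so the value used in the example is purely illustrative.

```python
import math

def distribution_based_mcid(sd_change, reliability):
    """Distribution-based MCID candidates: half a standard deviation,
    a small Cohen effect (0.2 SD), the standard error of measurement
    (SEM = SD * sqrt(1 - reliability)), and the minimum detectable
    change at 95% confidence (MDC95 = 1.96 * sqrt(2) * SEM)."""
    sem = sd_change * math.sqrt(1.0 - reliability)
    return {
        "half_sd": 0.5 * sd_change,
        "small_effect": 0.2 * sd_change,
        "sem": sem,
        "mdc95": 1.96 * math.sqrt(2.0) * sem,
    }
```

Because MDC95 is roughly 2.77 times the SEM, it always sits above the measurement error, which is why the abstract favors the MDC as an MCID threshold.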
NASA Astrophysics Data System (ADS)
Puc, Małgorzata
2012-03-01
Birch pollen is one of the main causes of allergy during spring and early summer in northern and central Europe. The aim of this study was to create a forecast model that can accurately predict daily average concentrations of Betula sp. pollen grains in the atmosphere of Szczecin, Poland. To achieve this, a novel data analysis technique, artificial neural networks (ANN), was used. Sampling was carried out using a volumetric spore trap of the Hirst design in Szczecin during 2003-2009. Spearman's rank correlation analysis revealed that humidity had a strong negative correlation with Betula pollen concentrations. Significant positive correlations were observed for maximum temperature, average temperature, minimum temperature and precipitation. The ANN analysis resulted in multilayer perceptrons (366 8: 2928-7-1:1), and the time series prediction was of quite high accuracy (SD ratio between 0.3 and 0.5, R > 0.85). Direct comparison of the observed and calculated values confirmed good performance of the model and its ability to recreate most of the variation.
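The Spearman screening step described above can be sketched in plain Python: rank both series with tie-averaging, then take the Pearson correlation of the ranks (the data below are synthetic, not the Szczecin series).

```python
def _ranks(xs):
    """1-based ranks, with tied values given their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend the block of ties
        avg = (i + j) / 2.0 + 1.0        # average rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Unlike Pearson's r, this statistic captures any monotone association, which suits skewed, bursty series such as daily pollen counts.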
NASA Technical Reports Server (NTRS)
Sylvester, W. B.
1984-01-01
A series of SEASAT repeat orbits over a sequence of best low center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the +/- 3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error seems to suggest that by utilizing the +/- 3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect which is manifested as a result of the blending of two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome of this is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.
NASA Astrophysics Data System (ADS)
Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong
2018-01-01
In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable running for the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control for ensuring the steady-state tracking error to converge rapidly. The application on an injection molding process displays the effectiveness and superiority of the proposed strategy.
Estimating watershed level nonagricultural pesticide use from golf courses using geospatial methods
Fox, G.A.; Thelin, G.P.; Sabbagh, G.J.; Fuchs, J.W.; Kelly, I.D.
2008-01-01
Limited information exists on pesticide use for nonagricultural purposes, making it difficult to estimate pesticide loadings from nonagricultural sources to surface water and to conduct environmental risk assessments. A method was developed to estimate the amount of pesticide use on recreational turf grasses, specifically golf course turf grasses, for watersheds located throughout the conterminous United States (U.S.). The approach estimates pesticide use: (1) based on the area of recreational turf grasses (used as a surrogate for turf associated with golf courses) within the watershed, which was derived from maps of land cover, and (2) from data on the location and average treatable area of golf courses. The area of golf course turf grasses determined from these two methods was used to calculate the percentage of each watershed planted in golf course turf grass (percent crop area, or PCA). Turf-grass PCAs derived from the two methods were used with recommended application rates provided on pesticide labels to estimate total pesticide use on recreational turf within 1,606 watersheds associated with surface-water sources of drinking water. These pesticide use estimates made from label rates and PCAs were compared to use estimates from industry sales data on the amount of each pesticide sold for use within the watershed. The PCAs derived from the land-cover data had an average value of 0.4% of a watershed with minimum of 0.01% and a maximum of 9.8%, whereas the PCA values that are based on the number of golf courses in a watershed had an average of 0.3% of a watershed with a minimum of <0.01% and a maximum of 14.2%. Both the land-cover method and the number of golf courses method produced similar PCA distributions, suggesting that either technique may be used to provide a PCA estimate for recreational turf. The average and maximum PCAs generally correlated to watershed size, with the highest PCAs estimated for small watersheds. 
Using watershed-specific PCAs combined with label rates resulted in a greater than two orders of magnitude over-estimation of pesticide use compared to estimates from sales data. © 2008 American Water Resources Association.
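The PCA bookkeeping described above is simple arithmetic and can be sketched as below; the per-km² label rate and all numbers are illustrative assumptions, not values from the study.

```python
def percent_crop_area(turf_area_km2, watershed_area_km2):
    """Percent crop area (PCA): share of a watershed planted in golf
    course turf grass, from either the land-cover method or the
    golf-course-count method."""
    return 100.0 * turf_area_km2 / watershed_area_km2

def watershed_pesticide_use_kg(pca, watershed_area_km2, label_rate_kg_per_km2):
    """Label-rate use estimate: apply the recommended application rate
    to the turf area implied by the PCA. Rate and units are
    illustrative, not the study's data."""
    turf_km2 = pca / 100.0 * watershed_area_km2
    return turf_km2 * label_rate_kg_per_km2
```

A 1,000 km² watershed with 4 km² of golf-course turf gives the study's average PCA of 0.4%; applying a hypothetical 50 kg/km² label rate to that turf area yields a 200 kg use estimate.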
NASA Astrophysics Data System (ADS)
Michel, Roberto; Andrade, André; Simas, Felipe; Silva, Tássio; Loureiro, Diego; Schaefer, Carlos
2017-04-01
Most global circulation models predict enhanced rates of climate change, particularly temperature increase, at higher latitudes, which are currently faced with rapid rates of regional climate change (Convey 2006, Vaughan et al. 2003, Quayle et al. 2002), and Antarctic ecosystems are expected to show particular sensitivity and rapid responses (Freckman and Virginia 1997, Quayle et al. 2002, 2003). The active layer and permafrost are important components of the cryosphere due to their role in energy flux regulation and sensitivity to climate change (Kane et al., 2001; Smith and Brown, 2009). Compared with other regions of the globe, our understanding of Antarctic permafrost is poor, especially in relation to its thermal state and evolution (Bockheim, 1995, Bockheim et al., 2008). The active layer monitoring site was installed in the summer of 2008 and consists of thermistors (accuracy ± 0.2 °C) arranged in a vertical array (Turbic Eutric Cryosol, 60 m asl; 10.5 cm, 32.5 cm, 67.5 cm and 83.5 cm). All probes were connected to a Campbell Scientific CR 1000 data logger recording data at hourly intervals from March 1st 2008 until November 30th 2012. We calculated the thawing days (TD) and freezing days (FD), and the thawing degree days (TDD) and freezing degree days (FDD), all according to Guglielmin et al. (2008). The active layer thickness was calculated as the 0 °C depth by extrapolating the thermal gradient from the two deepest temperature measurements (Guglielmin, 2006). The temperature at 10.5 cm reached a maximum daily average (5.6 °C) in late January 2015 and a minimum (-9.6 °C) in early August 2011; at 83.5 cm the maximum daily average (0.6 °C) was reached in mid March 2009 and the minimum (-5.5 °C) also in early August 2011. The years of 2008, 2009 and 2011 recorded thaw days at the bottom of the profile (62 and 49 in 2009 and 2011) and logged the highest soil moisture contents of the time series (62%, 59% and 63%).
Seasonal variability of the active layer shows disparities between years, especially in the bottommost layer, where high summer temperatures trigger an increase in soil moisture content that can endure for several seasons. The winter of 2014 also deserves special attention, being the mildest winter recorded during the studied period; in July, minimum monthly temperatures were -3.2 °C and -1.9 °C at 10.5 cm and 83.5 cm, and the site experienced 17 FD summing -0.61 FDD, whereas the averages for the whole period were -7.5 °C, -3.9 °C, 27 FD and -55 FDD (2008 also had a mild winter but still held 21 FD and -0.88 FDD at 83.5 cm in July). The summer of 2009 was the warmest, with 31 thawing days and 105 thawing degree days at 10.5 cm in January (against averages of 28.7 thawing days and 66.3 thawing degree days). The profile showed an annual increase in soil water content during warm summers, persisting for the following seasons; the average was 44% in 2008 and 32% in 2012, closing the time series with an annual average of 27% in 2016, all values at 83.5 cm. Active layer thickness varied between 86 cm (maximum of 2015, March) and 117 cm (maximum of 2009, March). The active layer thermal regime over the 9-year period at Fildes Peninsula shows great variation between years, with 2008, 2009 and 2011 presenting warm summers and 2014 being abnormally warm during winter. Temperature fluctuations can affect the active layer at depth, and the effects of warmer temperatures at the bottom of the profile can increase soil water content for several seasons.
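The 0 °C-depth extrapolation used for active layer thickness is a straight-line projection through the two deepest sensor readings; a minimal sketch, with hypothetical temperatures at the profile's 67.5 cm and 83.5 cm depths:

```python
def active_layer_thickness_cm(z_upper, t_upper, z_lower, t_lower):
    """Depth of the 0 degC isotherm, found by linearly extrapolating
    the thermal gradient between the two deepest sensors (depths in
    cm, positive downward), following the Guglielmin (2006) approach
    the abstract cites. Fails if the gradient is zero."""
    if t_upper == t_lower:
        raise ValueError("zero thermal gradient between sensors")
    gradient = (t_lower - t_upper) / (z_lower - z_upper)  # degC per cm
    return z_upper + (0.0 - t_upper) / gradient
```

With hypothetical readings of 1.0 °C at 67.5 cm and 0.5 °C at 83.5 cm, the gradient is -0.5 °C per 16 cm, placing the 0 °C isotherm (the thaw front) at 99.5 cm.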
Potential energy function for CH3+CH3 ⇆ C2H6: Attributes of the minimum energy path
NASA Astrophysics Data System (ADS)
Robertson, S. H.; Wardlaw, D. M.; Hirst, D. M.
1993-11-01
The region of the potential energy surface for the title reaction in the vicinity of its minimum energy path has been predicted from the analysis of ab initio electronic energy calculations. The ab initio procedure employs a 6-31G** basis set and a configuration interaction calculation which uses the orbitals obtained in a generalized valence bond calculation. Calculated equilibrium properties of ethane and of isolated methyl radical are compared to existing theoretical and experimental results. The reaction coordinate is represented by the carbon-carbon interatomic distance. The following attributes are reported as a function of this distance and fit to functional forms which smoothly interpolate between reactant and product values of each attribute: the minimum energy path potential, the minimum energy path geometry, normal mode frequencies for vibrational motion orthogonal to the reaction coordinate, a torsional potential, and a fundamental anharmonic frequency for local mode, out-of-plane CH3 bending (umbrella motion). The best representation is provided by a three-parameter modified Morse function for the minimum energy path potential and a two-parameter hyperbolic tangent switching function for all other attributes. A poorer but simpler representation, which may be satisfactory for selected applications, is provided by a standard Morse function and a one-parameter exponential switching function. Previous applications of the exponential switching function to estimate the reaction coordinate dependence of the frequencies and geometry of this system have assumed the same value of the range parameter α for each property and have taken α to be less than or equal to the ``standard'' value of 1.0 Å-1. Based on the present analysis this is incorrect: The α values depend on the property and range from ˜1.2 to ˜1.8 Å-1.
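The two fitted forms can be sketched as follows. Note the hedges: the paper's best fit is a three-parameter *modified* Morse function, so the standard Morse below is a simplified stand-in, and the exact tanh switching form used in the paper may differ from this common choice.

```python
import math

def morse(r, De, beta, re):
    """Standard Morse potential along the C-C distance: zero at the
    equilibrium bond length re, approaching the dissociation energy
    De as r -> infinity. (Stand-in for the paper's modified form.)"""
    return De * (1.0 - math.exp(-beta * (r - re))) ** 2

def switched_property(r, val_eq, val_sep, alpha, r0):
    """Hyperbolic-tangent switching function interpolating an attribute
    (frequency, geometry parameter, ...) between its ethane value at
    equilibrium (val_eq) and its separated CH3 + CH3 value (val_sep);
    alpha is the range parameter discussed in the abstract."""
    s = 0.5 * (1.0 + math.tanh(alpha * (r - r0)))  # 0 near re, 1 at large r
    return val_eq + (val_sep - val_eq) * s
```

The abstract's point about alpha is visible here: a single "standard" alpha of 1.0 Å⁻¹ forces every attribute to relax toward its fragment value at the same rate, whereas the fits indicate property-dependent values of roughly 1.2 to 1.8 Å⁻¹.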
An Examination of Selected Geomagnetic Indices in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
Previous studies have shown geomagnetic indices to be useful for providing early estimates of the size of the following sunspot cycle several years in advance. Examined in this study are various precursor methods for predicting the minimum and maximum amplitude of the following sunspot cycle, these precursors being based on the aa and Ap geomagnetic indices and the number of disturbed days (NDD), days when the daily Ap index equaled or exceeded 25. Also examined are the yearly peak of the daily Ap index (Apmax), the number of days when Ap greater than or equal to 100, cyclic averages of sunspot number R, aa, Ap, NDD, and the number of sudden storm commencements (NSSC), as well as the cyclic sums of NDD and NSSC. The analysis yields 90-percent prediction intervals for both the minimum and maximum amplitudes for cycle 24, the next sunspot cycle. In terms of yearly averages, the best regressions give Rmin = 9.8+/-2.9 and Rmax = 153.8+/-24.7, equivalent to Rm = 8.8+/-2.8 and RM = 159+/-5.5, based on the 12-mo moving average (or smoothed monthly mean sunspot number). Hence, cycle 24 is expected to be above average in size, similar to cycles 21 and 22, producing more than 300 sudden storm commencements and more than 560 disturbed days, of which about 25 will have Ap greater than or equal to 100. On the basis of annual averages, the sunspot minimum year for cycle 24 will be either 2006 or 2007.
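The precursor regressions reported above are single-predictor least-squares fits: regress a past cycle's amplitude on a geomagnetic index observed near the preceding minimum, then project the next cycle. A minimal sketch (the data fed to it below are synthetic, not the cycle record):

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x, the form of a
    single-precursor regression relating a geomagnetic index (x)
    to the following cycle's amplitude (y). Returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b
```

Given the fitted intercept and slope, the prediction for the next cycle is simply `a + b * precursor_value`, with the 90-percent prediction interval coming from the residual scatter of the fit.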
Falcon versus grouse: flight adaptations of a predator and its prey
Pennycuick, C.J.; Fuller, M.R.; Oar, J.J.; Kirkpatrick, S.J.
1994-01-01
Several falcons were trained to fly along a 500 m course to a lure. The air speeds of the more consistent performers averaged about 1.5 times their calculated minimum power speeds, and occasionally reached 2.1 times the minimum power speed. Wing beat frequencies of all the falcons were above those estimated from earlier field observations, and the same was true of wild Sage Grouse Centrocercus urophasianus, a regular falconer's quarry in the study area. Measurements of grouse killed by falcons showed that their wings were short, with broad slotted tips, whereas the falcons' wings were longer in relation to their body mass, and tapered. The short wings of grouse result in fast flight, high power requirements, and reduced capacity for aerobic flight. Calculations indicated that the grouse should fly faster than the falcons, and had the large amount of flight muscle needed to do so, but that the falcons would be capable of prolonged aerobic flight, whereas the grouse probably would not. We surmise that Sage Grouse cannot fly continuously without incurring an oxygen debt, and are therefore not long-distance migrants, although this limitation is partly due to their large size, and would not apply to smaller galliform birds such as ptarmigan Lagopus spp. The wing action seen in video recordings of the falcons was not consistent with the maintenance of constant circulation. We call it 'chase mode' because it appears to be associated with a high level of muscular exertion, without special regard to fuel economy. It shows features in common with the 'bounding' flight of passerines.
Solid liquid interfacial free energies of benzene
NASA Astrophysics Data System (ADS)
Azreg-Aïnou, M.
2007-02-01
In this work we determine, for the range of melting temperatures 284.6⩽T⩽306.7 K corresponding to equilibrium pressures 20.6⩽P⩽102.9 MPa, the benzene solid-liquid interfacial free energy by a cognitive approach including theoretical and experimental physics, mathematics, computer algebra (MATLAB), and some results from molecular dynamics computer simulations. From theoretical and mathematical points of view, we deal with the elaboration of an analytical expression for the internal energy derived from a unified solid-liquid-vapor equation of state, and with the elaboration of an existing statistical model for the entropy drop of the melt near the solid-liquid interface. From an experimental point of view, we use our results obtained in collaboration with colleagues concerning supercooled liquid benzene. Of particular interest for this work is the existing center-of-mass radial distribution function of benzene at 298 K obtained by computer simulation. Crystal-orientation-independent and minimum interfacial free energies are calculated and shown to increase slightly with temperature over the above range. Both the crystal-orientation-independent and the minimum free energies agree with existing calculations and with the rare existing experimental data. Taking into account that the extent of supercooling is generally taken to be constant, we determine the limits of supercooling, by which we explore the behavior of the critical nucleus radius, which is shown to decrease over the above temperature range. The critical nucleus radius and the number of molecules it contains are shown to assume average values of 20.2 Å and 175, with standard deviations of 0.16 Å and 4.5, respectively.
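The link the abstract draws between interfacial free energy, supercooling, and critical nucleus radius is the one made by classical nucleation theory. The abstract does not state the formula; the standard expression for solidification (given here as background, not as the paper's own result) is

```latex
r^{*} \;=\; \frac{2\,\gamma_{sl}\,T_m}{\Delta h_f\,\Delta T},
```

where \(\gamma_{sl}\) is the solid-liquid interfacial free energy, \(T_m\) the melting temperature, \(\Delta h_f\) the latent heat of fusion per unit volume, and \(\Delta T = T_m - T\) the supercooling. For a fixed extent of supercooling, a slowly varying \(\gamma_{sl}\) is consistent with the nearly constant nucleus radius (average 20.2 Å, small spread) reported above.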
7 CFR 51.2113 - Size requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of range in count of whole almond kernels per ounce or in terms of minimum, or minimum and maximum diameter. When a range in count is specified, the whole kernels shall be fairly uniform in size, and the average count per ounce shall be within the range specified. Doubles and broken kernels shall not be used...
40 CFR 180.960 - Polymers; exemptions from the requirement of a tolerance.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-hydroxypoly (oxypropylene) and/or poly (oxyethylene) polymers where the alkyl chain contains a minimum of six... (oxypropylene) poly(oxyethylene) block copolymer; the minimum poly(oxypropylene) content is 27 moles and the... number average molecular weight (in amu), 900,000 62386-95-2 Monophosphate ester of the block copolymer α...
Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model
NASA Astrophysics Data System (ADS)
Yang, Yuefang; Gan, Chunhui; Shen, Tingting
2017-05-01
In the study of the configuration of the tankers of a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park loading and unloading area and the transportation demand of the dangerous goods are taken as the constraint conditions of the model; then the transport arc capacity, the transport arc flow and the transport arc edge weight are determined in the transportation network diagram; finally, the calculations are performed in software. The calculation results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical application value for tanker management of railway transportation of dangerous goods in the chemical logistics park.
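The abstract names the technique but not the network, so the following is only a sketch: a pure-Python successive-shortest-path implementation of minimum cost maximum flow, run on a small hypothetical park network (source, two loading areas, one terminal). The node names and capacities are invented for illustration.

```python
from collections import defaultdict

def min_cost_max_flow(graph, s, t):
    """graph: {u: {v: (capacity, cost)}}. Successive shortest augmenting
    paths found by Bellman-Ford. Assumes no anti-parallel edge pairs and
    no negative cycles. Returns (max_flow, total_cost)."""
    cap = defaultdict(dict)
    cost = defaultdict(dict)
    for u, edges in graph.items():
        for v, (c, w) in edges.items():
            cap[u][v] = c
            cap[v].setdefault(u, 0)   # residual edge
            cost[u][v] = w
            cost[v][u] = -w
    nodes = set(cap)
    flow, total_cost = 0, 0
    while True:
        # Bellman-Ford shortest path by cost in the residual network.
        dist = {n: float("inf") for n in nodes}
        prev = {}
        dist[s] = 0
        for _ in range(len(nodes) - 1):
            for u in nodes:
                if dist[u] == float("inf"):
                    continue
                for v, c in cap[u].items():
                    if c > 0 and dist[u] + cost[u][v] < dist[v]:
                        dist[v] = dist[u] + cost[u][v]
                        prev[v] = u
        if dist[t] == float("inf"):
            return flow, total_cost
        # Push the bottleneck capacity along the cheapest path.
        push, v = float("inf"), t
        while v != s:
            push = min(push, cap[prev[v]][v])
            v = prev[v]
        v = t
        while v != s:
            u = prev[v]
            cap[u][v] -= push
            cap[v][u] += push
            v = u
        flow += push
        total_cost += push * dist[t]

# Hypothetical network: source S -> loading areas A, B -> terminal T.
net = {"S": {"A": (4, 2), "B": (3, 3)},
       "A": {"T": (3, 1)},
       "B": {"T": (4, 1)},
       "T": {}}
print(min_cost_max_flow(net, "S", "T"))  # maximum flow 6 at cost 21
```

A production configuration model would add one arc per tanker movement with capacities from the loading/unloading throughput constraints mentioned in the abstract.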
Code of Federal Regulations, 2010 CFR
2010-07-01
... scrubber, maintain the daily average pressure drop across the venturi within the operating range value... . . . You must . . . 1. Scrubber a. Maintain the daily average scrubber inlet liquid flow rate above the minimum value established during the performance test. b. Maintain the daily average scrubber effluent pH...
Code of Federal Regulations, 2011 CFR
2011-07-01
... . . . You must . . . 1. Scrubber a. Maintain the daily average scrubber inlet liquid flow rate above the minimum value established during the performance test. b. Maintain the daily average scrubber effluent pH... scrubber, maintain the daily average pressure drop across the venturi within the operating range value...
Theis, Daniel; Ivanic, Joseph; Windus, Theresa L.; ...
2016-03-10
The metastable ring structure of the ozone 1 1A 1 ground state, which theoretical calculations have shown to exist, has so far eluded experimental detection. An accurate prediction for the energy difference between this isomer and the lower open structure is therefore of interest, as is a prediction for the isomerization barrier between them, which results from interactions between the lowest two 1A 1 states. In the present work, valence correlated energies of the 1 1A 1 state and the 2 1A 1 state were calculated at the 1 1A 1 open minimum, the 1 1A 1 ring minimum, the transition state between these two minima, the minimum of the 2 1A 1 state, and the conical intersection between the two states. The geometries were determined at the full-valence multi-configuration self-consistent-field level. Configuration interaction (CI) expansions up to quadruple excitations were calculated with triple-zeta atomic basis sets. The CI expansions based on eight different reference configuration spaces were explored. To obtain some of the quadruple excitation energies, the method of Correlation Energy Extrapolation by Intrinsic Scaling was generalized to the simultaneous extrapolation for two states. This extrapolation method was shown to be very accurate. On the other hand, none of the CI expansions were found to have converged to millihartree (mh) accuracy at the quadruple excitation level. The data suggest that convergence to mh accuracy is probably attained at the sextuple excitation level. On the 1 1A 1 state, the present calculations yield the estimates of (ring minimum - open minimum) ~45-50 mh and (transition state - open minimum) ~85-90 mh. For the (2 1A 1 - 1 1A 1) excitation energy, the estimate of ~130-170 mh is found at the open minimum and 270-310 mh at the ring minimum. At the transition state, the difference (2 1A 1 - 1 1A 1) is found to be between 1 and 10 mh.
The geometry of the transition state on the 1 1A 1 surface and that of the minimum on the 2 1A 1 surface nearly coincide. More accurate predictions of the energy differences also require CI expansions to at least sextuple excitations with respect to the valence space. Furthermore, for every wave function considered, the omission of the correlations of the 2s oxygen orbitals, which is a widely used approximation, was found to cause errors of about ±10 mh with respect to the energy differences.
Chylek, Petr; Augustine, John A.; Klett, James D.; ...
2017-09-30
At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (T max) and minimum (T min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 3, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of T max and T min.
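The comparison at the heart of this abstract, the (T max + T min)/2 midrange versus the true mean of 1440 one-minute readings, is easy to reproduce on synthetic data. The sketch below uses an invented sinusoidal diurnal cycle with weather noise; the numbers are illustrative only, not SURFRAD values.

```python
import math
import random

def daily_means(minute_temps):
    """Return (midrange, true_mean) for one day of 1440 minute temperatures."""
    midrange = (max(minute_temps) + min(minute_temps)) / 2.0
    true_mean = sum(minute_temps) / len(minute_temps)
    return midrange, true_mean

# Synthetic diurnal cycle: 15 C mean, 8 C amplitude, small random noise.
random.seed(42)
day = [15.0 + 8.0 * math.sin(2 * math.pi * (m - 540) / 1440)
       + random.gauss(0, 0.3) for m in range(1440)]

midrange, true_mean = daily_means(day)
bias = midrange - true_mean
print(f"midrange={midrange:.2f} C, true mean={true_mean:.2f} C, bias={bias:+.2f} C")
```

For a pure sinusoid the midrange equals the mean exactly; the noise biases the midrange because the maximum and minimum each pick up an extreme noise excursion, which is the mechanism behind the nonzero spread the paper measures.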
NASA Astrophysics Data System (ADS)
Dawes, Richard; Van Der Avoird, Ad
2012-06-01
The conclusion from microwave spectra by Nelson, Fraser, and Klemperer that the ammonia dimer has a nearly cyclic structure led to much debate about the issue of whether (NH_3)_2 is hydrogen bonded. This structure was surprising because most ab initio calculations led to a classical, nearly linear, hydrogen-bonded structure. An obvious explanation of the discrepancy between the outcome of these calculations and the microwave data which led Nelson et al. to their "surprising structure" might be the effect of vibrational averaging: the electronic structure calculations focus on finding the minimum of the intermolecular potential, while the experiment gives a vibrationally averaged structure. Isotope substitution studies seemed to indicate, however, that the complex is nearly rigid. Additional data became available from high-resolution molecular-beam far-infrared spectroscopy in the Saykally group. These spectra, displaying large tunneling splittings, indicate that the complex is very floppy. The seemingly contradictory experimental data were explained when it became possible to calculate the vibration-rotation-tunneling (VRT) states of the complex on a six-dimensional intermolecular potential surface. The potential used was a simple model potential, with parameters fitted to the far-infrared data. Now, for the first time, a six-dimensional potential was computed by high-level ab initio methods, and this potential will be used in calculations of the VRT states of (NH_3)_2 and (ND_3)_2. So, we will finally be able to answer the question of whether the conclusions from the model calculations are indeed a valid explanation of the experimental data. D. Nelson, G. T. Fraser, and W. Klemperer, J. Chem. Phys. 83, 6201 (1985); J. G. Loeser, C. A. Schmuttenmaer, R. C. Cohen, M. J. Elrod, D. W. Steyert, R. J. Saykally, R. E. Bumgarner, and G. A. Blake, J. Chem. Phys. 97, 4727 (1992); E. H. T. Olthof, A. van der Avoird, and P. E. S. Wormer, J. Chem. Phys. 101, 8430 (1994); E. H. T. Olthof, A. van der Avoird, P. E. S. Wormer, J. G. Loeser, and R. J. Saykally, J. Chem. Phys. 101, 8443 (1994).
Teleportation of a two-mode entangled coherent state encoded with two-qubit information
NASA Astrophysics Data System (ADS)
Mishra, Manoj K.; Prakash, Hari
2010-09-01
We propose a scheme to teleport a two-mode entangled coherent state encoded with two-qubit information, which is better than the two schemes recently proposed by Liao and Kuang (2007 J. Phys. B: At. Mol. Opt. Phys. 40 1183) and by Phien and Nguyen (2008 Phys. Lett. A 372 2825) in that our scheme gives a higher value of minimum assured fidelity and minimum average fidelity without using any nonlinear interactions. For the involved coherent states |±α⟩, the minimum average fidelity in our case is ≥0.99 for |α| ≥ 1.6 (i.e. |α|² ≥ 2.6), while the previously proposed schemes referred to above report the same for |α| ≥ 5 (i.e. |α|² ≥ 25). Since it is very challenging to produce superposed coherent states of high coherent amplitude (|α|), our teleportation scheme is within the reach of modern technology.
Troe, J; Ushakov, V G
2006-06-01
This work describes a simple method linking specific rate constants k(E,J) of bond fission reactions AB → A + B with thermally averaged capture rate constants k(cap)(T) of the reverse barrierless combination reactions A + B → AB (or the corresponding high-pressure dissociation or recombination rate constants k(infinity)(T)). Practical applications are given for ionic and neutral reaction systems. The method, in the first stage, requires a phase-space theoretical treatment with the most realistic minimum energy path potential available, either from reduced-dimensionality ab initio or from model calculations of the potential, providing the centrifugal barriers E(0)(J). The effects of the anisotropy of the potential are afterward expressed in terms of specific and thermal rigidity factors f(rigid)(E,J) and f(rigid)(T), respectively. Simple relationships provide a link between f(rigid)(E,⟨J⟩) and f(rigid)(T), where ⟨J⟩ is an average value of J related to J(max)(E), i.e., the maximum J value compatible with E ≥ E(0)(J), and f(rigid)(E,J) applies to the transitional modes. Methods for constructing f(rigid)(E,J) from f(rigid)(E,⟨J⟩) are also described. The derived relationships are adaptable and can be used at whatever level of information is available, either from more detailed theoretical calculations or from limited experimental information on specific or thermally averaged rate constants. The examples used for illustration are the systems C6H6+ ⇌ C6H5+ + H, C8H10+ → C7H7+ + CH3, n-C9H12+ ⇌ C7H7+ + C2H5, n-C10H14+ ⇌ C7H7+ + C3H7, HO2 ⇌ H + O2, HO2 ⇌ HO + O, and H2O2 ⇌ 2HO.
Sources of Geomagnetic Activity during Nearly Three Solar Cycles (1972-2000)
NASA Technical Reports Server (NTRS)
Richardson, I. G.; Cane, H. V.; Cliver, E. W.; White, Nicholas E. (Technical Monitor)
2002-01-01
We examine the contributions of the principal solar wind components (corotating highspeed streams, slow solar wind, and transient structures, i.e., interplanetary coronal mass ejections (CMEs), shocks, and postshock flows) to averages of the aa geomagnetic index and the interplanetary magnetic field (IMF) strength in 1972-2000 during nearly three solar cycles. A prime motivation is to understand the influence of solar cycle variations in solar wind structure on long-term (e.g., approximately annual) averages of these parameters. We show that high-speed streams account for approximately two-thirds of long-term aa averages at solar minimum, while at solar maximum, structures associated with transients make the largest contribution (approx. 50%), though contributions from streams and slow solar wind continue to be present. Similarly, high-speed streams are the principal contributor (approx. 55%) to solar minimum averages of the IMF, while transient-related structures are the leading contributor (approx. 40%) at solar maximum. These differences between solar maximum and minimum reflect the changing structure of the near-ecliptic solar wind during the solar cycle. For minimum periods, the Earth is embedded in high-speed streams approx. 55% of the time versus approx. 35% for slow solar wind and approx. 10% for CME-associated structures, while at solar maximum, typical percentages are as follows: high-speed streams approx. 35%, slow solar wind approx. 30%, and CME-associated approx. 35%. These compositions show little cycle-to-cycle variation, at least for the interval considered in this paper. Despite the change in the occurrences of different types of solar wind over the solar cycle (and less significant changes from cycle to cycle), overall, variations in the averages of the aa index and IMF closely follow those in corotating streams. Considering solar cycle averages, we show that high-speed streams account for approx. 44%, approx. 48%, and approx. 
40% of the solar wind composition, aa, and the IMF strength, respectively, with corresponding figures of approx. 22%, approx. 32%, and approx. 25% for CME-related structures, and approx. 33%, approx. 19%, and approx. 33% for slow solar wind.
Anticipating Cycle 24 Minimum and Its Consequences
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2007-01-01
On the basis of the 12-mo moving average of monthly mean sunspot number (R) through November 2006, cycle 23 has persisted for 126 mo, having had a minimum of 8.0 in May 1996, a peak of 120.8 in April 2000, and an ascent duration of 47 mo. In November 2006, the 12-mo moving average of monthly mean sunspot number was 12.7, a value just outside the upper observed envelope of sunspot minimum values for the most recent cycles 16-23 (range 3.4-12.3), but within the 90-percent prediction interval (7.8 +/- 6.7). The first spotless day during the decline of cycle 23 occurred in January 2004, and the first occurrence of 10 or more and 20 or more spotless days was February 2006 and April 2007, respectively, suggesting that sunspot minimum for cycle 24 is imminent. Through May 2007, 121 spotless days have accumulated. In terms of the weighted mean latitude (weighted by spot area) (LAT) and the highest observed latitude spot (HLS) in November 2006, 12-mo moving averages of these parameters measured 7.9 and 14.6 deg, respectively, these values being the lowest yet observed during the decline of cycle 23 and being below corresponding mean values found for cycles 16-23. As yet, no high-latitude new-cycle spots have been seen, nor has there been an upturn in LAT and HLS, conditions that have always preceded new cycle minimum by several months for past cycles. Together, these findings suggest that cycle 24's minimum amplitude still lies well beyond November 2006. This implies that cycle 23's period either will lie in the period "gap" (127-134 mo), a first for a sunspot cycle, or it will be longer than 134 mo, thus making cycle 23 a long-period cycle (like cycle 20) and indicating that cycle 24's minimum will occur after July 2007. Should cycle 23 prove to be a cycle of longer period, a consequence might be that the maximum amplitude for cycle 24 may be smaller than previously predicted.
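The "12-mo moving average of monthly mean sunspot number" used throughout these records is conventionally computed as a 13-month running mean with half weight on the two end months (the standard smoothed monthly mean sunspot number). A minimal sketch of that filter:

```python
def smoothed_sunspot(monthly, i):
    """Conventional 13-month smoothed monthly mean sunspot number centred
    on month i: half weight on the two end months, full weight on the
    middle eleven, divided by 12."""
    window = monthly[i - 6:i + 7]
    if len(window) != 13:
        raise ValueError("need 6 months of data on each side of month i")
    return (0.5 * window[0] + sum(window[1:12]) + 0.5 * window[12]) / 12.0

# Illustrative check: a constant series smooths to itself.
flat = [10.0] * 24
print(smoothed_sunspot(flat, 12))
```

The half-weight endpoints make the filter span exactly one year regardless of where the window starts, which is why a linear trend passes through unchanged.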
Cisneros, Ricardo; Bytnerowicz, Andrzej; Schweizer, Donald; Zhong, Sharon; Traina, Samuel; Bennett, Deborah H
2010-10-01
Two-week average concentrations of ozone (O3), nitric acid vapor (HNO3) and ammonia (NH3) were measured with passive samplers during the 2002 summer season across the central Sierra Nevada Mountains, California, along the San Joaquin River drainage. Elevated concentrations of the pollutants were determined, with seasonal means for individual sites ranging between 62 and 88 ppb for O3, 1.0-3.8 microg m(-3) for HNO3, and 2.6-5.2 microg m(-3) for NH3. Calculated O3 exposure indices were very high, reaching SUM00 = 191 ppm h, SUM60 = 151 ppm h, and W126 = 124 ppm h. Calculated nitrogen (N) dry deposition ranged from 1.4 to 15 kg N ha(-1) for maximum values, and 0.4-8 kg N ha(-1) for minimum values, potentially exceeding Critical Loads (CL) for nutritional N. The U.S., California, and European 8 h O3 human health standards were exceeded during 104, 108, and 114 days respectively, indicating high risk to humans from ambient O3.
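SUM00, SUM60, and W126 are standard cumulative ozone exposure indices over hourly concentrations: SUM00 sums all hourly values, SUM60 sums only hours at or above 60 ppb, and W126 applies a sigmoidal weight. The sketch below assumes hourly data in ppb and uses the standard W126 weight; the sample day is invented, not from the Sierra Nevada data.

```python
import math

def ozone_indices(hourly_ppb):
    """Return (SUM00, SUM60, W126) in ppm h from hourly ozone in ppb.
    W126 weights each hourly value C (in ppm) by 1/(1 + 4403*exp(-126*C))."""
    ppm = [c / 1000.0 for c in hourly_ppb]
    sum00 = sum(ppm)
    sum60 = sum(c for c in ppm if c >= 0.060)
    w126 = sum(c / (1.0 + 4403.0 * math.exp(-126.0 * c)) for c in ppm)
    return sum00, sum60, w126

# Illustrative day: clean night (20 ppb), elevated afternoon (80 ppb).
day = [20] * 12 + [80] * 12
s00, s60, w = ozone_indices(day)
print(f"SUM00={s00:.3f}  SUM60={s60:.3f}  W126={w:.3f} ppm h")
```

The sigmoidal weight makes W126 nearly ignore clean-air hours while counting high-ozone hours almost fully, which is why it sits between SUM60 and SUM00 for a mixed day like this one.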
Transport properties for a mixture of the ablation products C, C2, and C3
NASA Technical Reports Server (NTRS)
Biolsi, L.; Fenton, J.; Owenson, B.
1981-01-01
The ablation of carbon-phenolic heat shields upon entry into the atmosphere of one of the outer planets leads to the injection of large amounts of C, C2, and C3 into the shock layer. These species must be included in the calculation of transport properties in the shock layer. The kinetic theory of gases has been used to obtain accurate results for the transport properties of monatomic carbon. The Hulburt-Hirschfelder potential, the most accurate general purpose atom-atom potential for states with an attractive minimum, was used to represent such states, and repulsive states were represented by fitting quantum mechanical potential energy curves with the exponential repulsive potential. These results were orientation averaged according to the peripheral force model to obtain transport collision integrals for the C-C2 and C2-C2 interactions. Results for C3 were obtained by ignoring the presence of the central carbon atom. The thermal conductivity, viscosity, and diffusion coefficients for pure C, C2, and C3, and for mixtures of these gases, were then calculated from 1000 K to 25,000 K.
Ford, M.; Ferguson, C.C.
1985-01-01
In south-west Ireland, hydrothermally formed arsenopyrite crystals in a Devonian mudstone have responded to Variscan deformation by brittle extension fracture and fragment separation. The interfragment gaps and terminal extension zones of each crystal are infilled with fibrous quartz. Stretches within the cleavage plane have been calculated by the various methods available, most of which can be modified to incorporate terminal extension zones. The Strain Reversal Method is the most accurate currently available but still gives a minimum estimate of the overall strain. The more direct Hossain method, which gives only slightly lower estimates with these data, is more practical for field use. A strain ellipse can be estimated from each crystal rosette composed of three laths (assuming the original interlimb angles were all 60°) and, because actual rather than relative stretches are estimated, this provides a lower bound to the area increase in the plane of cleavage. Based on the average of our calculated strain ellipses, this area increase is at least 114% and implies an average shortening across the cleavage of at least 53%. However, several lines of evidence suggest that the cleavage deformation was more intense and more oblate than that calculated, and we argue that a 300% area increase in the cleavage plane and 75% shortening across the cleavage are more realistic estimates of the true strain. Furthermore, the along-strike elongation indicated is at least 80%, which may be regionally significant. Estimates of orogenic contraction derived from balanced section construction should therefore take into account the possibility of a substantial strike elongation, and tectonic models that can accommodate such elongations need to be developed. © 1985.
Validation of a Solid Rocket Motor Internal Environment Model
NASA Technical Reports Server (NTRS)
Martin, Heath T.
2017-01-01
In a prior effort, a thermal/fluid model of the interior of Penn State University's laboratory-scale Insulation Test Motor (ITM) was constructed to predict both the convective and radiative heat transfer to the interior walls of the ITM with a minimum of empiricism. These predictions were then compared to values of total and radiative heat flux measured in a previous series of ITM test firings to assess the capabilities and shortcomings of the chosen modeling approach. Though the calculated fluxes reasonably agreed with those measured during testing, this exercise revealed ways of improving the fidelity of the model: for the thermal radiation, to enable direct comparison of the measured and calculated fluxes and, for the total heat flux, to compute a value indicative of the average measured condition. By replacing the P1-Approximation with the discrete ordinates (DO) model for the solution of the gray radiative transfer equation, the radiation intensity field in the optically thin region near the radiometer is accurately estimated, allowing the thermal radiation flux to be calculated on the heat-flux sensor itself, which was then compared directly to the measured values. Though fully coupling the wall thermal response with the flow model was not attempted due to the excessive computational time required, a separate wall thermal response model was used to better estimate the average temperature of the graphite surfaces upstream of the heat flux gauges and improve the accuracy of both the total and radiative heat flux computations. The success of this modeling approach increases confidence in the ability of state-of-the-art thermal and fluid modeling to accurately predict SRM internal environments, offers corrections to older methods, and supplies a tool for further studies of the dynamics of SRM interiors.
An Examination of Sunspot Number Rates of Growth and Decay in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
On the basis of annual sunspot number averages, sunspot number rates of growth and decay are examined relative to both minimum and maximum amplitudes and the time of their occurrences using cycles 12 through present, the most reliably determined sunspot cycles. Indeed, strong correlations are found for predicting the minimum and maximum amplitudes and the time of their occurrences years in advance. As applied to predicting sunspot minimum for cycle 24, the next cycle, its minimum appears likely to occur in 2006, especially if it is a robust cycle similar in nature to cycles 17-23.
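Precursor prediction of the kind described here comes down to regressing a cycle's eventual amplitude on a quantity observable years earlier (a growth or decay rate, or a geomagnetic index). The data below are invented to illustrate the fit, not values from the paper.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical precursor: sunspot growth rate near minimum vs. eventual Rmax.
growth = [20, 25, 30, 35, 40, 50]        # illustrative precursor values
rmax   = [80, 95, 110, 125, 140, 170]    # constructed as exactly 3*growth + 20
a, b = linear_fit(growth, rmax)
print(f"Rmax ~ {a:.2f} * growth + {b:.2f}")
```

With real cycle data the fit is not exact, and the regression's residual scatter is what sets the width of prediction intervals like the 90-percent intervals quoted in the first record above.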
The Einstein-Hilbert gravitation with minimum length
NASA Astrophysics Data System (ADS)
Louzada, H. L. C.
2018-05-01
We study Einstein-Hilbert gravitation with the deformed Heisenberg algebra leading to a minimum length, with the intention of finding and estimating the corrections in this theory, clarifying whether or not it is possible to obtain, by means of the minimum length, a theory in D=4 which is causal, unitary, and provides a massive graviton. Therefore, we calculate and analyze the dispersion relations of the considered theory.
NASA Astrophysics Data System (ADS)
Stooksbury, David E.; Idso, Craig D.; Hubbard, Kenneth G.
1999-05-01
Gaps in otherwise regularly scheduled observations are often referred to as missing data. This paper explores the spatial and temporal impacts that data gaps in the recorded daily maximum and minimum temperatures have on the calculated monthly mean maximum and minimum temperatures. For this analysis 138 climate stations from the United States Historical Climatology Network Daily Temperature and Precipitation Data set were selected. The selected stations had no missing maximum or minimum temperature values during the period 1951-80. The monthly mean maximum and minimum temperatures were calculated for each station for each month. For each month 1-10 consecutive days of data from each station were randomly removed. This was performed 30 times for each simulated gap period. The spatial and temporal impact of the 1-10-day data gaps were compared. The influence of data gaps is most pronounced in the continental regions during the winter and least pronounced in the southeast during the summer. In the north central plains, 10-day data gaps during January produce a standard deviation value greater than 2°C about the `true' mean. In the southeast, 10-day data gaps in July produce a standard deviation value less than 0.5°C about the mean. The results of this study will be of value in climate variability and climate trend research as well as climate assessment and impact studies.
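The Monte Carlo procedure this abstract describes, removing 1-10 consecutive days at a random position and recomputing the monthly mean, 30 trials per gap length, can be sketched directly. The temperature series below is synthetic, with winter-like day-to-day variability standing in for a north-central-plains January.

```python
import random
import statistics

def gap_impact(daily_temps, gap_len, trials=30, seed=0):
    """Remove `gap_len` consecutive days at a random position, `trials`
    times; return the true monthly mean and the standard deviation of the
    recomputed means about it."""
    rng = random.Random(seed)
    true_mean = statistics.mean(daily_temps)
    deviations = []
    for _ in range(trials):
        start = rng.randrange(0, len(daily_temps) - gap_len + 1)
        kept = daily_temps[:start] + daily_temps[start + gap_len:]
        deviations.append(statistics.mean(kept) - true_mean)
    return true_mean, statistics.stdev(deviations)

# Synthetic January of daily maxima: mean -2 C, 6 C day-to-day spread.
gen = random.Random(1)
january = [-2.0 + gen.gauss(0, 6.0) for _ in range(31)]
mean31, spread10 = gap_impact(january, gap_len=10)
print(f"31-day mean {mean31:.2f} C; std of 10-day-gap means {spread10:.2f} C")
```

Larger day-to-day variability (continental winters) inflates the spread while quiescent regimes (southeastern summers) suppress it, which is the geographic pattern the study reports.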
Minimum emittance in TBA and MBA lattices
NASA Astrophysics Data System (ADS)
Xu, Gang; Peng, Yue-Mei
2015-03-01
To reach a small emittance in a modern light source, triple bend achromat (TBA), theoretical minimum emittance (TME), and even multiple bend achromat (MBA) lattices have been considered. This paper derives theoretically the necessary condition for achieving minimum emittance in TBA and MBA lattices, in which the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate the conditions for attaining the minimum emittance of a TBA in relation to phase advance in some special cases, using a purely mathematical method. These results may give some direction to lattice design.
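For context, the theoretical minimum emittance that TBA and MBA designs are measured against follows the standard scaling (quoted here as background, not derived in the abstract):

```latex
\epsilon_{\mathrm{TME}} \;=\; \frac{C_q\,\gamma^{2}\,\theta^{3}}{12\sqrt{15}\,J_x},
\qquad C_q \approx 3.84\times 10^{-13}\ \mathrm{m},
```

where \(\gamma\) is the Lorentz factor, \(\theta\) the bending angle of a single dipole, and \(J_x\) the horizontal damping partition number. The \(\theta^{3}\) dependence is the reason splitting the ring into more, weaker bends (TBA, then MBA) lowers the attainable emittance, and it is why the optimal split of bending angle between inner and outer dipoles matters.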
A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks
NASA Astrophysics Data System (ADS)
Haijun, Xiong; Qi, Zhang
2016-08-01
The workload of relay protection setting calculation in multi-loop networks may be reduced effectively by optimizing the setting calculation sequence. A new method for ordering the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken nodes cost vector (MBNCV), is proposed to solve problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying the loops in the dependency relationships among relays, which can increase the iterative setting calculation workload. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and the dependency relationships in multi-loop networks are modeled. The model is translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for multi-loop networks is finally computed by tools. A five-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples were calculated, with results indicating that the method effectively reduces the number of forcibly broken edges for protection setting calculation in multi-loop networks.
Minimum size limits for yellow perch (Perca flavescens) in western Lake Erie
Hartman, Wilbur L.; Nepszy, Stephen J.; Scholl, Russell L.
1980-01-01
During the 1960's yellow perch (Perca flavescens) of Lake Erie supported a commercial fishery that produced an average annual catch of 23 million pounds, as well as a modest sport fishery. Since 1969, the resource has seriously deteriorated. Commercial landings amounted to only 6 million pounds in 1976, and included proportionally more immature perch than in the 1960's. Moreover, no strong year classes were produced between 1965 and 1975. An interagency technical committee was appointed in 1975 by the Lake Erie Committee of the Great Lakes Fishery Commission to develop an interim management strategy that would provide for greater protection of perch in western Lake Erie, where declines have been the most severe. The committee first determined the age structure, growth and mortality rates, maturation schedule, and length-fecundity relationship for the population, and then applied Ricker-type equilibrium yield models to determine the effects of various minimum length limits on yield, production, average stock weight, potential egg deposition, and the Abrosov spawning frequency indicator (average number of spawning opportunities per female). The committee recommended increasing the minimum length limit of 5.0 inches to at least 8.5 inches. Theoretically, this change would increase the average stock weight by 36% and potential egg deposition by 44%, without significantly decreasing yield. Abrosov's spawning frequency indicator would rise from the existing 0.6 to about 1.2.
40 CFR Table 3 to Subpart Jjjjjj... - Operating Limits for Boilers With Emission Limits
Code of Federal Regulations, 2013 CFR
2013-07-01
... as defined in § 63.11237. 4. Dry sorbent or activated carbon injection control Maintain the 30-day rolling average sorbent or activated carbon injection rate at or above the minimum sorbent injection rate or minimum activated carbon injection rate as defined in § 63.11237. When your boiler operates at...
The Effects of Global Warming on Temperature and Precipitation Trends in Northeast America
NASA Astrophysics Data System (ADS)
Francis, F.
2013-12-01
The objective of this paper is to present an analysis of temperature and precipitation (rainfall) data for Northeast America and to discuss how the observed trends relate to global warming. The topic was chosen because it shows the trends in temperature and precipitation and their relation to global warming. Data were collected from the Global Historical Climatology Network (GHCN) for the years 1973 to 2012. We calculated yearly and monthly regressions to estimate the relationships among the variables found in the individual sources. With the use of specially designed software, analysis, and manual calculations, we are able to visualize these trends in precipitation and temperature and to question whether they are due to global warming. From the calculated trend slopes we interpreted the changes in minimum and maximum temperature and precipitation. Precipitation increased 9.5% over the past forty years, while maximum temperature increased 1.9%; a greater increase of 3.3% was calculated for minimum temperature. The trends in precipitation and in maximum and minimum temperature are statistically significant at the 95% level.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Guide ropes. 75.1429 Section 75.1429 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY... minimum value calculated as follows: Minimum value=Static Load×5.0. ...
RHIC BPM system average orbit calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michnoff,R.; Cerniglia, P.; Degen, C.
2009-05-04
RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
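A boxcar average spanning a whole number of ~10 Hz periods suppresses the modulation exactly. A sketch of such a continuous running average (the revolution frequency and period count are illustrative, not RHIC machine parameters):

```python
from collections import deque

class ContinuousOrbitAverage:
    """Running boxcar mean over a whole number of modulation periods.

    rev_freq and osc_freq are illustrative values only; the window length
    is chosen as an integer number of oscillation periods so that a
    periodic perturbation averages out.
    """
    def __init__(self, rev_freq=78_000.0, osc_freq=10.0, periods=10):
        self.n = int(rev_freq / osc_freq) * periods  # turns per window
        self.buf = deque(maxlen=self.n)
        self.total = 0.0

    def update(self, position):
        # Evict the oldest sample from the running sum before it is
        # dropped by the bounded deque.
        if len(self.buf) == self.buf.maxlen:
            self.total -= self.buf[0]
        self.buf.append(position)
        self.total += position
        return self.total / len(self.buf)
```

Feeding a zero-mean periodic perturbation around a 1.5 mm closed orbit returns 1.5 mm once the window is full, which is the fluctuation suppression the continuous calculation provides.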
On the Minimum Induced Drag of Wings
NASA Technical Reports Server (NTRS)
Bowers, Albion H.
2010-01-01
Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load that the aircraft is flying at. The tools we use to calculate and predict induced drag were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, Prandtl and his students had optimized the problem to solve the minimum induced drag for a wing of a given span, formalized and written about in 1920. This solution is quoted in textbooks extensively today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb.
On the Minimum Induced Drag of Wings -or- Thinking Outside the Box
NASA Technical Reports Server (NTRS)
Bowers, Albion H.
2011-01-01
Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load that the aircraft is flying at. The tools we use to calculate and predict induced drag were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, Prandtl and his students had optimized the problem to solve the minimum induced drag for a wing of a given span, formalized and written about in 1920. This solution is quoted in textbooks extensively today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb.
On the Minimum Induced Drag of Wings
NASA Technical Reports Server (NTRS)
Bowers, Albion H.
2011-01-01
Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load that the aircraft is flying at. The tools we use to calculate and predict induced drag were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, Prandtl and his students had optimized the problem to solve the minimum induced drag for a wing of a given span, formalized and written about in 1920. This solution is quoted in textbooks extensively today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb.
Inertial effects on mechanically braked Wingate power calculations.
Reiser, R F; Broker, J P; Peterson, M L
2000-09-01
The standard procedure for determining subject power output from a 30-s Wingate test on a mechanically braked (friction-loaded) ergometer includes only the braking resistance and flywheel velocity in the computations. However, the inertial effects associated with accelerating and decelerating the crank and flywheel also require energy and, therefore, represent a component of the subject's power output. The present study was designed to determine the effects of drive-system inertia on power output calculations. Twenty-eight male recreational cyclists completed Wingate tests on a Monark 324E mechanically braked ergometer (resistance: 8.5% body mass (BM), starting cadence: 60 rpm). Power outputs were then compared using both standard (without inertial contribution) and corrected methods (with inertial contribution) of calculating power output. Relative 5-s peak power and 30-s average power for the corrected method (14.8 +/- 1.2 W x kg(-1) BM; 9.9 +/- 0.7 W x kg(-1) BM) were 20.3% and 3.1% greater than that of the standard method (12.3 +/- 0.7 W x kg(-1) BM; 9.6 +/- 0.7 W x kg(-1) BM), respectively. Relative 5-s minimum power for the corrected method (6.8 +/- 0.7 W x kg(-1) BM) was 6.8% less than that of the standard method (7.3 +/- 0.8 W x kg(-1) BM). The combined differences in the peak power and minimum power produced a fatigue index for the corrected method (54 +/- 5%) that was 31.7% greater than that of the standard method (41 +/- 6%). All parameter differences were significant (P < 0.01). The inertial contribution to power output was dominated by the flywheel; however, the contribution from the crank was evident. These results indicate that the inertial components of the ergometer drive system influence the power output characteristics, requiring care when computing, interpreting, and comparing Wingate results, particularly among different ergometer designs and test protocols.
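The correction described above amounts to adding the rate of change of the drive-system's rotational kinetic energy to the friction-braking power. A minimal sketch (all numeric values and parameter names are illustrative, not Monark 324E specifications):

```python
def corrected_power(braking_torque, omega, omega_next, dt, inertia):
    """Friction-braking power plus the flywheel kinetic-energy rate.

    braking_torque : friction torque at the flywheel (N*m)
    omega, omega_next : flywheel angular velocity at the start and end
        of the sampling interval (rad/s)
    dt : sampling interval (s)
    inertia : drive-system moment of inertia referred to the flywheel
        axis (kg*m^2); a full correction would also fold in crank and
        gear-train inertia, as the paper notes.
    """
    alpha = (omega_next - omega) / dt            # angular acceleration
    omega_mid = 0.5 * (omega + omega_next)       # midpoint velocity
    # P = T*omega (standard term) + I*omega*alpha (inertial term)
    return braking_torque * omega_mid + inertia * omega_mid * alpha
```

During the acceleration phase the inertial term adds to the standard estimate (raising peak power), and during deceleration it subtracts (lowering minimum power), reproducing the direction of the differences reported above.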
Revesz, Kinga; Coplen, Tyler B.; Baedecker, Mary J.; Glynn, Pierre D.
1995-01-01
Stable isotopic ratios of C and H in dissolved CH4 and C in dissolved inorganic C in the ground water of a crude-oil spill near Bemidji, Minnesota, support the concept of CH4 production by acetate fermentation with a contemporaneous increase in HCO3− concentration. Methane concentrations in the saturated zone decrease from 20.6 mg L−1 to less than 0.001 mg L−1 along the investigated flow path. Dissolved N2 and Ar concentrations in the ground water below the oil plume are 25 times lower than background; this suggests that gas exsolution is removing dissolved CH4 (along with other dissolved gases) from the ground water. Oxidation of dissolved CH4 along the flow path seems to be minimal because no measurable change in isotopic composition of CH4 occurs with distance from the oil body. However, CH4 is partly oxidized to CO2 as it diffuses upward from the ground water through a 5- to 7-m thick unsaturated zone; the δ13C of the remaining CH4 increases, the δ13C of the CO2 decreases, and the partial pressure of CO2 increases. Calculations of C fluxes in the saturated and unsaturated zones originating from the degradation of the oil plume lead to a minimum estimated life expectancy of 110 years. This is a minimum estimate because the degradation of the oil body should slow down with time as its more volatile and reactive components are leached out and preferentially oxidized. The calculated life expectancy is an order-of-magnitude estimate because of the uncertainty in the average linear ground-water velocities and because of the factor-of-2 uncertainty in the calculation of the effective CO2 diffusion coefficient.
Van Dyke, Miriam E; Komro, Kelli A; Shah, Monica P; Livingston, Melvin D; Kramer, Michael R
2018-07-01
Despite substantial declines since the 1960's, heart disease remains the leading cause of death in the United States (US) and geographic disparities in heart disease mortality have grown. State-level socioeconomic factors might be important contributors to geographic differences in heart disease mortality. This study examined the association between state-level minimum wage increases above the federal minimum wage and heart disease death rates from 1980 to 2015 among 'working age' individuals aged 35-64 years in the US. Annual, inflation-adjusted state and federal minimum wage data were extracted from legal databases and annual state-level heart disease death rates were obtained from CDC Wonder. Although most minimum wage and health studies to date use conventional regression models, we employed marginal structural models to account for possible time-varying confounding. Quasi-experimental, marginal structural models accounting for state, year, and state × year fixed effects estimated the association between increases in the state-level minimum wage above the federal minimum wage and heart disease death rates. In models of 'working age' adults (35-64 years old), a $1 increase in the state-level minimum wage above the federal minimum wage was on average associated with ~6 fewer heart disease deaths per 100,000 (95% CI: -10.4, -1.99), or a state-level heart disease death rate that was 3.5% lower per year. In contrast, for older adults (65+ years old) a $1 increase was on average associated with a 1.1% lower state-level heart disease death rate per year (b = -28.9 per 100,000, 95% CI: -71.1, 13.3). State-level economic policies are important targets for population health research. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Singh, Priya; Sarkar, Subir K.; Bandyopadhyay, Pradipta
2014-07-01
We present the results of a high-statistics equilibrium study of the folding/unfolding transition for the 20-residue mini-protein Trp-cage (TC5b) in water. The ECEPP/3 force field is used and the interaction with water is treated by a solvent-accessible surface area method. A Wang-Landau type simulation is used to calculate the density of states and the conditional probabilities for the various values of the radius of gyration and the number of native contacts at fixed values of energy—along with a systematic check on their convergence. All thermodynamic quantities of interest are calculated from this information. The folding-unfolding transition corresponds to a peak in the temperature dependence of the computed specific heat. This is corroborated further by the structural signatures of folding in the distributions for radius of gyration and the number of native contacts as a function of temperature. The potentials of mean force are also calculated for these variables, both separately and jointly. A local free energy minimum, in addition to the global minimum, is found in a temperature range substantially below the folding temperature. The free energy at this second minimum is approximately 5 kBT higher than the value at the global minimum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M; Kang, S; Lee, S
Purpose: Implant-supported dentures seem particularly appropriate for the predicament of becoming edentulous, and cancer patients are no exception. As the number of people having dental implants has increased across age groups, critical dosimetric verification of metal artifact effects is required for more accurate head and neck radiation therapy. The purpose of this study is to verify the theoretical analysis of the metal (streak and dark) artifacts and to evaluate the dosimetric effects caused by dental implants in CT images, using a humanoid phantom with patient teeth and implants inserted. Methods: The phantom comprises a cylinder shaped to simulate the anatomical structures of a human head and neck. By applying various clinical cases, a phantom closely resembling a human was made. The developed phantom supports two configurations: (i) closed mouth, (ii) opened mouth. RapidArc plans of 4 cases were created in the Eclipse planning system. A total dose of 2000 cGy in 10 fractions was prescribed to the whole planning target volume (PTV) using 6 MV photon beams. The Acuros XB (AXB) advanced dose calculation algorithm, the Analytical Anisotropic Algorithm (AAA), and the progressive resolution optimizer were used in dose optimization and calculation. Results: In both closed- and opened-mouth phantoms, because dark artifacts formed extensively around the metal implants, dose variation was relatively higher than that of streak artifacts. When the PTV was delineated on the dark regions or large streak artifact regions, a maximum 7.8% dose error and an average 3.2% difference were observed. The averaged minimum dose to the PTV predicted by AAA was about 5.6% higher, and OAR doses were also 5.2% higher, compared to AXB. Conclusion: The results of this study showed that AXB dose calculation involving high-density materials is more accurate than AAA calculation, and AXB was superior to AAA in dose predictions beyond the dark artifact/air cavity portion when compared against the measurements.
Matthews, Leanna P; Parks, Susan E; Fournet, Michelle E H; Gabriele, Christine M; Womble, Jamie N; Klinck, Holger
2017-03-01
Source levels of harbor seal breeding vocalizations were estimated using a three-element planar hydrophone array near the Beardslee Islands in Glacier Bay National Park and Preserve, Alaska. The average source level for these calls was 144 dB RMS re 1 μPa at 1 m in the 40-500 Hz frequency band. Source level estimates ranged from 129 to 149 dB RMS re 1 μPa. Four call parameters, including minimum frequency, peak frequency, total duration, and pulse duration, were also measured. These measurements indicated that breeding vocalizations of harbor seals near the Beardslee Islands of Glacier Bay National Park are similar in duration (average total duration: 4.8 s, average pulse duration: 3.0 s) to previously reported values from other populations, but are 170-220 Hz lower in average minimum frequency (78 Hz).
NASA Astrophysics Data System (ADS)
Bonacci, Ognjen; Željković, Ivana; Trogrlić, Robert Šakić; Milković, Janja
2013-10-01
Differences between true mean daily, monthly and annual air temperatures T0 [Eq. (1)] and temperatures calculated with three different equations [(2), (3) and (4)] (commonly used in climatological practice) were investigated at three main meteorological Croatian stations from 1 January 1999 to 31 December 2011. The stations are situated in the following three climatically distinct areas: (1) Zagreb-Grič (mild continental climate), (2) Zavižan (cold mountain climate), and (3) Dubrovnik (hot Mediterranean climate). The T1 [Eq. (2)] and T3 [Eq. (4)] mean temperatures are defined by algorithms based on the weighted means of temperatures measured at irregularly spaced, yet fixed, hours. T2 [Eq. (3)] is the mean temperature defined as the average of the daily maximum and minimum temperature. The equation used, as well as the time of observations, introduces a bias into mean temperatures. The largest differences occur for mean daily temperatures: the calculated daily differences from all three equations and all analysed stations vary from -3.73 °C to +3.56 °C, while monthly differences range from -1.39 °C to +0.79 °C and annual differences from -0.76 °C to +0.30 °C.
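The T2-style bias is easy to reproduce: whenever the diurnal cycle is asymmetric, the midpoint of the extremes differs from the true 24-hour mean. A sketch with a synthetic diurnal profile (values hypothetical):

```python
import math

def true_mean(hourly):
    """T0-style mean: average of all (here, hourly) observations."""
    return sum(hourly) / len(hourly)

def minmax_mean(hourly):
    """T2-style mean: midpoint of the daily maximum and minimum."""
    return (max(hourly) + min(hourly)) / 2

# Synthetic diurnal cycle: a flat cool baseline with a short warm
# afternoon peak around 15:00. The short peak pulls the max far above
# the bulk of the day, so T2 overestimates T0.
hourly = [10 + 8 * math.exp(-((h - 15) ** 2) / 8) for h in range(24)]
bias = minmax_mean(hourly) - true_mean(hourly)
```

With this asymmetric profile the bias is positive and on the order of 2 °C, i.e. the same order as the daily differences reported above.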
Energy Efficient and Stable Weight Based Clustering for Mobile Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Bouk, Safdar H.; Sasase, Iwao
Recently, several weighted clustering algorithms have been proposed; however, to the best of our knowledge, there is none that propagates weights to other nodes without a weight message for leader election, normalizes node parameters, and considers neighboring node parameters to calculate node weights. In this paper, we propose an Energy Efficient and Stable Weight Based Clustering (EE-SWBC) algorithm that elects cluster heads without sending any additional weight message. It propagates node parameters to neighbors through the neighbor discovery message (HELLO message) and stores these parameters in a neighborhood list. Each node normalizes the parameters and efficiently calculates its own weight and the weights of neighboring nodes from that neighborhood table using the Grey Decision Method (GDM). GDM finds the ideal solution (the best node parameters in the neighborhood list) and calculates node weights by comparison to that ideal solution. The node(s) with maximum weight (parameters closest to the ideal solution) are elected as cluster heads. As a result, EE-SWBC fairly selects potential nodes with parameters close to the ideal solution, with less overhead. Different performance metrics of EE-SWBC and the Distributed Weighted Clustering Algorithm (DWCA) are compared through simulations. The simulation results show that EE-SWBC maintains a smaller average number of stable clusters with minimal overhead, less energy consumption, and fewer changes in cluster structure within the network compared to DWCA.
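A minimal sketch of the GDM weighting step as described (normalize each parameter across the neighborhood, then score each node by its grey relational grade against the ideal vector). The parameter layout and the distinguishing coefficient `rho` are illustrative assumptions, not taken from the paper:

```python
def normalize(values):
    """Min-max normalize one parameter column to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in values]

def grey_weights(nodes, rho=0.5):
    """Grey relational grade of each node against the ideal vector.

    `nodes` maps node id -> tuple of parameters where larger is better
    (e.g. residual energy, link stability) -- a hypothetical layout.
    After normalization the ideal solution is 1.0 in every column.
    """
    ids = list(nodes)
    cols = list(zip(*(nodes[i] for i in ids)))
    rows = list(zip(*(normalize(c) for c in cols)))
    weights = {}
    for nid, row in zip(ids, rows):
        deltas = [abs(1.0 - x) for x in row]        # distance to ideal
        # Grey relational coefficient per parameter (delta_min=0, delta_max=1
        # after min-max normalization), averaged into one grade.
        coeffs = [(0.0 + rho * 1.0) / (d + rho * 1.0) for d in deltas]
        weights[nid] = sum(coeffs) / len(coeffs)
    return weights
```

Cluster-head election then reduces to picking the node(s) with the maximum grade, i.e. the parameters closest to the ideal solution.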
The Yearly Variation in Fall-Winter Arctic Winter Vortex Descent
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.; Newman, Paul A.
1999-01-01
Using the change in HALOE methane profiles from early September to late March, we have estimated the minimum amount of diabatic descent within the polar vortex which takes place during Arctic winter. The year-to-year variations are a result of year-to-year variations in stratospheric wave activity, which (1) modify the temperature of the vortex and thus the cooling rate, and (2) reduce the apparent descent by mixing high amounts of methane into the vortex. The peak descent amounts from HALOE methane vary from 10 km to 14 km near the arrival altitude of 25 km. Using a diabatic trajectory calculation, we compare forward and backward trajectories over the course of the winter using UKMO assimilated stratospheric data. The forward calculation agrees fairly well with the observed descent. The backward calculation appears to be unable to produce the observed amount of descent, but this is only an apparent effect due to the density decrease in parcels with altitude. Finally, we show the results of unmixed descent experiments, where the parcels are fixed in latitude and longitude and allowed to descend based on the local cooling rate. Unmixed descent is found to always exceed mixed descent because, when normal parcel motion is included, the path-average cooling is always less than the cooling at a fixed polar point.
NASA Astrophysics Data System (ADS)
Volobuev, D. M.; Makarenko, N. G.
2014-12-01
Because of the small amplitude of insolation variations (1365.2-1366.6 W m-2 or 0.1%) from the 11-year solar cycle minimum to the cycle maximum and the structural complexity of the climatic dynamics, it is difficult to directly observe a solar signal in the surface temperature. The main difficulty is reduced to two factors: (1) a delay in the temperature response to external action due to thermal inertia, and (2) powerful internal fluctuations of the climatic dynamics suppressing the solar-driven component. In this work we take into account the first factor, solving the inverse problem of thermal conductivity in order to calculate the vertical heat flux from the measured temperature near the Earth's surface. The main model parameter—apparent thermal inertia—is calculated from the local seasonal extremums of temperature and albedo. We level the second factor by averaging mean annual heat fluxes in a latitudinal belt. The obtained mean heat fluxes significantly correlate with a difference between the insolation and optical depth of volcanic aerosol in the atmosphere, converted into a hindered heat flux. The calculated correlation smoothly increases with increasing latitude to 0.4-0.6, and the revealed latitudinal dependence is explained by the known effect of polar amplification.
Triple differential cross-sections of Ne (2s2) in coplanar to perpendicular plane geometry
NASA Astrophysics Data System (ADS)
Chen, L. Q.; Khajuria, Y.; Chen, X. J.; Xu, K. Z.
2003-10-01
The distorted wave Born approximation (DWBA) with the spin-averaged static exchange potential has been used to calculate the triple differential cross-sections (TDCSs) for Ne (2s²) ionization by electron impact in coplanar to perpendicular plane symmetric geometry at 110.5 eV incident electron energy. The present theoretical results at gun angles ψ = 0° (coplanar symmetric geometry) and ψ = 90° (perpendicular plane geometry) are in satisfactory agreement with the available experimental data. A deep interference minimum appears in the TDCS in the coplanar symmetric geometry, and a strong peak at scattering angle ξ = 90°, caused by the single-collision mechanism, has been observed in the perpendicular plane geometry. The TDCSs at gun angles ψ = 30° and ψ = 60° are predicted.
Hughes-Jones, N C; Hunt, V A; Maycock, W D; Wesley, E D; Vallet, L
1978-01-01
An analysis of the assay of 28 preparations of anti-D immunoglobulin using a radioisotope method, carried out at 6-monthly intervals for 2-4.5 years, showed an average fall in anti-D concentration of 10.6% each year, with 99% confidence limits of 6.8-14.7%. The fall in anti-D concentration after storage at 37 degrees C for 1 month was less than 8%, the minimum change that could be detected. No significant change in the physical characteristics of the immunoglobulin was detected. The error of a single estimate of anti-D by the radioisotope method (125I-labelled anti-IgG) used here was calculated to be such that the true value probably (p = 0.95) lay between 66 and 150% of the estimated value.
NASA Astrophysics Data System (ADS)
Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae
2012-09-01
This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 × 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of the variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver ( p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest in the average gray level, relatively high in the skewness and the entropy, and relatively low in the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for the automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
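The six histogram-based descriptors listed above can be sketched as follows (a generic statistical-texture formulation; the paper's wavelet transform, 50 × 50 ROA windowing, and exact normalizations are omitted):

```python
import math

def texture_features(pixels, levels=256):
    """Histogram-based texture descriptors for a flat list of gray levels.

    A simplified sketch of the six parameters named in the abstract;
    the scaling conventions are illustrative (Gonzalez-Woods style),
    not taken from the paper.
    """
    n = len(pixels)
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    p = [c / n for c in hist]                       # normalized histogram
    mean = sum(i * p[i] for i in range(levels))     # average gray level
    var = sum((i - mean) ** 2 * p[i] for i in range(levels))
    return {
        "average_gray": mean,
        "average_contrast": math.sqrt(var),         # standard deviation
        "smoothness": 1 - 1 / (1 + var / (levels - 1) ** 2),
        "skewness": sum((i - mean) ** 3 * p[i]
                        for i in range(levels)) / (levels - 1) ** 2,
        "uniformity": sum(pi * pi for pi in p),
        "entropy": -sum(pi * math.log2(pi) for pi in p if pi > 0),
    }
```

A perfectly uniform region gives maximal uniformity and zero contrast, smoothness, and entropy, which is the baseline against which lesion ROAs would differ.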
Adachi, Yasumoto; Makita, Kohei
2015-09-01
Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer for corrective actions when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis therefore would be a useful threshold to detect an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for 2 years between 2011 and 2012. For the modeling, at first, periodicities were checked using Fast Fourier Transformation, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of minimum Akaike's information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of whole or partial condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from three producers with the highest rate of condemnation due to mycobacteriosis.
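The decomposition described above (a weekly ensemble average plus a model of the residual) can be sketched as follows; for simplicity this sketch thresholds on the residual standard deviation rather than fitting the paper's ARIMA model, and the data are synthetic:

```python
import statistics

def weekly_profile(counts):
    """Ensemble average for each day-of-week position (period 7)."""
    return [statistics.mean(counts[d::7]) for d in range(7)]

def outbreak_flags(counts, z=1.96):
    """Flag days whose condemnation count exceeds the weekly ensemble
    average plus z standard deviations of the residual.

    The paper fits an ARIMA model to the residual and uses its 95%
    interval; a plain residual standard deviation is a simplified
    stand-in here.
    """
    prof = weekly_profile(counts)
    resid = [c - prof[i % 7] for i, c in enumerate(counts)]
    sd = statistics.pstdev(resid)
    return [c > prof[i % 7] + z * sd for i, c in enumerate(counts)]
```

Days exceeding the threshold would trigger notification of the producer, mirroring how the expected-value model is used for outbreak detection.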
Out of Pocket Payment for Obstetrical Complications: A Cost Analysis Study in Iran
Yavangi, Mahnaz; Sohrabi, Mohammad Reza; Riazi, Sahand
2013-01-01
Background: This study was conducted to determine the total expenditure and out of pocket payment on pregnancy complications in Tehran, the capital of Iran. Methods: A cross-sectional study was conducted on 1172 patients admitted to two general teaching referral hospitals in Tehran. In this study, we calculated total and out of pocket inpatient costs for seven pregnancy complications including preeclampsia, intrauterine growth restriction (IUGR), abortion, ante-partum hemorrhage, preterm delivery, premature rupture of membranes and post-dated pregnancy. We used descriptive analysis and analysis of variance tests to compare these pregnancy complications. Results: The average duration of hospitalization was 3.28 days, and the number of visits by physicians was 9.79 per patient on average. The average total cost for these pregnancy complications was 735.22 United States Dollars (USD) (standard deviation [SD] = 650.53). The average out of pocket share was 277.08 USD (SD = 350.74), which was 37.69% of total expenditure. IUGR, with payment of 398.76 USD (SD = 418.54) (52.06% of total expenditure), had the greatest out of pocket expenditure of all complications, while abortion had the minimum out of pocket amount, 148.77 USD (SD = 244.05). Conclusions: Obstetric complications had no catastrophic effect on families, but the IUGR cost was about 30% of monthly household non-food costs in Tehran, so more financial protection plans and insurance are recommended for these patients. PMID:24404365
Melt density and the average composition of basalt
NASA Technical Reports Server (NTRS)
Stolper, E.; Walker, D.
1980-01-01
Densities of residual liquids produced by low-pressure fractionation of olivine-rich melts pass through a minimum when pyroxene and plagioclase join the crystallization sequence. The observation that erupted basalt compositions cluster around the degree of fractionation from picritic liquids corresponding to the density minimum in the liquid line of descent may thus suggest that the Earth's crust imposes a density filter on the liquids that pass through it, favoring the eruption of the light liquids at the density minimum over the eruption of denser, more fractionated and less fractionated liquids.
Use of dried blood spots for the determination of serum concentrations of tamoxifen and endoxifen.
Jager, N G L; Rosing, H; Schellens, J H M; Beijnen, J H; Linn, S C
2014-07-01
The anti-estrogenic effect of tamoxifen is suggested to be mainly attributable to its metabolite (Z)-endoxifen, and a minimum therapeutic threshold for (Z)-endoxifen in serum has been proposed. The objective of this research was to establish the relationship between dried blood spot (DBS) and serum concentrations of tamoxifen and (Z)-endoxifen to allow the use of DBS sampling, a simple and patient-friendly alternative to venous sampling, in clinical practice. Paired DBS and serum samples were obtained from 50 patients using tamoxifen and analyzed using HPLC-MS/MS. Serum concentrations were calculated from DBS concentrations using the formula: calculated serum concentration = DBS concentration / ([1 − haematocrit (Hct)] + blood cell-to-serum ratio × Hct). The blood cell-to-serum ratio was determined ex vivo by incubating a batch of whole blood spiked with both analytes. The average Hct for female adults was imputed as a fixed value. Calculated and analyzed serum concentrations were compared using weighted Deming regression. Weighted Deming regression analysis comparing 44 matching pairs of DBS and serum samples showed a proportional bias for both analytes. Serum concentrations were calculated using [Tamoxifen] serum, calculated = [Tamoxifen] DBS / 0.779 and [(Z)-Endoxifen] serum, calculated = [(Z)-Endoxifen] DBS / 0.663. Calculated serum concentrations were within 20% of analyzed serum concentrations in 84% and 100% of patient samples for tamoxifen and (Z)-endoxifen, respectively. In conclusion, DBS concentrations of tamoxifen and (Z)-endoxifen were equal to serum concentrations after correction for Hct and blood cell-to-serum ratio. DBS sampling can be used in clinical practice.
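The serum-from-DBS conversion formula quoted above is simple enough to express directly. The haematocrit and blood cell-to-serum ratio values in the demo are illustrative placeholders, not the study's measured values (the study imputed a fixed average Hct and determined the ratio ex vivo):

```python
def serum_from_dbs(dbs_conc, hct, blood_cell_to_serum_ratio):
    """Convert a dried-blood-spot concentration to a serum estimate:
    serum = DBS / ((1 - Hct) + blood_cell_to_serum_ratio * Hct)."""
    return dbs_conc / ((1.0 - hct) + blood_cell_to_serum_ratio * hct)

# placeholder inputs: DBS = 100 (arbitrary units), Hct = 0.40, ratio = 0.45
print(round(serum_from_dbs(100.0, 0.40, 0.45), 1))  # → 128.2
```

With these placeholder values the denominator works out to 0.78, incidentally close to the paper's empirical overall factor of 0.779 for tamoxifen; the study's own factors fold Hct and the analyte-specific ratio into a single constant.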
Worldwide Report, Environmental Quality, No. 388, China Addresses Environmental Issues -- IV
1983-03-04
resources and collaborate in a joint effort, the large helping the small, and the strong leading the weak. The Hanxiang Plant which produces fermented bean...1981 the rain falling on Chongqing had an average pH of 4.64 and a minimum value of 3. A pH of 3 is similar to that of vinegar. This minimum value is
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
NASA Astrophysics Data System (ADS)
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
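The two training-phase searches described (a forward search for the 'minimum-sufficient ensemble', then backward stepwise generation of a nested collection) can be sketched with plain majority voting as the aggregation rule. This is an assumption-laden toy: base classifiers are represented only by their 0/1 prediction matrix, fitness is in-sample accuracy, and the EAA selection stage is omitted.

```python
import numpy as np

def fitness(ensemble, preds, y):
    """In-sample accuracy of a majority vote over the chosen classifiers."""
    vote = np.mean(preds[list(ensemble)], axis=0) >= 0.5
    return np.mean(vote == y)

def minimum_sufficient_ensemble(preds, y):
    """Greedy forward search: add whichever classifier improves the
    in-sample fitness most, stopping when no addition helps."""
    chosen, pool, best = [], set(range(len(preds))), 0.0
    while pool:
        gains = {j: fitness(chosen + [j], preds, y) for j in pool}
        j = max(gains, key=gains.get)
        if gains[j] <= best and chosen:
            break
        chosen.append(j); pool.remove(j); best = gains[j]
    return chosen, best

def nested_collection(chosen, preds, y):
    """Backward stepwise removal: a sequence of ever-smaller ensembles,
    i.e. descending complexity (and typically descending fitness)."""
    seq, cur = [list(chosen)], list(chosen)
    while len(cur) > 1:
        # drop the member whose removal hurts fitness least
        j = max(cur, key=lambda c: fitness([x for x in cur if x != c], preds, y))
        cur = [x for x in cur if x != j]
        seq.append(list(cur))
    return seq

# toy demo: 8 random base classifiers, 50 binary-labelled patterns
rng = np.random.default_rng(1)
preds = rng.integers(0, 2, size=(8, 50))
y = rng.integers(0, 2, size=50)
chosen, best = minimum_sufficient_ensemble(preds, y)
seq = nested_collection(chosen, preds, y)
print(len(seq[0]) >= len(seq[-1]))  # → True
```

At test time, MSEBAG would score each member of `seq` on fitness and inference confidence and average the decisions of the competent ones; the sketch above covers only the generation phase.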
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theis, Daniel; Windus, Theresa L.; Ruedenberg, Klaus
The metastable ring structure of the ozone 1{sup 1}A{sub 1} ground state, which theoretical calculations have shown to exist, has so far eluded experimental detection. An accurate prediction for the energy difference between this isomer and the lower open structure is therefore of interest, as is a prediction for the isomerization barrier between them, which results from interactions between the lowest two {sup 1}A{sub 1} states. In the present work, valence correlated energies of the 1{sup 1}A{sub 1} state and the 2{sup 1}A{sub 1} state were calculated at the 1{sup 1}A{sub 1} open minimum, the 1{sup 1}A{sub 1} ring minimum, the transition state between these two minima, the minimum of the 2{sup 1}A{sub 1} state, and the conical intersection between the two states. The geometries were determined at the full-valence multi-configuration self-consistent-field level. Configuration interaction (CI) expansions up to quadruple excitations were calculated with triple-zeta atomic basis sets. The CI expansions based on eight different reference configuration spaces were explored. To obtain some of the quadruple excitation energies, the method of Correlation Energy Extrapolation by Intrinsic Scaling was generalized to the simultaneous extrapolation for two states. This extrapolation method was shown to be very accurate. On the other hand, none of the CI expansions were found to have converged to millihartree (mh) accuracy at the quadruple excitation level. The data suggest that convergence to mh accuracy is probably attained at the sextuple excitation level. On the 1{sup 1}A{sub 1} state, the present calculations yield the estimates of (ring minimum—open minimum) ∼45–50 mh and (transition state—open minimum) ∼85–90 mh. For the (2{sup 1}A{sub 1}–1{sup 1}A{sub 1}) excitation energy, the estimate of ∼130–170 mh is found at the open minimum and 270–310 mh at the ring minimum.
At the transition state, the difference (2{sup 1}A{sub 1}–1{sup 1}A{sub 1}) is found to be between 1 and 10 mh. The geometry of the transition state on the 1{sup 1}A{sub 1} surface and that of the minimum on the 2{sup 1}A{sub 1} surface nearly coincide. More accurate predictions of the energy differences also require CI expansions to at least sextuple excitations with respect to the valence space. For every wave function considered, the omission of the correlations of the 2s oxygen orbitals, which is a widely used approximation, was found to cause errors of about ±10 mh with respect to the energy differences.
30 CFR 57.19019 - Guide ropes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Guide ropes. 57.19019 Section 57.19019 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... rope at installation shall meet the minimum value calculated as follows: Minimum value=Static Load×5.0. ...
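The regulation's sizing rule quoted above is a single multiplication; a minimal sketch (units are whatever the static load is expressed in, and the function names are illustrative, not from the CFR):

```python
def guide_rope_minimum_breaking_strength(static_load):
    """30 CFR 57.19019: a guide rope at installation must meet
    Minimum value = Static Load x 5.0."""
    return static_load * 5.0

def meets_requirement(breaking_strength, static_load):
    """True if the rope's rated breaking strength satisfies the rule."""
    return breaking_strength >= guide_rope_minimum_breaking_strength(static_load)

# e.g. a 2000-unit static load requires a 10000-unit breaking strength
print(guide_rope_minimum_breaking_strength(2000.0))  # → 10000.0
```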
Lee, Yun-Keun; Ju, Young-Su; Lee, Won Jin; Hwang, Seung Sik; Yim, Sang-Hyuk; Yoo, Sang-Chul; Lee, Jieon; Choi, Kyung-Hwa; Burm, Eunae; Ha, Mina
2015-01-01
Objectives We aimed to assess the radiation exposure for epidemiologic investigation in residents exposed to radiation from roads that were accidentally found to be contaminated with radioactive cesium-137 (137Cs) in Seoul. Methods Using information regarding the frequency and duration of passing via the 137Cs contaminated roads or residing/working near the roads from the questionnaires that were obtained from 8875 residents and the measured radiation doses reported by the Nuclear Safety and Security Commission, we calculated the total cumulative dose of radiation exposure for each person. Results Sixty-three percent of the residents who responded to the questionnaire were considered as ever-exposed and 1% of them had a total cumulative dose of more than 10 mSv. The mean (minimum, maximum) duration of radiation exposure was 4.75 years (0.08, 11.98) and the geometric mean (minimum, maximum) of the total cumulative dose was 0.049 mSv (<0.001, 35.35) in the exposed. Conclusions An individual exposure assessment was performed for an epidemiological study to estimate the health risk among residents living in the vicinity of 137Cs contaminated roads. The average exposure dose in the exposed people was less than 5% of the current guideline. PMID:26184047
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
...-0015] RIN 2132-AB01 Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight, and... of proposed rulemaking (NPRM) regarding the calculation of average passenger weights and test vehicle... passenger weights and actual transit vehicle loads. Specifically, FTA proposed to change the average...
Laitila, Jussi; Moilanen, Atte; Pouzols, Federico M
2014-01-01
Biodiversity offsetting, which means compensation for ecological and environmental damage caused by development activity, has recently been gaining strong political support around the world. One common criticism levelled at offsets is that they exchange certain and almost immediate losses for uncertain future gains. In the case of restoration offsets, gains may be realized after a time delay of decades, and with considerable uncertainty. Here we focus on offset multipliers, which are ratios between damaged and compensated amounts (areas) of biodiversity. Multipliers have the attraction of being an easily understandable way of deciding the amount of offsetting needed. On the other hand, exact values of multipliers are very difficult to compute in practice if at all possible. We introduce a mathematical method for deriving minimum levels for offset multipliers under the assumption that offsetting gains must compensate for the losses (no net loss offsetting). We calculate absolute minimum multipliers that arise from time discounting and delayed emergence of offsetting gains for a one-dimensional measure of biodiversity. Despite the highly simplified model, we show that even the absolute minimum multipliers may easily be quite large, in the order of dozens, and theoretically arbitrarily large, contradicting the relatively low multipliers found in literature and in practice. While our results inform policy makers about realistic minimal offsetting requirements, they also challenge many current policies and show the importance of rigorous models for computing (minimum) offset multipliers. The strength of the presented method is that it requires minimal underlying information. We include a supplementary spreadsheet tool for calculating multipliers to facilitate application. PMID:25821578
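A minimal version of the "absolute minimum multiplier" argument for a one-dimensional biodiversity measure: if offset gains are realized after a delay of t years, future value is discounted at rate r, and the restoration succeeds with probability p, then no net loss requires m · p · (1+r)^(-t) ≥ 1, giving m ≥ (1+r)^t / p. The numbers below are illustrative, not from the paper:

```python
def minimum_offset_multiplier(delay_years, discount_rate, success_prob=1.0):
    """No-net-loss condition: discounted, risk-weighted gains must cover
    the immediate loss, so m >= (1 + r)**t / p."""
    return (1.0 + discount_rate) ** delay_years / success_prob

# a 50-year restoration delay at a 5% discount rate and 60% success chance
print(round(minimum_offset_multiplier(50, 0.05, 0.6), 1))  # → 19.1
```

Even this crude bound shows how multipliers climb into the dozens once delays stretch to decades or success probabilities drop, consistent with the paper's point that commonly used low multipliers can fall short of no net loss.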
47 CFR 24.53 - Calculation of height above average terrain (HAAT).
Code of Federal Regulations, 2012 CFR
2012-10-01
... height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data from... Digital Chart of the World (DCW) may be used. (c) Radial average terrain elevation is calculated as the...
47 CFR 24.53 - Calculation of height above average terrain (HAAT).
Code of Federal Regulations, 2011 CFR
2011-10-01
... height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data from... Digital Chart of the World (DCW) may be used. (c) Radial average terrain elevation is calculated as the...
47 CFR 24.53 - Calculation of height above average terrain (HAAT).
Code of Federal Regulations, 2010 CFR
2010-10-01
... height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data from... Digital Chart of the World (DCW) may be used. (c) Radial average terrain elevation is calculated as the...
47 CFR 24.53 - Calculation of height above average terrain (HAAT).
Code of Federal Regulations, 2014 CFR
2014-10-01
... height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data from... Digital Chart of the World (DCW) may be used. (c) Radial average terrain elevation is calculated as the...
47 CFR 24.53 - Calculation of height above average terrain (HAAT).
Code of Federal Regulations, 2013 CFR
2013-10-01
... height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data from... Digital Chart of the World (DCW) may be used. (c) Radial average terrain elevation is calculated as the...
40 CFR 86.1370-2007 - Not-To-Exceed test procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... that include discrete regeneration events and that send a recordable electronic signal indicating the start and end of the regeneration event, determine the minimum averaging period for each NTE event that... averaging period is used to determine whether the individual NTE event is a valid NTE event. For engines...
40 CFR 69.41 - New exemptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operating specifications. At a minimum, the wind direction data will be monitored, collected and reported as 1-hour averages, starting on the hour. If the average wind direction for a given hour is from within the designated sector, the wind will be deemed to have flowed from within the sector for that hour...
40 CFR 69.41 - New exemptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operating specifications. At a minimum, the wind direction data will be monitored, collected and reported as 1-hour averages, starting on the hour. If the average wind direction for a given hour is from within the designated sector, the wind will be deemed to have flowed from within the sector for that hour...
Time-of-day effects on voice range profile performance in young, vocally untrained adult females.
van Mersbergen, M R; Verdolini, K; Titze, I R
1999-12-01
Time-of-day effects on voice range profile performance were investigated in 20 vocally healthy untrained women between the ages of 18 and 35 years. Each subject produced two complete voice range profiles: one in the morning and one in the evening, about 36 hours apart. The order of morning and evening trials was counterbalanced across subjects. Dependent variables were (1) average minimum and average maximum intensity, (2) voice range profile area, and (3) center of gravity (median semitone pitch and median intensity). In this study, the results failed to reveal any clear evidence of time-of-day effects on voice range profile performance for any of the dependent variables. However, a reliable interaction of time-of-day and trial order was obtained for average minimum intensity. Investigation of other subject populations, in particular trained vocalists or those with laryngeal lesions, is required for any generalization of the results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onozato, Yusuke; Kadoya, Noriyuki, E-mail: kadoya.n@rad.med.tohoku.ac.jp; Fujita, Yukio
2014-06-01
Purpose: The purpose of this study was to estimate the accuracy of the dose calculation of On-Board Imager (Varian, Palo Alto, CA) cone beam computed tomography (CBCT) with deformable image registration (DIR), using the multilevel-threshold (MLT) algorithm and histogram matching (HM) algorithm in pelvic radiation therapy. Methods and Materials: One pelvis phantom and 10 patients with prostate cancer treated with intensity modulated radiation therapy were studied. To minimize the effect of organ deformation and different Hounsfield unit values between planning CT (PCT) and CBCT, we modified CBCT (mCBCT) with DIR by using the MLT (mCBCT{sub MLT}) and HM (mCBCT{sub HM}) algorithms. To evaluate the accuracy of the dose calculation, we compared dose differences in dosimetric parameters (mean dose [D{sub mean}], minimum dose [D{sub min}], and maximum dose [D{sub max}]) for planning target volume, rectum, and bladder between PCT (reference) and CBCTs or mCBCTs. Furthermore, we investigated the effect of organ deformation compared with DIR and rigid registration (RR). We determined whether dose differences between PCT and mCBCTs were significantly lower than in CBCT by using Student t test. Results: For patients, the average dose differences in all dosimetric parameters of CBCT with DIR were smaller than those of CBCT with RR (eg, rectum; 0.54% for DIR vs 1.24% for RR). For the mCBCTs with DIR, the average dose differences in all dosimetric parameters were less than 1.0%. Conclusions: We evaluated the accuracy of the dose calculation in CBCT, mCBCT{sub MLT}, and mCBCT{sub HM} with DIR for 10 patients. The results showed that dose differences in D{sub mean}, D{sub min}, and D{sub max} in mCBCTs were within 1%, which were significantly better than those in CBCT, especially for the rectum (P<.05). Our results indicate that the mCBCT{sub MLT} and mCBCT{sub HM} can be useful for improving the dose calculation for adaptive radiation therapy.
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters x, knowing prior information on these parameters and a likelihood that gives the probability density of observing a data set given x. To solve this problem, two major paths can be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations existing in traditional adjustment procedures based on chi-square minimization, and they allow alternative choices of probability density distributions for priors and likelihoods. This paper proposes the use of what we are calling Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal through the resonance to the continuum range, for all nuclear reaction models at these energies. Algorithms are presented based on Monte-Carlo sampling and Markov chains.
The objectives of BMC are to provide a reference calculation for validating the GLS calculations and approximations, to test the effects of the probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross section data assimilation, are presented.
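A toy illustration of the BMC idea: a random-walk Metropolis chain (Markov chain Monte Carlo) sampling the posterior of a single "model parameter" under a Gaussian likelihood and a broad Gaussian prior, whose posterior mean should reproduce the least-squares (GLS-like) answer. Everything here (the model, the prior, the step size) is an invented minimal example, not the nuclear-data application:

```python
import numpy as np

rng = np.random.default_rng(42)

# toy "evaluation": infer one parameter x from noisy observations of it
true_x, sigma = 2.0, 0.5
data = true_x + rng.normal(0.0, sigma, size=40)

def log_posterior(x, prior_mean=0.0, prior_sd=10.0):
    # log prior (broad Gaussian) + log likelihood (Gaussian, known sigma)
    lp = -0.5 * ((x - prior_mean) / prior_sd) ** 2
    ll = -0.5 * np.sum((data - x) ** 2) / sigma ** 2
    return lp + ll

def metropolis(n_samples, step=0.2, x0=0.0):
    """Random-walk Metropolis: a Markov chain whose stationary
    distribution is the posterior pdf(prior) x likelihood."""
    chain, x, lp = [], x0, log_posterior(x0)
    for _ in range(n_samples):
        prop = x + rng.normal(0.0, step)
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain)

chain = metropolis(20000)[5000:]   # discard burn-in
# with a broad prior, the posterior mean ≈ the least-squares estimate
print(round(chain.mean(), 2))
```

Unlike a chi-square minimization, the chain also delivers the full posterior distribution (e.g. `chain.std()` as an uncertainty), and the same machinery works when the posterior is multimodal.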
High charge state carbon and oxygen ions in Earth's equatorial quasi-trapping region
NASA Technical Reports Server (NTRS)
Christon, S. P.; Hamilton, D. C.; Gloeckler, G.; Eastmann, T. E.
1994-01-01
Observations of energetic (1.5 - 300 keV/e) medium-to-high charge state (+3 less than or equal to Q less than or equal to +7) solar wind origin C and O ions made in the quasi-trapping region (QTR) of Earth's magnetosphere are compared to ion trajectories calculated in model equatorial magnetospheric magnetic and electric fields. These comparisons indicate that solar wind ions entering the QTR on the nightside as an energetic component of the plasma sheet exit the region on the dayside, experiencing little or no charge exchange on the way. Measurements made by the CHarge Energy Mass (CHEM) ion spectrometer on board the Active Magnetospheric Particle Tracer Explorer/Charge Composition Explorer (AMPTE/CCE) spacecraft at 7 less than L less than 9 from September 1984 to January 1989 are the source of the new results contained herein: quantitative long-term determination of number densities, average energies, energy spectra, local time distributions, and their variation with geomagnetic disturbance level as indexed by Kp. Solar wind primaries (ions with charge states unchanged) and their secondaries (ions with generally lower charge states produced from primaries in the magnetosphere via charge exchange) are observed throughout the QTR and have distinctly different local time variations that persist over the entire 4-year analysis interval. During Kp larger than or equal to 3 deg intervals, primary ion (e.g., O(+6)) densities exhibit a pronounced predawn maximum with average energy minimum and a broad near-local-noon density minimum with average energy maximum. Secondary ion (e.g., O(+5)) densities do not have an identifiable predawn peak; rather, they have a broad dayside maximum peaked in local morning and a nightside minimum. During Kp less than or equal to 2(-) intervals, primary ion density peaks are less intense, broader in local time extent, and centered near midnight, while secondary ion density local time variations diminish.
The long-time-interval baseline helps to refine and extend previous observations; for example, we show that the ionospheric contribution to O(+3) is negligible. Through comparison with model ion trajectories, we interpret the lack of pronounced secondary ion density peaks colocated with the primary density peaks to indicate that: (1) negligible charge exchange occurs at L greater than 7, that is, solar wind secondaries are produced at L less than 7, and (2) solar wind secondaries do not form a significant portion of the plasma sheet population injected into the QTR. We conclude that little of the energetic solar wind secondary ion population is recirculated through the magnetosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations, the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique is used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
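The effect of nearest-neighbor averaging (NNA) on scan sensitivity can be illustrated on a synthetic count map: averaging each cell with its neighbors reduces the background variance, which in turn lowers a Currie-style critical level. The grid, the 3×3 neighborhood, and the Poisson background below are assumptions for illustration, not ERG's actual 102F algorithm:

```python
import numpy as np

def nearest_neighbor_average(counts):
    """Replace each grid cell by the mean of itself and its (up to 8)
    neighbors; edge cells reuse their nearest values via edge padding."""
    padded = np.pad(counts.astype(float), 1, mode="edge")
    out = np.zeros(counts.shape, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di: 1 + di + counts.shape[0],
                          1 + dj: 1 + dj + counts.shape[1]]
    return out / 9.0

def critical_level(background_mean, n_averaged=1, alpha_z=1.645):
    """Currie-style critical level for Poisson background counts;
    averaging n cells scales the background sigma by 1/sqrt(n)."""
    return background_mean + alpha_z * np.sqrt(background_mean / n_averaged)

# synthetic flat background: a 20x20 map of Poisson(100) counts
rng = np.random.default_rng(3)
raw = rng.poisson(100.0, size=(20, 20))
smooth = nearest_neighbor_average(raw)
print(smooth.var() < raw.var())  # → True: smoothing cuts the variance
```

Lower variance means a smaller Lc, hence a lower MDC for the same false-positive rate, which is the sensitivity gain the validation tests confirmed for high-density data maps.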
[Nitrogen balance assessment in burn patients].
Beça, Andreia; Egipto, Paula; Carvalho, Davide; Correia, Flora; Oliveira, Bruno; Rodrigues, Acácio; Amarante, José; Medina, J Luís
2010-01-01
The burn injury probably represents the largest stimulus for muscle protein catabolism. This state is characterized by an accelerated catabolism of the lean or skeletal mass that results in a clinically negative nitrogen balance and muscle wasting. The determination of an appropriate value for protein intake is essential, since it is positively related to the nitrogen balance (NB); accordingly, several authors argue that a positive NB is the key parameter associated with the nutritional improvement of a burn patient. Our objectives were to evaluate the degree of protein catabolism by assessing the NB, and to define the nutritional support (protein needs) to implement in patients with a burned surface area (BSA) ≥ 10%. We prospectively evaluated the clinical files and scrutinized the clinical variables of interest. The NB was estimated according to three formulae. Each gram of nitrogen calculated by the NB was then converted into grams of protein, subtracted from or added to the protein intake (administered enterally or parenterally), and divided by kg of reference weight (kg Rweight), in an attempt to estimate the daily protein needs. The cohort consisted of 10 patients, 6 females, with an average age of 58(23) years and a mean BSA of 21.4(8.4)%, ranging from a minimum of 10.0% to a maximum of 35.0%. The average stay in the burn unit was 64.8(36.5) days. We observed significant differences between the 3 methods used for calculating the NB (p = 0.004); on average the NB was positive. When formula A was used, the average NB value was higher. In the attempt to estimate the needs in g Prot/kg Rweight/day, most values did not exceed, on average, 2.6 g Prot/kg Rweight/day, and no significant differences were found between patients with a BSA of 10-20% and those with a BSA > 20%.
Despite being able to estimate protein catabolism through these formulae and verifying that most values were above zero, wide individual fluctuations were visible over time. Compared with the mean value of 2.6 g Prot/kg Rweight/day that we established, the reference recommendation of 1.5-2 g Prot/kg Rweight/day appears to be an underestimate.
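The NB bookkeeping described above can be sketched with one common clinical formula (NB = protein intake / 6.25 − urinary urea nitrogen − insensible losses). The study's three formulae A-C are not given in the abstract, so this is a stand-in, and the numeric inputs are hypothetical:

```python
def nitrogen_balance(protein_intake_g, urinary_urea_nitrogen_g,
                     insensible_losses_g=4.0):
    """One common clinical NB formula (a stand-in, not necessarily the
    study's): NB (g N/day) = intake/6.25 - (UUN + insensible losses)."""
    return protein_intake_g / 6.25 - (urinary_urea_nitrogen_g
                                      + insensible_losses_g)

def protein_needs_g_per_kg(protein_intake_g, nb_g, ref_weight_kg):
    """Convert the NB back to protein (1 g N ≈ 6.25 g protein), add the
    deficit to (or subtract the surplus from) the intake, and express the
    result per kg of reference weight, as the abstract describes."""
    return (protein_intake_g - nb_g * 6.25) / ref_weight_kg

# hypothetical day: 100 g protein given, UUN 12 g → NB exactly zero
print(nitrogen_balance(100.0, 12.0))  # → 0.0
# hypothetical: NB of -3 g N at 70 kg implies ~1.7 g Prot/kg/day needed
print(round(protein_needs_g_per_kg(100.0, -3.0, 70.0), 1))  # → 1.7
```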
40 CFR 63.1975 - How do I calculate the 3-hour block average used to demonstrate compliance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 12 2010-07-01 2010-07-01 true How do I calculate the 3-hour block average used to demonstrate compliance? 63.1975 Section 63.1975 Protection of Environment ENVIRONMENTAL... block average used to demonstrate compliance? Averages are calculated in the same way as they are...
41 CFR 102-34.60 - How do we calculate the average fuel economy for Government motor vehicles?
Code of Federal Regulations, 2014 CFR
2014-01-01
... average fuel economy for Government motor vehicles? 102-34.60 Section 102-34.60 Public Contracts and... How do we calculate the average fuel economy for Government motor vehicles? You must calculate the average fuel economy for Government motor vehicles as follows: (a) Because there are so many motor vehicle...
41 CFR 102-34.60 - How do we calculate the average fuel economy for Government motor vehicles?
Code of Federal Regulations, 2013 CFR
2013-07-01
... average fuel economy for Government motor vehicles? 102-34.60 Section 102-34.60 Public Contracts and... How do we calculate the average fuel economy for Government motor vehicles? You must calculate the average fuel economy for Government motor vehicles as follows: (a) Because there are so many motor vehicle...
41 CFR 102-34.60 - How do we calculate the average fuel economy for Government motor vehicles?
Code of Federal Regulations, 2012 CFR
2012-01-01
... average fuel economy for Government motor vehicles? 102-34.60 Section 102-34.60 Public Contracts and... How do we calculate the average fuel economy for Government motor vehicles? You must calculate the average fuel economy for Government motor vehicles as follows: (a) Because there are so many motor vehicle...
41 CFR 102-34.60 - How do we calculate the average fuel economy for Government motor vehicles?
Code of Federal Regulations, 2011 CFR
2011-01-01
... average fuel economy for Government motor vehicles? 102-34.60 Section 102-34.60 Public Contracts and... How do we calculate the average fuel economy for Government motor vehicles? You must calculate the average fuel economy for Government motor vehicles as follows: (a) Because there are so many motor vehicle...
Granato, Gregory E.; Barlow, Paul M.
2005-01-01
Transient numerical ground-water-flow simulation and optimization techniques were used to evaluate potential effects of instream-flow criteria and water-supply demands on ground-water development options and resultant streamflow depletions in the Big River Area, Rhode Island. The 35.7 square-mile (mi2) study area includes three river basins, the Big River Basin (30.9 mi2), the Carr River Basin (which drains to the Big River Basin and is 7.33 mi2 in area), the Mishnock River Basin (3.32 mi2), and a small area that drains directly to the Flat River Reservoir. The overall objective of the simulations was to determine the amount of ground water that could be withdrawn from the three basins when constrained by streamflow requirements at four locations in the study area and by maximum rates of withdrawal at 13 existing and hypothetical well sites. The instream-flow requirement for the outlet of each basin and the outfall of Lake Mishnock were the primary variables that limited the amount of ground water that could be withdrawn. A requirement to meet seasonal ground-water-demand patterns also limits the amount of ground water that could be withdrawn by up to about 50 percent of the total withdrawals without the demand-pattern constraint. Minimum water-supply demands from a public water supplier in the Mishnock River Basin, however, did not have a substantial effect on withdrawals in the Big River Basin. Hypothetical dry-period instream-flow requirements and the effects of artificial recharge also affected the amount of ground water that could be withdrawn. Results of simulations indicate that annual average ground-water withdrawal rates that range up to 16 million gallons per day (Mgal/d) can be withdrawn from the study area under simulated average hydrologic conditions depending on instream-flow criteria and water-supply demand patterns. 
Annual average withdrawals of 10 to 12 Mgal/d are possible for proposed demands of 3.4 Mgal/d in the Mishnock Basin, and for a constant annual instream-flow criterion of 0.5 cubic foot per second per square mile (ft3/s/mi2) at the four streamflow-constraint locations. An average withdrawal rate of 10 Mgal/d can meet estimates of future (2020) water-supply needs of surrounding communities in Rhode Island. This withdrawal rate represents about 13 percent of the average 2002 daily withdrawal from the Scituate Reservoir (76 Mgal/d), the State's largest water supply. Average annual withdrawal rates of 6 to 7 Mgal/d are possible for more stringent instream-flow criteria that might be used during dry-period hydrologic conditions. Two example scenarios of dry-period instream-flow constraints were evaluated: first, a minimum instream flow of 0.1 cubic foot per second at any of the four constraint locations; and second, a minimum instream flow of 10 percent of the minimum monthly streamflow estimate for each streamflow-constraint location during the period 1961-2000. The State of Rhode Island is currently (2004) considering methods for establishing instream-flow criteria for streams within the State. Twelve alternative annual, seasonal, or monthly instream-flow criteria that have been or are being considered for application in southeastern New England were used as hypothetical constraints on maximum ground-water-withdrawal rates in management-model calculations. Maximum ground-water-withdrawal rates ranged from 5 to 16 Mgal/d under five alternative annual instream-flow criteria. Maximum ground-water-withdrawal rates ranged from 0 to 13.6 Mgal/d under seven alternative seasonal or monthly instream-flow criteria. The effect of ground-water withdrawals on seasonal variations in monthly average streamflows under each criterion also were compared.
Evaluation of management-model results indicates that a single annual instream-flow criterion may be sufficient to preserve seasonal variations in monthly average streamflows and meet water-supply demands in the Big River Area, because withdrawals from wells in the Big
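A response-matrix management model of the kind described above can be sketched as a linear program (a minimal illustration with hypothetical wells, response coefficients, and flow requirements, not the authors' actual model): total withdrawal is maximized subject to streamflow-depletion constraints at the control sites, solved with `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical response matrix: fraction of pumping at each of 4 wells
# that appears as streamflow depletion at each of 2 constraint sites.
R = np.array([[0.9, 0.6, 0.2, 0.1],
              [0.1, 0.3, 0.7, 0.8]])
unstressed = np.array([12.0, 9.0])          # streamflow without pumping, cfs (assumed)
required = np.array([6.0, 5.0])             # instream-flow requirements, cfs (assumed)
well_cap = np.array([4.0, 4.0, 3.0, 3.0])   # maximum rate per well (assumed)

# Maximize sum(q)  ->  minimize -sum(q), subject to R @ q <= unstressed - required.
res = linprog(c=-np.ones(4),
              A_ub=R, b_ub=unstressed - required,
              bounds=[(0, cap) for cap in well_cap])
total = -res.fun
print(f"maximum total withdrawal: {total:.2f} cfs")
```

Transient models like the one in this study repeat such constraints for every stress period, so seasonal demand patterns and dry-period criteria tighten the feasible region in the same way.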
Link, Karl-Heinrich; Coy, Peter; Roitman, Mark; Link, Carola; Kornmann, Marko; Staib, Ludger
2017-01-01
Background To answer the question whether minimum caseloads need to be stipulated in the German S3 (or any other) guidelines for colorectal cancer, we analyzed the current representative literature. The question is important regarding medical quality as well as health economics and policy. Methods A literature search was conducted in PubMed for papers concerning ‘colon cancer’ (CC), ‘rectal cancer’ (RC), and ‘colorectal cancer’ (CRC), with ‘results', ‘quality’, and ‘mortality’ between the years 2000 and 2016 being relevant factors. We graded the recommendations as ‘pro’, ‘maybe’, or ‘contra’ in terms of a significant correlation between hospital volume (HV) or surgeon volume (SV) and treatment quality. We also listed the recommended numbers suggested for HV or SV as minimum caseloads and calculated and discussed the socio-economic impact of setting minimum caseloads for CRC. Results The correlations of caseloads of hospitals or surgeons turned out to be highly controversial concerning the influence of HV or SV on short- and long-term surgical treatment quality of CRC. Specialized statisticians made the point that the reports in the literature might not use the optimal biometrical analytical/reporting methods. A Dutch analysis showed that if minimum caseloads (e.g. >50 CRC resections per year) were introduced, many hospitals with proven good treatment quality would be excluded while hospitals with below-average treatment quality would be included. Our economic analysis envisioned that a yearly loss of EUR <830,000 might ensue for hospitals with volumes <50 per year. Conclusions Caseload (HV, SV) definitely is an inconsistent surrogate parameter for treatment quality in the surgery of CC, RC, or CRC. If caseloads are used at all, the lowest tolerable numbers should be combined with the highest demands for structural, process, and result quality in the surgical/interdisciplinary treatment of CC and RC, imposed and independently controlled. 
Hospitals fulfilling these demands should be medically and socio-economically preferred concerning the treatment of CC and RC patients. PMID:28560230
Serra-Guillén, C; Llombart, B; Nagore, E; Guillén, C; Requena, C; Traves, V; Kindem, S; Alcalá, R; Rivas, N; Sanmartín, O
2015-01-01
Dermatofibrosarcoma protuberans (DFSP) is an uncommon skin tumour with aggressive local growth. Whether DFSP should be treated with conventional surgery (CS) or Mohs micrographic surgery (MMS) has long been a topic of debate. To calculate, in a large series of DFSP treated by MMS, the minimum margin that would have been needed to achieve complete clearance by CS. Secondly, to calculate the percentage of healthy tissue that was preserved by MMS rather than CS with 2- and 3-cm margins. The minimum margin was calculated by measuring the largest distance from the visible edge of the tumour to the edge of the definitive surgical defect. Tumour and surgical defect areas for hypothetical CS with 2- and 3-cm margins were calculated using AutoCAD for Windows. A mean minimum margin of 1·34 cm was required to achieve complete clearance for the 74 tumours analysed. The mean percentages of skin spared using MMS rather than CS with 2- and 3-cm margins were 49·4% and 67·9%, respectively. MMS can achieve tumour clearance with smaller margins and greater preservation of healthy tissue than CS. © 2014 British Association of Dermatologists.
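The tissue-sparing comparison can be illustrated with a simple geometric sketch (an idealized elliptical tumour and uniform margins, not the AutoCAD tracings used in the study; the tumour dimensions below are hypothetical):

```python
import math

def defect_area(a_cm, b_cm, margin_cm):
    """Area of an ellipse whose semi-axes are enlarged by a uniform margin."""
    return math.pi * (a_cm + margin_cm) * (b_cm + margin_cm)

# Hypothetical tumour: 2 cm x 1.5 cm (semi-axes 1.0 and 0.75 cm).
a, b = 1.0, 0.75
mms = defect_area(a, b, 1.34)   # mean minimum margin reported for MMS
cs2 = defect_area(a, b, 2.0)    # conventional surgery with a 2-cm margin
spared = 100 * (1 - mms / cs2)
print(f"skin spared by MMS vs 2-cm CS: {spared:.1f}%")
```

Because the defect area grows roughly quadratically with the margin, even a modest reduction in margin preserves a disproportionate amount of healthy skin.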
Radiographic Findings in Revision Anterior Cruciate Ligament Reconstructions from the MARS Cohort
2013-01-01
The Multicenter ACL (anterior cruciate ligament) Revision Study (MARS) group was developed to investigate revision ACL reconstruction outcomes. An important part of this is obtaining and reviewing radiographic studies. The goal for this radiographic analysis is to establish radiographic findings for a large revision ACL cohort to allow comparison with future studies. The study was designed as a cohort study. Various established radiographic parameters were measured by three readers. These included sagittal and coronal femoral and tibial tunnel position, joint space narrowing, and leg alignment. Inter- and intraobserver comparisons were performed. For femoral sagittal position, 42% of tunnels were more than 40% anterior to the posterior cortex. For sagittal tibia tunnel position, 49% demonstrated some impingement on full-extension lateral radiographs. Limb alignment averaged 43% medial to the medial edge of the tibial plateau. On the Rosenberg view (45-degree flexion view), the minimum joint space in the medial compartment averaged 106% of the opposite knee, but it ranged down to a minimum of 4.6%. Lateral compartment narrowing at its minimum on the Rosenberg view averaged 91.2% of the opposite knee, but it ranged down to a minimum of 0.0%. On the coronal view, verticality, as measured by the angle from the center of the tibial tunnel aperture to the center of the femoral tunnel aperture, measured 15.8 ± 6.9 degrees from vertical. This study represents the radiographic findings in the largest revision ACL reconstruction series ever assembled. Findings were generally consistent with those previously demonstrated in the literature. PMID:23404491
Protein side chain rotational isomerization: A minimum perturbation mapping study
NASA Astrophysics Data System (ADS)
Haydock, Christopher
1993-05-01
A theory of the rotational isomerization of the indole side chain of tryptophan-47 of variant-3 scorpion neurotoxin is presented. The isomerization potential energy, entropic part of the isomerization free energy, isomer probabilities, transition state theory reaction rates, and indole order parameters are calculated from a minimum perturbation mapping over tryptophan-47 χ1×χ2 torsion space. A new method for calculating the fluorescence anisotropy from molecular dynamics simulations is proposed. The method is based on an expansion that separates transition dipole orientation from chromophore dynamics. The minimum perturbation potential energy map is inverted and applied as a bias potential for a 100 ns umbrella sampling simulation. The entropic part of the isomerization free energy as calculated by minimum perturbation mapping and umbrella sampling are in fairly close agreement. Throughout, the approximation is made that two glutamine and three tyrosine side chains neighboring tryptophan-47 are truncated at the Cβ atom. Comparison with the previous combination thermodynamic perturbation and umbrella sampling study suggests that this truncated neighbor side chain approximation leads to at least a qualitatively correct theory of tryptophan-47 rotational isomerization in the wild type variant-3 scorpion neurotoxin. Analysis of van der Waals interactions in a transition state region indicates that for the simulation of barrier crossing trajectories a linear combination of three specially defined dihedral angles will be superior to a simple side chain dihedral reaction coordinate.
Does Change in the Arctic Sea Ice Indicate Climate Change? A Lesson Using Geospatial Technology
ERIC Educational Resources Information Center
Bock, Judith K.
2011-01-01
Arctic sea ice has not melted to the 2007 extent since that year, but annual summer melt extents do continue to be less than the decadal average. Climate fluctuations are well documented by geologic records. Averages are usually based on a minimum of 10 years of averaged data. It is typical for fluctuations to occur from year to year and season to…
NASA Technical Reports Server (NTRS)
Walch, Stephen P.
1995-01-01
We report calculations of the minimum energy pathways connecting CH2 + N2 to diazomethane and diazirine, for the rearrangement of diazirine to diazomethane, for the dissociation of diazirine to HCN2+H, and of diazomethane to CH2N+N. The calculations use Complete Active Space Self-Consistent Field (CASSCF) derivative methods to characterize the stationary points and Internally Contracted Configuration Interaction (ICCI) to determine the energetics. The calculations suggest a potential new source of prompt NO from the reaction of CH2 with N2 to give diazirine, and subsequent reaction of diazirine with hydrogen abstractors to form doublet HCN2, which leads to HCN+N(S-4) on the previously studied CH+N2 surface. The calculations also predict accurate 0 K heats of formation of 77.7 kcal/mol and 68.0 kcal/mol for diazirine and diazomethane, respectively.
Influence of air temperature on the first flowering date of Prunus yedoensis Matsum
Shi, Peijian; Chen, Zhenghong; Yang, Qingpei; Harris, Marvin K; Xiao, Mei
2014-01-01
Climate change is expected to have a significant effect on the first flowering date (FFD) of plants flowering in early spring. Prunus yedoensis Matsum is a good model plant for analyzing this effect. In this study, we used a degree day model to analyze the effect of air temperatures on the FFDs of P. yedoensis at Wuhan University using a long time series from 1951 to 2012. First, the starting date (7 February) is determined according to the lowest correlation coefficient between the FFD and the daily average accumulated degree days (ADD). Second, the base temperature (−1.2°C) is determined according to the lowest root mean square error (RMSE) between the observed and predicted FFDs based on the mean of 62-year ADDs. Finally, based on this combination of starting date and base temperature, the daily average ADD of every year was calculated. Performing a linear fit of the daily average ADD against year, we find an increasing trend, indicating climate warming as seen through a biological climatic indicator. In addition, using the generalized additive model, we find that the minimum annual temperature also has a significant effect on the FFD of P. yedoensis. This study provides a method for analyzing the effect of climate change on the FFD of plants flowering in early spring. PMID:24558585
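The accumulated-degree-day calculation at the core of the model can be sketched as follows (the starting date of 7 February and base temperature of −1.2°C come from the abstract; the temperature series and the 250 degree-day flowering threshold are synthetic assumptions):

```python
import numpy as np

def accumulated_degree_days(daily_mean_temp, base_temp=-1.2):
    """Running sum of daily exceedances of the base temperature,
    counted from the starting date (day 0 = 7 February)."""
    excess = np.clip(np.asarray(daily_mean_temp) - base_temp, 0.0, None)
    return np.cumsum(excess)

# Synthetic spring warm-up: daily mean temperature rising from 2 to 18 degC.
temps = np.linspace(2.0, 18.0, 60)
add = accumulated_degree_days(temps)

# First flowering is predicted when ADD reaches the multi-year mean requirement
# (a hypothetical threshold of 250 degree-days is used here).
ffd_index = int(np.argmax(add >= 250.0))
print(f"predicted FFD: day {ffd_index} after 7 February")
```

The study's calibration step simply repeats this calculation over a grid of candidate starting dates and base temperatures, keeping the pair that minimizes the RMSE between observed and predicted FFDs.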
Wartmann, Flurina M; Purves, Ross S; van Schaik, Carel P
2010-04-01
Quantification of the spatial needs of individuals and populations is vitally important for management and conservation. Geographic information systems (GIS) have recently become important analytical tools in wildlife biology, improving our ability to understand animal movement patterns, especially when very large data sets are collected. This study aims at combining the field of GIS with primatology to model and analyse space-use patterns of wild orang-utans. Home ranges of female orang-utans in the Tuanan Mawas forest reserve in Central Kalimantan, Indonesia were modelled with kernel density estimation methods. Kernel results were compared with minimum convex polygon estimates, and were found to perform better, because they were less sensitive to sample size and produced more reliable estimates. Furthermore, daily travel paths were calculated from 970 complete follow days. Annual ranges for the resident females were approximately 200 ha and remained stable over several years; total home range size was estimated to be 275 ha. On average, each female shared a third of her home range with each neighbouring female. Orang-utan females in Tuanan built their night nest on average 414 m away from the morning nest, whereas average daily travel path length was 777 m. A significant effect of fruit availability on day path length was found. Sexually active females covered longer distances per day and may also temporarily expand their ranges.
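The minimum convex polygon estimate that the kernel results were compared against can be sketched directly from location fixes (synthetic coordinates; in 2-D, `ConvexHull.volume` is the enclosed area):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
# Synthetic daily fixes (x, y) in metres for one female.
fixes = rng.normal(loc=0.0, scale=500.0, size=(970, 2))

hull = ConvexHull(fixes)          # minimum convex polygon around all fixes
area_ha = hull.volume / 10_000    # m^2 -> hectares
print(f"MCP home-range estimate: {area_ha:.0f} ha")
```

Because the MCP must enclose every fix, a single excursion inflates the estimate, which is one reason the kernel estimates were found to be less sensitive to sample size and more reliable.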
NASA Astrophysics Data System (ADS)
Kumar, Gulshan; Kumari, Punam; Kumar, Mukesh; Kumar, Arvind; Prasher, Sangeeta; Dhar, Sunil
2017-07-01
The present study deals with the radon estimation in 40 water samples collected from different natural resources and radium content in the soils of the Mandi-Dharamshala Region. Radon concentration is determined using a RAD-7 detector, and the radium content of the soil in the vicinity of the water resources is also measured using an LR-115 type II detector, which is further correlated with the radon concentration in the water samples. The potential health risks associated with 222Rn have also been estimated. The results show that the radon concentrations lie within the range of 1.51 to 22.7 Bq/l, with an average value of 5.93 Bq/l for all types of water samples taken from the study area. The radon concentration in the water samples is found to be lower than 100 Bq/l, the exposure limit for radon in water recommended by the World Health Organization. The calculated average effective dose of radon received by the people of the study area is 0.022 mSv/y, with a maximum of 0.083 mSv/y and a minimum of 0.0056 mSv/y. The total effective dose at all sites of the studied area is found to be within the safe limit (0.1 mSv/y) recommended by the World Health Organization. The average value of the radium content in the soil of the study area is 6.326 Bq/kg.
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combination of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various bias. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
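The simulation behind one point on a bias detection curve can be sketched as follows (synthetic sodium results; the window, truncation limits, and control limits are illustrative assumptions, not the published settings):

```python
import numpy as np

def results_to_detection(series, bias, window=20, low=138.0, high=142.0,
                         trunc_low=120.0, trunc_high=160.0):
    """Number of results needed before the moving average of the biased
    series crosses a control limit; None if never detected.
    Values outside the truncation limits are winsorized here for simplicity."""
    biased = np.clip(series + bias, trunc_low, trunc_high)
    for i in range(window, len(biased)):
        ma = biased[i - window:i].mean()
        if ma < low or ma > high:
            return i
    return None

rng = np.random.default_rng(1)
sodium = rng.normal(140.0, 2.0, size=5000)  # synthetic sodium results, mmol/L

# One point on a bias detection curve: median N for a +3 mmol/L bias,
# taken over independent 500-result segments.
n_detect = [results_to_detection(sodium[k:k + 500], 3.0) for k in range(0, 4500, 500)]
median_n = int(np.median([n for n in n_detect if n is not None]))
print(f"median results to detect +3 mmol/L bias: {median_n}")
```

Repeating this for a range of bias magnitudes and plotting the median N against bias traces out a bias detection curve; repeating it across candidate MA settings is what allows the graphical comparison described above.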
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-13
[Tariff-table fragment; recoverable details: customers with a minimum of 4 MW of metered load are settled using WACM hourly pricing; imbalances of less than 7.5% (minimum 10 MW) of metered load fall within a no-penalty band; no Regulation Service charges are assessed when deviations are less than or equal to 0.5 percent of hourly average load.]
J. N. Kochenderfer; G. W. Wendel; H. Clay Smith
1984-01-01
A "minimum-standard" forest truck road that provides efficient and environmentally acceptable access for several forest activities is described. Cost data are presented for eight of these roads constructed in the central Appalachians. The average cost per mile excluding gravel was $8,119. The range was $5,048 to $14,424. Soil loss was measured from several...
EnviroAtlas - Minimum Temperature 1950 - 2099 for the Conterminous United States
The EnviroAtlas Climate Scenarios were generated from NASA Earth Exchange (NEX) Downscaled Climate Projections (NEX-DCP30) ensemble averages (the average of over 30 available climate models) for each of the four representative concentration pathways (RCP) for the contiguous U.S. at 30 arc-second (approx. 800 m2) spatial resolution. NEX-DCP30 mean monthly minimum temperature for the 4 RCPs (2.6, 4.5, 6.0, 8.5) were organized by season (Winter, Spring, Summer, and Fall) and annually for the years 2006-2099. Additionally, mean monthly minimum temperature for the ensemble average of all historic runs is organized similarly for the years 1950-2005. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Archer, Roger J.
1978-01-01
Minimum average 7-day, 10-year flows at 67 gaging stations and 173 partial-record stations in the Hudson River basin are given in tabular form. Variation of the 7-day, 10-year low flow from point to point in selected reaches, and the corresponding times of travel, are shown graphically for Wawayanda Creek, Wallkill River, Woodbury-Moodna Creek, and the Fishkill Creek basins. The 7-day, 10-year low flow for the Saw Kill basin, and estimates of the 7-day, 10-year low flow of the Roeliff Jansen Kill at Ancram and of Birch Creek at Pine Hill, are given. Summaries of discharge from Rondout and Ashokan Reservoirs, in Ulster County, are also included. Minimum average 7-day, 10-year flows for gaging stations with 10 years or more of record were determined by log-Pearson Type III computation; those for partial-record stations were developed by correlation of discharge measurements made at the partial-record stations with discharge data from appropriate long-term gaging stations. The variation in low flows from point to point within the selected subbasins was estimated from available data and a regional regression formula. Time of travel at these flows in the four subbasins was estimated from available data and Boning's equations.
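The log-Pearson Type III computation of a 7-day, 10-year (7Q10) low flow can be sketched as follows (synthetic daily flows, not Hudson River basin data; `scipy.stats.pearson3` is fitted to the log-transformed annual 7-day minima, and the 7Q10 is taken as the 0.1 non-exceedance quantile):

```python
import numpy as np
from scipy.stats import pearson3

rng = np.random.default_rng(2)
# Synthetic record: 30 years x 365 daily flows (cfs), lognormal-ish.
years = rng.lognormal(mean=3.0, sigma=0.6, size=(30, 365))

# Annual minimum 7-day average flow for each year.
kernel = np.ones(7) / 7
min7 = np.array([np.convolve(y, kernel, mode="valid").min() for y in years])

# Log-Pearson Type III: fit a Pearson III distribution to log10 of the minima.
params = pearson3.fit(np.log10(min7))
q7_10 = 10 ** pearson3.ppf(0.1, *params)   # 10-year recurrence = 0.1 quantile
print(f"7Q10 estimate: {q7_10:.1f} cfs")
```

Agency practice computes the skew, mean, and standard deviation of the log-transformed series with frequency-factor tables rather than a maximum-likelihood fit, but the quantile being estimated is the same.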
NASA Technical Reports Server (NTRS)
Kasper, J. C.; Stenens, M. L.; Stevens, M. L.; Lazarus, A. J.; Steinberg, J. T.; Ogilvie, Keith W.
2006-01-01
We present a study of the variation of the relative abundance of helium to hydrogen in the solar wind as a function of solar wind speed and heliographic latitude over the previous solar cycle. The average values of A(sub He), the ratio of helium to hydrogen number densities, are calculated in 25 speed intervals over 27-day Carrington rotations using Faraday Cup observations from the Wind spacecraft between 1995 and 2005. The higher speed and time resolution of this study compared to an earlier work with the Wind observations has led to the discovery of three new aspects of A(sub He) modulation during solar minimum from mid-1995 to mid-1997. First, we find that for solar wind speeds between 350 and 415 km/s, A(sub He) varies with a clear six-month periodicity, with a minimum value at the heliographic equatorial plane and a typical gradient of 0.01 per degree in latitude. For the slow wind this is a 30% effect. We suggest that the latitudinal gradient may be due to an additional dependence of coronal proton flux on coronal field strength or the stability of coronal loops. Second, once the gradient is subtracted, we find that A(sub He) is a remarkably linear function of solar wind speed. Finally, we identify a vanishing speed, at which A(sub He) is zero, of 259 km/s, and note that this speed corresponds to the minimum solar wind speed observed at one AU. The vanishing speed may be related to previous theoretical work in which enhancements of coronal helium lead to stagnation of the escaping proton flux. During solar maximum the A(sub He) dependences on speed and latitude disappear, and we interpret this as evidence of two source regions for slow solar wind in the ecliptic plane, one being the solar minimum streamer belt and the other likely being active regions.
NASA Astrophysics Data System (ADS)
Yamazaki, M.; Nakayama, S.; Zhu, C. Y.; Takahashi, M.
2017-11-01
We report on theoretical progress in time-resolved (e, 2e) electron momentum spectroscopy of the photodissociation dynamics of the deuterated acetone molecule at 195 nm. We have examined the predicted minimum energy reaction path to investigate whether the associated (e, 2e) calculations meet the experimental results. A noticeable difference between experiment and calculations has been found at around a binding energy of 10 eV, suggesting that the observed difference may originate, at least partly, in previously unconsidered non-minimum energy paths.
A New Potential Energy Surface for N+O2: Is There an NOO Minimum?
NASA Technical Reports Server (NTRS)
Walch, Stephen P.
1995-01-01
We report a new calculation of the N+O2 potential energy surface using complete active space self-consistent field internally contracted configuration interaction with the Dunning correlation consistent basis sets. The peroxy isomer of NO2 is found to be a very shallow minimum separated from NO+O by a barrier of only 0.3 kcal/mol (excluding zero-point effects). The entrance channel barrier height is estimated to be 8.6 kcal/mol for ICCI+Q calculations correlating all but the O1s and N1s electrons with a cc-pVQZ basis set.
NASA Technical Reports Server (NTRS)
Abe, K.; Fuke, H.; Haino, S.; Hams, T.; Hasegawa, M.; Horikoshi, A.; Kim, K. C.; Kusumoto, A.; Lee, M. H.; Makida, Y.;
2011-01-01
The energy spectrum of cosmic-ray antiprotons (p(raised bar)'s) was measured from data collected by the BESS-Polar II instrument during a long-duration flight over Antarctica in the solar minimum period of December 2007 through January 2008. The p(raised bar) spectrum measured by BESS-Polar II shows good consistency with secondary p(raised bar) calculations. Cosmologically primary p(raised bar)'s have been searched for by comparing the observed and calculated p(raised bar) spectra. The BESS-Polar II result shows no evidence of primary p(raised bar)'s originating from the evaporation of primordial black holes (PBHs).
NASA Technical Reports Server (NTRS)
Abe, K.; Fuke, H.; Haino, S.; Hams, T.; Hasegawa, M.; Horikoshi, A.; Kim, K. C.; Kusumoto, A.; Lee, M. H.; Makida, Y.;
2012-01-01
The energy spectrum of cosmic-ray antiprotons (p-bar's) from 0.17 to 3.5 GeV has been measured using 7886 p-bar's detected by BESS-Polar II during a long-duration flight over Antarctica near solar minimum in December 2007 and January 2008. This shows good consistency with secondary p-bar calculations. Cosmologically primary p-bar's have been investigated by comparing measured and calculated p-bar spectra. BESS-Polar II data show no evidence of primary p-bar's from the evaporation of primordial black holes.
VehiHealth: An Emergency Routing Protocol for Vehicular Ad Hoc Network to Support Healthcare System.
Bhoi, S K; Khilar, P M
2016-03-01
Survival of a patient depends on effective data communication in a healthcare system. In this paper, an emergency routing protocol for Vehicular Ad hoc Network (VANET) is proposed to quickly forward the current patient status information from the ambulance to the hospital to provide pre-medical treatment. As the ambulance takes time to reach the hospital, the ambulance doctor can provide immediate treatment to the patient in an emergency by sending patient status information to the hospital through the vehicles using vehicular communication. Second, the experienced doctors respond to the information by quickly sending treatment information to the ambulance. In this protocol, data is forwarded through the path which has less link breakage between the vehicles. This is done by calculating an intersection value I_value for the neighboring intersections using the current traffic information. The data is then forwarded through the intersection which has the minimum I_value. Simulation results show VehiHealth performs better than P-GEDIR, GyTAR, A-STAR and GSR routing protocols in terms of average end-to-end delay, number of link breakages, path length, and average response time.
Self-avoiding walks on scale-free networks
NASA Astrophysics Data System (ADS)
Herrero, Carlos P.
2005-01-01
Several kinds of walks on complex networks are currently used to analyze search and navigation in different systems. Many analytical and computational results are known for random walks on such networks. Self-avoiding walks (SAW's) are expected to be more suitable than unrestricted random walks to explore various kinds of real-life networks. Here we study long-range properties of random SAW's on scale-free networks, characterized by a degree distribution P(k) ~ k^(-γ). In the limit of large networks (system size N → ∞), the average number s_n of SAW's starting from a generic site increases as μ^n, with μ = <k^2>/<k> - 1.
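A quick numerical illustration of this large-network limit (a sketch assuming the mean-field estimate μ = <k^2>/<k> - 1 for uncorrelated networks; the degree sequence is synthetic, with γ = 3 and assumed degree cutoffs):

```python
import numpy as np

rng = np.random.default_rng(3)
# Degree sequence drawn from a continuous P(k) ~ k^-gamma on [k_min, k_max]
# by inverse-CDF sampling (gamma = 3, cutoffs assumed for illustration).
gamma, k_min, k_max = 3.0, 2, 1000
u = rng.random(100_000)
k = (k_min ** (1 - gamma)
     + u * (k_max ** (1 - gamma) - k_min ** (1 - gamma))) ** (1 / (1 - gamma))

# Mean-field connective constant for SAW's on an uncorrelated network:
# each step has roughly <k^2>/<k> - 1 fresh neighbours to continue into.
mu = (k ** 2).mean() / k.mean() - 1
print(f"estimated mu: {mu:.2f}")
```

For γ ≤ 3 the second moment <k^2> diverges with the cutoff k_max, so μ grows with network size, which is why SAW's spread so efficiently on scale-free networks.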
García-Valiñas, Maria A; Martínez-Espiñeira, Roberto; González-Gómez, Francisco
2010-12-01
Using information on a basic or "lifeline" level of domestic water use obtained from a water demand function based on a Stone-Geary utility function, a minimum water threshold of 128 m³ per household per year was estimated in a sample of municipalities in Southern Spain. As a second objective, water affordability indexes were then calculated that relate the cost of such a lifeline to average municipal income levels. The analysis of the factors behind the differences in that ratio across Andalusian municipalities shows that the relative cost of purchasing the lifeline appears inversely related to average income levels, revealing an element of regressivity in the component of water tariffs affecting the least superfluous part of the household's consumption. The main policy recommendation would involve redesigning water tariffs in order to improve access for lower income households to an amount of water sufficient to cover their basic needs. The proposed methodology could be applied to other geographical areas, both from developed and from developing countries, in order to analyze the degree of progressivity of the water tariffs currently in effect and in order to guide the design of more equitable regulatory policies. Copyright © 2010 Elsevier Ltd. All rights reserved.
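The affordability index itself is a simple ratio of lifeline cost to income, sketched here with a hypothetical increasing-block tariff (the 128 m³ lifeline comes from the study; the tariff blocks and household income are assumed):

```python
def lifeline_cost(volume_m3, blocks):
    """Cost under an increasing-block tariff given as (upper_limit, price) pairs."""
    cost, prev = 0.0, 0.0
    for upper, price in blocks:
        used = min(volume_m3, upper) - prev
        if used <= 0:
            break
        cost += used * price
        prev = upper
    return cost

# Hypothetical tariff: first 60 m3/yr at 0.30 EUR, next 60 at 0.60, rest at 1.20.
blocks = [(60, 0.30), (120, 0.60), (float("inf"), 1.20)]
cost = lifeline_cost(128, blocks)           # annual cost of the 128 m3 lifeline
affordability = 100 * cost / 20_000         # as % of a 20,000 EUR household income
print(f"lifeline cost: {cost:.2f} EUR/yr ({affordability:.3f}% of income)")
```

Computing this ratio for each municipality and regressing it on income is what reveals the regressivity discussed above: where the ratio falls as income rises, the lifeline weighs more heavily on poorer households.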
Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements
NASA Astrophysics Data System (ADS)
Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.
2012-12-01
To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
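The difference between the averaging techniques can be illustrated with a synthetic radial rake (the probe readings, sector areas, and mass fluxes below are assumed, not facility data): each total-pressure reading is weighted either by its sector area or by the local mass flow through that sector.

```python
import numpy as np

# Synthetic radial rake: 5 probes with local total pressure (kPa),
# sector areas (m^2), and local mass flux (kg/s/m^2) -- assumed values.
p_total = np.array([180.0, 195.0, 205.0, 210.0, 200.0])
area = np.array([0.008, 0.010, 0.012, 0.013, 0.011])
mass_flux = np.array([30.0, 45.0, 55.0, 60.0, 40.0])

area_avg = np.average(p_total, weights=area)
mass_avg = np.average(p_total, weights=area * mass_flux)  # mass flow = flux x area
print(f"area-averaged: {area_avg:.2f} kPa")
print(f"mass-averaged: {mass_avg:.2f} kPa")
```

Work averaging additionally weights by the enthalpy change and is omitted here for brevity; as the paper notes, the averaged values themselves differ little, but the uncertainty propagated through each weighting scheme differs substantially.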
40 CFR 60.2735 - Is there a minimum amount of monitoring data I must obtain?
Code of Federal Regulations, 2012 CFR
2012-07-01
... monitoring malfunctions, associated repairs, and required quality assurance or quality control activities for... periods, or required monitoring system quality assurance or control activities in calculations used to...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-03
... zero or the lowest Minimum Trading Increment or (ii) the Expanded Quote Range has been calculated as zero. The proposal codifies existing functionality during the Exchange's Opening Process. Specifically... either zero or the lowest Minimum Trading Increment and market order sell interest has a quantity greater...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-11
... we include in concession contracts a franchise fee payable to the Government that is based upon... establish the required minimum franchise fee for the new contract, that fee will reflect speculative... concessioner offers to meet or exceed the minimum franchise fee that we would establish under the standard LSI...
NASA Technical Reports Server (NTRS)
Perovich, D.; Gerland, S.; Hendricks, S.; Meier, Walter N.; Nicolaus, M.; Richter-Menge, J.; Tschudi, M.
2013-01-01
During 2013, Arctic sea ice extent remained well below normal, but the September 2013 minimum extent was substantially higher than the record-breaking minimum in 2012. Nonetheless, the minimum was still much lower than normal, and the long-term trend in Arctic September extent is -13.7% per decade relative to the 1981-2010 average. The less extreme conditions this year compared to 2012 were due to cooler temperatures and wind patterns that favored retention of ice through the summer. Sea ice thickness and volume remained near record-low levels, though indications are of slightly thicker ice compared to the record low of 2012.
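A trend of this form can be reproduced with a simple least-squares fit of September extent against year, expressed as percent per decade of the 1981-2010 mean (the extents below are synthetic, with an imposed decline; only the method, not the data, follows the report):

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1979, 2014)
# Synthetic September extents (10^6 km^2) with an imposed decline plus noise.
extent = 7.5 - 0.09 * (years - 1979) + rng.normal(0.0, 0.3, years.size)

baseline = extent[(years >= 1981) & (years <= 2010)].mean()
slope = np.polyfit(years, extent, 1)[0]          # 10^6 km^2 per year
trend_pct_decade = 100 * slope * 10 / baseline   # percent per decade vs baseline
print(f"trend: {trend_pct_decade:.1f}% per decade relative to 1981-2010")
```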
The volume-outcome relationship and minimum volume standards--empirical evidence for Germany.
Hentschker, Corinna; Mennicken, Roman
2015-06-01
For decades there has been an ongoing discussion about the quality of hospital care, leading, among other things, to the introduction of minimum volume standards in various countries. In this paper, we analyze the volume-outcome relationship for patients with intact abdominal aortic aneurysm and hip fracture. We define hypothetical minimum volume standards for both conditions and assess the consequences for access to hospital services in Germany. The results show clearly that patients treated in hospitals with a higher case volume have, on average, a significantly lower probability of death in both conditions. Furthermore, we show that the hypothetical minimum volume standards do not compromise overall access, measured as changes in travel times. Copyright © 2014 John Wiley & Sons, Ltd.
Code of Federal Regulations, 2010 CFR
2010-07-01
... performance test. v. If you use a venturi scrubber, maintaining the daily average pressure drop across the... Each new or reconstructed flame lamination affected source using a scrubber: a. Maintain the daily average scrubber inlet liquid flow rate above the minimum value established during the performance... b...
43 CFR 418.18 - Diversions at Derby Dam.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Dam must be managed to maintain minimum terminal flow to Lahontan Reservoir or the Carson River except... achieve an average terminal flow of 20 cfs or less during times when diversions to Lahontan Reservoir are not allowed (the flows must be averaged over the total time diversions are not allowed in that...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Each new or reconstructed flame lamination affected source using a scrubber: a. Maintain the daily average scrubber inlet liquid flow rate above the minimum value established during the performance... b. Maintain the daily average scrubber effluent pH within the operating range established during the...
Geomagnetism during solar cycle 23: Characteristics.
Zerbo, Jean-Louis; Amory-Mazaudier, Christine; Ouattara, Frédéric
2013-05-01
On the basis of more than 48 years of morphological analysis of yearly and monthly values of the sunspot number, the aa index, the solar wind speed, and the interplanetary magnetic field, we point out the particularities of geomagnetic activity during the period 1996-2009. We especially investigate the last cycle, cycle 23, and the long minimum that followed it. During this period, the lowest values of the yearly averaged IMF (3 nT) and the yearly averaged solar wind speed (364 km/s) were recorded in 1996 and 2009, respectively. The year 2003 is notable for recording the highest value of the yearly averaged solar wind speed (568 km/s), associated with the highest value of the yearly averaged aa index (37 nT). We also find that the observations during 2003 appear to be related to several coronal holes, which are known to generate high-speed wind streams. A long-term (more than one century) study of solar variability shows that the present period is similar to the beginning of the twentieth century. We especially present the morphological features of solar cycle 23, which was followed by a deep solar minimum.
Reactive Power Compensation Method Considering Minimum Effective Reactive Power Reserve
NASA Astrophysics Data System (ADS)
Gong, Yiyu; Zhang, Kai; Pu, Zhang; Li, Xuenan; Zuo, Xianghong; Zhen, Jiao; Sudan, Teng
2017-05-01
Based on a calculation model of the minimum generator reactive power reserve that guarantees power system voltage stability, generator reactive power reserves and reactive power compensation are combined into a multi-objective optimization problem, yielding a reactive power compensation optimization method that accounts for the minimum generator reactive power reserve. Through improvements to the objective function and the constraint conditions, the method determines the additional reactive power compensation required when, as system load grows, the reactive power available from generators alone cannot meet the requirements of safe operation, thereby solving the compensation problem under the minimum generator reactive power reserve constraint.
NASA Astrophysics Data System (ADS)
Oberhofer, Harald; Blumberger, Jochen
2010-12-01
We present a plane wave basis set implementation for the calculation of electronic coupling matrix elements of electron transfer reactions within the framework of constrained density functional theory (CDFT). Following the work of Wu and Van Voorhis [J. Chem. Phys. 125, 164105 (2006)], the diabatic wavefunctions are approximated by the Kohn-Sham determinants obtained from CDFT calculations, and the coupling matrix element calculated by an efficient integration scheme. Our results for intermolecular electron transfer in small systems agree very well with high-level ab initio calculations based on generalized Mulliken-Hush theory, and with previous local basis set CDFT calculations. The effect of thermal fluctuations on the coupling matrix element is demonstrated for intramolecular electron transfer in the tetrathiafulvalene-diquinone (Q-TTF-Q-) anion. Sampling the electronic coupling along density functional based molecular dynamics trajectories, we find that thermal fluctuations, in particular the slow bending motion of the molecule, can lead to changes in the instantaneous electron transfer rate by more than an order of magnitude. The thermal average, (<|H_ab|^2>)^(1/2) = 6.7 mH, is significantly higher than the value obtained for the minimum energy structure, |H_ab| = 3.8 mH. While CDFT in combination with generalized gradient approximation (GGA) functionals describes the intermolecular electron transfer in the studied systems well, exact exchange is required for Q-TTF-Q- in order to obtain coupling matrix elements in agreement with experiment (3.9 mH). The implementation presented opens up the possibility to compute electronic coupling matrix elements for extended systems where donor, acceptor, and the environment are treated at the quantum mechanical (QM) level.
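The thermal average quoted above is the root mean square of the coupling sampled along the trajectory. A minimal numerical illustration, using hypothetical sample values in millihartree (not the study's data):

```python
import math

# Hypothetical instantaneous couplings |H_ab| (millihartree) sampled along
# a molecular dynamics trajectory; values fluctuate by nearly an order of
# magnitude, as described for the Q-TTF-Q- anion.
samples_mH = [1.2, 3.8, 9.5, 6.0, 11.3, 2.1]

# Thermal average reported as the root mean square: (<|H_ab|^2>)^(1/2)
rms = math.sqrt(sum(h * h for h in samples_mH) / len(samples_mH))
print(round(rms, 2))
```

Because the mean of squares weights the large excursions heavily, the RMS average exceeds the coupling at any single "typical" geometry, which is why the thermal average (6.7 mH) can be well above the minimum-energy value (3.8 mH).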
Structure, vibrational spectrum, and ring puckering barrier of cyclobutane.
Blake, Thomas A; Xantheas, Sotiris S
2006-09-07
We present the results of high level ab initio calculations for the structure, harmonic and anharmonic spectroscopic constants, and ring puckering barrier of cyclobutane (C4H8) in an effort to establish the minimum theoretical requirements needed for their accurate description. We have found that accurate estimates for the barrier between the minimum (D(2d)) and transition state (D(4h)) configurations require both higher levels of electron correlation [MP4, CCSD(T)] and orbital basis sets of quadruple-zeta quality or larger. By performing CCSD(T) calculations with basis sets as large as cc-pV5Z, we were able to obtain, for the first time, a value for the puckering barrier that lies within 10 cm(-1) (or 2%) of experiment, whereas the best previously calculated values were in error by more than 40%. Our best estimate of 498 cm(-1) for the puckering barrier is within 10 cm(-1) of the experimental value proposed originally, but it lies approximately 50 cm(-1) higher than the revisited value, which was obtained more recently using different assumptions regarding the coupling between the various modes. It is therefore suggested that revisiting the analysis of the experimental data might be warranted. Our best computed values (at the CCSD(T)/aug-cc-pVTZ level of theory) for the equilibrium structural parameters of C4H8 are r(C-C) = 1.554 Å, r(C-H(alpha)) = 1.093 Å, r(C-H(beta)) = 1.091 Å, phi(C-C-C) = 88.1 degrees, alpha(H(alpha)-C-H(beta)) = 109.15 degrees, and theta = 29.68 degrees for the puckering angle. We have found that the puckering angle theta is more sensitive to the level of electron correlation than to the size of the basis set for a given method. We furthermore present anharmonic calculations that are based on a second-order perturbative evaluation of rovibrational parameters and their effects on the vibrational spectra and average structure.
We have found that the anharmonic calculations predict the experimentally measured fundamental band origins within 1% (≤30 cm(-1)) for most vibrations. The results of the current study can serve as a guide for future calculations on substituted four-membered-ring hydrocarbon compounds. To this end we present a method for estimating the puckering barrier height at higher levels of electron correlation [MP4, CCSD(T)] from the MP2 results that can be used for chemically similar compounds.
Praveen Pole, R P; Feroz Khan, M; Godwin Wesley, S
2017-04-01
The activity concentration of 210Po in 26 species of marine macroalgae found along the coast near a nuclear installation on the southeast coast of India was studied. Phaeophytes were found to accumulate the maximum 210Po concentration and chlorophytes the minimum. The average 210Po activity concentrations in the three groups were 6.2 ± 2.5 Bq kg-1 (Chlorophyta), 14.4 ± 5.2 Bq kg-1 (Phaeophyta), and 11.3 ± 3.9 Bq kg-1 (Rhodophyta). A statistically significant variation in accumulation was found between groups (p < 0.05). The unweighted dose rate to these algae due to 210Po was calculated to be well below the benchmark dose limit of 10 μGy h-1. Copyright © 2017 Elsevier Ltd. All rights reserved.
Redistributing wealth to families: the advantages of the MYRIADE model.
Legendre, François; Lorgnet, Jean-Paul; Thibault, Florence
2005-10-01
This study aims to shed light on the main characteristics of the French system for redistributing wealth to families through tax revenues and social transfers. For the purposes of this exercise, the authors used the MYRIADE microsimulation model, which covers most of the redistribution system, though it is limited to monetary flows such as family benefits, housing allowances, minimum social welfare payments, income tax, and tax on furnished accommodation. The authors used a particular methodology to highlight the way this redistribution works; rather than calculate the difference between each family's disposable income and their gross primary income, they opted to isolate the variation in disposable income that could be attributed to the youngest member of each family where there is at least one child under the age of 25. The average increase in disposable income that this child contributes to his or her family amounts to €200 per month.
The importance of bulk density determination in gravity data processing for structure interpretation
NASA Astrophysics Data System (ADS)
Wildan, D.; Akbar, A. M.; Novranza, K. M. S.; Sobirin, R.; Permadi, A. N.; Supriyanto
2017-07-01
The gravity method uses rock density variations to determine subsurface lithology and geological structure. In a "green area" where rock density has not been measured, the density is usually estimated by calculation with the Parasnis method, or taken as the average density of rock in the Earth's crust (2.67 g/cm3), or as the theoretical density of the dominant rock in the survey area (2.90 g/cm3). These three density values were applied to gravity data from the hilly "X" area and compared to determine which value represents the structure best. The results show that the higher the rock density value, the more pronounced the structure in the Bouguer anomaly profile. This is due to the contrast between the maximum and minimum Bouguer anomaly values, which affects the exaggeration in the distance versus Bouguer anomaly graph.
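The effect of the chosen reduction density can be sketched with the standard simple Bouguer reduction (free-air gradient 0.3086 mGal/m, Bouguer slab term 0.04193 mGal/m per g/cm3). The station values below are hypothetical; the point is only how the density choice changes the anomaly contrast across stations at different elevations:

```python
# Simple Bouguer anomaly: BA = g_obs - g_normal + free-air - slab,
# using standard gradients. Station data are hypothetical.
def simple_bouguer(g_obs, g_normal, elev_m, density):
    free_air = 0.3086 * elev_m                  # mGal
    slab = 0.04193 * density * elev_m           # mGal, density in g/cm^3
    return g_obs - g_normal + free_air - slab

stations = [  # (observed g [mGal], normal g [mGal], elevation [m])
    (978652.0, 978630.0, 410.0),
    (978640.0, 978630.0, 655.0),
    (978648.0, 978630.0, 520.0),
]

for rho in (2.67, 2.90):
    anomalies = [simple_bouguer(go, gn, h, rho) for go, gn, h in stations]
    contrast = max(anomalies) - min(anomalies)
    print(f"rho={rho}: anomaly contrast = {contrast:.2f} mGal")
```

With hilly topography, a different reduction density rescales the elevation-dependent term at every station, so the max-minus-min contrast of the Bouguer profile changes with the chosen density.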
NASA Technical Reports Server (NTRS)
Wright, William B.; Chung, James
1999-01-01
Aerodynamic performance calculations were performed using WIND on ten experimental ice shapes and the corresponding ten ice shapes predicted by LEWICE 2.0. The resulting data for lift coefficient and drag coefficient are presented. The difference in aerodynamic results between the experimental ice shapes and the LEWICE ice shapes was compared to the quantitative difference in ice shape geometry presented in an earlier report. Correlations were generated to determine the geometric features that have the most effect on performance degradation. Results show that maximum lift and stall angle can be correlated to the upper horn angle and the leading-edge minimum thickness. Drag coefficient can be correlated to the upper horn angle and the frequency-weighted average of the Fourier coefficients. Pitching moment correlated with the upper horn angle and, to a much lesser extent, with the upper and lower horn thicknesses.
NASA Astrophysics Data System (ADS)
Borys, Damian; Serafin, Wojciech; Gorczewski, Kamil; Kijonka, Marek; Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
The aim of this work was to test the most popular and essential algorithms for intensity nonuniformity correction in breast MRI imaging. In this type of MRI imaging, especially in the proximity of the coil, the signal is strong but can also exhibit inhomogeneities. The evaluated correction methods were N3, N3FCM, N4, Nonparametric, and SPM. For testing purposes, a uniform phantom was imaged with a breast MRI coil. Two measures were used to quantify the results: integral uniformity and standard deviation. For each algorithm, the minimum, average, and maximum values of both evaluation measures were calculated within a binary mask created for the phantom. Two methods, N3FCM and N4, obtained the lowest values of these measures; visually, however, the phantom was most uniform after the N4 correction.
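The two evaluation measures can be computed directly from the masked voxel intensities. A minimal sketch, assuming one common definition of integral uniformity, IU = (S_max - S_min)/(S_max + S_min), and hypothetical phantom readings (the paper does not give its exact formula, so treat this definition as an assumption):

```python
import statistics

# Integral uniformity over voxels inside a binary mask:
# IU = (S_max - S_min) / (S_max + S_min); lower values mean a more
# uniform image. This particular definition is an assumption here.
def integral_uniformity(values):
    s_max, s_min = max(values), min(values)
    return (s_max - s_min) / (s_max + s_min)

masked_voxels = [980, 1010, 1005, 995, 1020, 990]  # hypothetical intensities
iu = integral_uniformity(masked_voxels)
sd = statistics.stdev(masked_voxels)
print(f"IU = {iu:.4f}, SD = {sd:.2f}")
```

An algorithm that flattens the coil-proximity bias would reduce both IU and SD over the phantom mask, which is how the correction methods are ranked.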
PRODUCTION OF HELIUM IN IRON METEORITES BY THE ACTION OF COSMIC RAYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, J.H.; Nier, A.O.
1958-12-15
The helium distribution in a slice from the iron meteorite Grant was measured and plotted in the form of contour maps. The contours of constant helium show a minimum helium content and isotopic ratio, He3/He4, near the center of the slice, the isotopic ratio varying from 0.26 near the center to 0.30 at the surface. A cosmogenic helium production rate equation was fitted to the data, giving a He3/He4 production ratio by primary cosmic rays of 0.50 and by secondary particles of 0.14. Primary and secondary particle interaction cross sections were found to be 540 mb and 720 mb, respectively. The ratio of the average post-atmospheric radius to the pre-atmospheric radius of Grant was calculated to be 0.65. (auth)
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
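The three-step calculation described in the rule can be written out directly: average the 12 most recent monthly CPI values (Recent Average), average the 12 months immediately preceding those (Base Average), then divide and subtract 1. A sketch with a hypothetical CPI series:

```python
# Annual limitation per the rule's description: Recent Average over
# Base Average, minus 1. Input is 24 consecutive monthly CPI values,
# oldest first; the series below is hypothetical.
def annual_limitation(cpi_last_24_months):
    assert len(cpi_last_24_months) == 24
    base = sum(cpi_last_24_months[:12]) / 12    # Base Average
    recent = sum(cpi_last_24_months[12:]) / 12  # Recent Average
    return recent / base - 1

# Hypothetical CPI rising by 0.4 points per month:
cpi = [230.0 + 0.4 * i for i in range(24)]
print(f"{annual_limitation(cpi):.4%}")
```

A flat CPI series yields a limitation of exactly zero, and a steadily rising series yields roughly the year-over-year inflation rate, which is the intent of the cap.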
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
Norris, Michelle; Anderson, Ross; Motl, Robert W; Hayes, Sara; Coote, Susan
2017-03-01
The purpose of this study was to examine the minimum number of days needed to reliably estimate daily step count and energy expenditure (EE), in people with multiple sclerosis (MS) who walked unaided. Seven days of activity monitor data were collected for 26 participants with MS (age = 44.5±11.9 years; time since diagnosis = 6.5±6.2 years; Patient Determined Disease Steps ≤3). Mean daily step count and mean daily EE (kcal) were calculated for all combinations of days (127 combinations), and compared to the respective 7-day mean daily step count or mean daily EE using intra-class correlations (ICC), the Generalizability Theory, and Bland-Altman analysis. For step count, ICC values of 0.94-0.98 and a G-coefficient of 0.81 indicate a minimum of any random 2-day combination is required to reliably calculate mean daily step count. For EE, ICC values of 0.96-0.99 and a G-coefficient of 0.83 indicate a minimum of any random 4-day combination is required to reliably calculate mean daily EE. For Bland-Altman analyses, all combinations of days, bar single-day combinations, resulted in a mean bias within ±10% when expressed as a percentage of the 7-day mean daily step count or mean daily EE. A minimum of 2 days for step count and 4 days for EE, regardless of day type, is needed to reliably estimate daily step count and daily EE, in people with MS who walk unaided. Copyright © 2017 Elsevier B.V. All rights reserved.
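The day-combination analysis enumerates every non-empty subset of the 7 days (2^7 - 1 = 127 combinations) and compares each subset's mean to the full 7-day mean. A sketch with hypothetical daily step counts:

```python
from itertools import combinations

# Compare the mean of every k-day combination of a 7-day record with
# the full 7-day mean. Daily step counts below are hypothetical.
week = [6200, 7100, 5400, 8000, 6900, 4800, 7300]
week_mean = sum(week) / 7

for k in (1, 2, 4):
    combo_means = [sum(c) / k for c in combinations(week, k)]
    worst_bias = max(abs(m - week_mean) / week_mean for m in combo_means)
    print(f"{k}-day combinations: worst bias vs 7-day mean = {worst_bias:.1%}")
```

As in the study, single-day estimates swing the most, and the worst-case deviation shrinks as more days enter the average, which is why a 2-day (steps) or 4-day (EE) minimum suffices.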
Evaluation of seepage and discharge uncertainty in the middle Snake River, southwestern Idaho
Wood, Molly S.; Williams, Marshall L.; Evetts, David M.; Vidmar, Peter J.
2014-01-01
The U.S. Geological Survey, in cooperation with the State of Idaho, Idaho Power Company, and the Idaho Department of Water Resources, evaluated seasonal seepage gains and losses in selected reaches of the middle Snake River, Idaho, during November 2012 and July 2013, and uncertainty in measured and computed discharge at four Idaho Power Company streamgages. Results from this investigation will be used by resource managers in developing a protocol to calculate and report Adjusted Average Daily Flow at the Idaho Power Company streamgage on the Snake River below Swan Falls Dam, near Murphy, Idaho, which is the measurement point for distributing water to owners of hydropower and minimum flow water rights in the middle Snake River. The evaluated reaches of the Snake River were from King Hill to Murphy, Idaho, for the seepage studies and downstream of Lower Salmon Falls Dam to Murphy, Idaho, for evaluations of discharge uncertainty. Computed seepage was greater than cumulative measurement uncertainty for subreaches along the middle Snake River during November 2012, the non-irrigation season, but not during July 2013, the irrigation season. During the November 2012 seepage study, the subreach between King Hill and C J Strike Dam had a meaningful (greater than cumulative measurement uncertainty) seepage gain of 415 cubic feet per second (ft3/s), and the subreach between Loveridge Bridge and C J Strike Dam had a meaningful seepage gain of 217 ft3/s. The meaningful seepage gain measured in the November 2012 seepage study was expected on the basis of several small seeps and springs present along the subreach, regional groundwater table contour maps, and results of regional groundwater flow model simulations. Computed seepage along the subreach from C J Strike Dam to Murphy was less than cumulative measurement uncertainty during November 2012 and July 2013; therefore, seepage cannot be quantified with certainty along this subreach. 
For the uncertainty evaluation, average uncertainty in discharge measurements at the four Idaho Power Company streamgages in the study reach ranged from 4.3 percent (Snake River below Lower Salmon Falls Dam) to 7.8 percent (Snake River below C J Strike Dam) for discharges less than 7,000 ft3/s in water years 2007–11. This range in uncertainty constituted most of the total quantifiable uncertainty in computed discharge, represented by prediction intervals calculated from the discharge rating of each streamgage. Uncertainty in computed discharge in the Snake River below Swan Falls Dam near Murphy was 10.1 and 6.0 percent at the Adjusted Average Daily Flow thresholds of 3,900 and 5,600 ft3/s, respectively. All discharge measurements and records computed at streamgages have some level of uncertainty that cannot be entirely eliminated. Knowledge of uncertainty at the Adjusted Average Daily Flow thresholds is useful for developing a measurement and reporting protocol for purposes of distributing water to hydropower and minimum flow water rights in the middle Snake River.
Kriging analysis of mean annual precipitation, Powder River Basin, Montana and Wyoming
Karlinger, M.R.; Skrivan, James A.
1981-01-01
Kriging is a statistical estimation technique for regionalized variables which exhibit an autocorrelation structure. Such structure can be described by a semi-variogram of the observed data. The kriging estimate at any point is a weighted average of the data, where the weights are determined using the semi-variogram and an assumed drift, or lack of drift, in the data. Block, or areal, estimates can also be calculated. The kriging algorithm, based on unbiased and minimum-variance estimates, involves a linear system of equations to calculate the weights. Kriging variances can then be used to give confidence intervals of the resulting estimates. Mean annual precipitation in the Powder River basin, Montana and Wyoming, is an important variable when considering restoration of coal-strip-mining lands of the region. Two kriging analyses involving data at 60 stations were made--one assuming no drift in precipitation, and one a partial quadratic drift simulating orographic effects. Contour maps of estimates of mean annual precipitation were similar for both analyses, as were the corresponding contours of kriging variances. Block estimates of mean annual precipitation were made for two subbasins. Runoff estimates were 1-2 percent of the kriged block estimates. (USGS)
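The kriging weights described above come from a linear system built from the semivariogram plus an unbiasedness constraint (the weights sum to 1). A minimal ordinary-kriging sketch at a single point, assuming a simple linear semivariogram gamma(h) = b*h with a hypothetical slope and hypothetical station data (not the Powder River analysis, which fitted its semivariogram to 60 stations and modeled drift):

```python
import math

# Ordinary kriging at one target point: solve
#   sum_j w_j * gamma(x_i, x_j) + mu = gamma(x_i, x0)  for each i,
#   sum_j w_j = 1,
# with a linear semivariogram gamma(h) = b*h (slope b is hypothetical).
def gamma(p, q, b=0.5):
    return b * math.dist(p, q)

def solve(A, y):
    # Gaussian elimination with partial pivoting (small dense system).
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kriging_weights(points, target):
    n = len(points)
    A = [[gamma(points[i], points[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])        # unbiasedness: weights sum to 1
    y = [gamma(p, target) for p in points] + [1.0]
    return solve(A, y)[:n]             # drop the Lagrange multiplier

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]    # hypothetical gauges
w = kriging_weights(points, (0.3, 0.3))
estimate = sum(wi * zi for wi, zi in zip(w, [12.0, 14.0, 13.0]))
print([round(x, 3) for x in w], round(estimate, 2))
```

The estimate is then the weighted average of the observations, and the same system also yields the kriging variance used for the confidence intervals mentioned in the abstract.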
Reducible dictionaries for single image super-resolution based on patch matching and mean shifting
NASA Astrophysics Data System (ADS)
Rasti, Pejman; Nasrollahi, Kamal; Orlova, Olga; Tamberg, Gert; Moeslund, Thomas B.; Anbarjafari, Gholamreza
2017-03-01
A single-image super-resolution (SR) method is proposed. The proposed method uses a dictionary generated from pairs of high resolution (HR) images and their corresponding low resolution (LR) representations. First, the HR images and their LR counterparts are divided into HR and LR patches, respectively, which are collected into separate dictionaries. Afterward, when performing SR, the distance between every patch of the input LR image and each of the LR patches in the LR dictionary is calculated. The LR dictionary patch at minimum distance from the input LR patch is selected, and its counterpart from the HR dictionary is passed through an illumination enhancement process. By this technique, the noticeable change of illumination between neighboring patches in the super-resolved image is significantly reduced. The enhanced HR patch represents the corresponding HR patch of the super-resolved image. Finally, to remove the blocking effect caused by merging the patches, an average of the obtained HR image and the interpolated image obtained using bicubic interpolation is calculated. The quantitative and qualitative analyses show the superiority of the proposed technique over conventional and state-of-the-art methods.
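The core dictionary step is a nearest-neighbor lookup: find the stored LR patch closest to the input patch and return its paired HR patch. A toy sketch with flattened, hypothetical grayscale patches (real systems use many thousands of patch pairs and an approximate search):

```python
# Nearest-neighbor LR patch lookup: return the HR patch paired with the
# LR dictionary entry at minimum Euclidean distance from the input patch.
def closest_hr_patch(lr_patch, lr_dict, hr_dict):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(lr_dict)), key=lambda i: dist2(lr_patch, lr_dict[i]))
    return hr_dict[best]

# Hypothetical 2x2 patches flattened to length-4 vectors:
lr_dict = [[10, 12, 11, 13], [200, 198, 205, 199], [90, 95, 92, 91]]
hr_dict = ["dark HR patch", "bright HR patch", "mid HR patch"]
print(closest_hr_patch([95, 93, 90, 94], lr_dict, hr_dict))
```

The paper's illumination-enhancement and bicubic-averaging steps then post-process the retrieved HR patches before merging them into the output image.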
MEG/EEG Source Reconstruction, Statistical Evaluation, and Visualization with NUTMEG
Dalal, Sarang S.; Zumer, Johanna M.; Guggisberg, Adrian G.; Trumpis, Michael; Wong, Daniel D. E.; Sekihara, Kensuke; Nagarajan, Srikantan S.
2011-01-01
NUTMEG is a source analysis toolbox geared towards cognitive neuroscience researchers using MEG and EEG, including intracranial recordings. Evoked and unaveraged data can be imported to the toolbox for source analysis in either the time or time-frequency domains. NUTMEG offers several variants of adaptive beamformers, probabilistic reconstruction algorithms, as well as minimum-norm techniques to generate functional maps of spatiotemporal neural source activity. Lead fields can be calculated from single and overlapping sphere head models or imported from other software. Group averages and statistics can be calculated as well. In addition to data analysis tools, NUTMEG provides a unique and intuitive graphical interface for visualization of results. Source analyses can be superimposed onto a structural MRI or headshape to provide a convenient visual correspondence to anatomy. These results can also be navigated interactively, with the spatial maps and source time series or spectrogram linked accordingly. Animations can be generated to view the evolution of neural activity over time. NUTMEG can also display brain renderings and perform spatial normalization of functional maps using SPM's engine. As a MATLAB package, the end user may easily link with other toolboxes or add customized functions. PMID:21437174
MEG/EEG source reconstruction, statistical evaluation, and visualization with NUTMEG.
Dalal, Sarang S; Zumer, Johanna M; Guggisberg, Adrian G; Trumpis, Michael; Wong, Daniel D E; Sekihara, Kensuke; Nagarajan, Srikantan S
2011-01-01
NUTMEG is a source analysis toolbox geared towards cognitive neuroscience researchers using MEG and EEG, including intracranial recordings. Evoked and unaveraged data can be imported to the toolbox for source analysis in either the time or time-frequency domains. NUTMEG offers several variants of adaptive beamformers, probabilistic reconstruction algorithms, as well as minimum-norm techniques to generate functional maps of spatiotemporal neural source activity. Lead fields can be calculated from single and overlapping sphere head models or imported from other software. Group averages and statistics can be calculated as well. In addition to data analysis tools, NUTMEG provides a unique and intuitive graphical interface for visualization of results. Source analyses can be superimposed onto a structural MRI or headshape to provide a convenient visual correspondence to anatomy. These results can also be navigated interactively, with the spatial maps and source time series or spectrogram linked accordingly. Animations can be generated to view the evolution of neural activity over time. NUTMEG can also display brain renderings and perform spatial normalization of functional maps using SPM's engine. As a MATLAB package, the end user may easily link with other toolboxes or add customized functions.
A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery including video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that designs codebooks while the data are being encoded or quantized. This is done by computing each centroid as a recursive moving average, so that the centroids move after every vector is encoded. For a fixed set of vectors, this recursive calculation yields a centroid identical to the conventional batch calculation. This method of centroid calculation can be easily combined with VQ encoding techniques: the quantizer changes after every encoded vector by recursively updating the minimum-distance centroid selected by the encoder. Since the quantizer changes state after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
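The recursive moving-average update has a simple closed form: after assigning the n-th vector x to a cell, move that cell's centroid c by (x - c)/n. A sketch showing that this reproduces the batch mean over a fixed set of vectors (the vectors are hypothetical):

```python
# Recursive moving-average centroid: c_n = c_{n-1} + (x_n - c_{n-1}) / n.
# Over a fixed set of vectors this equals the batch mean exactly.
def update_centroid(centroid, x, n):
    return [c + (xi - c) / n for c, xi in zip(centroid, x)]

vectors = [[2.0, 4.0], [6.0, 8.0], [4.0, 0.0]]   # hypothetical training vectors
centroid = vectors[0]
for n, v in enumerate(vectors[1:], start=2):
    centroid = update_centroid(centroid, v, n)
print(centroid)   # matches the batch mean [4.0, 4.0]
```

In the adaptive quantizer, only the centroid selected by the encoder for the current vector receives this update, so the codebook drifts toward the source statistics as encoding proceeds.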
The self-association of acebutolol: Conductometry and light scattering
NASA Astrophysics Data System (ADS)
Ruso, Juan M.; López-Fontán, José L.; Prieto, Gerardo; Sarmiento, Félix
2003-04-01
The association characteristics of an amphiphilic beta-blocker drug, acebutolol hydrochloride, in aqueous solution containing high concentrations of electrolyte and at different temperatures have been examined by static and dynamic light scattering and electrical conductivity. Time averaged light scattering measurements on aqueous solutions of acebutolol at 298.15 K in the presence of added electrolyte (0.4-1.0 mol kg-1 NaCl) have shown discontinuities which reflect the appearance of aggregates. The critical micelle concentration, aggregation numbers, effective micelle charges, and degree of micellar ionization were calculated. Dynamic light scattering has shown an increase in micellar size with increase in concentration of added electrolyte. Data have been interpreted using the DLVO theory to quantify the interaction between the drug aggregates and the colloidal stability. Critical micelle concentrations in water have been calculated from conductivity measurements over the temperature range 288.15-313.15 K. The variation in critical concentration with temperature passes through a minimum close to 294 K. Thermodynamic parameters of aggregate formation (ΔGm0,ΔHm0,ΔSm0) were obtained from a variation of the mass action model applicable to systems of low aggregation number.
Study on the medical meteorological forecast of the number of hypertension inpatient based on SVR
NASA Astrophysics Data System (ADS)
Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng
2017-06-01
The purpose of this study is to build a hypertension prediction model by examining the meteorological factors associated with hypertension incidence. The method selects standardized data on relative humidity, air temperature, visibility, wind speed, and air pressure for Lanzhou from 2010 to 2012 (calculating the maximum, minimum, and average values over 5-day units) as input variables for Support Vector Regression (SVR), and standardized hypertension incidence data for the same period as the output variable. Optimal prediction parameters are obtained by a cross-validation algorithm, and an SVR forecast model for hypertension incidence is then built through SVR learning and training. The results show that the hypertension prediction model comprises 15 input variables, with a training accuracy of 0.005 and a final error of 0.0026389. The forecast accuracy of the SVR model is 97.1429%, which is higher than that of a statistical forecast equation and a neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction, offering simple calculation, small error, a good fit to historical samples, and independent-sample forecast capability.
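The feature construction described above (maximum, minimum, and average of each meteorological series over 5-day units; five series times three statistics gives the 15 inputs) can be sketched directly. The daily values below are hypothetical:

```python
# Build SVR input features as described: for each meteorological series,
# compute (max, min, average) over consecutive 5-day units.
def five_day_features(series, unit=5):
    feats = []
    for i in range(0, len(series) - unit + 1, unit):
        window = series[i:i + unit]
        feats.append((max(window), min(window), sum(window) / unit))
    return feats

# Hypothetical daily temperatures for two 5-day units:
daily_temp = [12.1, 14.3, 13.0, 11.8, 15.2, 16.0, 14.4, 13.9, 12.7, 15.5]
print(five_day_features(daily_temp))
```

Applying the same transform to all five series (humidity, temperature, visibility, wind speed, pressure) yields the 15 input variables per 5-day unit that feed the SVR model.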
Riehle, J.R.; Ager, T.A.; Reger, R.D.; Pinney, D.S.; Kaufman, D.S.
2008-01-01
Recently discovered Lethe tephra has been proposed as a latest Pleistocene marker bed in the Bristol Bay lowland NE to the Cook Inlet region, Alaska, on the basis of correlations involving a single "Lethe average" glass composition. Type deposits in the Valley of Ten Thousand Smokes, however, are chemically heterogeneous-individual lapilli as well as aggregate ash deposits have glass compositions that range from the average mode to much higher SiO2 and K2O. Moreover, a lake-sediment core from the Cook Inlet region contains one ash deposit similar to "Lethe average" and other, closely underlying deposits that resemble a mixture of the average mode and high-Si high-K mode of proximal deposits. Synthesis of previously published radiocarbon ages indicates a major eruption mainly of "Lethe average" mode about 13,000 14C yr BP. As many as six deposits in the Cook Inlet region-five chiefly "Lethe average" mode-range from about 13,000 to 15-16,000 14C yr BP, and an early Holocene deposit in the Bristol Bay lowland extends the minimum age range of Lethe tephra throughout this region to 8000 14C yr BP. Because of the appearance of "Lethe average" composition in multiple deposits spanning thousands of years, we urge caution when using a Lethe-like composition as a basis for inferring a latest Pleistocene age of a tephra deposit in south-central Alaska. Linear variation plots suggest that magma mixing caused the Lethe heterogeneity; multiple magmas were involved as well in other large pyroclastic eruptions such as Katmai (Alaska) and Rotorua (New Zealand). Lethe is an example of a heterogeneous tephra that may be better compared with other tephras by use of plots of individual analytical points rather than by calculating similarity coefficients based on edited data. © 2006 Elsevier Ltd and INQUA.
Effects of tidal current phase at the junction of two straits
Warner, J.; Schoellhamer, D.; Burau, J.; Schladow, G.
2002-01-01
Estuaries typically have a monotonic increase in salinity from freshwater at the head of the estuary to ocean water at the mouth, creating a consistent direction for the longitudinal baroclinic pressure gradient. However, Mare Island Strait in San Francisco Bay has a local salinity minimum created by the phasing of the currents at the junction of Mare Island and Carquinez Straits. The salinity minimum creates converging baroclinic pressure gradients in Mare Island Strait. Equipment was deployed at four stations in the straits for 6 months from September 1997 to March 1998 to measure tidal variability of velocity, conductivity, temperature, depth, and suspended sediment concentration. Analysis of the measured time series shows that on a tidal time scale in Mare Island Strait, the landward and seaward baroclinic pressure gradients in the local salinity minimum interact with the barotropic gradient, creating regions of enhanced shear in the water column during the flood and reduced shear during the ebb. On a tidally averaged time scale, baroclinic pressure gradients converge on the tidally averaged salinity minimum and drive a converging near-bed and diverging surface current circulation pattern, forming a "baroclinic convergence zone" in Mare Island Strait. Historically large sedimentation rates in this area are attributed to the convergence zone.
Is the Oswestry Disability Index a valid measure of response to sacroiliac joint treatment?
Copay, Anne G; Cher, Daniel J
2016-02-01
Disease-specific measures of the impact of sacroiliac (SI) joint pain on back/pelvis function are not available. The Oswestry Disability Index (ODI) is a validated functional measure for lower back pain, but its responsiveness to SI joint treatment has yet to be established. We sought to assess the validity of ODI to capture disability caused by SI joint pain and the minimum clinically important difference (MCID) after SI joint treatment. Patients (n = 155) participating in a prospective clinical trial of minimally invasive SI joint fusion underwent baseline and follow-up assessments using ODI, visual analog scale (VAS) pain assessment, Short Form 36 (SF-36), EuroQoL-5D, and questions (at follow-up only) regarding satisfaction with the SI joint fusion and whether the patient would have the fusion surgery again. All outcomes were compared from baseline to 12 months postsurgery. The health transition item of the SF-36 and the satisfaction scale were used as external anchors to calculate MCID. MCID was estimated for ODI using four calculation methods: (1) minimum detectable change, (2) average ODI change of patients' subsets, (3) change difference between patients' subsets, and (4) receiver operating characteristic (ROC) curve. After SI fusion, patients improved significantly (p < .0001) on all measures: SI joint pain (48.8 points), ODI (23.8 points), EQ-5D (0.29 points), EQ-5D VAS (11.7 points), PCS (8.9 points), and MCS (9.2 points). The improvement in ODI was significantly correlated (p < .0001) with SI joint pain improvement (r = .48) and with the two external anchors: SF-36 health transition item (r = .49) and satisfaction level (r = .34). The MCID values calculated for ODI using the various methods ranged from 3.5 to 19.5 points. The ODI minimum detectable change was 15.5 with the health transition item as the anchor and 13.5 with the satisfaction scale as the anchor. ODI is a valid measure of change in SI joint health. 
Hence, researchers and clinicians may rely on ODI scores to measure disability caused by SI joint pain. We estimated the MCID for ODI to be 13-15 points, which falls within the range previously reported for lumbar back pain and indicates that an improvement in disability should be at least 15 points to be beyond random variation.
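One of the four MCID methods listed, the minimum detectable change, has a standard distribution-based form: SEM = SD × sqrt(1 − r), MDC = z × sqrt(2) × SEM. The study's values (15.5 and 13.5) are anchor-based, so the sketch below, with hypothetical SD and reliability inputs, will not reproduce them exactly:

```python
import math

def minimum_detectable_change(sd_baseline, reliability, z=1.96):
    """Distribution-based MDC: SEM = SD * sqrt(1 - r);
    MDC = z * sqrt(2) * SEM for change between two measurements (~95% CI)."""
    sem = sd_baseline * math.sqrt(1.0 - reliability)
    return z * math.sqrt(2.0) * sem

# Hypothetical baseline ODI SD and test-retest reliability (not study values)
mdc = minimum_detectable_change(sd_baseline=15.0, reliability=0.90)
```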
Considerations for applying VARSKIN mod 2 to skin dose calculations averaged over 10 cm2.
Durham, James S
2004-02-01
VARSKIN Mod 2 is a DOS-based computer program that calculates the dose to skin from beta and gamma contamination either directly on skin or on material in contact with skin. The default area for calculating the dose is 1 cm2. Recently, the U.S. Nuclear Regulatory Commission issued new guidelines for calculating shallow dose equivalent from skin contamination that require the dose to be averaged over 10 cm2. VARSKIN Mod 2 was not fully designed to calculate beta or gamma dose estimates averaged over 10 cm2, even though the program allows the user to calculate doses averaged over 10 cm2. This article explains why VARSKIN Mod 2 overestimates the beta dose when applied to 10 cm2 areas, describes a manual method for correcting the overestimate, and explains how to perform reasonable gamma dose calculations averaged over 10 cm2. The article also describes upgrades underway in Varskin 3.
Nazarian, Dalar; Ganesh, P.; Sholl, David S.
2015-09-30
We compiled a test set of chemically and topologically diverse Metal–Organic Frameworks (MOFs) with high-accuracy, experimentally derived crystallographic structure data. The test set was used to benchmark the performance of Density Functional Theory (DFT) functionals (M06L, PBE, PW91, PBE-D2, PBE-D3, and vdW-DF2) for predicting lattice parameters, unit cell volume, bonded parameters, and pore descriptors. On average, PBE-D2, PBE-D3, and vdW-DF2 predict more accurate structures, but all functionals predicted pore diameters within 0.5 Å of the experimental diameter for every MOF in the test set. The test set was also used to assess the variance in performance of DFT functionals for elastic properties and atomic partial charges. DFT-predicted elastic properties such as the minimum shear modulus and Young's modulus can differ by an average of 3 and 9 GPa, respectively, for rigid MOFs such as those in the test set. Moreover, the partial charges calculated by vdW-DF2 deviate the most from the other functionals, while there is no significant difference among the partial charges calculated by M06L, PBE, PW91, PBE-D2, and PBE-D3 for the MOFs in the test set. We find that while there are differences in the magnitude of the properties predicted by the various functionals, these discrepancies are small compared to the accuracy necessary for most practical applications.
Arvela, H.; Holmgren, O.; Hänninen, P.
2016-01-01
The effect of soil moisture on seasonal variation in soil air and indoor radon is studied. A brief review of the theory of the effect of soil moisture on soil air radon is presented. The theoretical estimates, together with soil moisture measurements over a period of 10 y, indicate that variation in soil moisture is evidently an important factor affecting the seasonal variation in soil air radon concentration. Partitioning of radon gas between the water and air fractions of soil pores is the main factor increasing soil air radon concentration. At two example test sites, the relative standard deviation of the calculated monthly average soil air radon concentration was 17 and 26 %. Increased soil moisture in autumn and spring, after the snowmelt, increases soil gas radon concentrations by 10–20 %. In February and March, the soil gas radon concentration is at its minimum. Soil temperature is also an important factor: high soil temperature in summer increased the calculated soil gas radon concentration by 14 % compared with winter values. Monthly indoor radon measurements over a period of 1 y in 326 Finnish houses are presented and compared with the modelling results. The model takes into account radon entry, climate, and air exchange. The measured radon concentrations in autumn and spring were higher than expected, which can be explained by the seasonal variation in soil moisture. Variation in soil moisture is a potential factor markedly affecting the high year-to-year variation in annual or seasonal average radon concentrations observed in many radon studies. PMID:25899611
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. 
When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
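The recommendation to size trials from an upper confidence limit (UCL) of the sample SD, rather than the sample SD itself, can be sketched as follows. This is a stdlib-only illustration: the chi-square quantile uses the Wilson-Hilferty approximation, the sample-size formula is the usual two-group normal approximation, and the pilot values are hypothetical:

```python
import math
from statistics import NormalDist

def chi2_ppf_wh(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile
    (adequate for df greater than about 10)."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * math.sqrt(c)) ** 3

def ucl_sd(sample_sd, n, conf=0.60):
    """One-sided upper confidence limit for sigma:
    UCL = s * sqrt((n - 1) / chi2_{1-conf, n-1})."""
    df = n - 1
    return sample_sd * math.sqrt(df / chi2_ppf_wh(1.0 - conf, df))

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for detecting a mean
    difference delta between two groups."""
    z = NormalDist().inv_cdf
    return math.ceil(2.0 * ((z(1.0 - alpha / 2.0) + z(power)) * sigma / delta) ** 2)

# Hypothetical pilot data: sample SD 40 from n = 20, target difference 22
sd_plain = 40.0
sd_ucl = ucl_sd(sd_plain, n=20, conf=0.60)
n_naive = n_per_group(sd_plain, delta=22.0)
n_robust = n_per_group(sd_ucl, delta=22.0)
```

Inflating the SD to its 60% UCL yields a somewhat larger planned sample, which is the mechanism behind the paper's recommendation for the single-sample case.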
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
1999-01-01
Recently, Ahluwalia reviewed the solar and geomagnetic data for the last 6 decades and remarked that these data "indicate the existence of a three-solar-activity-cycle quasiperiodicity in them." Furthermore, on the basis of this inferred quasiperiodicity, he asserted that cycle 23 represents the initial cycle in a new three-cycle string, implying that it "will be more modest (a la cycle 17) with an annual mean sunspot number count of 119.3 +/- 30 at the maximum", a prediction that is considerably below the consensus prediction of 160 +/- 30 by Joselin et al. and similar predictions by others based on a variety of predictive techniques. Several major sticking points of Ahluwalia's presentation, however, must be readdressed, and these issues form the basis of this comment. First, Ahluwalia appears to have based his analysis on a data set of Ap index values that is erroneous. For example, he depicts for the interval 1932-1997 the variation of the Ap index in terms of annual averages, contrasting them against annual averages of sunspot number (SSN), and he lists for cycles 17-23 the minimum and maximum value of each, the years in which they occur, and a quantity he calls "Amplitude" (defined as the numeric difference between the maximum and minimum values). In particular, he identifies the minimum Ap index (i.e., the minimum value of the Ap index in the vicinity of sunspot cycle minimum, which usually occurs in the year following sunspot minimum and which will be called hereafter, simply, Ap min) and the year in which it occurs for cycles 17-23, respectively.
Prediction of obliteration after gamma knife surgery for cerebral arteriovenous malformations.
Karlsson, B; Lindquist, C; Steiner, L
1997-03-01
To define the factors of importance for the obliteration of cerebral arteriovenous malformations (AVMs), thus making a prediction of the probability for obliteration possible. In 945 AVMs of a series of 1319 patients treated with the gamma knife during 1970 to 1990, the relationship between patient, AVM, and treatment parameters on the one hand and the obliteration of the nidus on the other was analyzed. The obliteration rate increased with increased minimum (lowest periphery) and average dose and decreased with increased AVM volume. The minimum dose to the AVM was the decisive dose factor for the treatment result: the higher the minimum dose, the higher the chance for total obliteration. The curve illustrating this relation increased logarithmically to a value of 87%. A higher average dose shortened the latency to AVM obliteration. For the obliterated cases, the larger the malformation, the lower the minimum dose used. This prompted us to relate the obliteration rate to the product of the minimum dose and (AVM volume)^(1/3), the K index. The obliteration rate increased linearly with the K index up to a value of approximately 27, and for higher K values, the obliteration rate had a constant value of approximately 80%. For the group of 273 cases treated with a minimum dose of at least 25 Gy, the obliteration rate at the study end point (defined as 2-yr latency) was 80% (95% confidence interval = 75-85%). If obliterations that occurred beyond the end point are included, the obliteration rate increased to 85% (81-89%). The probability of obliteration of AVMs after gamma knife surgery is related both to the lowest dose to the AVMs and to the AVM volume, and it can be predicted using the K index.
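The K index and the reported dose-response shape (linear rise to K ~ 27, then a plateau near 80%) are simple enough to sketch. The piecewise function below is read off the abstract, not fitted to the original data:

```python
def k_index(min_dose_gy, volume_cm3):
    """K index from the abstract: minimum dose times the cube root of the
    AVM volume."""
    return min_dose_gy * volume_cm3 ** (1.0 / 3.0)

def predicted_obliteration(k, k_sat=27.0, p_max=0.80):
    """Piecewise sketch of the reported relation: obliteration rate rises
    linearly with K up to about 27, then plateaus near 80%. The shape is
    read off the abstract, not fitted to the original data."""
    return p_max * min(k, k_sat) / k_sat

k = k_index(min_dose_gy=20.0, volume_cm3=4.0)
```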
12 CFR Appendix M1 to Part 226 - Repayment Disclosures
Code of Federal Regulations, 2014 CFR
2014-01-01
... a fixed period of time, as set forth by the card issuer. (2) “Deferred interest or similar plan... calculating the minimum payment repayment estimate, card issuers must use the minimum payment formula(s) that... purchases, such as a “club plan purchase.” Also, assume that based on a consumer's balances in these...
12 CFR Appendix M1 to Part 226 - Repayment Disclosures
Code of Federal Regulations, 2013 CFR
2013-01-01
... a fixed period of time, as set forth by the card issuer. (2) “Deferred interest or similar plan... calculating the minimum payment repayment estimate, card issuers must use the minimum payment formula(s) that... purchases, such as a “club plan purchase.” Also, assume that based on a consumer's balances in these...
Slicing cluster mass functions with a Bayesian razor
NASA Astrophysics Data System (ADS)
Sealfon, C. D.
2010-08-01
We apply a Bayesian "razor" to forecast Bayes factors between different parameterizations of the galaxy cluster mass function. To demonstrate this approach, we calculate the minimum size N-body simulation needed for strong evidence favoring a two-parameter mass function over one-parameter mass functions, and vice versa, as a function of the minimum cluster mass.
Bogucki, Artur J
2014-01-01
The knee joint is a bicondylar hinge two-level joint with six degrees of freedom. The location of the functional axis of flexion-extension motion is still a subject of research and discussion. During the swing phase, the femoral condyles do not have direct contact with the tibial articular surfaces, and the intra-articular space narrows with increasing weight bearing. The geometry of knee movements is determined by the shape of the articular surfaces. A digital recording of the gait of a healthy volunteer was analysed. In the first experimental variant, the subject wore a knee orthosis controlling flexion and extension with a hinge-type single-axis joint. In the second variant, the examination involved a hinge-type double-axis orthosis. Statistical analysis involved mathematically calculated values of displacement P. Scatter graphs with a fourth-order polynomial trend line and a confidence interval of 0.95 (owing to noise) were prepared for each experimental variant. In Variant 1, the average displacement was 15.1 mm, the number of tests was 43, the standard deviation was 8.761, and the confidence interval was 2.2. The maximum value of displacement was 30.9 mm and the minimum value was 0.7 mm. In Variant 2, the average displacement was 13.4 mm, the number of tests was 44, the standard deviation was 7.275, and the confidence interval was 1.8. The maximum value of displacement was 30.2 mm and the minimum value was 3.4 mm. An analysis of moving averages for both experimental variants revealed that displacement trends for both types of orthosis were compatible from the mid-stance to the mid-swing phase. 1. The method employed in the experiment allows for determining the alignment between the axis of the knee joint and that of shin and thigh orthoses. 2. Migration of the single- and double-axis orthoses during the gait cycle exceeded 3 cm. 3. During weight bearing, the double-axis orthosis was positioned more correctly. 4. The study results may be helpful in designing new hinge-type knee joints.
Noncontact thermophysical property measurement by levitation of a thin liquid disk.
Lee, Sungho; Ohsaka, Kenichi; Rednikov, Alexei; Sadhal, Satwindar Singh
2006-09-01
The purpose of the current research program is to develop techniques for noncontact measurement of thermophysical properties of highly viscous liquids. The application would be for undercooled liquids that remain liquid even below the freezing point when suspended without a container. The approach being used here consists of carrying out thermocapillary flow and temperature measurements in a horizontally levitated, laser-heated thin glycerin disk. In a levitated state, the disk is flattened by an intense acoustic field. Such a disk has the advantage of a relatively low gravitational potential over the thickness, thus mitigating the buoyancy effects and helping isolate the thermocapillary-driven flows. For the purpose of predicting the thermal properties from these measurements, it is necessary to develop a theoretical model of the thermal processes. Such a model has been developed, and, on the basis of the observed shape, the thickness is taken to be a minimum at the center with a gentle parabolic profile at both the top and the bottom surfaces. This minimum thickness is much smaller than the radius of the disk, so the ratio of thickness to radius is much less than unity. The disk is heated by a laser beam directed normal to its edge. A general three-dimensional momentum equation is transformed into a two-variable vorticity equation. For a highly viscous liquid a few millimeters in size, the Stokes equations adequately describe the flow. Additional approximations are made by considering average flow properties over the disk thickness in a manner similar to lubrication theory. In the same way, the three-dimensional energy equation is averaged over the disk thickness. With a convection boundary condition at the surfaces, we integrate the general three-dimensional energy equation to obtain an averaged two-dimensional energy equation that has convection terms, conduction terms, and additional source terms corresponding to a Biot number.
A finite-difference numerical approach is used to solve these steady-state governing equations in the cylindrical coordinate system. The calculations yield the temperature distribution and the thermally driven flow field. These results have been used to formulate a model that, in conjunction with experiments, has enabled the development of a method for the noncontact thermophysical property measurement of liquids.
The Advantages of Collimator Optimization for Intensity Modulated Radiation Therapy
NASA Astrophysics Data System (ADS)
Doozan, Brian
The goal of this study was to improve dosimetry for pelvic, lung, head and neck, and other cancer sites with aspherical planning target volumes (PTV) using a new algorithm for collimator optimization for intensity modulated radiation therapy (IMRT) that minimizes the x-jaw gap (CAX) and the area of the jaws (CAA) for each treatment field. A retroactive study of the effects of collimator optimization in 20 patients was performed by comparing metric results for new collimator optimization techniques in Eclipse version 11.0. Keeping all other parameters equal, multiple plans were created using four collimator techniques: CA0 (all fields have collimators set to 0°), CAE (the Eclipse collimator optimization), CAA (minimizing the area of the jaws around the PTV), and CAX (minimizing the x-jaw gap). The minimum-area and minimum-x-jaw angles are found by evaluating each field's beam's eye view of the PTV with ImageJ and finding the desired parameters with a custom script. The evaluation of the plans included the monitor units (MU), the maximum dose of the plan, the maximum dose to organs at risk (OAR), the conformity index (CI), and the number of fields that are calculated to split. Compared to the CA0 plans, the monitor units decreased on average by 6% for the CAX method, with a p-value of 0.01 from an ANOVA test. The average maximum dose remained within a 1.1% difference among all four methods, with the lowest given by CAX. The maximum dose to the most at-risk organ was best spared by the CAA method, decreasing by 0.62% compared to CA0. Minimizing the x-jaws significantly reduced the number of split fields from 61 to 37. In every metric tested, the CAX optimization produced comparable or superior results relative to the other three techniques. For aspherical PTVs, CAX on average reduced the number of split fields, lowered the maximum dose, minimized the dose to the surrounding OAR, and decreased the monitor units.
This is achieved while maintaining the same control of the PTV.
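The core of the CAX idea, finding the collimator rotation that minimizes the x-jaw gap needed to cover the target's beam's-eye-view outline, can be sketched with a brute-force angle scan. The outline points and function names below are hypothetical; the study's actual implementation used ImageJ and a custom script:

```python
import math

def xjaw_width(points, angle_deg):
    """Extent of the beam's-eye-view points along x after rotating the
    collimator by angle_deg (the x-jaw gap needed to cover the target)."""
    th = math.radians(angle_deg)
    xs = [x * math.cos(th) - y * math.sin(th) for x, y in points]
    return max(xs) - min(xs)

def best_collimator_angle(points, step=1.0):
    """Brute-force search over collimator angles for the minimum x-jaw gap."""
    angles = [i * step for i in range(int(180 / step))]
    return min(angles, key=lambda a: xjaw_width(points, a))

# Hypothetical elongated PTV outline in the beam's eye view (cm)
ptv = [(-5.0, -1.0), (5.0, -1.0), (5.0, 1.0), (-5.0, 1.0)]
angle = best_collimator_angle(ptv)  # 90 deg aligns the long axis with y
```

For an elongated target, shrinking the x-jaw gap this way is what lets fields stay under the jaw travel limit and avoids field splitting.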
Reid, Scott A; Nyambo, Silver; Muzangwa, Lloyd; Uhler, Brandon
2013-12-19
Noncovalent interactions play an important role in many chemical and biochemical processes. Building upon our recent study of the homoclusters of chlorobenzene, where π-π stacking and CH/π interactions were identified as the most important binding motifs, in this work we present a study of bromobenzene (PhBr) and mixed bromobenzene-benzene clusters. Electronic spectra in the region of the PhBr monomer S0-S1 (ππ*) transition were obtained using resonant two-photon ionization (R2PI) methods combined with time-of-flight mass analysis. As previously found for related systems, the PhBr cluster spectra show a broad feature whose center is red-shifted from the monomer absorption, and electronic structure calculations indicate the presence of multiple isomers and Franck-Condon activity in low-frequency intermolecular modes. Calculations at the M06-2X/aug-cc-pVDZ level find in total eight minimum energy structures for the PhBr dimer: four π-stacked structures differing in the relative orientation of the Br atoms (denoted D1-D4), one T-shaped structure (D5), and three halogen bonded structures (D6-D8). The calculated binding energies of these complexes, corrected for basis set superposition error (BSSE) and zero-point energy (ZPE), are in the range of -6 to -24 kJ/mol. Time-dependent density functional theory (TDDFT) calculations predict that these isomers absorb over a range that is roughly consistent with the breadth of the experimental spectrum. To examine the influence of dipole-dipole interaction, R2PI spectra were also obtained for the mixed PhBr···benzene dimer, where the spectral congestion is reduced and clear vibrational structure is observed. This structure is well-simulated by Franck-Condon calculations that incorporate the lowest frequency intermolecular modes. Calculations find four minimum energy structures for the mixed dimer and predict that the binding energy of the global minimum is reduced by ~30% relative to the global minimum PhBr dimer structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca
Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with ¹²⁵I, ¹⁰³Pd, or ¹³¹Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom, neglecting the plaque and interseed effects, is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media, is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively.
Replacing the tumor medium in the eye model with water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up to 16%. In the full eye model simulations, the average dose to the lens is 7%–9% larger than the dose to the center of the lens, and the maximum dose to the optic nerve is 17%–22% higher than the dose to the optic disk for all radionuclides. In general, when normalized to the same prescription dose at the tumor apex, doses delivered to all structures of interest in the full eye model are lowest for ¹⁰³Pd and highest for ¹³¹Cs, except for the tumor, where the average dose is highest for ¹⁰³Pd and lowest for ¹³¹Cs. Conclusions: The eye is not radiologically water-equivalent, as doses from simulations of the plaque in the full eye model differ considerably from doses for the plaque in a water phantom and from simulated TG-43 calculated doses. This demonstrates the importance of model-based dose calculations for eye plaque brachytherapy, for which accurate elemental compositions of ocular media are necessary.
Wagenaar, Alexander C; Maldonado-Molina, Mildred M; Erickson, Darin J; Ma, Linan; Tobler, Amy L; Komro, Kelli A
2007-09-01
We examined effects of state statutory changes in DUI fine or jail penalties for first-time offenders from 1976 to 2002. A quasi-experimental time-series design was used (n = 324 monthly observations). The four outcome measures of drivers involved in alcohol-related fatal crashes were: single-vehicle nighttime, low BAC (0.01-0.07 g/dl), medium BAC (0.08-0.14 g/dl), and high BAC (>/=0.15 g/dl). All analyses of BAC outcomes included multiple imputation procedures for cases with missing data. Comparison series of non-alcohol-related crashes were included to efficiently control for effects of other factors. Statistical models include state-specific Box-Jenkins ARIMA models and pooled general linear mixed models. Twenty-six states implemented mandatory minimum fine policies and 18 states implemented mandatory minimum jail penalties. Estimated effects varied widely from state to state. Using variance-weighted meta-analysis methods to aggregate results across states, mandatory fine policies are associated with an average reduction in fatal crash involvement by drivers with BAC >/= 0.08 g/dl of 8% (averaging 13 per state per year). Mandatory minimum jail policies are associated with a decline in single-vehicle nighttime fatal crash involvement of 6% (averaging 5 per state per year) and a decline in low-BAC cases of 9% (averaging 3 per state per year). No significant effects were observed for the other outcome measures. The overall pattern of results suggests a possible effect of mandatory fine policies in some states, but little effect of mandatory jail policies.
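The logic of comparing an alcohol-related series against a non-alcohol comparison series before and after a policy change can be illustrated with a toy difference-in-differences on log means. This is only a stand-in for the state-specific ARIMA intervention models and mixed models the study actually used, and the monthly counts below are invented:

```python
import math

def policy_effect(treated, comparison, t0):
    """Toy difference-in-differences on log means: percent change in the
    treated series after the policy at index t0, net of the change in the
    comparison series. A simple stand-in for the state-specific ARIMA
    intervention models used in the study."""
    def log_mean(xs):
        return math.log(sum(xs) / len(xs))
    d_treated = log_mean(treated[t0:]) - log_mean(treated[:t0])
    d_comp = log_mean(comparison[t0:]) - log_mean(comparison[:t0])
    return (math.exp(d_treated - d_comp) - 1.0) * 100.0

# Invented monthly crash counts; a mandatory-fine policy starts at month 6
alcohol = [100, 102, 98, 101, 99, 100, 92, 90, 93, 91, 92, 90]
nonalcohol = [200, 199, 201, 200, 198, 202, 200, 201, 199, 200, 202, 198]
effect = policy_effect(alcohol, nonalcohol, t0=6)  # about -8.7%
```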
Updating estimates of low streamflow statistics to account for possible trends
NASA Astrophysics Data System (ADS)
Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.
2017-12-01
Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of low-flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10), calculated as the annual minimum seven-day streamflow that is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no-trend scenarios. Relative to the standard practice of using the entire available streamflow record, use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation was found to improve accuracy and reduce bias. Benefits of data subset selection approaches were greater for higher-magnitude trends and for annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach for subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, using only a subset of the flow record (~30 years) can update 7Q10 estimators to better reflect current streamflow conditions.
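A minimal sketch of the 7Q10 computation described above: take each year's minimum 7-day average flow, keep only the most recent 30 years, and estimate the 0.1 quantile nonparametrically. The choice of Weibull plotting positions here is an assumption; the paper does not specify which nonparametric quantile estimator it used:

```python
def seven_day_min(daily_flows):
    """Annual minimum of the 7-day moving-average flow for one year."""
    avgs = [sum(daily_flows[i:i + 7]) / 7.0
            for i in range(len(daily_flows) - 6)]
    return min(avgs)

def q7_10(annual_minima, years=30):
    """Nonparametric 7Q10: the 0.1 quantile of the annual 7-day minima from
    only the most recent `years` of record, by linear interpolation between
    order statistics at Weibull plotting positions p_i = i / (n + 1)."""
    xs = sorted(annual_minima[-years:])
    n = len(xs)
    h = 0.1 * (n + 1)
    i = int(h)
    if i < 1:
        return xs[0]
    if i >= n:
        return xs[-1]
    return xs[i - 1] + (h - i) * (xs[i] - xs[i - 1])
```

Restricting `annual_minima[-years:]` to the last 30 values is exactly the subset-selection step the study found to reduce bias under trends.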
Muskellunge growth potential in northern Wisconsin: implications for trophy management
Faust, Matthew D.; Isermann, Daniel A.; Luehring, Mark A.; Hansen, Michael J.
2015-01-01
The growth potential of Muskellunge Esox masquinongy was evaluated by back-calculating growth histories from cleithra removed from 305 fish collected during 1995–2011 to determine whether it was consistent with trophy management goals in northern Wisconsin. Female Muskellunge had a larger mean asymptotic length (49.8 in) than did males (43.4 in). Minimum ultimate size of female Muskellunge (45.0 in) equaled the 45.0-in minimum length limit, but was less than the 50.0-in minimum length limit used on Wisconsin's trophy waters, while the minimum ultimate size of male Muskellunge (34.0 in) was less than the statewide minimum length limit. Minimum reproductive sizes for both sexes were less than Wisconsin's trophy minimum length limits. Mean growth potential of female Muskellunge in northern Wisconsin appears to be sufficient for meeting trophy management objectives and angler expectations. Muskellunge in northern Wisconsin had similar growth potential to those in Ontario populations, but lower growth potential than Minnesota's populations, perhaps because of genetic and environmental differences.
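Asymptotic lengths like those reported above typically come from fitting a von Bertalanffy growth curve to back-calculated lengths-at-age. A hedged sketch of that curve; only the 49.8-in asymptotic length comes from the abstract, while the growth coefficient below is an illustrative placeholder, not the fitted Wisconsin value:

```python
import math

def von_bertalanffy(age, l_inf, k, t0=0.0):
    """Length-at-age: L(t) = L_inf * (1 - exp(-k * (t - t0))). Often fitted
    to back-calculated lengths from hard structures such as cleithra; the
    parameter values used below are illustrative, not the fitted values."""
    return l_inf * (1.0 - math.exp(-k * (age - t0)))

# Illustrative female curve using the reported mean asymptotic length (49.8 in)
length_at_10 = von_bertalanffy(10, l_inf=49.8, k=0.2)
```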
Sunspot variation and selected associated phenomena: A look at solar cycle 21 and beyond
NASA Technical Reports Server (NTRS)
Wilson, R. M.
1982-01-01
Solar sunspot cycles 8 through 21 are reviewed. Mean time intervals are calculated for the maximum-to-maximum, minimum-to-minimum, minimum-to-maximum, and maximum-to-minimum phases for cycles 8 through 20 and 8 through 21. Simple cosine functions with a period of 132 years are compared to, and found to be representative of, the variation of smoothed sunspot numbers at solar maximum and minimum. A comparison of cycles 20 and 21 is given, leading to a projection of activity levels during the Spacelab 2 era (tentatively, November 1984). A prediction is made for cycle 22. Major flares are observed to peak several months after the solar maximum during cycle 21 and to be at minimum level several months after the solar minimum. Additional remarks are given on flares, gradual rise-and-fall radio events, and 2800-MHz radio emission. Certain solar activity parameters, especially as they relate to the near-term Spacelab 2 time frame, are estimated.
Intelligent Hybrid Vehicle Power Control - Part 1: Machine Learning of Optimal Vehicle Power
2012-06-30
time window [t − W_DT, t): v_ave, v_max, v_min, a_c, v_st and v_end, where the first four parameters are, respectively, the average speed, maximum speed, minimum speed, and average acceleration during the time period [t − W_DT, t); v_st is the vehicle speed at (t − W_DT), and v_end is the vehicle speed at t.
Code of Federal Regulations, 2013 CFR
2013-07-01
... record the desorption gas inlet temperature at least once every 15 minutes during each of the three runs... and record the average desorption gas inlet temperature. The minimum operating limit for the concentrator is 8 degrees Celsius (15 degrees Fahrenheit) below the average desorption gas inlet temperature...
Code of Federal Regulations, 2012 CFR
2012-07-01
... record the desorption gas inlet temperature at least once every 15 minutes during each of the three runs... and record the average desorption gas inlet temperature. The minimum operating limit for the concentrator is 8 degrees Celsius (15 degrees Fahrenheit) below the average desorption gas inlet temperature...
Code of Federal Regulations, 2014 CFR
2014-07-01
... record the desorption gas inlet temperature at least once every 15 minutes during each of the three runs... and record the average desorption gas inlet temperature. The minimum operating limit for the concentrator is 8 degrees Celsius (15 degrees Fahrenheit) below the average desorption gas inlet temperature...
Code of Federal Regulations, 2011 CFR
2011-07-01
... record the desorption gas inlet temperature at least once every 15 minutes during each of the three runs... and record the average desorption gas inlet temperature. The minimum operating limit for the concentrator is 8 degrees Celsius (15 degrees Fahrenheit) below the average desorption gas inlet temperature...
Code of Federal Regulations, 2010 CFR
2010-07-01
... monitoring data I must collect with my continuous emission monitoring systems and is the data collection... monitoring systems and is the data collection requirement enforceable? (a) Where continuous emission monitoring systems are required, obtain 1-hour arithmetic averages. Make sure the averages for sulfur dioxide...
Code of Federal Regulations, 2011 CFR
2011-07-01
... monitoring data I must collect with my continuous emission monitoring systems and is the data collection... monitoring systems and is the data collection requirement enforceable? (a) Where continuous emission monitoring systems are required, obtain 1-hour arithmetic averages. Make sure the averages for sulfur dioxide...
Vocal Parameters of Elderly Female Choir Singers
Aquino, Fernanda Salvatico de; Ferreira, Léslie Piccolotto
2015-01-01
Introduction Due to increased life expectancy in the population, studying the vocal parameters of the elderly is key to promoting vocal health in old age. Objective This study aims to analyze the speech range profile of elderly female choristers according to age group. Method The study included 25 elderly female choristers from the Choir of the Messianic Church of São Paulo, with ages between 63 and 82 years and an average of 71 years (standard deviation 5.22). The participants were divided into two groups: G1, aged 63 to 71 years, and G2, aged 72 to 82 years. We asked each participant to count from 20 to 30 at weak, medium, strong, and very strong intensities. Their speech was recorded with the Vocalgrama software, which allows evaluation of the speech range profile. We then submitted the frequency and intensity parameters, at both minimum and maximum levels, together with the range of the spoken voice, to descriptive analysis. Results The average minimum and maximum frequencies were, respectively, 134.82–349.96 Hz for G1 and 137.28–348.59 Hz for G2; the average minimum and maximum intensities were, respectively, 40.28–95.50 dB for G1 and 40.63–94.35 dB for G2; the vocal range used in speech was 215.14 Hz for G1 and 211.30 Hz for G2. Conclusion The minimum and maximum frequencies, maximum intensity, and vocal range presented differences in favor of the younger group. PMID:26722341
Low-flow characteristics of streams in South Carolina
Feaster, Toby D.; Guimaraes, Wladmir B.
2017-09-22
An ongoing understanding of the streamflow characteristics of the rivers and streams in South Carolina is important for the protection and preservation of the State's water resources. Information concerning the low-flow characteristics of streams is especially important during critical flow periods, such as during the historic droughts that South Carolina has experienced in the past few decades. Between 2008 and 2016, the U.S. Geological Survey, in cooperation with the South Carolina Department of Health and Environmental Control, updated low-flow statistics at 106 continuous-record streamgages operated by the U.S. Geological Survey in the eight major river basins in South Carolina. The low-flow frequency statistics included the annual minimum 1-, 3-, 7-, 14-, 30-, 60-, and 90-day mean flows with recurrence intervals of 2, 5, 10, 20, 30, and 50 years, depending on the length of record available at the streamgage. Computations of daily mean flow durations for the 5-, 10-, 25-, 50-, 75-, 90-, and 95-percent probabilities of exceedance also were included. This report summarizes the findings from publications generated during the 2008 to 2016 investigations. Trend analyses for the annual minimum 7-day average flows are provided, as well as trend assessments of long-term annual precipitation data. Statewide variability in the annual minimum 7-day average flow is assessed at eight long-term (record lengths from 55 to 78 years) streamgages. Where previous low-flow statistics were available, comparisons with the updated annual minimum 7-day average flow having a 10-year recurrence interval were made. In addition, methods for estimating low-flow statistics at ungaged locations near a gaged location are described.
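The percent-exceedance flow durations mentioned above can be computed from a daily-mean flow record roughly as follows. This is a nearest-rank sketch; agency computation conventions (interpolation, ties) may differ in detail:

```python
def flow_duration(daily_flows, exceedance_percents=(5, 10, 25, 50, 75, 90, 95)):
    """Daily-mean flow-duration values: for each percent p, the flow that
    is equaled or exceeded p percent of the time. Nearest-rank rule on
    the descending-sorted record; an illustrative convention only."""
    ranked = sorted(daily_flows, reverse=True)   # highest flow first
    n = len(ranked)
    out = {}
    for p in exceedance_percents:
        k = max(1, round(p / 100 * n))   # rank of the p-percent exceedance flow
        out[p] = ranked[k - 1]
    return out
```

For example, on a 100-day record the 95-percent exceedance flow is the 95th-largest daily value, i.e., a low flow exceeded on nearly all days.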
Electron affinity of perhalogenated benzenes: A theoretical DFT study
NASA Astrophysics Data System (ADS)
Volatron, François; Roche, Cécile
2007-10-01
The potential energy surfaces (PES) of unsubstituted and perhalogenated benzene anions (C6X6−, X = F, Cl, Br, and I) were explored by means of DFT-B3LYP calculations. In the F and Cl cases, seven extrema were located and characterized. In the Br and I cases, only one minimum and two extrema were found. In each case the minimum was recomputed at the CCSD(T) level. The electron affinities of C6X6 were calculated (ZPE included). The results obtained agree well with the available experimental determinations. The values obtained in the X = Br and X = I cases are expected to be valuable predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, H
Purpose: To evaluate the dosimetric metrics of HDR ring-and-tandem applicator brachytherapy for primary cervical cancers. Methods: The dosimetric metrics of the high-risk clinical target volumes (HR-CTV) of 12 patients (60 fractions/plans in total) treated with HDR ring and tandem applicators were retrospectively analyzed. Ring diameters ranged from 2.6 to 3.4 cm, tandem lengths from 4 to 6 cm, and the tandem angle was either 45 or 60 degrees. The first fraction plan was MR guided; the MR images were then used as a reference for contouring the HR-CTV in the CT images of the following 4 fractions. The nominal prescription dose was between 5.2 and 5.8 Gy at point A. The plans were adjusted to cover at least 90% of the HR-CTV with 90% of the prescription dose and to reduce the doses to the bladder, rectum, and bowel bag. Minimum target doses D100 and D90 were converted into the biologically equivalent EBRT doses D90-iso and D100-iso (using α/β = 10 Gy, 2 Gy/fx). Equivalent uniform doses (EUD) based on the average cancer-cell killing across the target volume were calculated with the modified linear quadratic model (MLQ) from the differential dose-volume histogram (DVH) tables. Results: The average D90-iso of all plans was 8.1 Gy (ranging from 6.2 to 15 Gy; median 7.8 Gy); the average D100-iso was just 4.1 Gy (ranging from 1.8 to 7.8 Gy; median 3.9 Gy). The average EUD was 7.0 Gy (ranging from 6.1 to 9.6 Gy; median 6.9 Gy), which is 87% of the average D90-iso and 170% of the average D100-iso. Conclusion: The EUD is smaller than D90-iso but greater than D100-iso. Because the EUD takes into account the intensive cancer-cell killing in the high-dose zone of the HR-CTV, the MLQ-calculated EUD is apparently more relevant than D90 and D100 for describing HDR brachytherapy treatment quality.
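The conversion of HDR fraction doses to biologically equivalent EBRT doses (the D90-iso and D100-iso above) uses the standard linear-quadratic equivalent-dose-in-2-Gy-fractions relation. A generic sketch (the study's MLQ-based EUD is a separate, more involved calculation):

```python
def eqd2(dose_per_fraction, n_fractions=1, alpha_beta=10.0):
    """Equivalent dose in 2-Gy fractions under the linear-quadratic model:
        EQD2 = (n * d) * (d + alpha/beta) / (2 + alpha/beta)
    with alpha/beta = 10 Gy for tumor, as in the abstract above."""
    total = dose_per_fraction * n_fractions
    return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)
```

A sanity check: a conventional 2 Gy × 30 course maps to 60 Gy unchanged, while a single 7.8 Gy HDR fraction (the median D90 above) maps to a larger 2-Gy-equivalent dose because of the quadratic cell-kill term.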
Clancy, S A; Worrall, F; Davies, R J; Gluyas, J G
2018-03-15
We estimate the likely physical footprint of well pads if shale gas or oil developments were to go forward in Europe and use these estimates to understand their impact upon existing infrastructure (e.g., roads, buildings), the carrying capacity of the environment, and how the proportion of extractable resources may be limited. Using visual imagery, we calculate the average conventional well site footprint to be 10,800 m² in the UK, 44,600 m² in The Netherlands and 3,000 m² in Poland. The average area per well is 541 m²/well in the UK, 6,370 m²/well in The Netherlands, and 2,870 m²/well in Poland. Average access road lengths are 230 m in the UK, 310 m in The Netherlands and 250 m in Poland. To assess the carrying capacity of the land surface, well pads of the average footprint, with recommended setbacks, were placed randomly into the licensed blocks covering the Bowland Shale, UK. The extent to which they interacted with or disrupted existing infrastructure was then assessed. For the UK, the direct footprint would have a 33% probability of interacting with immovable infrastructure, but this would rise to 73% if a 152-m setback were used, and 91% for a 609-m setback. The minimum setbacks from a currently producing well in the UK were calculated to be 21 m and 46 m from a non-residential and a residential property, respectively, with mean setbacks of 329 m and 447 m, respectively. When the surface and sub-surface footprints were considered, the carrying capacity within the licensed blocks was between 5 and 42%, with a mean of 26%. Using previously predicted technically recoverable reserves of 8.5×10¹¹ m³ for the Bowland Basin and a recovery factor of 26%, the likely maximum accessible gas reserves would be limited by the surface carrying capacity to 2.21×10¹¹ m³. Copyright © 2017 Elsevier B.V. All rights reserved.
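The closing reserve estimate follows directly from scaling the technically recoverable volume by the mean surface carrying capacity:

```python
# Surface-limited accessible reserves for the Bowland Basin, as in the
# abstract above: technically recoverable volume times the mean fraction
# of the licensed area that can actually host well pads.
technically_recoverable_m3 = 8.5e11   # predicted recoverable reserves, m^3
carrying_capacity = 0.26              # mean surface carrying capacity

accessible_m3 = technically_recoverable_m3 * carrying_capacity  # 2.21e11 m^3
```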
OPERA models for predicting physicochemical properties and environmental fate endpoints.
Mansouri, Kamel; Grulke, Chris M; Judson, Richard S; Williams, Antony J
2018-03-08
The collection of chemical structure information and associated experimental data for quantitative structure-activity/property relationship (QSAR/QSPR) modeling is facilitated by an increasing number of public databases containing large amounts of useful data. However, the performance of QSAR models highly depends on the quality of the data and the modeling methodology used. This study aims to develop robust QSAR/QSPR models for chemical properties of environmental interest that can be used for regulatory purposes. This study primarily uses data from the publicly available PHYSPROP database consisting of a set of 13 common physicochemical and environmental fate properties. These datasets have undergone extensive curation using an automated workflow to select only high-quality data, and the chemical structures were standardized prior to calculation of the molecular descriptors. The modeling procedure was developed based on the five Organization for Economic Cooperation and Development (OECD) principles for QSAR models. A weighted k-nearest neighbor approach was adopted using a minimum number of required descriptors calculated using PaDEL, an open-source software. The genetic algorithms selected only the most pertinent and mechanistically interpretable descriptors (2-15, with an average of 11 descriptors). The sizes of the modeled datasets varied from 150 chemicals for biodegradability half-life to 14,050 chemicals for logP, with an average of 3,222 chemicals across all endpoints. The optimal models were built on randomly selected training sets (75%) and validated using fivefold cross-validation (CV) and test sets (25%). The CV Q² of the models varied from 0.72 to 0.95, with an average of 0.86, and the R² test value from 0.71 to 0.96, with an average of 0.82. Modeling and performance details are described in the QSAR model reporting format and were validated by the European Commission's Joint Research Centre to be OECD compliant.
All models are freely available as an open-source, command-line application called OPEn structure-activity/property Relationship App (OPERA). OPERA models were applied to more than 750,000 chemicals to produce freely available predicted data on the U.S. Environmental Protection Agency's CompTox Chemistry Dashboard.
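The weighted k-nearest-neighbor approach named above can be sketched as follows. The actual OPERA models fix their descriptors, distance metric, and weighting scheme in the published software; Euclidean distance over descriptor vectors and inverse-distance weights here are illustrative assumptions:

```python
def weighted_knn_predict(train_X, train_y, query, k=5):
    """Weighted k-nearest-neighbor regression over descriptor space.
    Predicts a property value as the inverse-distance-weighted average
    of the k nearest training chemicals (an illustrative sketch, not
    the OPERA implementation)."""
    dists = []
    for x, y in zip(train_X, train_y):
        d = sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5
        dists.append((d, y))
    dists.sort(key=lambda t: t[0])
    neighbors = dists[:k]
    # Exact descriptor match: return its value to avoid division by zero.
    if neighbors[0][0] == 0.0:
        return neighbors[0][1]
    weights = [1.0 / d for d, _ in neighbors]
    return sum(w * y for w, (_, y) in zip(weights, neighbors)) / sum(weights)
```

One appeal of this family of models for regulatory use is interpretability: each prediction can be traced back to the specific nearest-neighbor chemicals that produced it.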
21 CFR 178.3860 - Release agents.
Code of Federal Regulations, 2011 CFR
2011-04-01
...-octadecylcarbamate) (CAS Reg. No. 70892-21-6) produced by the reaction between stoichiometrically equivalent amounts of octadecyl isocyanate and vinyl alcohol/vinyl acetate copolymer; minimum average molecular weight...
Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method.
Li, Haisen S; Chetty, Indrin J; Solberg, Timothy D
2008-05-01
The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) was used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (> 30 fractions) regimen, the discrepancy in total dose due to interplay effect was negligible.
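The core operation of both convolution methods above is blurring a static dose distribution with a motion PDF; the segment-based method simply applies it per segment with that segment's own PDF and sums the results. A toy 1-D sketch, assuming integer-voxel shifts, an odd-length PDF centered on zero shift, and zero dose outside the profile:

```python
def convolve_dose_with_pdf(static_dose, motion_pdf):
    """Blur a 1-D static dose profile with a discrete motion PDF giving
    the probability of each integer-voxel shift. motion_pdf must have
    odd length and is centered on zero shift. Illustrative sketch only;
    clinical implementations work on 3-D dose grids."""
    n = len(static_dose)
    half = len(motion_pdf) // 2
    shifts = range(-half, half + 1)
    blurred = [0.0] * n
    for p, s in zip(motion_pdf, shifts):
        for i in range(n):
            j = i - s                 # voxel that moves into position i
            if 0 <= j < n:
                blurred[i] += p * static_dose[j]
    return blurred
```

With a delta-function PDF (no motion) the dose is unchanged; a symmetric two-sided PDF spreads dose out of the peak, which is the blurring effect the study quantifies per segment rather than per fraction.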
Mayo, Lawrence R.; Trabant, Dennis C.; March, Rod S.
2004-01-01
Scientific measurements at Wolverine Glacier, on the Kenai Peninsula in south-central Alaska, began in April 1966. At three long-term sites in the research basin, the measurements included snow depth, snow density, heights of the glacier surface and of stratigraphic summer surfaces on stakes, and identification of the surface materials. Calculations of the mass balance of the surface strata (snow, new firn, superimposed ice, and old firn and ice) at each site were based on these measurements. Calculations of fixed-date annual mass balances for each hydrologic year (October 1 to September 30), as well as net balances and the dates of minimum net balance measured between time-transgressive summer surfaces on the glacier, were made on the basis of the strata balances augmented by air temperature and precipitation recorded in the basin. From 1966 through 1995, the average annual balance was -4.06 meters water equivalent at site A (590 meters altitude), -0.90 meters water equivalent at site B (1,070 meters altitude), and +1.45 meters water equivalent at site C (1,290 meters altitude). Geodetic determination of displacements of the mass balance stakes and of glacier surface altitudes was added to the data set in 1975 to detect the glacier's motion responses to variable climate and mass balance conditions. The average surface speed from 1975 to 1996 was 50.0 meters per year at site A, 83.7 meters per year at site B, and 37.2 meters per year at site C. The average surface altitudes were 594 meters at site A, 1,069 meters at site B, and 1,293 meters at site C; the glacier surface altitudes rose and fell over a range of 19.4 meters at site A, 14.1 meters at site B, and 13.2 meters at site C.
A Content Validity Study of AIMIT (Assessing Interpersonal Motivation in Transcripts).
Fassone, Giovanni; Lo Reto, Floriana; Foggetti, Paola; Santomassimo, Chiara; D'Onofrio, Maria Rita; Ivaldi, Antonella; Liotti, Giovanni; Trincia, Valeria; Picardi, Angelo
2016-07-01
Multi-motivational theories of human relatedness state that different motivational systems with an evolutionary basis modulate interpersonal relationships. Reliable assessment of their dynamics may usefully inform understanding of the therapeutic relationship. The coding system of Assessing Interpersonal Motivation in Transcripts (AIMIT) allows the activity of five main interpersonal motivational systems (IMSs) to be identified in clinical transcripts: attachment (care-seeking), caregiving, ranking, sexuality, and peer cooperation. To assess whether the criteria currently used to score the AIMIT are consistently correlated with the conceptual formulation of the interpersonal multi-motivational theory, two studies were designed. Study 1: content validity as assessed by highly qualified independent raters. Study 2: content validity as assessed by unqualified raters. Results of study 1 show that, of the 60 AIMIT verbal criteria, 52 (86.7%) met the required minimum degree of correspondence. The average semantic correspondence scores between these items and the related IMSs were quite good (overall mean: 3.74, standard deviation: 0.61). In study 2, a group of 20 naïve raters had to identify the prevalent motivation (IMS) in a random sequence of 1,000 utterances drawn from therapy sessions. Cohen's kappa coefficient was calculated for each rater with reference to each IMS, and the average kappa across raters was then calculated for each IMS. All average kappa values were satisfactory (>0.60) and ranged between 0.63 (ranking system) and 0.83 (sexuality system). The data confirm the overall soundness of AIMIT's theoretical and applied approach, corroborating the hypothesis that the AIMIT possesses the required criteria for content validity. Assessing interpersonal motivations in psychotherapy transcripts is a useful tool for better understanding the links between motivational systems and intersubjectivity, a step forward in the knowledge of evolutionary cognitivism, and a contribution to the bio-psycho-social model of human relatedness and interpersonal neurobiology. Copyright © 2015 John Wiley & Sons, Ltd.
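The inter-rater statistic used in study 2 is standard Cohen's kappa: observed agreement corrected for the agreement expected by chance from each rater's marginal category frequencies. A textbook implementation:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters coding the same items, as used in
    study 2 above to compare each naive rater against a reference
    coding. Standard formula: (p_o - p_e) / (1 - p_e)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    expected = 0.0
    for c in categories:
        pa = ratings_a.count(c) / n   # rater A's marginal frequency of c
        pb = ratings_b.count(c) / n   # rater B's marginal frequency of c
        expected += pa * pb
    return (observed - expected) / (1.0 - expected)
```

Values above 0.60, the threshold the study treats as satisfactory, are conventionally read as substantial agreement; kappa is 1 for perfect agreement and 0 for chance-level agreement.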
[Opportunity cost for men who visit family medicine units in the city of Querétaro, Mexico].
Martínez Carranza, Edith Olimpia; Villarreal Ríos, Enrique; Vargas Daza, Emma Rosa; Galicia Rodríguez, Liliana; Martínez González, Lidia
2010-12-01
To determine the opportunity cost for men who seek care in the family medicine units (FMU) of the Mexican Social Security Institute (IMSS, Instituto Mexicano del Seguro Social) in the city of Querétaro. A sample of 807 men, ages 20 to 59 years, was selected among those who sought care through the family medicine, laboratory, and pharmacy services provided by the FMU of the IMSS in Querétaro. Patients referred for emergency services and those who left the facilities without receiving care were excluded. The sample size (n = 807) was calculated using the formula for averages in an infinite population, with a 95% confidence interval (CI95%) and average opportunity costs of US$5.5 for family medicine, US$3.1 for laboratory services, and US$2.3 for pharmacy services. Estimates included the amount of time spent on travel, waiting, and receiving care; the number of people accompanying the patient; and the cost per minute of paid and unpaid job activities. The opportunity cost was calculated using the estimated cost per minute for travel, waiting, and receiving care for patients and their companions. The opportunity cost of patient travel was estimated at US$0.97 (CI95%: 0.81-1.15), while wait time was US$5.03 (CI95%: 4.08-6.09) for family medicine, US$0.06 (CI95%: 0.05-0.08) for pharmacy services, and US$1.89 (CI95%: 1.56-2.25) for laboratory services. The average opportunity cost for an unaccompanied patient visit varied between US$1.10 for pharmacy services alone and US$8.64 for family medicine, pharmacy, and laboratory services combined. The weighted opportunity cost for family medicine was US$6.24. Given that the opportunity cost for men who seek services in FMU corresponds to more than half of a minimum salary, it should be examined from an institutional perspective whether this is the best alternative for care.
Deng, Wei; Long, Long; Tang, Xian-Yan; Huang, Tian-Ren; Li, Ji-Lin; Rong, Min-Hua; Li, Ke-Zhi; Liu, Hai-Zhou
2015-01-01
Geographic information system (GIS) technology has useful applications for epidemiology, enabling the detection of spatial patterns of disease dispersion and locating geographic areas at increased risk. In this study, we applied GIS technology to characterize the spatial pattern of mortality due to liver cancer in the autonomous region of Guangxi Zhuang in southwest China. A database with liver cancer mortality data for 1971-1973, 1990-1992, and 2004-2005, including geographic locations and climate conditions, was constructed, and the appropriate associations were investigated. It was found that the regions with the highest mortality rates were central Guangxi with Guigang City at the center, and southwest Guangxi centered in Fusui County. Regions with the lowest mortality rates were eastern Guangxi with Pingnan County at the center, and northern Guangxi centered in Sanjiang and Rongshui counties. Regarding climate conditions, in the 1990s the mortality rate of liver cancer positively correlated with average temperature and average minimum temperature, and negatively correlated with average precipitation. In 2004 through 2005, mortality due to liver cancer positively correlated with the average minimum temperature. Regions of high mortality had lower average humidity and higher average barometric pressure than did regions of low mortality. Our results provide information to benefit development of a regional liver cancer prevention program in Guangxi, and provide important information and a reference for exploring causes of liver cancer.
NASA Technical Reports Server (NTRS)
Cliver, E. W.; Ling, A. G.; Richardson, I. G.
2003-01-01
Using a recent classification of the solar wind at 1 AU into its principal components (slow solar wind, high-speed streams, and coronal mass ejections (CMEs)) for 1972-2000, we show that the monthly-averaged galactic cosmic ray intensity is anti-correlated with the percentage of time that the Earth is embedded in CME flows. We suggest that this correlation results primarily from a CME-related change in the tail of the distribution function of hourly-averaged values of the solar wind magnetic field (B) between solar minimum and solar maximum. The number of high-B (≥ 10 nT) values increases by a factor of approximately 3 from minimum to maximum (from 5% of all hours to 17%), with about two-thirds of this increase due to CMEs. On an hour-to-hour basis, average changes of cosmic ray intensity at Earth become negative for solar wind magnetic field values ≥ 10 nT.
Canadian crop calendars in support of the early warning project
NASA Technical Reports Server (NTRS)
Trenchard, M. H.; Hodges, T. (Principal Investigator)
1980-01-01
The Canadian crop calendars for LACIE are presented. Long term monthly averages of daily maximum and daily minimum temperatures for subregions of provinces were used to simulate normal daily maximum and minimum temperatures. The Robertson (1968) spring wheat and Williams (1974) spring barley phenology models were run using the simulated daily temperatures and daylengths for appropriate latitudes. Simulated daily temperatures and phenology model outputs for spring wheat and spring barley are given.
Validation of the Kp Geomagnetic Index Forecast at CCMC
NASA Astrophysics Data System (ADS)
Frechette, B. P.; Mays, M. L.
2017-12-01
The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we performed validation of the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand the Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. This was done by computing the Kp error for each forecast (average, minimum, maximum) and each synoptic period. To quantify forecast performance, we then computed the mean error, mean absolute error, root mean square error, multiplicative bias, and correlation coefficient. A contingency table was made for each forecast, and skill scores were computed and compared to the perfect score and to the reference-forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts Kp to within about 1 unit.
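The continuous verification statistics named above have standard definitions. A minimal sketch (the contingency-table skill scores are computed analogously from hit/miss/false-alarm counts and are omitted here):

```python
def verify_forecast(predicted, observed):
    """Continuous forecast verification statistics used in the study
    above: mean error (bias), mean absolute error, root-mean-square
    error, and multiplicative bias. Assumes the observed values do not
    sum to zero."""
    n = len(predicted)
    errors = [p - o for p, o in zip(predicted, observed)]
    me = sum(errors) / n                              # mean error
    mae = sum(abs(e) for e in errors) / n             # mean absolute error
    rmse = (sum(e * e for e in errors) / n) ** 0.5    # RMS error
    mult_bias = sum(predicted) / sum(observed)        # multiplicative bias
    return {"ME": me, "MAE": mae, "RMSE": rmse, "bias": mult_bias}
```

A perfect forecast gives ME = MAE = RMSE = 0 and multiplicative bias 1; a persistence forecast (tomorrow equals today) is the usual reference against which these scores are compared, as in the study.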
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on Erdős-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in a common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α = 2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c = e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c = 1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for minimum hitting sets with α ≥ 3, that is, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c = e/(α − 1), where the replica symmetry is broken.
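The IP/LP gap discussed above can be seen on a tiny instance. For ordinary (α = 2) vertex cover the LP relaxation always has a half-integral optimal solution (the Nemhauser-Trotter property), so on toy graphs the LP optimum can be found by brute force over weights in {0, 1/2, 1}; this is only an illustration of the gap, not the paper's cavity-method analysis:

```python
from itertools import product

def min_vc_values(n_vertices, edges):
    """Optimal values of the integer minimum vertex cover and of its LP
    relaxation on a small graph. Half-integrality of the vertex-cover
    LP justifies restricting the LP search to weights {0, 1/2, 1};
    brute-force enumeration, suitable for toy instances only."""
    def feasible(x):
        # every edge must be covered with total weight at least 1
        return all(x[u] + x[v] >= 1 for u, v in edges)
    lp = min(sum(x) for x in product((0.0, 0.5, 1.0), repeat=n_vertices)
             if feasible(x))
    ip = min(sum(x) for x in product((0, 1), repeat=n_vertices)
             if feasible(x))
    return lp, ip
```

On a triangle the LP puts weight 1/2 on every vertex for a value of 3/2, while any integer cover needs 2 vertices: the simplest example of the relaxation undershooting the IP optimum.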
40 CFR 600.510-08 - Calculation of average fuel economy.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 31 2012-07-01 2012-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon-Related Exhaust Emissions...
40 CFR 600.510-08 - Calculation of average fuel economy.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 31 2013-07-01 2013-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon-Related Exhaust Emissions...
49 CFR 178.50 - Specification 4B welded or brazed steel cylinders.
Code of Federal Regulations, 2011 CFR
2011-10-01
...)] / (D² − d²) Where: S = wall stress in psi; P = minimum test pressure prescribed for water jacket test or... seams that are forged lap-welded or brazed and with water capacity (nominal) not over 1,000 pounds and a... calculated wall stress at minimum test pressure (paragraph (i)(4) of this section) may not exceed the...
49 CFR 178.50 - Specification 4B welded or brazed steel cylinders.
Code of Federal Regulations, 2010 CFR
2010-10-01
...)] / (D² − d²) Where: S = wall stress in psi; P = minimum test pressure prescribed for water jacket test or... longitudinal seams that are forged lap-welded or brazed and with water capacity (nominal) not over 1,000 pounds... calculated wall stress at minimum test pressure (paragraph (i)(4) of this section) may not exceed the...
Leyde, Brian P; Klein, Sanford A; Nellis, Gregory F; Skye, Harrison
2017-03-01
This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model.
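The grid-search core of the method above can be sketched with a simplified stand-in for the contour-crossing step: each time window contributes an error function over the (borehole radius, ground conductivity) grid, and the crossing point of the per-window minimum-error contours is approximated here by the grid point whose worst per-window error is smallest. The real method traces and intersects the contours explicitly, and the error functions come from TRNSYS simulations rather than closed forms:

```python
def crossed_contour_fit(error_fns, radii, conductivities):
    """Approximate the Crossed Contour crossing point: find the
    (radius, conductivity) grid point that keeps the error small in
    EVERY time window simultaneously (minimax over windows). Each
    entry of error_fns maps (r, k) to that window's model-vs-data
    temperature error. Illustrative simplification only."""
    best, best_err = None, float("inf")
    for r in radii:
        for k in conductivities:
            worst = max(err(r, k) for err in error_fns)  # worst window
            if worst < best_err:
                best, best_err = (r, k), worst
    return best
```

The intuition matches the paper's: a parameter pair that fits only one time window well will fail in another, so the pair where all windows agree identifies the effective borehole properties.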
Maisel, Sascha B; Höfler, Michaela; Müller, Stefan
2012-11-29
Any thermodynamically stable or metastable phase corresponds to a local minimum of a potentially very complicated energy landscape. But however complex the crystal might be, this energy landscape is of parabolic shape near its minima. Roughly speaking, the depth of this energy well with respect to some reference level determines the thermodynamic stability of the system, and the steepness of the parabola near its minimum determines the system's elastic properties. Although changing alloying elements and their concentrations in a given material to enhance certain properties dates back to the Bronze Age, the systematic search for desirable properties in metastable atomic configurations at a fixed stoichiometry is a very recent tool in materials design. Here we demonstrate, using first-principles studies of four binary alloy systems, that the elastic properties of face-centred-cubic intermetallic compounds obey certain rules. We reach two conclusions based on calculations on a huge subset of the face-centred-cubic configuration space. First, the stiffness and the heat of formation are negatively correlated with a nearly constant Spearman correlation for all concentrations. Second, the averaged stiffness of metastable configurations at a fixed concentration decays linearly with their distance to the ground-state line (the phase diagram of an alloy at zero Kelvin). We hope that our methods will help to simplify the quest for new materials with optimal properties from the vast configuration space available.
Fukuda, Shinichi; Beheregaray, Simone; Hoshi, Sujin; Yamanari, Masahiro; Lim, Yiheng; Hiraoka, Takahiro; Yasuno, Yoshiaki; Oshika, Tetsuro
2013-12-01
To evaluate the ability of parameters measured by three-dimensional (3D) corneal and anterior segment optical coherence tomography (CAS-OCT) and a rotating Scheimpflug camera combined with a Placido topography system (Scheimpflug camera with topography) to discriminate between normal eyes and forme fruste keratoconus. Forty-eight eyes of 48 patients with keratoconus, 25 eyes of 25 patients with forme fruste keratoconus and 128 eyes of 128 normal subjects were evaluated. Anterior and posterior keratometric parameters (steep K, flat K, average K), elevation, topographic parameters, regular and irregular astigmatism (spherical, asymmetry, regular and higher-order astigmatism) and five pachymetric parameters (minimum, minimum-median, inferior-superior, inferotemporal-superonasal, vertical thinnest location of the cornea) were measured using 3D CAS-OCT and a Scheimpflug camera with topography. The area under the receiver operating characteristic curve (AUROC) was calculated to assess the discrimination ability. Compatibility and repeatability of both devices were evaluated. Posterior surface elevation showed the highest AUROC values in the discrimination analysis of forme fruste keratoconus using both devices. Both instruments showed significant linear correlations (p<0.05, Pearson's correlation coefficient) and good repeatability (ICCs: 0.885-0.999) for normal and forme fruste keratoconus. Posterior elevation was the best discrimination parameter for forme fruste keratoconus. Both instruments presented good correlation and repeatability for this condition.
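The AUROC used in such discrimination analyses can be computed directly from the Mann-Whitney U statistic. A minimal sketch with invented posterior-elevation values (the numbers are illustrative, not the study's data):

```python
import numpy as np

def auroc(neg, pos):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties counted as 0.5)."""
    neg, pos = np.asarray(neg, float), np.asarray(pos, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical posterior elevation values (um): normals vs forme fruste.
normal = [2, 3, 4, 5, 6, 7, 8]
ffkc   = [6, 9, 11, 13, 15]

print(round(auroc(normal, ffkc), 3))
```

An AUROC of 1.0 would mean perfect separation of the two groups; 0.5 means the parameter carries no discriminating information.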
NASA Astrophysics Data System (ADS)
Kumar, Naresh; Jaswal, A. K.; Mohapatra, M.; Kore, P. A.
2017-08-01
Spatial and temporal variations in summer and winter extreme temperature indices are studied using daily maximum and minimum temperature data from 227 surface meteorological stations well distributed over India for the period 1969-2012. For this purpose, time series for six extreme temperature indices, namely hot days (HD), very hot days (VHD), extremely hot days (EHD), cold nights (CN), very cold nights (VCN), and extremely cold nights (ECN), are calculated for all the stations. In addition, time series for mean extreme temperature indices of the summer and winter seasons are also analyzed. The study reveals high variability in the spatial distribution of the threshold temperatures of these indices over the country. In general, increasing trends are observed in the summer hot-day indices and decreasing trends in the winter cold-night indices over most parts of the country. The results indicate warming in summer maximum and winter minimum temperatures over India. Averaged over India, trends in the summer hot-day indices HD, VHD, and EHD are significantly increasing (+1.0, +0.64, and +0.32 days/decade, respectively) and the winter cold-night indices CN, VCN, and ECN are significantly decreasing (-0.93, -0.47, and -0.15 days/decade, respectively). It is also observed that the impact of extreme temperatures is greater along the west coast in summer and the east coast in winter.
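This kind of index calculation and trend estimation can be sketched as follows. The threshold definition, the synthetic station data, and the imposed warming rate below are all invented for illustration and do not reproduce the paper's stations or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 44 summers (1969-2012) of daily maximum temperature (deg C)
# at one station, with a small imposed warming of 0.02 deg C per year.
years = np.arange(1969, 2013)
tmax = [30 + 0.02 * (y - 1969) + rng.normal(0, 2, size=92) for y in years]

# Station-specific threshold, e.g. the 90th percentile of all summer days
# (the paper uses station-wise thresholds; this exact choice is assumed).
thr = np.percentile(np.concatenate(tmax), 90)

# Index: number of "hot days" (tmax > threshold) in each summer.
hd = np.array([(t > thr).sum() for t in tmax])

# Linear trend in days per decade via least squares.
slope = np.polyfit(years, hd, 1)[0] * 10
print(f"hot days trend: {slope:+.2f} days/decade")
```

An imposed warming of the daily mean shifts the exceedance count upward year by year, which is exactly the signal the HD/VHD/EHD trends quantify.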
A wavelet-based adaptive fusion algorithm of infrared polarization imaging
NASA Astrophysics Data System (ADS)
Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang
2011-08-01
The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets from background through their differing polarization features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency portion of the signal; for the low-frequency portion, the conventional weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength within a 3×3 window is calculated, and the ratio of the regional signal intensities of the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion result is closely tied to the threshold set in this module. In place of the commonly used trial-and-error approach, a quadratic interpolation optimization algorithm is proposed to obtain this threshold: the endpoints and midpoint of the threshold search interval serve as the initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and the best threshold is obtained by comparing these minima. A series of image-quality evaluations shows that the method improves the fusion effect, not only for individual images but also across a large number of images.
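A minimal sketch of such a fusion rule, assuming a single-level Haar transform and a simple 3×3 regional-energy match measure; the paper's exact wavelet basis, match measure, and optimized threshold are not reproduced here.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar transform (average/difference form)."""
    a = (img[0::2] + img[1::2]) / 2.0      # vertical average
    d = (img[0::2] - img[1::2]) / 2.0      # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def window_energy(c, r=1):
    """Sum of squared coefficients over a (2r+1)x(2r+1) window (3x3 here)."""
    p = np.pad(c * c, r, mode="edge")
    out = np.zeros_like(c)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + c.shape[0], dx:dx + c.shape[1]]
    return out

def fuse(img_a, img_b, thr=0.6):
    """Weighted average for the low-frequency band; for each high-frequency
    band, a 3x3 regional-energy match measure decides between selecting the
    stronger coefficient and averaging (the threshold thr is assumed)."""
    ll_a, hi_a = haar_dwt2(img_a)
    ll_b, hi_b = haar_dwt2(img_b)
    ll_f = 0.5 * ll_a + 0.5 * ll_b                    # low-frequency rule
    hi_f = []
    for ca, cb in zip(hi_a, hi_b):
        ea, eb = window_energy(ca), window_energy(cb)
        match = 2.0 * np.minimum(ea, eb) / (ea + eb + 1e-12)
        # low match -> regions differ, keep the stronger coefficient;
        # high match -> regions agree, average them
        select = np.where(ea >= eb, ca, cb)
        hi_f.append(np.where(match < thr, select, 0.5 * (ca + cb)))
    return ll_f, tuple(hi_f)
```

The quadratic-interpolation search in the paper would then tune `thr` by evaluating a fusion-quality metric at the interval endpoints and midpoint and iterating on the interpolated minimum.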
Austin, Melissa C; Smith, Christina; Pritchard, Colin C; Tait, Jonathan F
2016-02-01
Complex molecular assays are increasingly used to direct therapy and provide diagnostic and prognostic information but can require relatively large amounts of DNA. To provide data to pathologists to help them assess tissue adequacy and provide prospective guidance on the amount of tissue that should be procured. We used slide-based measurements to establish a relationship between processed tissue volume and DNA yield by A260 from 366 formalin-fixed, paraffin-embedded tissue samples submitted for the 3 most common molecular assays performed in our laboratory (EGFR, KRAS, and BRAF). We determined the average DNA yield per unit of tissue volume, and we used the distribution of DNA yields to calculate the minimum volume of tissue that should yield sufficient DNA 99% of the time. All samples with a volume greater than 8 mm³ yielded at least 1 μg of DNA, and more than 80% of samples producing less than 1 μg were extracted from less than 4 mm³ of tissue. Nine cubic millimeters of tissue should produce more than 1 μg of DNA 99% of the time. We conclude that 2 tissue cores, each 1 cm long and obtained with an 18-gauge needle, will almost always provide enough DNA for complex multigene assays, and our methodology may be readily extrapolated to individual institutional practice.
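The adequacy arithmetic can be illustrated as follows. The per-volume yield percentile and the needle inner diameter below are assumed round numbers for the sketch, not the study's measured distribution.

```python
import math

# Back-of-envelope check of the tissue-adequacy logic, with assumed
# (not the paper's) numbers for DNA yield per unit volume.
target_ug = 1.0          # DNA needed for a multigene assay (from the text)
p1_yield = 0.11          # assumed 1st-percentile yield, ug DNA per mm^3

# Minimum volume that reaches the target 99% of the time:
min_volume = target_ug / p1_yield
print(f"minimum tissue volume: {min_volume:.1f} mm^3")

# Volume of two 1-cm cores from an 18-gauge needle (inner diameter ~0.84 mm):
core = math.pi * (0.84 / 2) ** 2 * 10.0   # mm^3 per 1-cm core
print(f"two cores: {2 * core:.1f} mm^3")
```

With these assumed numbers the two cores comfortably exceed the minimum volume, which matches the paper's practical recommendation.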
A Simple Approach for the Calculation of Energy Levels of Light Atoms
ERIC Educational Resources Information Center
Woodyard, Jack R., Sr.
1972-01-01
Describes a method for direct calculation of energy levels by using elementary techniques. Describes the limitations of the approach but also claims that with a minimum amount of labor a student can get greater understanding of atomic physics problems. (PS)
Neff, Michael; Rauhut, Guntram
2014-02-05
Multidimensional potential energy surfaces obtained from explicitly correlated coupled-cluster calculations, with further corrections for high-order correlation contributions, scalar relativistic effects, and core-correlation energy contributions, were generated in a fully automated fashion for the double-minimum benchmark systems OH₃⁺ and NH₃. The black-box generation of the potentials is based on normal coordinates, which were used in the underlying multimode expansions of the potentials and of the μ-tensor within the Watson operator. Normal coordinates are not the optimal choice for describing double-minimum potentials, and the question remains whether they can be used for accurate calculations at all. However, their unique definition is an appealing feature, which removes remaining errors in truncated potential expansions arising from different choices of curvilinear coordinate systems. Fully automated calculations are presented which demonstrate that the proposed scheme allows for the determination of energy levels and tunneling splittings as a routine application. Copyright © 2013 Elsevier B.V. All rights reserved.
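The notion of energy levels and tunneling splittings in a double-minimum potential can be illustrated with a one-dimensional toy model, far simpler than the multimode expansions used in the paper: discretize the Hamiltonian on a grid and diagonalize. The potential and units below are arbitrary.

```python
import numpy as np

# 1-D double well in dimensionless units (hbar = m = 1):
# V(x) = x^4 - 4 x^2 has two symmetric minima; the tunneling splitting is
# the gap between the lowest symmetric and antisymmetric eigenstates.
n, L = 800, 5.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
V = x**4 - 4.0 * x**2

# Finite-difference Hamiltonian: -(1/2) d^2/dx^2 + V(x)
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:2]
print(f"E0 = {E[0]:.6f}, E1 = {E[1]:.6f}, splitting = {E[1] - E[0]:.6f}")
```

The splitting is small compared with the harmonic level spacing of each well, shrinking exponentially as the barrier grows, which is what makes converged tunneling splittings a demanding test for automated potential generation.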
NASA Technical Reports Server (NTRS)
Walch, Stephen P.
1995-01-01
We report calculations of the minimum energy pathways connecting ¹CH₂ + N₂ to diazomethane and diazirine, for the rearrangement of diazirine to diazomethane, for the dissociation of diazirine to HCN₂ + H, and of diazomethane to CH₂N + N. The calculations use complete active space self-consistent field (CASSCF) derivative methods to characterize the stationary points and internally contracted configuration interaction (ICCI) to determine the energetics. The calculations suggest a potential new source of prompt NO from the reaction of ¹CH₂ with N₂ to give diazirine, and subsequent reaction of diazirine with hydrogen abstracters to form doublet HCN₂, which leads to HCN + N(⁴S) on the previously studied CH + N₂ surface. The calculations also predict accurate 0 K heats of formation of 77.7 kcal/mol and 68.0 kcal/mol for diazirine and diazomethane, respectively.
Elío, J; Crowley, Q; Scanlon, R; Hodgson, J; Zgaga, L
2018-05-01
Radon is a naturally occurring gas, classified as a Class 1 human carcinogen, being the second most significant cause of lung cancer after tobacco smoking. A robust spatial definition of radon distribution in the built environment is therefore essential for understanding the relationship between radon exposure and its adverse health effects on the general population. Using Ireland as a case study, we present a methodology to estimate an average indoor radon concentration and calculate the expected radon-related lung cancer incidence. We use this approach to define Radon Priority Areas at the administrative level of Electoral Divisions (EDs). Geostatistical methods were applied to a data set of almost 32,000 indoor radon measurements, sampled in Ireland between 1992 and 2013. Average indoor radon concentrations by ED range from 21 to 338 Bq m⁻³, corresponding to an effective dose ranging from 0.8 to 13.3 mSv y⁻¹, respectively. Radon-related lung cancer incidence by ED was calculated using a dose-effect model giving between 15 and 239 cases per million people per year, depending on the ED. Based on these calculations, together with the population density, we estimate that of the approximately 2,300 lung cancer cases currently diagnosed in Ireland annually, about 280 may be directly linked to radon exposure. This figure does not account for the synergistic effect of radon exposure with other factors (e.g. tobacco smoking), so likely represents a minimum estimate. Our approach spatially defines areas with the expected highest incidence of radon-related lung cancer, even though indoor radon concentrations for these areas may be moderate or low. We therefore recommend that both indoor radon concentration and population density by small area are considered when establishing national radon action plans. Copyright © 2018 Elsevier Ltd. All rights reserved.
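The concentration-to-dose relationship implied by the reported endpoints is roughly linear; the conversion factor below is inferred from those two numbers (21 Bq m⁻³ → 0.8 mSv y⁻¹ and 338 Bq m⁻³ → 13.3 mSv y⁻¹), not quoted from the paper.

```python
# Sketch of the concentration-to-dose conversion implied by the reported
# range endpoints; the linear factor is inferred, not taken from the paper.
factor = 13.3 / 338          # ~0.039 mSv per year per Bq/m^3

def effective_dose(radon_bq_m3):
    """Annual effective dose (mSv/y) for a given indoor radon level."""
    return factor * radon_bq_m3

for c in (21, 100, 338):
    print(f"{c:4d} Bq/m^3 -> {effective_dose(c):5.2f} mSv/y")
```

Feeding such per-ED doses into a dose-effect model, weighted by population, is what yields the expected lung cancer incidence per Electoral Division.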
Determination of natural and artificial radioactivity in soil at North Lebanon province.
El Samad, O; Baydoun, R; Nsouli, B; Darwish, T
2013-11-01
The concentrations of natural and artificial radionuclides at 57 sampling locations across the North Province of Lebanon are reported. The samples were collected from uncultivated areas in a region not previously surveyed and were analyzed by gamma spectrometry with High-Purity Germanium detectors of 30% and 40% relative efficiency. The activity concentrations of the primordial naturally occurring radionuclides ²³⁸U, ²³²Th, and ⁴⁰K varied between 4-73 Bq kg⁻¹, 5-50 Bq kg⁻¹, and 57-554 Bq kg⁻¹, respectively. The surface activity concentrations due to the presence of these radionuclides were calculated, and the kriging geostatistical method was used to plot the obtained data on the Lebanese radioactivity map. The results for ²³⁸U, ²³²Th, and ⁴⁰K ranged from 0.2 to 9 kBq m⁻², from 0.2 to 3 kBq m⁻², and from 3 to 29 kBq m⁻², respectively. For the anthropogenic radionuclides, the activity concentration of ¹³⁷Cs found in soil ranged from 2 to 113 Bq kg⁻¹, and the surface activity concentration from 0.1 to 5 kBq m⁻². The total absorbed gamma dose rates in air from natural and artificial radionuclides at these locations were calculated. The minimum value was 6 nGy h⁻¹ and the maximum 135 nGy h⁻¹, with an average of 55 nGy h⁻¹, to which natural terrestrial radiation contributes 99% and artificial radionuclides (mainly ¹³⁷Cs) only 1%. The total effective dose varied between 7 and 166 μSv y⁻¹, with an average of 69 μSv y⁻¹, which is below the permissible limit of 1000 μSv y⁻¹. Copyright © 2013 Elsevier Ltd. All rights reserved.
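Absorbed dose rates of this kind are commonly derived with the UNSCEAR (2000) dose-rate conversion coefficients. A sketch with illustrative mid-range concentrations, not values from this survey:

```python
# Absorbed gamma dose rate in air from soil activity concentrations,
# using the widely tabulated UNSCEAR (2000) conversion coefficients.
# The sample concentrations below are illustrative, not measured values.
def dose_rate_ngy_h(a_u238, a_th232, a_k40):
    """D (nGy/h) from activity concentrations in Bq/kg."""
    return 0.462 * a_u238 + 0.604 * a_th232 + 0.0417 * a_k40

d = dose_rate_ngy_h(38, 27, 300)          # Bq/kg
# Outdoor annual effective dose: 8760 h/y, 0.2 occupancy, 0.7 Sv/Gy.
e = d * 8760 * 0.2 * 0.7 / 1e6            # mSv/y
print(f"D = {d:.1f} nGy/h, E = {e:.3f} mSv/y")
```

With the illustrative inputs the dose rate lands near the survey's reported average of 55 nGy h⁻¹, and the annual effective dose is well below the 1 mSv permissible limit cited in the abstract.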
Code of Federal Regulations, 2010 CFR
2010-10-01
... OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.2 Purpose. The purpose of this part is to increase the fuel economy of passenger automobiles by establishing minimum levels of...
Code of Federal Regulations, 2014 CFR
2014-10-01
... OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.2 Purpose. The purpose of this part is to increase the fuel economy of passenger automobiles by establishing minimum levels of...
Code of Federal Regulations, 2013 CFR
2013-10-01
... OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.2 Purpose. The purpose of this part is to increase the fuel economy of passenger automobiles by establishing minimum levels of...
Code of Federal Regulations, 2011 CFR
2011-10-01
... OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.2 Purpose. The purpose of this part is to increase the fuel economy of passenger automobiles by establishing minimum levels of...
Code of Federal Regulations, 2012 CFR
2012-10-01
... OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.2 Purpose. The purpose of this part is to increase the fuel economy of passenger automobiles by establishing minimum levels of...
Adjusted monthly temperature and precipitation values for Guinea Conakry (1941-2010) using HOMER.
NASA Astrophysics Data System (ADS)
Aguilar, Enric; Aziz Barry, Abdoul; Mestre, Olivier
2013-04-01
Africa is a data-sparse region and there are very few studies presenting homogenized monthly records. In this work, we introduce a dataset of 12 stations spread over Guinea Conakry containing daily values of maximum and minimum temperature and accumulated rainfall for the period 1941-2010. The daily values were quality controlled using the RClimDex routines plus other interactive quality-control applications coded by the authors. After applying the different tests, more than 200 daily values were flagged as doubtful and carefully checked against the statistical distribution of the series and the rest of the dataset. Finally, 40 values were modified or set to missing and the rest were validated. The quality-controlled daily dataset was used to produce monthly means, which were homogenized with HOMER, a new R package that implements the relative homogenization methods that performed best in the experiments conducted in the framework of the COST-HOME action. A total of 38 inhomogeneities were found for temperature. As 788 years of data were analyzed, this amounts to one break every 20.7 years on average, or 3.2 breaks per station. The station with the largest number of inhomogeneities was Conakry (5 breaks), and one station, Kissidougou, was identified as homogeneous. The mean value of the monthly adjustment factors applied to maximum (minimum) temperature was 0.17 °C (-1.08 °C). For precipitation, because this variable demands a denser network to be homogenized correctly, only two major inhomogeneities, in Conakry (1941-1961, -12%) and Kindia (1941-1976, -10%), were corrected. The adjusted dataset was used to compute regional series for the three variables and trends for the 1941-2010 period. The regional mean was computed by simply averaging the anomalies (relative to 1971-2000) of the 12 time series.
Two different versions have been obtained: the first (A) makes use of the missing-value interpolation performed by HOMER, so every annual value in the regional series is an average of 12 anomalies; the second (B) removes the missing values, and each value of the regional series is an average of 5 to 12 anomalies, with a variance stabilization factor applied. As a last step, a trend analysis was applied to the regional series using two different approaches: standard least-squares regression (LS) and Zhang's implementation of the Sen slope estimator (SEN), applied using the zyp R package. The results for the A and B series and for the different trend calculations are very similar in terms of slopes and significance. All the identified trends are significant at the 95% confidence level or better. Using the A series and the SEN slope, the annual regional mean of maximum temperatures has increased by 0.135 °C/decade (95% confidence interval: 0.087/0.173) and the annual regional mean of minimum temperatures by 0.092 °C/decade (0.050/0.135). Maximum temperatures show high values in the 1940s and 1950s and a large increase in the last decades. In contrast, minimum temperatures were relatively cooler in the 1940s and 1950s, and their increase in the last decades is more moderate. Finally, the regional mean of annual accumulated precipitation decreased between 1941 and 2010, with a trend of -2.20 mm (-3.82/-0.64). The precipitation series are dominated by the high values before 1970, followed by a well-known decrease in rainfall. These homogenized monthly series will improve future analyses over this portion of Western Africa.
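The SEN estimator mentioned above (Zhang's implementation in the zyp R package) is the Theil-Sen slope: the median of all pairwise slopes. A plain NumPy re-implementation applied to a synthetic anomaly series with an imposed trend (the noise level and seed are arbitrary):

```python
import numpy as np

def theil_sen(t, y):
    """Sen slope: the median of the slopes between all pairs of points."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    i, j = np.triu_indices(len(t), k=1)
    return np.median((y[j] - y[i]) / (t[j] - t[i]))

# Hypothetical annual anomaly series with a built-in trend of
# +0.135 deg C/decade (the paper's Tmax figure) plus noise.
rng = np.random.default_rng(42)
years = np.arange(1941, 2011)
anom = 0.0135 * (years - years[0]) + rng.normal(0, 0.2, years.size)

slope = theil_sen(years, anom) * 10       # deg C per decade
print(f"Sen slope: {slope:+.3f} deg C/decade")
```

The median of pairwise slopes is robust to outliers, which is why it is preferred over least squares for climatological series with occasional suspect values.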
Metastable Features of Economic Networks and Responses to Exogenous Shocks
Hosseiny, Ali; Bahrami, Mohammad; Palestrini, Antonio; Gallegati, Mauro
2016-01-01
It is well known that network structure plays an important role in shaping collective behavior. In this paper we study a network of firms and corporations to address metastable features in an Ising-based model. In our model we observe that if, during a recession, the government imposes a demand shock to stimulate the network, metastable features shape its response. In fact, we find that there exists a minimum bound such that any demand shock below it is unable to drive the market out of recession. We then investigate the impact of network characteristics on this minimum bound. Surprisingly, we observe that in a Watts-Strogatz network, although the minimum bound depends on the average degree, when translated into the language of economics the bound is independent of the average degree. This bound is about 0.44ΔGDP, where ΔGDP is the gap in GDP between recession and expansion. We examine our suggestions for the cases of the United States and the European Union in the recent recession and compare them with the stimulations imposed. While the stimulation in the US was above our threshold, in the EU it was far below it. Besides providing a minimum bound for a successful stimulation, our study of the metastable features suggests that in a time of crisis there is a “golden time passage” in which the minimum bound for successful stimulation can be much lower. Hence, our study strongly suggests that stimulations arrive within this time passage. PMID:27706166
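The minimum-bound mechanism can be illustrated with a zero-temperature toy version of such a model on a plain ring network. The paper uses a Watts-Strogatz network and a full Ising treatment calibrated to GDP; the parameters here are illustrative only.

```python
import numpy as np

# Zero-temperature sketch of the metastability argument: firms are Ising
# spins (-1 = contracting, +1 = expanding), all stuck in the recession
# state. A demand shock enters as a uniform field h; the network escapes
# only if h exceeds a bound set by the coupling J and the degree k.
def respond_to_shock(h, n=200, k=4, J=1.0, sweeps=50):
    s = -np.ones(n)                       # recession: every firm at -1
    nbrs = [np.arange(i - k // 2, i + k // 2 + 1) % n for i in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            # local field from the k ring neighbours plus the shock
            local = J * s[np.delete(nbrs[i], k // 2)].sum() + h
            s[i] = 1.0 if local > 0 else -1.0 if local < 0 else s[i]
    return s.mean()                       # magnetization after the shock

print(respond_to_shock(h=3.0))   # below the bound: stays in recession
print(respond_to_shock(h=5.0))   # above it: the flip cascades to expansion
```

In this deterministic limit the bound is simply J·k (here 4): a weaker shock leaves every firm's local field negative, while a stronger one flips a first firm and the flip propagates through the network.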
Code of Federal Regulations, 2014 CFR
2014-07-01
... averages for sulfur dioxide, nitrogen oxides (Class I municipal waste combustion units only), and carbon monoxide are in parts per million by dry volume at 7 percent oxygen (or the equivalent carbon dioxide level). Use the 1-hour averages of oxygen (or carbon dioxide) data from your continuous emission monitoring...
Code of Federal Regulations, 2013 CFR
2013-07-01
... averages for sulfur dioxide, nitrogen oxides (Class I municipal waste combustion units only), and carbon monoxide are in parts per million by dry volume at 7 percent oxygen (or the equivalent carbon dioxide level). Use the 1-hour averages of oxygen (or carbon dioxide) data from your continuous emission monitoring...